
Artificial intelligence (AI) brings together the International Telecommunication Union (ITU) and the World Health Organization (WHO), and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s now a follow-up announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation with chemical testing in his July 25, 2018 essay, written for The Conversation and republished on phys.org,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorisation and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.
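Hartung’s description of the chemical map, in which a substance borrows the properties of its structurally similar neighbours, is, at heart, a nearest-neighbour “read-across”. Here is a minimal sketch of that idea only; the fingerprints and toxicity labels below are invented for illustration, and the actual RASAR works from roughly 10 million structures and 74 features, so this is nothing like the real program:

```python
# Toy read-across sketch: predict the toxicity of a query chemical from
# its nearest neighbours in "chemical space". Chemicals are represented
# as tiny fingerprint bit-sets; all data here is made up.

def tanimoto(a, b):
    """Similarity of two fingerprint bit-sets (the Jaccard index)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def read_across(query, database, k=3):
    """Vote among the k most structurally similar labelled chemicals."""
    ranked = sorted(database, key=lambda rec: tanimoto(query, rec["fp"]),
                    reverse=True)
    votes = [rec["toxic"] for rec in ranked[:k]]
    return sum(votes) / k >= 0.5  # is the majority of neighbours toxic?

# Hypothetical labelled chemicals: fingerprints as sets of feature indices.
database = [
    {"fp": {1, 2, 3, 5}, "toxic": True},
    {"fp": {1, 2, 3, 8}, "toxic": True},
    {"fp": {1, 3, 5, 9}, "toxic": True},
    {"fp": {10, 11, 12}, "toxic": False},
    {"fp": {10, 12, 14}, "toxic": False},
]

query = {1, 2, 3, 9}  # structurally close to the toxic cluster
print(read_across(query, database))  # -> True
```

The query shares most of its features with the three toxic chemicals, so all of its nearest neighbours vote “toxic” and the prediction comes back positive.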

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; notably, that skepticism came from someone I thought knew better.
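As for the accuracy figures quoted above (89 percent for RASAR versus 70 percent for animal tests), those are essentially sensitivity numbers: of the substances known to be toxic, what fraction did the method flag? A toy calculation, with entirely invented data, shows how such a figure is computed once you have chemicals of known toxicity to test against:

```python
# Sketch of how a "found the toxic substances in 89 percent of cases"
# figure is computed: sensitivity = true positives / all truly toxic
# chemicals. The labels and predictions below are invented.

def sensitivity(truth, predicted):
    """Fraction of truly toxic chemicals that the method flags as toxic."""
    flagged = [p for t, p in zip(truth, predicted) if t]
    return sum(flagged) / len(flagged)

truth  = [True, True, True, True, True, False, False, False]
model  = [True, True, True, True, False, False, True, False]
animal = [True, True, True, False, False, False, False, False]

print(sensitivity(truth, model))   # 4 of 5 toxins caught -> 0.8
print(sensitivity(truth, animal))  # 3 of 5 toxins caught -> 0.6
```

A full comparison would also look at specificity (how often harmless chemicals are wrongly flagged), which the paper addresses but the essay’s headline numbers do not.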

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Xenotransplantation—organs for transplantation in human patients—it’s a business and a science

The last time (June 18, 2018 post) I mentioned xenotransplantation (transplanting organs from one species into another species; see more here), it was in the context of an art/sci (or sciart) event coming to Vancouver (Canada),

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

(The show opens on Sept. 14, 2018.)

At the time, I had yet to stumble across Ingfei Chen’s thoughtful dive into the topic in her May 9, 2018 article for Slate.com,

In the United States, the clock is ticking for more than 114,700 adults and children waiting for a donated kidney or other lifesaving organ, and each day, nearly 20 of them die. Researchers are devising a new way to grow human organs inside other animals, but the method raises potentially thorny ethical issues. Other conceivable futuristic techniques sound like dystopian science fiction. As we envision an era of regenerative medicine decades from now, how far is society willing to go to solve the organ shortage crisis?

I found myself pondering this question after a discussion about the promises of stem cell technologies veered from the intriguing into the bizarre. I was interviewing bioengineer Zev Gartner, co-director and research coordinator of the Center for Cellular Construction at the University of California, San Francisco, about so-called organoids, tiny clumps of organlike tissue that can self-assemble from human stem cells in a Petri dish. These tissue bits are lending new insights into how our organs form and diseases take root. Some researchers even hope they can nurture organoids into full-size human kidneys, pancreases, and other organs for transplantation.

Certain organoid experiments have recently set off alarm bells, but when I asked Gartner about it, his radar for moral concerns was focused elsewhere. For him, the “really, really thought-provoking” scenarios involve other emerging stem cell–based techniques for engineering replacement organs for people, he told me. “Like blastocyst complementation,” he said.

Never heard of it? Neither had I. Turns out it’s a powerful new genetic engineering trick that researchers hope to use for growing human organs inside pigs or sheep—organs that could be genetically personalized for transplant patients, in theory avoiding immune-system rejection problems. The science still has many years to go, but if it pans out, it could be one solution to the organ shortage crisis. However, the prospect of creating hybrid animals with human parts and killing them to harvest organs has already raised a slew of ethical questions. In 2015, the National Institutes of Health placed a moratorium on federal funding of this nascent research area while it evaluated and discussed the issues.

As Gartner sees it, the debate over blastocyst complementation research—work that he finds promising—is just one of many conversations that society needs to have about the ethical and social costs and benefits of future technologies for making lifesaving transplant organs. “There’s all these weird ways that we could go about doing this,” he said, with a spectrum of imaginable approaches that includes organoids, interspecies organ farming, and building organs from scratch using 3D bioprinters. But even if it turns out we can produce human organs in these novel ways, the bigger issue, in each technological instance, may be whether we should.

Gartner crystallized things with a downright creepy example: “We know that the best bioreactor for tissues and organs for humans are human beings,” he said. Hypothetically, “the best way to get you a new heart would be to clone you, grow up a copy of yourself, and take the heart out.” [emphasis mine] Scientists could probably produce a cloned person with the technologies we already have, if money and ethics were of no concern. “But we don’t want to go there, right?” he added in the next breath. “The ethics involved in doing it are not compatible with who we want to be as a society.”

This sounds like Gartner may have been reading some science fiction, specifically, Lois McMaster Bujold and her Barrayar series where she often explored the ethics and possibilities of bioengineering. At this point, some of her work seems eerily prescient.

As for Chen’s article, I strongly encourage you to read it in its entirety if you have the time.

Medicine, healing, and big money

At about the same time, there was a May 31, 2018 news item on phys.org offering a perspective from some of the leaders in the science and the business (Note: Links have been removed),

Over the past few years, researchers led by George Church have made important strides toward engineering the genomes of pigs to make their cells compatible with the human body. So many think that it’s possible that, with the help of CRISPR technology, a healthy heart for a patient in desperate need might one day come from a pig.

“It’s relatively feasible to change one gene in a pig, but to change many dozens—which is quite clear is the minimum here—benefits from CRISPR,” an acronym for clustered regularly interspaced short palindromic repeats, said Church, the Robert Winthrop Professor of Genetics at Harvard Medical School (HMS) and a core faculty member of Harvard’s Wyss Institute for Biologically Inspired Engineering. Xenotransplantation is “one of few” big challenges (along with gene drives and de-extinction, he said) “that really requires the ‘oomph’ of CRISPR.”

To facilitate the development of safe and effective cells, tissues, and organs for future medical transplantation into human patients, Harvard’s Office of Technology Development has granted a technology license to the Cambridge biotech startup eGenesis.

Co-founded by Church and former HMS doctoral student Luhan Yang in 2015, eGenesis announced last year that it had raised $38 million to advance its research and development work. At least eight former members of the Church lab—interns, doctoral students, postdocs, and visiting researchers—have continued their scientific careers as employees there.

“The Church Lab is well known for its relentless pursuit of scientific achievements so ambitious they seem improbable—and, indeed, [for] its track record of success,” said Isaac Kohlberg, Harvard’s chief technology development officer and senior associate provost. “George deserves recognition too for his ability to inspire passion and cultivate a strong entrepreneurial drive among his talented research team.”

The license from Harvard OTD covers a powerful set of genome-engineering technologies developed at HMS and the Wyss Institute, including access to foundational intellectual property relating to the Church Lab’s 2012 breakthrough use of CRISPR, led by Yang and Prashant Mali, to edit the genome of human cells. Subsequent innovations that enabled efficient and accurate editing of numerous genes simultaneously are also included. The license is exclusive to eGenesis but limited to the field of xenotransplantation.

A May 30, 2018 Harvard University news release by Caroline Petty, which originated the news item, explores some of the issues associated with incubating human organs in other species,

The prospect of using living, nonhuman organs, and concerns over the infectiousness of pathogens either present in the tissues or possibly formed in combination with human genetic material, have prompted the Food and Drug Administration to issue detailed guidance on xenotransplantation research and development since the mid-1990s. In pigs, a primary concern has been that porcine endogenous retroviruses (PERVs), strands of potentially pathogenic DNA in the animals’ genomes, might infect human patients and eventually cause disease. [emphases mine]

That’s where the Church lab’s CRISPR expertise has enabled significant advances. In 2015, the lab published important results in the journal Science, successfully demonstrating the use of genome engineering to eliminate all 62 PERVs in porcine cells. Science later called it “the most widespread CRISPR editing feat to date.”

In 2017, with collaborators at Harvard, other universities, and eGenesis, Church and Yang went further. Publishing again in Science, they first confirmed earlier researchers’ fears: Porcine cells can, in fact, transmit PERVs into human cells, and those human cells can pass them on to other, unexposed human cells. (It is still unknown under what circumstances those PERVs might cause disease.) In the same paper, they corrected the problem, announcing the embryogenesis and birth of 37 PERV-free pigs. [Note: My July 17, 2018 post features research which suggests CRISPR-Cas9 gene editing may cause greater genetic damage than had been thought.]

“Taken together, those innovations were stunning,” said Vivian Berlin, director of business development in OTD, who manages the commercialization strategy for much of Harvard’s intellectual property in the life sciences. “That was the foundation they needed, to convince both the scientific community and the investment community that xenotransplantation might become a reality.”

“After hundreds of tests, this was a critical milestone for eGenesis — and the entire field — and represented a key step toward safe organ transplantation from pigs,” said Julie Sunderland, interim CEO of eGenesis. “Building on this study, we hope to continue to advance the science and potential of making xenotransplantation a safe and routine medical procedure.”

Genetic engineering may undercut human diseases, but also could help restore extinct species, researcher says. [Shades of the Jurassic Park movies!]

It’s not, however, the end of the story: An immunological challenge remains, which eGenesis will need to address. The potential for a patient’s body to outright reject transplanted tissue has stymied many previous attempts at xenotransplantation. Church said numerous genetic changes must be achieved to make porcine organs fully compatible with human patients. Among these are edits to several immune functions, coagulation functions, complements, and sugars, as well as the PERVs.

“Trying the straight transplant failed almost immediately, within hours, because there’s a huge mismatch in the carbohydrates on the surface of the cells, in particular alpha-1-3-galactose, and so that was a showstopper,” Church explained. “When you delete that gene, which you can do with conventional methods, you still get pretty fast rejection, because there are a lot of other aspects that are incompatible. You have to take care of each of them, and not all of them are just about removing things — some of them you have to humanize. There’s a great deal of subtlety involved so that you get normal pig embryogenesis but not rejection.

“Putting it all together into one package is challenging,” he concluded.

In short, it’s the next big challenge for CRISPR.

Not unexpectedly, there is no mention of the CRISPR patent fight between the Broad Institute of Harvard and MIT (Massachusetts Institute of Technology) and the University of California at Berkeley (UC Berkeley). My March 15, 2017 posting featured an outcome where the Broad Institute won the first round of the fight. As I recall, it was a decision based on the principle associated with King Solomon, i.e., the US Patent Office divided the baby, and UC Berkeley got the less important part. As you might expect, the decision has been appealed. In an April 30, 2018 piece, Scientific American reprinted an article about the latest round in the fight, written by Sharon Begley for STAT (Note: Links have been removed),

All You Need to Know for Round 2 of the CRISPR Patent Fight

It’s baaaaack, that reputation-shredding, stock-moving fight to the death over key CRISPR patents. On Monday morning in Washington, D.C., the U.S. Court of Appeals for the Federal Circuit will hear oral arguments in University of California v. Broad Institute. Questions?

How did we get here? The patent office ruled in February 2017 that the Broad’s 2014 CRISPR patent on using CRISPR-Cas9 to edit genomes, based on discoveries by Feng Zhang, did not “interfere” with a patent application by UC based on the work of UC Berkeley’s Jennifer Doudna. In plain English, that meant the Broad’s patent, on using CRISPR-Cas9 to edit genomes in eukaryotic cells (all animals and plants, but not bacteria), was different from UC’s, which described Doudna’s experiments using CRISPR-Cas9 to edit DNA in a test tube—and it was therefore valid. The Patent Trial and Appeal Board concluded that when Zhang got CRISPR-Cas9 to work in human and mouse cells in 2012, it was not an obvious extension of Doudna’s earlier research, and that he had no “reasonable expectation of success.” UC appealed, and here we are.

For anyone who may not realize what the stakes are for these institutions, Linda Williams in a March 16, 1999 article for the LA Times had this to say about universities, patents, and money,

The University of Florida made about $2 million last year in royalties on a patent for Gatorade Thirst Quencher, a sports drink that generates some $500 million to $600 million a year in revenue for Quaker Oats Co.

The payments place the university among the top five in the nation in income from patent royalties.

Oh, but if some people on the Gainesville, Fla., campus could just turn back the clock. “If we had done Gatorade right, we would be getting $5 or $6 million (a year),” laments Donald Price, director of the university’s office of corporate programs. “It is a classic example of how not to handle a patent idea,” he added.

Gatorade was developed in 1965 when many universities were ill equipped to judge the commercial potential of ideas emerging from their research labs. Officials blew the university’s chance to control the Gatorade royalties when they declined to develop a professor’s idea.

The Gatorade story does not stop there and, even though it’s almost 20 years old, this article stands the test of time. I strongly encourage you to read it if the business end of patents and academia interest you or if you would like to develop more insight into the Broad Institute/UC Berkeley situation.

Getting back to the science, there is that pesky matter of diseases crossing over from one species to another. While, Harvard and eGenesis claim a victory in this area, it seems more work needs to be done.

Infections from pigs

An August 29, 2018 University of Alabama at Birmingham news release (also on EurekAlert) by Jeff Hansen describes the latest chapter in the quest to provide more organs for transplantation,

A shortage of organs for transplantation — including kidneys and hearts — means that many patients die while still on waiting lists. So, research at the University of Alabama at Birmingham and other sites has turned to pig organs as an alternative. [emphasis mine]

Using gene-editing, researchers have modified such organs to prevent rejection, and research with primates shows the modified pig organs are well-tolerated.

An added step is needed to ensure the safety of these inter-species transplants — sensitive, quantitative assays for viruses and other infectious microorganisms in donor pigs that potentially could gain access to humans during transplantation.

The U.S. Food and Drug Administration requires such testing, prior to implantation, of tissues used for xenotransplantation from animals to humans. It is possible — though very unlikely — that an infectious agent in transplanted tissues could become an emerging infectious disease in humans.

In a paper published in Xenotransplantation, Mark Prichard, Ph.D., and colleagues at UAB have described the development and testing of 30 quantitative assays for pig infectious agents. These assays had sensitivities similar to clinical lab assays for viral loads in human patients. After validation, the UAB team also used the assays on nine sows and 22 piglets delivered from the sows through caesarian section.

“Going forward, ensuring the safety of these organs is of paramount importance,” Prichard said. “The use of highly sensitive techniques to detect potential pathogens will help to minimize adverse events in xenotransplantation.”

“The assays hold promise as part of the screening program to identify suitable donor animals, validate and release transplantable organs for research purposes, and monitor transplant recipients,” said Prichard, a professor in the UAB Department of Pediatrics and director of the Department of Pediatrics Molecular Diagnostics Laboratory.

The UAB researchers developed quantitative polymerase chain reaction, or qPCR, assays for 28 viruses sometimes found in pigs and two groups of mycoplasmas. They established reproducibility, sensitivity, specificity and lower limit of detection for each assay. All but three showed features of good quantitative assays, and the lower limit of detection values ranged between one and 16 copies of the viral or bacterial genetic material.
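A qPCR assay of this kind is typically validated with a standard curve: a serial dilution of known copy numbers gives a straight line of cycle threshold (Ct) against log10 copies, the slope of that line gives the amplification efficiency (a slope near -3.32 corresponds to roughly 100 percent efficiency), and unknown specimens are then quantified off the fitted line. The sketch below illustrates only that general principle; the Ct values are invented and nothing here is taken from the UAB assays themselves:

```python
# Sketch of qPCR standard-curve quantitation. A dilution series of known
# copy numbers yields Ct = m * log10(copies) + b; unknowns are then read
# back off the fitted line. All Ct values below are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

log_copies = [1, 2, 3, 4, 5, 6]            # 10 up to 1,000,000 copies
ct = [33.2, 29.9, 26.6, 23.3, 20.0, 16.7]  # hypothetical Ct measurements

m, b = fit_line(log_copies, ct)
efficiency = 10 ** (-1 / m) - 1  # ~1.0 means ~100% amplification per cycle

ct_unknown = 25.0                # Ct measured for a specimen
copies = 10 ** ((ct_unknown - b) / m)
print(m, efficiency, round(copies))
```

The lower limit of detection reported for each assay (between one and 16 copies here) is established separately, by finding the smallest input copy number the assay still detects reliably across replicates.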

Also, the pig virus assays did not give false positives for some closely related human viruses.

As a start to understanding the infectious disease load in normal healthy animals and ensuring the safety of pig tissues used in xenotransplantation research, the researchers then screened blood, nasal swab and stool specimens from nine adult sows and 22 of their piglets delivered by caesarian section.

Mycoplasma species and two distinct herpesviruses were the most commonly detected microorganisms. Yet 14 piglets that were delivered from three sows infected with either or both herpesviruses were not infected with the herpesviruses, showing that transmission of these viruses from sow to the caesarian-delivery piglet was inefficient.

Prichard says the assays promise to enhance the safety of pig tissues for xenotransplantation, and they will also aid evaluation of human specimens after xenotransplantation.

The UAB researchers say they subsequently have evaluated more than 300 additional specimens, and that resulted in the detection of most of the targets. “The detection of these targets in pig specimens provides reassurance that the analytical methods are functioning as designed,” said Prichard, “and there is no a priori reason some targets might be more difficult to detect than others with the methods described here.”

As is my custom, here’s a link to and a citation for the paper,

Xenotransplantation panel for the detection of infectious agents in pigs by Caroll B. Hartline, Ra’Shun L. Conner, Scott H. James, Jennifer Potter, Edward Gray, Jose Estrada, Mathew Tector, A. Joseph Tector, Mark N. Prichard. Xenotransplantation, Volume 25, Issue 4, July/August 2018, e12427. DOI: https://doi.org/10.1111/xen.12427 First published: 18 August 2018

This paper is open access.

All this leads to questions about chimeras. If a pig incubating organs with human cells is a chimera, then the human receiving one of those organs becomes a chimera too. (For an example, see my Dec. 22, 2013 posting where there’s mention of a woman who received a trachea from a pig. Scroll down about 30% of the way.)

What is it to be human?

A question much beloved of philosophers and others, it seems particularly timely given xenotransplantation and other developments such as neuroprosthetics (cyborgs) and neuromorphic computing (brainlike computing).

As I’ve noted before, although not recently, popular culture offers a discourse on these issues. Take a look at superhero movies and the way in which enhanced humans and aliens are presented. For example, X-Men comics and movies present mutants (humans with enhanced abilities) as despised and rejected. Video games (not really my thing, but there is the Deus Ex series, which has a cyborg as its hero) also offer insight into these issues.

Other than in popular culture and the ‘bleeding edge’ arts community, I can’t recall any public discussion of these matters arising from the extraordinary set of technologies being deployed or prepared for deployment in the foreseeable future.

(If you’re in Vancouver (Canada) from September 14 – December 15, 2018, you may want to check out Piccinini’s work. Also, from my Sept. 6, 2018 posting: “NCSU [North Carolina State University] Libraries, NC State’s Genetic Engineering and Society (GES) Center, and the Gregg Museum of Art & Design have issued a public call for art for the upcoming exhibition Art’s Work in the Age of Biotechnology: Shaping our Genetic Futures.” Deadline: Oct. 1, 2018.)

At a guess, there will be pushback from people who have no interest in debating what it is to be human as they already know, and will find these developments, when they learn about them, to be horrifying and unnatural.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
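To make Price’s closed-loop description a little more concrete, here’s a toy sketch: read the glucose monitor, predict ahead, adjust the insulin dose. To be clear, this is my own illustrative simplification, not Medtronic’s or Beta Bionics’ algorithm; the target, sensitivity factor, and naive linear prediction are all invented for the example, and a real device would use far more sophisticated (and more opaque) models.

```python
# Toy closed-loop controller: read glucose, predict ahead, adjust insulin.
# All constants and the naive linear prediction are invented for this
# sketch -- not taken from any approved device.

TARGET_MG_DL = 110   # hypothetical target glucose (mg/dL)
ISF = 50             # hypothetical insulin sensitivity factor (mg/dL per unit)
HORIZON_MIN = 30     # predict 30 minutes ahead

def predict_glucose(readings, horizon_min=HORIZON_MIN):
    """Naive linear extrapolation from the last two 5-minute CGM readings."""
    slope_per_min = (readings[-1] - readings[-2]) / 5.0
    return readings[-1] + slope_per_min * horizon_min

def insulin_correction(readings):
    """Units of insulin to steer predicted glucose back to target (never negative)."""
    predicted = predict_glucose(readings)
    return max(0.0, (predicted - TARGET_MG_DL) / ISF)

# Rising glucose: 150 -> 160 mg/dL over 5 minutes, so predicted = 160 + 2*30 = 220
dose = insulin_correction([150, 160])
print(round(dose, 1))  # 2.2 units in this toy example
```

Even in this cartoon version you can see Price’s point: the dose depends entirely on the prediction, and a machine-learning model that replaces the naive extrapolation is much harder to inspect when it goes wrong.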

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic  tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does;  geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond and change to life in its entirety, whether it’s diet and nutrition or toxic or traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes with developers/entrepreneurs wanting to maintain their secrets for competitive advantage, and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
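Adamson’s point, that an algorithm trained mostly on images of fair skin will underperform on darker skin, is at root a dataset-composition problem, and it’s one that can be checked before any training happens. Here’s a minimal sketch; the counts and the Fitzpatrick-type buckets are invented for illustration and are not ISIC’s actual breakdown (to my knowledge, ISIC images aren’t comprehensively labeled by skin type, which is part of the problem).

```python
from collections import Counter

# Hypothetical label counts for a skin-lesion training set, bucketed by
# Fitzpatrick skin type (I = lightest, VI = darkest). The numbers are
# invented to illustrate the imbalance Adamson and Smith describe.
images_by_skin_type = Counter({"I-II": 18500, "III-IV": 4200, "V-VI": 300})

total = sum(images_by_skin_type.values())
for skin_type, n in images_by_skin_type.items():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{skin_type}: {n:6d} images ({share:.1%}){flag}")
```

With these made-up numbers, the darkest skin types make up barely 1% of the training data, which is exactly the kind of audit Adamson and Smith argue should happen before a model is deployed on a diverse public.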

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological, despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease, thereby providing, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Killing bacteria on contact with dragonfly-inspired nanocoating

Scientists in Singapore were inspired by dragonflies and cicadas according to a March 28, 2018 news item on Nanowerk (Note: A link has been removed),

Studies have shown that the wings of dragonflies and cicadas prevent bacterial growth due to their natural structure. The surfaces of their wings are covered in nanopillars making them look like a bed of nails. When bacteria come into contact with these surfaces, their cell membranes get ripped apart immediately and they are killed. This inspired researchers from the Institute of Bioengineering and Nanotechnology (IBN) of A*STAR to invent an anti-bacterial nano coating for disinfecting frequently touched surfaces such as door handles, tables and lift buttons.

This technology will prove particularly useful in creating bacteria-free surfaces in places like hospitals and clinics, where sterilization is important to help control the spread of infections. Their new research was recently published in the journal Small (“ZnO Nanopillar Coated Surfaces with Substrate-Dependent Superbactericidal Property”).

Image 1: Zinc oxide nanopillars that looked like a bed of nails can kill a broad range of germs when used as a coating on frequently-touched surfaces. Courtesy: A*STAR

A March 28, 2018 Agency for Science Technology and Research (A*STAR) press release, which originated the news item, describes the work further,

80% of common infections are spread by hands, according to the B.C. [province of Canada] Centre for Disease Control1. Disinfecting commonly touched surfaces helps to reduce the spread of harmful germs by our hands, but would require manual and repeated disinfection because germs grow rapidly. Current disinfectants may also contain chemicals like triclosan which are not recognized as safe and effective 2, and may lead to bacterial resistance and environmental contamination if used extensively.

“There is an urgent need for a better way to disinfect surfaces without causing bacterial resistance or harm to the environment. This will help us to prevent the transmission of infectious diseases from contact with surfaces,” said IBN Executive Director Professor Jackie Y. Ying.

To tackle this problem, a team of researchers led by IBN Group Leader Dr Yugen Zhang created a novel nano coating that can spontaneously kill bacteria upon contact. Inspired by studies on dragonflies and cicadas, the IBN scientists grew nanopillars of zinc oxide, a compound known for its anti-bacterial and non-toxic properties. The zinc oxide nanopillars can kill a broad range of germs like E. coli and S. aureus that are commonly transmitted from surface contact.

Tests on ceramic, glass, titanium and zinc surfaces showed that the coating effectively killed up to 99.9% of germs found on the surfaces. As the bacteria are killed mechanically rather than chemically, the use of the nano coating would not contribute to environmental pollution. Also, the bacteria will not be able to develop resistance as they are completely destroyed when their cell walls are pierced by the nanopillars upon contact.

Further studies revealed that the nano coating demonstrated the best bacteria killing power when it is applied on zinc surfaces, compared with other surfaces. This is because the zinc oxide nanopillars catalyzed the release of superoxides (or reactive oxygen species), which could even kill nearby free floating bacteria that were not in direct contact with the surface. This super bacteria killing power from the combination of nanopillars and zinc broadens the scope of applications of the coating beyond hard surfaces.

Subsequently, the researchers studied the effect of placing a piece of zinc that had been coated with zinc oxide nanopillars into water containing E. coli. All the bacteria were killed, suggesting that this material could potentially be used for water purification.

Dr Zhang said, “Our nano coating is designed to disinfect surfaces in a novel yet practical way. This study demonstrated that our coating can effectively kill germs on different types of surfaces, and also in water. We were also able to achieve super bacteria killing power when the coating was used on zinc surfaces because of its dual mechanism of action. We hope to use this technology to create bacteria-free surfaces in a safe, inexpensive and effective manner, especially in places where germs tend to accumulate.”

IBN has recently received a grant from the National Research Foundation, Prime Minister’s Office, Singapore, under its Competitive Research Programme to further develop this coating technology in collaboration with Tan Tock Seng Hospital for commercial application over the next 5 years.

1 B.C. Centre for Disease Control

2 U.S. Food & Drug Administration
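A quick note on that “up to 99.9%” figure in the A*STAR release: microbiologists usually report disinfection efficacy as a “log reduction,” and the conversion is simple arithmetic (this framing is mine, not the press release’s).

```python
import math

# A 99.9% kill means survivors are 10**-3 of the starting population,
# i.e., a 3-log reduction. The general conversion:
def log_reduction(kill_fraction):
    return -math.log10(1.0 - kill_fraction)

print(round(log_reduction(0.999), 2))  # 99.9% kill -> 3-log reduction
print(round(log_reduction(0.99), 2))   # 99%   kill -> 2-log reduction
```

In other words, each additional “9” in the kill percentage is another tenfold drop in surviving bacteria, which is why disinfection claims that sound nearly identical can differ by an order of magnitude.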

(I wasn’t expecting to see a reference to my home province [BC Centre for Disease Control].) Back to the usual, here’s a link to and a citation for the paper,

ZnO Nanopillar Coated Surfaces with Substrate‐Dependent Superbactericidal Property by Guangshun Yi, Yuan Yuan, Xiukai Li, Yugen Zhang. Small https://doi.org/10.1002/smll.201703159 First published: 22 February 2018

This paper is behind a paywall.

One final comment: this research reminds me of research into simulating shark skin because that too has bacteria-killing nanostructures. My latest about the sharkskin research is a Sept. 18, 2014 posting.

Sunscreens: 2018 update

I don’t usually concern myself with SPF numbers on sunscreens as my primary focus has been on the inclusion of nanoscale metal particles (these are still considered safe). However, a recent conversation with a dental hygienist, and coincidentally tripping across a June 19, 2018 posting on the blog shortly afterwards, has me reassessing my take on SPF numbers (Note: Links have been removed),

So, what’s the deal with SPF? A recent interview of Dr Steven Q Wang, M.D., chair of The Skin Cancer Foundation Photobiology Committee, finally will give us some clarity. Apparently, the SPF number, be it 15, 30, or 50, refers to the amount of UVB protection that that sunscreen provides. Rather than comparing the SPFs to each other, like we all do at the store, SPF is a reflection of the length of time it would take for the Sun’s UVB radiation to redden your skin (used exactly as directed), versus if you didn’t apply any sunscreen at all. In ideal situations (in lab settings), if you wore SPF 30, it would take 30 times longer for you to get a sunburn than if you didn’t wear any sunscreen.

What’s more, SPF 30 is not nearly half the strength of SPF 50. Rather, SPF 30 allows 3% of UVB rays to hit your skin, and SPF 50 allows about 2% of UVB rays to hit your skin. Now before you say that that is just one measly percent, it actually is much more. According to Dr Steven Q. Wang, SPF 30 allows around 1.5 times more UV radiation onto your skin than SPF 50. That’s an actual 150% difference [according to Wang’s article “… SPF 30 is allowing 50 percent more UV radiation onto your skin.”] in protection.

The author of the ‘eponymous’ blog offers a good overview of the topic in a friendly, informative fashion, albeit I found the ‘percentage’ to be a bit confusing. (S)he also provides a link to a previous posting about the ingredients in sunscreens (I do have one point of disagreement regarding oxybenzone), as well as links to Dr. Steven Q. Wang’s May 24, 2018 Ask the Expert article about sunscreens and SPF numbers on skincancer.org. You can find the percentage under the ‘What Does the SPF Number Mean?’ subsection, in the second paragraph.
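The arithmetic is easier to follow than the prose. The SPF number implies the approximate fraction of UVB that reaches the skin, roughly 1/SPF, and the comparison falls out directly (the unrounded ratio is 5/3, so about 67% more; Wang’s “50 percent more” comes from rounding 3.3% down to 3%):

```python
# SPF implies the approximate fraction of UVB reaching the skin: 1/SPF.
def uvb_transmitted(spf):
    return 1.0 / spf

t30 = uvb_transmitted(30)  # ~3.3% of UVB gets through SPF 30
t50 = uvb_transmitted(50)  # 2.0% of UVB gets through SPF 50

# How much more UVB does SPF 30 let through than SPF 50?
extra = t30 / t50 - 1.0
print(f"SPF 30 transmits {t30:.1%}, SPF 50 transmits {t50:.1%}")
print(f"SPF 30 lets through {extra:.0%} more UVB than SPF 50")
```

Either way, the takeaway is the same: the jump from SPF 30 to SPF 50 matters less at the bottle label (96.7% vs. 98% blocked) than in the transmitted dose, which is where the “one measly percent” framing misleads.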

Ingredients: metallic nanoparticles and oxybenzone

The use of metallic nanoparticles (usually zinc oxide and/or titanium dioxide) in sunscreens was loathed by civil society groups, in particular Friends of the Earth (FOE), which campaigned relentlessly against their use in sunscreens. The nadir for FOE came in February 2012, when the Australian government published a survey showing that 13% of respondents were not using any sunscreen due to their fear of nanoparticles. For those who don’t know, Australia has the highest rate of skin cancer in the world. (You can read about the debacle in my Feb. 9, 2012 posting.)

At the time, the only civil society group that supported the use of metallic nanoparticles in sunscreens was the Environmental Working Group (EWG). After examining the research, they came out, to their own surprise, (grudgingly) in favour of metallic nanoparticles. (The EWG was more concerned about the use of oxybenzone in sunscreens.)

Over time, the EWG’s perspective has been adopted by other groups to the point where sunscreens with metallic nanoparticles are commonplace in ‘natural’ or ‘organic’ sunscreens.

As for oxybenzones, a May 23, 2018 posting about sunscreen ingredients notes this (Note: Links have been removed),

Oxybenzone – Chemical sunscreen, protects from UV damage. Oxybenzone belongs to the chemical family Benzophenone, which are persistent (difficult to get rid of), bioaccumulative (builds up in your body over time), and toxic, or PBT [or: Persistent, bioaccumulative and toxic substances (PBTs)]. They are a possible carcinogen (cancer-causing agent), endocrine disrupter; however, this is debatable. Also could cause developmental and reproductive toxicity, could cause organ system toxicity, as well as could cause irritation and potentially toxic to the environment.

It seems that the tide is turning against the use of oxybenzones (from a July 3, 2018 article by Adam Bluestein for Fast Company; Note: Links have been removed),

On July 3 [2018], Hawaii’s Governor, David Ige, will sign into law the first statewide ban on the sale of sunscreens containing chemicals that scientists say are damaging the Earth’s coral reefs. Passed by state legislators on May 1 [2018], the bill targets two chemicals, oxybenzone and octinoxate, which are found in thousands of sunscreens and other skincare products. Studies published over the past 10 years have found that these UV-filtering chemicals–called benzophenones–are highly toxic to juvenile corals and other marine life and contribute to the fatal bleaching of coral reefs (along with global warming and runoff pollutants from land). (A 2008 study by European researchers estimated that 4,000 to 6,000 tons of sunblock accumulates in coral reefs every year.) Also, though both substances are FDA-approved for use in sunscreens, the nonprofit Environmental Working Group notes numerous studies linking oxybenzone to hormone disruption and cell damage that may lead to skin cancer. In its 2018 annual sunscreen guide, the EWG found oxybenzone in two-thirds of the 650 products it reviewed.

The Hawaii ban won’t take effect until January 2021, but it’s already causing a wave of disruption that’s affecting sunscreen manufacturers, retailers, and the medical community.

For starters, several other municipalities have already or could soon join Hawaii’s effort. In May [2018], the Caribbean island of Bonaire announced a ban on chemical sunscreens, and nonprofits such as the Sierra Club and Surfrider Foundation, along with dive industry and certain resort groups, are urging legislation to stop sunscreen pollution in California, Colorado, Florida, and the U.S. Virgin Islands. Marine nature reserves in Mexico already prohibit oxybenzone-containing sunscreens, and the U.S. National Park Service website for South Florida, Hawaii, U.S. Virgin Islands, and American Samoa recommends the use of “reef safe” sunscreens, which use natural mineral ingredients–zinc oxide or titanium oxide–to protect skin.

Makers of “eco,” “organic,” and “natural” sunscreens that already meet the new standards are seizing on the news from Hawaii to boost their visibility among the islands’ tourists–and to expand their footprint on the shelves of mainland retailers. This past spring, for example, Miami-based Raw Elements partnered with Hawaiian Airlines, Honolulu’s Waikiki Aquarium, the Aqua-Aston hotel group (Hawaii’s largest), and the Sheraton Maui Resort & Spa to get samples of its reef-safe zinc-oxide-based sunscreens to their guests. “These partnerships have had a tremendous impact raising awareness about this issue,” says founder and CEO Brian Guadagno, who notes that inquiries and sales have increased this year.

As Bluestein notes there are some concerns about this and other potential bans,

“Eliminating the use of sunscreen ingredients considered to be safe and effective by the FDA with a long history of use not only restricts consumer choice, but is also at odds with skin cancer prevention efforts […],” says Bayer, owner of the Coppertone brand, in a statement to Fast Company. Bayer disputes the validity of studies used to support the ban, which were published by scientists from U.S. National Oceanic & Atmospheric Administration, the nonprofit Haereticus Environmental Laboratory, Tel Aviv University, the University of Hawaii, and elsewhere. “Oxybenzone in sunscreen has not been scientifically proven to have an effect on the environment. We take this issue seriously and, along with the industry, have supported additional research to confirm that there is no effect.”

Johnson & Johnson, which markets Neutrogena sunscreens, is taking a similar stance, worrying that “the recent efforts in Hawaii to ban sunscreens that contain oxybenzone may actually adversely affect public health,” according to a company spokesperson. “Science shows that sunscreens are a key factor in preventing skin cancer, and our scientific assessment of the lab studies done to date in Hawaii show the methods were questionable and the data insufficient to draw factual conclusions about any impact on coral reefs.”

Terrified (and rightly so) about anything scaring people away from using sunblock, the American Academy of Dermatology also opposes Hawaii’s ban. Suzanne M. Olbricht, president of the AADA, has issued a statement that the organization “is concerned that the public’s risk of developing skin cancer could increase due to potential new restrictions in Hawaii that impact access to sunscreens with ingredients necessary for broad-spectrum protection, as well as the potential stigma around sunscreen use that could develop as a result of these restrictions.”

The fact is that there are currently a large number of widely available reef-safe products on the market that provide “full spectrum” protection up to SPF50–meaning they protect against both UVB rays that cause sunburns as well as UVA radiation, which causes deeper skin damage. SPFs higher than 50 are largely a marketing gimmick, say advocates of chemical-free products: According to the Environmental Working Group, properly applied SPF 50 sunscreen blocks 98% of UVB rays; SPF 100 blocks 99%. And a sunscreen lotion’s SPF rating has little to do with its ability to shield skin from UVA rays.
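The EWG figures quoted above follow directly from what SPF measures: a properly applied sunscreen with SPF n transmits roughly 1/n of incident UVB, so the blocked fraction is 1 - 1/n. A minimal sketch of that arithmetic (the function name is mine, not from any cited source):

```python
def uvb_blocked(spf: float) -> float:
    """Fraction of UVB blocked by a properly applied sunscreen of the given SPF."""
    return 1.0 - 1.0 / spf

# SPF 50 already blocks 98% of UVB; doubling the number to 100 adds one point.
print(f"SPF 50:  {uvb_blocked(50):.0%}")   # 98%
print(f"SPF 100: {uvb_blocked(100):.0%}")  # 99%
```

This is why the gap between SPF 50 and SPF 100 is far smaller than the labels suggest, and the number says nothing at all about UVA protection.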

I notice that neither Bayer nor Johnson & Johnson nor the American Academy of Dermatology makes mention of oxybenzone’s possible role as a hormone disruptor.

Given the importance that coral reefs have to the environment we all share, I’m inclined to support the oxybenzone ban based on that alone. Of course, it’s conceivable that metallic nanoparticles may also have a deleterious effect on coral reefs as their use increases. It’s to be hoped that’s not the case but if it is, then I’ll make my decisions accordingly and hope we have a viable alternative.

As for your sunscreen questions and needs, the Environmental Working Group (EWG) has extensive information including a product guide on this page (scroll down to EWG’s Sunscreen Guide) and a discussion of ‘high’ SPF numbers I found useful for my decision-making.

Getting chipped

A January 23, 2018 article by John Converse Townsend for Fast Company highlights the author’s experience of ‘getting chipped’ in Wisconsin (US),

I have an RFID, or radio frequency ID, microchip implanted in my hand. Now with a wave, I can unlock doors, fire off texts, login to my computer, and even make credit card payments.

There are others like me: The majority of employees at the Wisconsin tech company Three Square Market (or 32M) have RFID implants, too. Last summer, with the help of Andy “Gonzo” Whitehead, a local body piercer with 17 years of experience, the company hosted a “chipping party” for employees who’d volunteered to test the technology in the workplace.

“We first presented the concept of being chipped to the employees, thinking we might get a few people interested,” CEO [Chief Executive Officer] Todd Westby, who has implants in both hands, told me. “Literally out of the box, we had 40 people out of close to 90 that were here that said, within 10 minutes, ‘I would like to be chipped.’”

Westby’s left hand can get him into the office, make phone calls, and store his living will and driver’s license information, while the chip in his right hand is used for testing new applications. (The CEO’s entire family is chipped, too.) Other employees said they have bitcoin wallets and photos stored on their devices.

The legendary Gonzo Whitehead was waiting for me when I arrived at Three Square Market HQ, located in quiet River Falls, 40 minutes east of Minneapolis. The minutes leading up to the big moment were a bit nervy, after seeing the size of the needle (it’s huge), but the experience was easier than I could have imagined. The RFID chip is the size of a grain of basmati rice, but the pain wasn’t so bad–comparable to a bee sting, and maybe less so. I experienced a bit of bruising afterward (no bleeding), and today the last remaining mark of trauma is a tiny, fading scar between my thumb and index finger. Unless you were looking for it, the chip resting under my skin is invisible.

Truth is, the applications for RFID implants are pretty cool. But right now, they’re also limited. Without a near-field communication (NFC) writer/reader, which powers on a “passive” RFID chip to write and read information to the device’s memory, an implant isn’t of much use. But that’s mostly a hardware issue. As NFC technology becomes available, which is increasingly everywhere thanks to Samsung Pay and Apple Pay and new contactless “tap-and-go” credit cards, the possibilities become limitless. [emphasis mine]
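For context on what a “passive” chip actually stores: most NFC-readable tags exchange data as NDEF (NFC Data Exchange Format) records, a simple length-prefixed binary layout defined in the public NFC Forum specification. A minimal sketch of building one text record follows; it is purely illustrative and not tied to Three Square Market’s implants:

```python
def ndef_text_record(text: str, lang: str = "en") -> bytes:
    """Encode one short NDEF text record.

    Header byte 0xD1 sets the MB (message begin), ME (message end) and
    SR (short record) flags with TNF 0x01 (well-known type); type 'T'
    marks a text record; the payload opens with a status byte holding
    the language-code length.
    """
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    return bytes([0xD1, 0x01, len(payload)]) + b"T" + payload

record = ndef_text_record("hello")
assert record[:4] == bytes([0xD1, 0x01, 0x08]) + b"T"  # header, lengths, type
```

An NFC writer does little more than push bytes like these into the tag’s memory; the “limitless possibilities” Townsend describes are mostly a question of what the readers around us choose to do with them.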

Health and privacy?

Townsend does cover a few possible downsides to the ‘limitless possibilities’ offered by RFID’s combined with NFC technology,

From a health perspective, the RFID implants are biologically safe–not so different from birth control implants [emphasis mine]. [US Food and Drug Administration] FDA-sanctioned for use in humans since 2004, the chips neither trigger metal detectors nor disrupt [magnetic resonance imaging] MRIs, and their glass casings hold up to pressure testing, whether that’s being dropped from a rooftop or being run over by a pickup truck.

The privacy side of things is a bit more complicated, but the undeniable reality is that privacy isn’t as prized as we’d like to think [emphasis mine]. It’s already a regular concession to convenience.

“Your information’s for sale every day,” McMullen [Patrick McMullen, president, Three Square Market] says. “Thirty-four billion avenues exist for your information to travel down every single day, whether you’re checking Facebook, checking out at the supermarket, driving your car . . . your information’s everywhere.”

Townsend may not be fully up-to-date on the subject of birth control implants. I think ‘safeish’ might be a better description in light of this news from almost two years ago (from a March 1, 2016 news item on CBS [Columbia Broadcasting System] News [online]), Note: Links have been removed,

[US] Federal health regulators plan to warn consumers more strongly about Essure, a contraceptive implant that has drawn thousands of complaints from women reporting chronic pain, bleeding and other health problems.

The Food and Drug Administration announced Monday it would add a boxed warning — its most serious type — to alert doctors and patients to problems reported with the nickel-titanium implant.

But the FDA stopped short of removing the device from the market, a step favored by many women who have petitioned the agency in the last year. Instead, the agency is requiring manufacturer Bayer to conduct studies of the device to further assess its risks in different groups of women.

The FDA is requiring Bayer to conduct a study of 2,000 patients comparing problems like unplanned pregnancy and pelvic pain between patients getting Essure and those receiving traditional “tube tying” surgery. Agency officials said they have reviewed more than 600 reports of women becoming pregnant after receiving Essure. Women are supposed to get a test after three months to make sure Essure is working appropriately, but the agency noted some women do not follow-up for the test.

FDA officials acknowledged the proposed study would take years to complete, but said Bayer would be expected to submit interim results by mid-2017.

According to a Sept. 25, 2017 article by Kerri O’Brien for WRIC.com, Bayer had suspended sales of their device in all countries except the US,

Bayer, the manufacturer of Essure, has announced it’s halting sales of Essure in all countries outside of the U.S. In a statement, Bayer told 8News it’s due to a lack of interest in the product outside of the U.S.

“Bayer made a commercial decision this Spring to discontinue the distribution of Essure® outside of the U.S. where there is not as much patient interest in permanent birth control,” the statement read.

The move also comes after the European Union suspended sales of the device. The suspension was prompted by the National Standards Authority of Ireland declining to renew Essure’s CE marking. “CE,” according to the European Commission website, signifies products sold in the EEA that have been assessed to meet “high safety, health, and environmental protection requirements.”

These excerpts are about the Essure birth control implant. Perhaps others are safer? That noted, it does seem that Townsend was a bit dismissive of safety concerns.

As for privacy, he does investigate further to discover this,

As technology evolves and becomes more sophisticated, the methods to break it also evolve and get more sophisticated, says D.C.-based privacy expert Michelle De Mooy. Even so, McMullen believes that our personal information is safer in our hand than in our wallets. He says the smartphone you touch 2,500 times a day does 100 times more reporting of data than does an RFID implant, plus the chip can save you from pickpockets and avoid credit card skimmers altogether.

Well, the first sentence suggests some caution. As for De Mooy, there’s this from her profile page on the Center for Democracy and Technology website (Note: A link has been removed),

Michelle De Mooy is Director of the Privacy & Data Project at the Center for Democracy & Technology. She advocates for data privacy rights and protections in legislation and regulation, works closely with industry and other stakeholders to investigate good data practices and controls, as well as identifying and researching emerging technology that impacts personal privacy. She leads CDT’s health privacy work, chairing the Health Privacy Working Group and focusing on the intersection between individual privacy, health information and technology. Michelle’s current research is focused on ethical and privacy-aware internal research and development in wearables, the application of data analytics to health information found on non-traditional platforms, like social media, and the growing market for genetic data. She has testified before Congress on health policy, spoken about native advertising at the Federal Trade Commission, and written about employee wellness programs for US News & World Report’s “Policy Dose” blog. Michelle is a frequent media contributor, appearing in the New York Times, the Guardian, the Wall Street Journal, Vice, and the Los Angeles Times, as well as on The Today Show, Voice of America, and Government Matters TV programs.

Ethics anyone?

Townsend does raise some ethical issues (Note: A link has been removed),

… Word from CEO Todd Westby is that parents in Wisconsin have been asking whether (and when) they can have their children implanted with GPS-enabled devices (which, incidentally, is the subject of the “Arkangel” episode in the new season of Black Mirror [US television programme]). But that, of course, raises ethical questions: What if a kid refused to be chipped? What if they never knew?

Final comments on implanted RFID chips and bodyhacking

It doesn’t seem that implantable chips have changed much since I first wrote about them in a May 27, 2010 posting titled: Researcher infects self with virus. In that instance, Dr. Mark Gasson, a researcher at the University of Reading, introduced a virus into a computer chip implanted in his body.

Of course, since 2010 additional implantable items, computer chips among them, have been making their way into our bodies, yet there doesn’t seem to be much public discussion (other than in popular culture) about the implications.

Presumably, there are policy makers tracking these developments. I have to wonder if the technology gurus will continue to tout these technologies as already here or having made such inroads that we (the public) are presented with a fait accompli with the policy makers following behind.

3D bioprinting: a conference about the latest trends (May 3 – 5, 2017 at the University of British Columbia, Vancouver)

The University of British Columbia’s (UBC) Peter Wall Institute for Advanced Studies (PWIAS) is hosting, along with local biotech firm Aspect Biosystems, a May 3-5, 2017 international research roundtable known as ‘Printing the Future of Therapeutics in 3D’.

A May 1, 2017 UBC news release (received via email) offers some insight into the field of bioprinting from one of the roundtable organizers,

This week, global experts will gather at the University of British Columbia to discuss the latest trends in 3D bioprinting—a technology used to create living tissues and organs.

In this Q&A, UBC chemical and biological engineering professor Vikramaditya Yadav, who is also with the Regenerative Medicine Cluster Initiative in B.C., explains how bioprinting could potentially transform healthcare and drug development, and highlights Canadian innovations in this field.

WHY IS 3D BIOPRINTING SIGNIFICANT?

Bioprinted tissues or organs could allow scientists to predict beforehand how a drug will interact within the body. For every life-saving therapeutic drug that makes its way into our medicine cabinets, Health Canada blocks the entry of nine drugs because they are proven unsafe or ineffective. Eliminating poor-quality drug candidates to reduce development costs—and therefore the cost to consumers—has never been more urgent.

In Canada alone, nearly 4,500 individuals are waiting to be matched with organ donors. If and when bioprinters evolve to the point where they can manufacture implantable organs, the concept of an organ transplant waiting list would cease to exist. And bioprinted tissues and organs from a patient’s own healthy cells could potentially reduce the risk of transplant rejection and related challenges.

HOW IS THIS TECHNOLOGY CURRENTLY BEING USED?

Skin, cartilage and bone, and blood vessels are some of the tissue types that have been successfully constructed using bioprinting. Two of the most active players are the Wake Forest Institute for Regenerative Medicine in North Carolina, which reports that its bioprinters can make enough replacement skin to cover a burn with 10 times less healthy tissue than is usually needed, and California-based Organovo, which makes its kidney and liver tissue commercially available to pharmaceutical companies for drug testing.

Beyond medicine, bioprinting has already been commercialized to print meat and artificial leather. It’s been estimated that the global bioprinting market will hit $2 billion by 2021.

HOW IS CANADA INVOLVED IN THIS FIELD?

Canada is home to some of the most innovative research clusters and start-up companies in the field. The UBC spin-off Aspect Biosystems has pioneered a bioprinting paradigm that rapidly prints on-demand tissues. It has successfully generated tissues found in human lungs.

Many initiatives at Canadian universities are laying strong foundations for the translation of bioprinting and tissue engineering into mainstream medical technologies. These include the Regenerative Medicine Cluster Initiative in B.C., which is headed by UBC, and the University of Toronto’s Institute of Biomaterials and Biomedical Engineering.

WHAT ETHICAL ISSUES DOES BIOPRINTING CREATE?

There are concerns about the quality of the printed tissues. It’s important to note that the U.S. Food and Drug Administration and Health Canada are dedicating entire divisions to regulation of biomanufactured products and biomedical devices, and the FDA also has a special division that focuses on regulation of additive manufacturing – another name for 3D printing.

These regulatory bodies have an impressive track record that should assuage concerns about the marketing of substandard tissue. But cost and pricing are arguably much more complex issues.

Some ethicists have also raised questions about whether society is not too far away from creating Replicants, à la _Blade Runner_. The idea is fascinating, scary and ethically grey. In theory, if one could replace the extracellular matrix of bones and muscles with a stronger substitute and use cells that are viable for longer, it is not too far-fetched to create bones or muscles that are stronger and more durable than their natural counterparts.

WILL DOCTORS BE PRINTING REPLACEMENT BODY PARTS IN 20 YEARS’ TIME?

This is still some way off. Optimistically, patients could see the technology in certain clinical environments within the next decade. However, some technical challenges must be addressed in order for this to occur, beginning with faithful replication of the correct 3D architecture and vascularity of tissues and organs. The bioprinters themselves need to be improved in order to increase cell viability after printing.

These developments are happening as we speak. Regulation, though, will be the biggest challenge for the field in the coming years.

There are some events open to the public (from the international research roundtable homepage),

OPEN EVENTS

You’re invited to attend the open events associated with Printing the Future of Therapeutics in 3D.

Café Scientifique

Thursday, May 4, 2017
Telus World of Science
5:30 – 8:00pm [all tickets have been claimed as of May 2, 2017 at 3:15 pm PT]

3D Bioprinting: Shaping the Future of Health

Imagine a world where drugs are developed without the use of animals, where doctors know how a patient will react to a drug before prescribing it and where patients can have a replacement organ 3D-printed using their own cells, without dealing with long donor waiting lists or organ rejection. 3D bioprinting could enable this world. Join us for lively discussion and dessert as experts in the field discuss the exciting potential of 3D bioprinting and the ethical issues raised when you can print human tissues on demand. This is also a rare opportunity to see a bioprinter live in action!

Open Session

Friday, May 5, 2017
Peter Wall Institute for Advanced Studies
2:00 – 7:00pm

A Scientific Discussion on the Promise of 3D Bioprinting

The medical industry is struggling to keep our ageing population healthy. Developing effective and safe drugs is too expensive and time-consuming to continue unchanged. We cannot meet the current demand for transplant organs, and people are dying on the donor waiting list every day.  We invite you to join an open session where four of the most influential academic and industry professionals in the field discuss how 3D bioprinting is being used to shape the future of health and what ethical challenges may be involved if you are able to print your own organs.

ROUNDTABLE INFORMATION

The University of British Columbia and the award-winning bioprinting company Aspect Biosystems are proud to be co-organizing the first “Printing the Future of Therapeutics in 3D” International Research Roundtable. This event will congregate global leaders in tissue engineering research and pharmaceutical industry experts to discuss the rapidly emerging and potentially game-changing technology of 3D-printing living human tissues (bioprinting). The goals are to:

Highlight the state-of-the-art in 3D bioprinting research
Ideate on disruptive innovations that will transform bioprinting from a novel research tool to a broadly adopted systematic practice
Formulate an actionable strategy for industry engagement, clinical translation and societal impact
Present in a public forum, key messages to educate and stimulate discussion on the promises of bioprinting technology

The Roundtable will bring together a unique collection of industry experts and academic leaders to define a guiding vision to efficiently deploy bioprinting technology for the discovery and development of new therapeutics. As the novel technology of 3D bioprinting is more broadly adopted, we envision this Roundtable will become a key annual meeting to help guide the development of the technology both in Canada and globally.

We thank you for your involvement in this ground-breaking event and look forward to you all joining us in Vancouver for this unique research roundtable.

Kind Regards,
The Organizing Committee
Christian Naus, Professor, Cellular & Physiological Sciences, UBC
Vikram Yadav, Assistant Professor, Chemical & Biological Engineering, UBC
Tamer Mohamed, CEO, Aspect Biosystems
Sam Wadsworth, CSO, Aspect Biosystems
Natalie Korenic, Business Coordinator, Aspect Biosystems

I’m glad to see this event is taking place—and with public events too! (Wish I’d seen the Café Scientifique announcement earlier when I first checked for tickets yesterday. I was hoping there’d been some cancellations today.) Finally, for the interested, you can find Aspect Biosystems here.

New iron oxide nanoparticle as an MRI (magnetic resonance imaging) contrast agent

This high-resolution transmission electron micrograph of particles made by the research team shows the particles’ highly uniform size and shape. These are iron oxide particles just 3 nanometers across, coated with a zwitterion layer. Their small size means they can easily be cleared through the kidneys after injection. Courtesy of the researchers

A Feb. 14, 2017 news item on ScienceDaily announces a new MRI (magnetic resonance imaging) contrast agent,

A new, specially coated iron oxide nanoparticle developed by a team at MIT [Massachusetts Institute of Technology] and elsewhere could provide an alternative to conventional gadolinium-based contrast agents used for magnetic resonance imaging (MRI) procedures. In rare cases, the currently used gadolinium agents have been found to produce adverse effects in patients with impaired kidney function.

A Feb. 14, 2017 MIT news release (also on EurekAlert), which originated the news item, provides more technical detail,

The advent of MRI technology, which is used to observe details of specific organs or blood vessels, has been an enormous boon to medical diagnostics over the last few decades. About a third of the 60 million MRI procedures done annually worldwide use contrast-enhancing agents, mostly containing the element gadolinium. While these contrast agents have mostly proven safe over many years of use, some rare but significant side effects have shown up in a very small subset of patients. There may soon be a safer substitute thanks to this new research.

In place of gadolinium-based contrast agents, the researchers have found that they can produce similar MRI contrast with tiny nanoparticles of iron oxide that have been treated with a zwitterion coating. (Zwitterions are molecules that have areas of both positive and negative electrical charges, which cancel out to make them neutral overall.) The findings are being published this week in the Proceedings of the National Academy of Sciences, in a paper by Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT; He Wei, an MIT postdoc; Oliver Bruns, an MIT research scientist; Michael Kaul at the University Medical Center Hamburg-Eppendorf in Germany; and 15 others.

Contrast agents, injected into the patient during an MRI procedure and designed to be quickly cleared from the body by the kidneys afterwards, are needed to make fine details of organ structures, blood vessels, and other specific tissues clearly visible in the images. Some agents produce dark areas in the resulting image, while others produce light areas. The primary agents for producing light areas contain gadolinium.

Iron oxide particles have been largely used as negative (dark) contrast agents, but radiologists vastly prefer positive (light) contrast agents such as gadolinium-based agents, as negative contrast can sometimes be difficult to distinguish from certain imaging artifacts and internal bleeding. But while the gadolinium-based agents have become the standard, evidence shows that in some very rare cases they can lead to an untreatable condition called nephrogenic systemic fibrosis, which can be fatal. In addition, evidence now shows that the gadolinium can build up in the brain, and although no effects of this buildup have yet been demonstrated, the FDA is investigating it for potential harm.
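The light/dark split comes down to relaxivity: a contrast agent adds to a tissue's intrinsic relaxation rate as 1/T1 = 1/T1_0 + r1*C (and analogously for T2). Agents whose r1 term dominates brighten T1-weighted images; strongly r2-dominant agents, like conventional iron oxides, darken T2-weighted ones. A quick sketch of that standard formula with illustrative numbers (placeholders, not data from the MIT paper):

```python
def relaxation_rate(baseline_T: float, relaxivity: float, conc_mM: float) -> float:
    """Observed relaxation rate in 1/s: R = 1/T0 + r * C."""
    return 1.0 / baseline_T + relaxivity * conc_mM

# Placeholder values: tissue T1 of 1.0 s, agent r1 of 4 per mM per s, 0.5 mM agent
R1 = relaxation_rate(1.0, 4.0, 0.5)
print(f"T1 shortens from 1.00 s to {1.0 / R1:.2f} s")  # 0.33 s
```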

“Over the last decade, more and more side effects have come to light” from the gadolinium agents, Bruns says, so that led the research team to search for alternatives. “None of these issues exist for iron oxide,” at least none that have yet been detected, he says.

The key new finding by this team was to combine two existing techniques: making very tiny particles of iron oxide, and attaching certain molecules (called surface ligands) to the outsides of these particles to optimize their characteristics. The iron oxide inorganic core is small enough to produce a pronounced positive contrast in MRI, and the zwitterionic surface ligand, which was recently developed by Wei and coworkers in the Bawendi research group, makes the iron oxide particles water-soluble, compact, and biocompatible.

The combination of a very tiny iron oxide core and an ultrathin ligand shell leads to a total hydrodynamic diameter of 4.7 nanometers, below the 5.5-nanometer renal clearance threshold. This means that the coated iron oxide should quickly clear through the kidneys and not accumulate. This renal clearance property is an important feature where the particles perform comparably to gadolinium-based contrast agents.
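The size bookkeeping in that paragraph is simple addition: the hydrodynamic diameter is the inorganic core plus one ligand-shell thickness on each side, compared against the roughly 5.5-nanometer renal filtration threshold. A sketch (the 0.85 nm shell value below is inferred from the figures quoted, not stated in the release):

```python
RENAL_CLEARANCE_NM = 5.5  # approximate renal filtration threshold

def hydrodynamic_diameter(core_nm: float, shell_nm: float) -> float:
    """Core diameter plus one ligand-shell thickness on each side."""
    return core_nm + 2 * shell_nm

d = hydrodynamic_diameter(core_nm=3.0, shell_nm=0.85)  # shell thickness inferred
print(f"{d:.1f} nm, clears kidneys: {d < RENAL_CLEARANCE_NM}")  # 4.7 nm, clears kidneys: True
```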

Now that initial tests have demonstrated the particles’ effectiveness as contrast agents, Wei and Bruns say the next step will be to do further toxicology testing to show the particles’ safety, and to continue to improve the characteristics of the material. “It’s not perfect. We have more work to do,” Bruns says. But because iron oxide has been used for so long and in so many ways, even as an iron supplement, any negative effects could likely be treated by well-established protocols, the researchers say. If all goes well, the team is considering setting up a startup company to bring the material to production.

For some patients who are currently excluded from getting MRIs because of potential side effects of gadolinium, the new agents “could allow those patients to be eligible again” for the procedure, Bruns says. And, if it does turn out that the accumulation of gadolinium in the brain has negative effects, an overall phase-out of gadolinium for such uses could be needed. “If that turned out to be the case, this could potentially be a complete replacement,” he says.

Ralph Weissleder, a physician at Massachusetts General Hospital who was not involved in this work, says, “The work is of high interest, given the limitations of gadolinium-based contrast agents, which typically have short vascular half-lives and may be contraindicated in renally compromised patients.”

The research team included researchers in MIT’s chemistry, biological engineering, nuclear science and engineering, brain and cognitive sciences, and materials science and engineering departments and its program in Health Sciences and Technology; and at the University Medical Center Hamburg-Eppendorf; Brown University; and the Massachusetts General Hospital. It was supported by the MIT-Harvard NIH Center for Cancer Nanotechnology, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the NIH-funded Laser Biomedical Research Center, the MIT Deshpande Center, and the European Union Seventh Framework Program.

Here’s a link to and a citation for the paper,

Exceedingly small iron oxide nanoparticles as positive MRI contrast agents by He Wei, Oliver T. Bruns, Michael G. Kaul, Eric C. Hansen, Mariya Barch, Agata Wiśniowska, Ou Chen, Yue Chen, Nan Li, Satoshi Okada, Jose M. Cordero, Markus Heine, Christian T. Farrar, Daniel M. Montana, Gerhard Adam, Harald Ittrich, Alan Jasanoff, Peter Nielsen, and Moungi G. Bawendi. PNAS, published online before print February 13, 2017. doi: 10.1073/pnas.1620145114

This paper is behind a paywall.

Nanoparticles in baby formula

Needle-like particles of hydroxyapatite found in infant formula by ASU [Arizona State University] researchers. Westerhoff and Schoepf/ASU, CC BY-ND

Nanowerk is featuring an essay about hydroxyapatite nanoparticles in baby formula written by Dr. Andrew Maynard in a May 17, 2016 news item (Note: A link has been removed),

There’s a lot of stuff you’d expect to find in baby formula: proteins, carbs, vitamins, essential minerals. But parents probably wouldn’t anticipate finding extremely small, needle-like particles. Yet this is exactly what a team of scientists here at Arizona State University [ASU] recently discovered.

The research, commissioned and published by Friends of the Earth (FoE) – an environmental advocacy group – analyzed six commonly available off-the-shelf baby formulas (liquid and powder) and found nanometer-scale needle-like particles in three of them. The particles were made of hydroxyapatite – a poorly soluble calcium-rich mineral. Manufacturers use it to regulate acidity in some foods, and it’s also available as a dietary supplement.

Andrew’s May 17, 2016 essay first appeared on The Conversation website,

Looking at these particles at super-high magnification, it’s hard not to feel a little anxious about feeding them to a baby. They appear sharp and dangerous – not the sort of thing that has any place around infants. …

… questions like “should infants be ingesting them?” make a lot of sense. However, as is so often the case, the answers are not quite so straightforward.

Andrew begins by explaining about calcium and hydroxyapatite (from The Conversation),

Calcium is an essential part of a growing infant’s diet, and is a legally required component in formula. But not necessarily in the form of hydroxyapatite nanoparticles.

Hydroxyapatite is a tough, durable mineral. It’s naturally made in our bodies as an essential part of bones and teeth – it’s what makes them so strong. So it’s tempting to assume the substance is safe to eat. But just because our bones and teeth are made of the mineral doesn’t automatically make it safe to ingest outright.

The issue here is what the hydroxyapatite in formula might do before it’s digested, dissolved and reconstituted inside babies’ bodies. The size and shape of the particles ingested has a lot to do with how they behave within a living system.

He then discusses size and shape, which are important at the nanoscale,

Size and shape can make a difference between safe and unsafe when it comes to particles in our food. Small particles aren’t necessarily bad. But they can potentially get to parts of our body that larger ones can’t reach. Think through the gut wall, into the bloodstream, and into organs and cells. Ingested nanoscale particles may be able to interfere with cells – even beneficial gut microbes – in ways that larger particles don’t.

These possibilities don’t necessarily make nanoparticles harmful. Our bodies are pretty well adapted to handling naturally occurring nanoscale particles – you probably ate some last time you had burnt toast (carbon nanoparticles), or poorly washed vegetables (clay nanoparticles from the soil). And of course, how much of a material we’re exposed to is at least as important as how potentially hazardous it is.

Yet there’s a lot we still don’t know about the safety of intentionally engineered nanoparticles in food. Toxicologists have started paying close attention to such particles, just in case their tiny size makes them more harmful than otherwise expected.

Currently, hydroxyapatite is considered safe at the macroscale by the US Food and Drug Administration (FDA). However, the agency has indicated that nanoscale versions of safe materials such as hydroxyapatite may not be safe food additives. From Andrew’s May 17, 2016 essay,

Putting particle size to one side for a moment, hydroxyapatite is classified by the US Food and Drug Administration (FDA) as “Generally Regarded As Safe.” That means it considers the material safe for use in food products – at least in a non-nano form. However, the agency has raised concerns that nanoscale versions of food ingredients may not be as safe as their larger counterparts.

Some manufacturers may be interested in the potential benefits of “nanosizing” – such as increasing the uptake of vitamins and minerals, or altering the physical, textural and sensory properties of foods. But because decreasing particle size may also affect product safety, the FDA indicates that intentionally nanosizing already regulated food ingredients could require regulatory reevaluation.

In other words, even though non-nanoscale hydroxyapatite is “Generally Regarded As Safe,” according to the FDA, the safety of any nanoscale form of the substance would need to be reevaluated before being added to food products.

Despite this size-safety relationship, the FDA confirmed to me that the agency is unaware of any food substance intentionally engineered at the nanoscale that has enough generally available safety data to determine it should be “Generally Regarded As Safe.”

Casting further uncertainty on the use of nanoscale hydroxyapatite in food, a 2015 report from the European Scientific Committee on Consumer Safety (SCCS) suggests there may be some cause for concern when it comes to this particular nanomaterial.

Prompted by the use of nanoscale hydroxyapatite in dental products to strengthen teeth (which they consider “cosmetic products”), the SCCS reviewed published research on the material’s potential to cause harm. Their conclusion?

The available information indicates that nano-hydroxyapatite in needle-shaped form is of concern in relation to potential toxicity. Therefore, needle-shaped nano-hydroxyapatite should not be used in cosmetic products.

This recommendation was based on a handful of studies, none of which involved exposing people to the substance. In some, researchers injected hydroxyapatite needles directly into the bloodstream of rats; in others, they exposed cells outside the body to the material and observed the effects. In each case, there were tantalizing hints that the small particles interfered in some way with normal biological functions. But the results were insufficient to indicate whether the effects were meaningful in people.

As Andrew also notes in his essay, none of the studies examined by the SCCS (European Scientific Committee on Consumer Safety) looked at what happens to nano-hydroxyapatite once it enters your gut, which is what the researchers at Arizona State University were investigating (from the May 17, 2016 essay),

The good news is that, according to preliminary studies from ASU researchers, hydroxyapatite needles don’t last long in the digestive system.

This research is still being reviewed for publication. But early indications are that as soon as the needle-like nanoparticles hit the highly acidic fluid in the stomach, they begin to dissolve. So fast in fact, that by the time they leave the stomach – an exceedingly hostile environment – they are no longer the nanoparticles they started out as.

These findings make sense since we know hydroxyapatite dissolves in acids, and small particles typically dissolve faster than larger ones. So maybe nanoscale hydroxyapatite needles in food are safer than they sound.
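The size–dissolution relationship mentioned above comes down to geometry: dissolution rate is roughly proportional to exposed surface area, and a particle’s surface-area-to-volume ratio grows as its size shrinks. A minimal back-of-the-envelope sketch (my own illustration, not from the essay or the ASU study; it assumes idealized spherical particles, whereas real needle-shaped particles expose even more surface per unit mass):

```python
def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere, in m^-1.

    (4*pi*r^2) / ((4/3)*pi*r^3) simplifies to 3/r, so the ratio
    scales inversely with particle size.
    """
    return 3.0 / radius_m

# Hypothetical comparison: a ~100 nm nanoparticle vs. a ~10 um
# food-grade grain of the same mineral.
nano = surface_to_volume_ratio(50e-9)  # 100 nm diameter -> 50 nm radius
bulk = surface_to_volume_ratio(5e-6)   # 10 um diameter  -> 5 um radius

print(f"nanoparticle: {nano:.2e} m^-1")
print(f"bulk grain  : {bulk:.2e} m^-1")
print(f"ratio       : {nano / bulk:.0f}x more surface per unit volume")
```

Under these assumptions, the 100 nm particle exposes about 100 times more surface per unit volume to stomach acid than the 10 µm grain, which is consistent with the essay’s point that the nano-needles dissolve quickly in the stomach.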

This doesn’t mean that the nano-needles are completely off the hook, as some of them may get past the stomach intact and reach more vulnerable parts of the gut. But the findings do suggest these ultra-small needle-like particles could be an effective source of dietary calcium – possibly more so than larger or less needle-like particles that may not dissolve as quickly.

Intriguingly, recent research has indicated that calcium phosphate nanoparticles form naturally in our stomachs and go on to be an important part of our immune system. It’s possible that rapidly dissolving hydroxyapatite nano-needles are actually a boon, providing raw material for these natural and essential nanoparticles.

While it’s comforting that preliminary research suggests the hydroxyapatite nanoparticles are likely safe for use in food products, Andrew points out that more needs to be done to ensure safety (from the May 17, 2016 essay),

And yet, even if these needle-like hydroxyapatite nanoparticles in infant formula are ultimately a good thing, the FoE report raises a number of unresolved questions. Did the manufacturers knowingly add the nanoparticles to their products? How are they and the FDA ensuring the products’ safety? Do consumers have a right to know when they’re feeding their babies nanoparticles?

Whether the manufacturers knowingly added these particles to their formula is not clear. At this point, it’s not even clear why they might have been added, as hydroxyapatite does not appear to be a substantial source of calcium in most formula. …

And regardless of the benefits and risks of nanoparticles in infant formula, parents have a right to know what’s in the products they’re feeding their children. In Europe, food ingredients must be legally labeled if they are nanoscale. In the U.S., there is no such requirement, leaving American parents to feel somewhat left in the dark by producers, the FDA and policy makers.

As far as I’m aware, the Canadian situation is much the same as in the US: if a material is considered safe at the macroscale, there is no requirement to indicate that a nanoscale version of it is in the product.

I encourage you to read Andrew’s essay in its entirety. As for the FoE report (Nanoparticles in baby formula: Tiny new ingredients are a big concern), that is here.