Tag Archives: China

Iridescent giant clams could point the way to safety, climatologically speaking

Giant clams in Palau (Cynthia Barnett)

These don’t look like any clams I’ve ever seen, but that is the point of Cynthia Barnett’s absorbing Sept. 10, 2018 article for The Atlantic (Note: A link has been removed),

Snorkeling amid the tree-tangled rock islands of Ngermid Bay in the western Pacific nation of Palau, Alison Sweeney lingers at a plunging coral ledge, photographing every giant clam she sees along a 50-meter transect. In Palau, as in few other places in the world, this means she is going to be underwater for a skin-wrinkling long time.

At least the clams are making it easy for Sweeney, a biophysicist at the University of Pennsylvania. The animals plump from their shells like painted lips, shimmering in blues, purples, greens, golds, and even electric browns. The largest are a foot across and radiate from the sea floor, but most are the smallest of the giant clams, five-inch Tridacna crocea, living higher up on the reef. Their fleshy Technicolor smiles beam in all directions from the corals and rocks of Ngermid Bay.

… Some of the corals are bleached from the conditions in Ngermid Bay, where naturally high temperatures and acidity mirror the expected effects of climate change on the global oceans. (Ngermid Bay is more commonly known as “Nikko Bay,” but traditional leaders and government officials are working to revive the indigenous name of Ngermid.)

Even those clams living on bleached corals are pulsing color, like wildflowers in a white-hot desert. Sweeney’s ponytail flows out behind her as she nears them with her camera. They startle back into their fluted shells. Like bashful fairytale creatures cursed with irresistible beauty, they cannot help but draw attention with their sparkly glow.

Barnett makes them seem magical and perhaps they are (Note: A link has been removed),

It’s the glow that drew Sweeney’s attention to giant clams, and to Palau, a tiny republic of more than 300 islands between the Philippines and Guam. Its sun-laden waters are home to seven of the world’s dozen giant-clam species, from the storied Tridacna gigas—which can weigh an estimated 550 pounds and measure over four feet across—to the elegantly fluted Tridacna squamosa. Sweeney first came to the archipelago in 2009, while working on animal iridescence as a post-doctoral fellow at the University of California at Santa Barbara. Whether shimmering from a blue morpho butterfly’s wings or a squid’s skin, iridescence is almost always associated with a visual signal—one used to attract mates or confuse predators. Giant clams’ luminosity is not such a signal. So, what is it?

In the years since, Sweeney and her colleagues have discovered that the clams’ iridescence is essentially the outer glow of a solar transformer—optimized over millions of years to run on sunlight and algal biofuel. Giant clams reach their cartoonish proportions thanks to an exceptional ability to grow their own photosynthetic algae in vertical farms spread throughout their flesh. Sweeney and other scientists think this evolved expertise may shed light on alternative fuel technologies and other industrial solutions for a warming world.

Barnett goes on to describe Palau’s relationship to the clams and the clams’ environment,

Palau’s islands have been inhabited for at least 3,400 years, and from the start, giant clams were a staple of diet, daily life, and even deity. Many of the islands’ oldest-surviving tools are crafted of thick giant-clam shell: arched-blade adzes, fishhooks, gougers, heavy taro-root pounders. Giant-clam shell makes up more than three-fourths of some of the oldest shell middens in Palau, a percentage that decreases through the centuries. Archaeologists suggest that the earliest islanders depleted the giant clams that crowded the crystalline shallows, then may have self-corrected. Ancient Palauan conservation law, known as bul, prohibited fishing during critical spawning periods, or when a species showed signs of over-harvesting.

Before the Christianity that now dominates Palauan religion sailed in on eighteenth-century mission ships, the culture’s creation lore began with a giant clam called to life in an empty sea. The clam grew bigger and bigger until it sired Latmikaik, the mother of human children, who birthed them with the help of storms and ocean currents.

The legend evokes giant clams in their larval phase, moving with the currents for their first two weeks of life. Before they can settle, the swimming larvae must find and ingest one or two photosynthetic alga, which later multiply, becoming self-replicating fuel cells. After the larvae down the alga and develop a wee shell and a foot, they kick around like undersea farmers, looking for a sunny spot for their crop. When they’ve chosen a well-lit home in a shallow lagoon or reef, they affix to the rock, their shell gaping to the sky. After the sun hits and photosynthesis begins, the microalgae will multiply to millions, or in the case of T. gigas, billions, and clam and algae will live in symbiosis for life.

Giant clam is a beloved staple in Palau and many other Pacific islands, prepared raw with lemon, simmered into coconut soup, baked into a savory pancake, or sliced and sautéed in a dozen other ways. But luxury demand for their ivory-like shells and their adductor muscle, which is coveted as high-end sashimi and an alleged aphrodisiac, has driven T. gigas extinct in China, Taiwan, and other parts of their native habitat. Some of the toughest marine-protection laws in the world, along with giant-clam aquaculture pioneered here, have helped Palau’s wild clams survive. The Palau Mariculture Demonstration Center raises hundreds of thousands of giant clams a year, supplying local clam farmers who sell to restaurants and the aquarium trade and keeping pressure off the wild population. But as other nations have wiped out their clams, Palau’s 230,000-square-mile ocean territory is an increasing target of illegal foreign fishers.

Barnett delves into how the country of Palau is responding to the voracious appetite for the giant clams and other marine life,

Palau, drawing on its ancient conservation tradition of bul, is fighting back. In 2015, President Tommy Remengesau Jr. signed into law the Palau National Marine Sanctuary Act, which prohibits fishing in 80 percent of Palau’s Exclusive Economic Zone and creates a domestic fishing area in the remaining 20 percent, set aside for local fishers selling to local markets. In 2016, the nation received a $6.6 million grant from Japan to launch a major renovation of the Palau Mariculture Demonstration Center. Now under construction at the waterfront on the southern tip of Malakal Island, the new facility will amp up clam-aquaculture research and increase giant-clam production five-fold, to more than a million seedlings a year.

Last year, Palau amended its immigration policy to require that all visitors sign a pledge to behave in an ecologically responsible manner. The pledge, stamped into passports by an immigration officer who watches you sign, is written to the island’s children:

Children of Palau, I take this pledge, as your guest, to preserve and protect your beautiful and unique island home. I vow to tread lightly, act kindly and explore mindfully. I shall not take what is not given. I shall not harm what does not harm me. The only footprints I shall leave are those that will wash away.

The pledge is winning hearts and public-relations awards. But Palau’s existential challenge is still the collective “we,” the world’s rising carbon emissions and the resulting upturns in global temperatures, sea levels, and destructive storms.

F. Umiich Sengebau, Palau’s Minister for Natural Resources, Environment, and Tourism, grew up on Koror and is full of giant-clam proverbs, wisdom, and legends from his youth. He tells me a story I also heard from an elder in the state of Airai: that in old times, giant clams were known as “stormy-weather food,” the fresh staple that was easy to collect and have on hand when it was too stormy to go out fishing.

As Palau faces the storms of climate change, Sengebau sees giant clams becoming another sort of stormy-weather food, serving as a secure source of protein; a fishing livelihood; a glowing icon for tourists; and now, an inspiration for alternative energy and other low-carbon technologies. “In the old days, clams saved us,” Sengebau tells me. “I think there’s a lot of power in that, a great power and meaning in the history of clams as food, and now clams as science.”

I highly recommend Barnett’s article, which is one piece in a larger series described in a November 6, 2017 press release from The Atlantic,

The Atlantic is expanding the global footprint of its science writing today with a multi-year series to investigate life in all of its multitudes. The series, “Life Up Close,” created with support from Howard Hughes Medical Institute’s Department of Science Education (HHMI), begins today at TheAtlantic.com. In the first piece for the project, “The Zombie Diseases of Climate Change,” The Atlantic’s Robinson Meyer travels to Greenland to report on the potentially dangerous microbes emerging from thawing Arctic permafrost.

The project is ambitious in both scope and geographic reach, and will explore how life is adapting to our changing planet. Journalists will travel the globe to examine these changes as they happen to microbes, plants, and animals in oceans, grasslands, forests, deserts, and the icy poles. The Atlantic will question where humans should look for life next: from the Martian subsurface, to Europa’s oceans, to the atmosphere of nearby stars and beyond. “Life Up Close” will feature at least twenty reported pieces continuing through 2018.

“The Atlantic has been around for 160 years, but that’s a mere pinpoint in history when it comes to questions of life and where it started, and where we’re going,” said Ross Andersen, The Atlantic’s senior editor who oversees science, tech, and health. “The questions that this project will set out to tackle are critical; and this support will allow us to cover new territory in new and more ambitious ways.”

About The Atlantic:
Founded in 1857 and today one of the fastest growing media platforms in the industry, The Atlantic has throughout its history championed the power of big ideas and continues to shape global debate across print, digital, events, and video platforms. With its award-winning digital presence TheAtlantic.com and CityLab.com on cities around the world, The Atlantic is a multimedia forum on the most critical issues of our times—from politics, business, urban affairs, and the economy, to technology, arts, and culture. The Atlantic is celebrating its 160th anniversary this year. Bob Cohn is president of The Atlantic and Jeffrey Goldberg is editor in chief.

About the Howard Hughes Medical Institute (HHMI) Department of Science Education:
HHMI is the leading private nonprofit supporter of scientific research and science education in the United States. The Department of Science Education’s BioInteractive division produces free, high quality educational media for science educators and millions of students around the globe, its HHMI Tangled Bank Studios unit crafts powerful stories of scientific discovery for television and big screens, and its grants program aims to transform science education in universities and colleges. For more information, visit www.hhmi.org.

Getting back to the giant clams, sometimes all you can do is marvel, eh?

Manipulating light at the nanoscale with kirigami-inspired technique

At left, different patterns of slices through a thin metal foil are made by a focused ion beam. These patterns cause the metal to fold up into predetermined shapes, which can be used for such purposes as modifying a beam of light. Courtesy of the researchers

Nanokirigami (or nano-kirigami) is a fully fledged field of research? That was news to me, as was much else in a July 6, 2018 news item on ScienceDaily,

Nanokirigami has taken off as a field of research in the last few years; the approach is based on the ancient arts of origami (making 3-D shapes by folding paper) and kirigami (which allows cutting as well as folding) but applied to flat materials at the nanoscale, measured in billionths of a meter.

Now, researchers at MIT [Massachusetts Institute of Technology] and in China have for the first time applied this approach to the creation of nanodevices to manipulate light, potentially opening up new possibilities for research and, ultimately, the creation of new light-based communications, detection, or computational devices.

A July 6, 2018 MIT news release (also on EurekAlert), which originated the news item, adds detail,

The findings are described today [July 6, 2018] in the journal Science Advances, in a paper by MIT professor of mechanical engineering Nicholas X. Fang and five others. Using methods based on standard microchip manufacturing technology, Fang and his team used a focused ion beam to make a precise pattern of slits in a metal foil just a few tens of nanometers thick. The process causes the foil to bend and twist itself into a complex three-dimensional shape capable of selectively filtering out light with a particular polarization.

Previous attempts to create functional kirigami devices have used more complicated fabrication methods that require a series of folding steps and have been primarily aimed at mechanical rather than optical functions, Fang says. The new nanodevices, by contrast, can be formed in a single folding step and could be used to perform a number of different optical functions.

For these initial proof-of-concept devices, the team produced a nanomechanical equivalent of specialized dichroic filters that can filter out circularly polarized light that is either “right-handed” or “left-handed.” To do so, they created a pattern just a few hundred nanometers across in the thin metal foil; the result resembles pinwheel blades, with a twist in one direction that selects the corresponding twist of light.
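
The handedness-selective filtering described above can be illustrated with textbook Jones calculus. This is a generic sketch of how any ideal circular polarizer distinguishes the two polarization states; the matrices and sign conventions are the standard ones from optics texts, not taken from the Fang group’s paper:

```python
import numpy as np

# Jones vectors for right- and left-circularly polarized light.
# (Sign conventions vary between texts; here RCP = (1, -i)/sqrt(2).)
rcp = np.array([1, -1j]) / np.sqrt(2)
lcp = np.array([1,  1j]) / np.sqrt(2)

# Idealized Jones matrix of a right-circular polarizer:
# it transmits RCP unchanged and blocks LCP entirely.
right_circular_polarizer = 0.5 * np.array([[1, 1j],
                                           [-1j, 1]])

def transmitted_power(filter_matrix, jones_vector):
    """Fraction of incident power passing through the filter."""
    out = filter_matrix @ jones_vector
    return float(np.vdot(out, out).real)

print(transmitted_power(right_circular_polarizer, rcp))  # ~1.0
print(transmitted_power(right_circular_polarizer, lcp))  # ~0.0
```

The nanokirigami pinwheels play the role of `right_circular_polarizer` here: one twist direction in the metal passes the matching twist of light and extinguishes the other.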

The twisting and bending of the foil happens because of stresses introduced by the same ion beam that slices through the metal. When using ion beams with low dosages, many vacancies are created, and some of the ions end up lodged in the crystal lattice of the metal, pushing the lattice out of shape and creating strong stresses that induce the bending.

“We cut the material with an ion beam instead of scissors, by writing the focused ion beam across this metal sheet with a prescribed pattern,” Fang says. “So you end up with this metal ribbon that is wrinkling up” in the precisely planned pattern.

“It’s a very nice connection of the two fields, mechanics and optics,” Fang says. The team used helical patterns to separate out the clockwise and counterclockwise polarized portions of a light beam, which may represent “a brand new direction” for nanokirigami research, he says.

The technique is straightforward enough that, with the equations the team developed, researchers should now be able to calculate backward from a desired set of optical characteristics and produce the needed pattern of slits and folds to produce just that effect, Fang says.

“It allows a prediction based on optical functionalities” to create patterns that achieve the desired result, he adds. “Previously, people were always trying to cut by intuition” to create kirigami patterns for a particular desired outcome.

The research is still at an early stage, Fang points out, so more research will be needed on possible applications. But these devices are orders of magnitude smaller than conventional counterparts that perform the same optical functions, so these advances could lead to more complex optical chips for sensing, computation, or communications systems or biomedical devices, the team says.

For example, Fang says, devices to measure glucose levels often use measurements of light polarity, because glucose molecules exist in both right- and left-handed forms which interact differently with light. “When you pass light through the solution, you can see the concentration of one version of the molecule, as opposed to the mixture of both,” Fang explains, and this method could allow for much smaller, more efficient detectors.
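
The glucose measurement Fang alludes to is classical polarimetry, governed by Biot’s law. As a rough illustration (using the textbook specific rotation of D-glucose, about +52.7°·mL·g⁻¹·dm⁻¹, and hypothetical sample numbers of my own, not figures from the article):

```python
# Biot's law: alpha = [a] * l * c, where [a] is the specific
# rotation, l the path length in decimeters, and c the
# concentration in g/mL.

SPECIFIC_ROTATION_D_GLUCOSE = 52.7  # deg * mL / (g * dm), textbook value

def observed_rotation(specific_rotation, path_length_dm, conc_g_per_ml):
    """Rotation angle (degrees) a polarimeter would read."""
    return specific_rotation * path_length_dm * conc_g_per_ml

def concentration_from_rotation(alpha_deg, specific_rotation, path_length_dm):
    """Invert Biot's law to recover the concentration."""
    return alpha_deg / (specific_rotation * path_length_dm)

# Hypothetical reading: a 1 dm cell rotates the plane by 0.527 degrees.
c = concentration_from_rotation(0.527, SPECIFIC_ROTATION_D_GLUCOSE, 1.0)
print(c)  # ~0.01 g/mL, i.e. about 1 g per 100 mL
```

A chirality-selective nanodevice of the kind Fang describes would, in effect, shrink the polarizer-and-analyzer half of this measurement onto a chip.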

Circular polarization is also a method used to allow multiple laser beams to travel through a fiber-optic cable without interfering with each other. “People have been looking for such a system for laser optical communications systems” to separate the beams in devices called optical isolators, Fang says. “We have shown that it’s possible to make them in nanometer sizes.”

The team also included MIT graduate student Huifeng Du; Zhiguang Liu, Jiafang Li (project supervisor), and Ling Lu at the Chinese Academy of Sciences in Beijing; and Zhi-Yuan Li at the South China University of Technology. The work was supported by the National Key R&D Program of China, the National Natural Science Foundation of China, and the U.S Air Force Office of Scientific Research.

Here’s a link to and a citation for the paper,

Nano-kirigami with giant optical chirality by Zhiguang Liu, Huifeng Du, Jiafang Li, Ling Lu, Zhi-Yuan Li, and Nicholas X. Fang. Science Advances 06 Jul 2018: Vol. 4, no. 7, eaat4436 DOI: 10.1126/sciadv.aat4436

This paper is open access.

Chinese scientists strike gold in plant tissues

I have heard of phytomining in soil remediation efforts (reclaiming nanoscale metals in plants near mining operations; you can find a more detailed definition here at Wiktionary) but, in this case, scientists have discovered plant tissues with nanoscale gold in an area that has no known deposits of gold. From a June 14, 2018 news item on Nanowerk (Note: A link has been removed),

Plants containing the element gold are already widely known. The flowering perennial plant alfalfa, for example, has been cultivated by scientists to contain pure gold in its plant tissue. Now researchers from the Sun Yat-sen University in China have identified and investigated the characteristics of gold nanoparticles in two plant species growing in their natural environments.

The study, led by Xiaoen Luo, is published in Environmental Chemistry Letters (“Discovery of nano-sized gold particles in natural plant tissues”) and has implications for the way gold nanoparticles are produced and absorbed from the environment.

A June 14, 2018 Springer Publications press release, which originated the news item, delves further and proposes a solution to the mystery,

Xiaoen Luo and her colleagues investigated the perennial shrub B. nivea and the annual or biennial weed Erigeron canadensis. The researchers collected and prepared samples of both plants so that they could be examined using a specialist analytical tool, the field-emission transmission electron microscope (TEM).

Gold-bearing nanoparticles – tiny gold particles fused with another element such as oxygen or copper – were found in both types of plant. In E. canadensis these particles were around 20-50 nm in diameter and had an irregular form. The gold-bearing particles in B. nivea were circular, elliptical, or bone-rod shaped with smooth edges and were 5-15 nm.

“The abundance of gold in the crust is very low and there was no metal deposit in the sampling area so we speculate that the source of these gold nanoparticles is a nearby electroplating plant that uses gold in its operations,” explains Jianjin Cao, who is a co-author of the study.

Most of the characteristics of the nanoparticles matched those of artificial particles rather than naturally occurring nanoparticles, which would support this theory. The researchers believe that the gold-bearing particles were absorbed through the pores of the plants directly, indicating that gold could be accumulated from the soil, water or air.

“Discovering gold-bearing nanoparticles in natural plant tissues is of great significance and allows new possibilities to clean up areas contaminated with nanoparticles, and also to enrich gold nanoparticles using plants,” says Xiaoen Luo.

The researchers plan to further study the migration mechanism, storage locations and growth patterns of gold nanoparticles in plants and also verify the absorbing capacity of different plants for gold nanoparticles in polluted areas.

For anyone who’d like to find out more about electroplating, there’s this January 25, 2018 article by Anne Marie Helmenstine for ThoughtCo.

Here’s a link to and a citation for the paper,

Discovery of nano-sized gold particles in natural plant tissues by Xiaoen Luo (Luo, X.) and Jianjin Cao (Cao, J.). Environ Chem Lett (2018) pp 1–8 https://doi.org/10.1007/s10311-018-0749-0 First published online 14 June 2018

This paper appears to be open access.

First CRISPR gene-edited babies? Ethics and the science story

Scientists He Jiankui and Michael Deem may have created the first human babies born after CRISPR (clustered regularly interspaced short palindromic repeats) gene editing. At this point, no one is entirely certain that these babies, as described, actually exist, since the information was made public in a rather unusual (for scientists) fashion.

The news broke on Sunday, November 25, 2018, through a number of media outlets, none of which were journals associated with gene editing or high-impact journals such as Cell, Nature, or Science; it appeared first in MIT Technology Review and the Associated Press. Plus, this all happened just before the Second International Summit on Human Genome Editing (Nov. 27 – 29, 2018) in Hong Kong, where He Jiankui was scheduled to speak today, Nov. 27, 2018.

Predictably, this news has caused quite a tizzy.

Breaking news

Antonio Regalado broke the news in a November 25, 2018 article for MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),

According to Chinese medical documents posted online this month (here and here), a team at the Southern University of Science and Technology, in Shenzhen, has been recruiting couples in an effort to create the first gene-edited babies. They planned to eliminate a gene called CCR5 in hopes of rendering the offspring resistant to HIV, smallpox, and cholera.

The clinical trial documents describe a study in which CRISPR is employed to modify human embryos before they are transferred into women’s uteruses.

The scientist behind the effort, He Jiankui, did not reply to a list of questions about whether the undertaking had produced a live birth. Reached by telephone, he declined to comment.

However, data submitted as part of the trial listing shows that genetic tests have been carried out on fetuses as late as 24 weeks, or six months. It’s not known if those pregnancies were terminated, carried to term, or are ongoing.

Apparently He changed his mind, because Marilynn Marchione, in a November 26, 2018 article for the Associated Press, confirms the news,

A Chinese researcher claims that he helped make the world’s first genetically edited babies — twin girls born this month whose DNA he said he altered with a powerful new tool capable of rewriting the very blueprint of life.

If true, it would be a profound leap of science and ethics.

A U.S. scientist [Dr. Michael Deem] said he took part in the work in China, but this kind of gene editing is banned in the United States because the DNA changes can pass to future generations and it risks harming other genes.

Many mainstream scientists think it’s too unsafe to try, and some denounced the Chinese report as human experimentation.

There is no independent confirmation of He’s claim, and it has not been published in a journal, where it would be vetted by other experts. He revealed it Monday [November 26, 2018] in Hong Kong to one of the organizers of an international conference on gene editing that is set to begin Tuesday [November 27, 2018], and earlier in exclusive interviews with The Associated Press.

“I feel a strong responsibility that it’s not just to make a first, but also make it an example,” He told the AP. “Society will decide what to do next” in terms of allowing or forbidding such science.

Some scientists were astounded to hear of the claim and strongly condemned it.

It’s “unconscionable … an experiment on human beings that is not morally or ethically defensible,” said Dr. Kiran Musunuru, a University of Pennsylvania gene editing expert and editor of a genetics journal.

“This is far too premature,” said Dr. Eric Topol, who heads the Scripps Research Translational Institute in California. “We’re dealing with the operating instructions of a human being. It’s a big deal.”

However, one famed geneticist, Harvard University’s George Church, defended attempting gene editing for HIV, which he called “a major and growing public health threat.”

“I think this is justifiable,” Church said of that goal.

h/t Cale Guthrie Weissman’s Nov. 26, 2018 article for Fast Company.

Diving into more detail

Ed Yong in a November 26, 2018 article for The Atlantic provides more details about the claims (Note: Links have been removed),

… “Two beautiful little Chinese girls, Lulu and Nana, came crying into the world as healthy as any other babies a few weeks ago,” He said in the first of five videos, posted yesterday [Nov. 25, 2018] to YouTube [link provided at the end of this section of the post]. “The girls are home now with their mom, Grace, and dad, Mark.” The claim has yet to be formally verified, but if true, it represents a landmark in the continuing ethical and scientific debate around gene editing.

Late last year, He reportedly enrolled seven couples in a clinical trial, and used their eggs and sperm to create embryos through in vitro fertilization. His team then used CRISPR to deactivate a single gene called CCR5 in the embryos, six of which they then implanted into mothers. CCR5 is a protein that the HIV virus uses to gain entry into human cells; by deactivating it, the team could theoretically reduce the risk of infection. Indeed, the fathers in all eight couples were HIV-positive.

Whether the experiment was successful or not, it’s intensely controversial. Scientists have already begun using CRISPR and other gene-editing technologies to alter human cells, in attempts to treat cancers, genetic disorders, and more. But in these cases, the affected cells stay within a person’s body. Editing an embryo [it’s often called germline editing] is very different: It changes every cell in the body of the resulting person, including the sperm or eggs that would pass those changes to future generations. Such work is banned in many European countries, and prohibited in the United States. “I understand my work will be controversial, but I believe families need this technology and I’m willing to take the criticism for them,” He said.

“Was this a reasonable thing to do? I would say emphatically no,” says Paula Cannon of the University of Southern California. She and others have worked on gene editing, and particularly on trials that knock out CCR5 as a way to treat HIV. But those were attempts to treat people who were definitively sick and had run out of other options. That wasn’t the case with Nana and Lulu.

“The idea that being born HIV-susceptible, which is what the vast majority of humans are, is somehow a disease state that requires the extraordinary intervention of gene editing blows my mind,” says Cannon. “I feel like he’s appropriating this potentially valuable therapy as a shortcut to doing something in the sphere of gene editing. He’s either very naive or very cynical.”

“I want someone to make sure that it has happened,” says Hank Greely, an ethicist at Stanford University. If it hasn’t, that “would be a pretty bald-faced fraud,” but such deceptions have happened in the past. “If it is true, I’m disappointed. It’s reckless on safety grounds, and imprudent and stupid on social grounds.” He notes that a landmark summit in 2015 (which included Chinese researchers) and a subsequent major report from the National Academies of Science, Engineering, and Medicine both argued that “public participation should precede any heritable germ-line editing.” That is: Society needs to work out how it feels about making gene-edited babies before any babies are edited. Absent that consensus, He’s work is “waving a red flag in front of a bull,” says Greely. “It provokes not just the regular bio-Luddites, but also reasonable people who just wanted to talk it out.”

Societally, the creation of CRISPR-edited babies is a binary moment—a Rubicon that has been crossed. But scientifically, the devil is in the details, and most of those are still unknown.

CRISPR is still inefficient. [emphasis mine] The Chinese teams who first used it to edit human embryos only did so successfully in a small proportion of cases, and even then, they found worrying levels of “off-target mutations,” where they had erroneously cut parts of the genome outside their targeted gene. He, in his video, claimed that his team had thoroughly sequenced Nana and Lulu’s genomes and found no changes in genes other than CCR5.

That claim is impossible to verify in the absence of a peer-reviewed paper, or even published data of any kind. “The paper is where we see whether the CCR5 gene was properly edited, what effect it had at the cellular level, and whether [there were] any off-target effects,” said Eric Topol of the Scripps Research Institute. “It’s not just ‘it worked’ as a binary declaration.”

In the video, He said that using CRISPR for human enhancement, such as enhancing IQ or selecting eye color, “should be banned.” Speaking about Nana and Lulu’s parents, he said that they “don’t want a designer baby, just a child who won’t suffer from a disease that medicine can now prevent.”

But his rationale is questionable. Huang [Junjiu Huang of Sun Yat-sen University], the first Chinese researcher to use CRISPR on human embryos, targeted the faulty gene behind an inherited disease called beta thalassemia. Mitalipov, likewise, tried to edit a gene called MYBPC3, whose faulty versions cause another inherited disease called hypertrophic cardiomyopathy (HCM). Such uses are still controversial, but they rank among the more acceptable applications for embryonic gene editing as ways of treating inherited disorders for which treatments are either difficult or nonexistent.

In contrast, He’s team disabled a normal gene in an attempt to reduce the risk of a disease that neither child had—and one that can be controlled. There are already ways of preventing fathers from passing HIV to their children. There are antiviral drugs that prevent infections. There’s safe-sex education. “This is not a plague for which we have no tools,” says Cannon.

As Marilynn Marchione of the AP reports, early tests suggest that He’s editing was incomplete [emphasis mine], and at least one of the twins is a mosaic, where some cells have silenced copies of CCR5 and others do not. If that’s true, it’s unlikely that they would be significantly protected from HIV. And in any case, deactivating CCR5 doesn’t confer complete immunity, because some HIV strains can still enter cells via a different protein called CXCR4.

Nana and Lulu might have other vulnerabilities. …

It is also unclear if the participants in He’s trial were fully aware of what they were signing up for. [emphasis mine] The team’s informed-consent document describes their work as an “AIDS vaccine development project,” and while it describes CRISPR gene editing, it does so in heavily technical language. It doesn’t mention any of the risks of disabling CCR5, and while it does note the possibility of off-target effects, it also says that the “project team is not responsible for the risk.”

He owns two genetics companies, and his collaborator, Michael Deem of Rice University, [emphasis mine] holds a small stake in, and sits on the advisory board of, both of them. The AP’s Marchione reports, “Both men are physics experts with no experience running human clinical trials.” [emphasis mine]

Yong’s article is well worth reading in its entirety. As for YouTube, here’s The He Lab’s webpage with relevant videos.

Reactions

Gina Kolata, Sui-Lee Wee, and Pam Belluck writing in a Nov. 26, 2018 article for the New York Times chronicle some of the response to He’s announcement,

It is highly unusual for a scientist to announce a groundbreaking development without at least providing data that academic peers can review. Dr. He said he had gotten permission to do the work from the ethics board of the hospital Shenzhen Harmonicare, but the hospital, in interviews with Chinese media, denied being involved. Cheng Zhen, the general manager of Shenzhen Harmonicare, has asked the police to investigate what they suspect are “fraudulent ethical review materials,” according to the Beijing News.

The university that Dr. He is attached to, the Southern University of Science and Technology, said Dr. He has been on no-pay leave since February and that the school of biology believed that his project “is a serious violation of academic ethics and academic norms,” according to the state-run Beijing News.

In a statement late on Monday, China’s national health commission said it has asked the health commission in southern Guangdong province to investigate Mr. He’s claims.

“I think that’s completely insane,” said Shoukhrat Mitalipov, director of the Center for Embryonic Cell and Gene Therapy at Oregon Health and Science University. Dr. Mitalipov broke new ground last year by using gene editing to successfully remove a dangerous mutation from human embryos in a laboratory dish. [I wrote a three-part series about CRISPR, which included what was then the latest US news, Mitalipov’s announcement, along with a roundup of previous work in China. Links are at the end of this section.]

Dr. Mitalipov said that unlike his own work, which focuses on editing out mutations that cause serious diseases that cannot be prevented any other way, Dr. He did not do anything medically necessary. There are other ways to prevent H.I.V. infection in newborns.

Just three months ago, at a conference in late August on genome engineering at Cold Spring Harbor Laboratory in New York, Dr. He presented work on editing the CCR5 gene in the embryos of nine couples.

At the conference, whose organizers included Jennifer Doudna, one of the inventors of Crispr technology, Dr. He gave a careful talk about something that fellow attendees considered squarely within the realm of ethically approved research. But he did not mention that some of those embryos had been implanted in a woman and could result in genetically engineered babies.

“What we now know is that as he was talking, there was a woman in China carrying twins,” said Fyodor Urnov, deputy director of the Altius Institute for Biomedical Sciences and a visiting researcher at the Innovative Genomics Institute at the University of California. “He had the opportunity to say ‘Oh and by the way, I’m just going to come out and say it, people, there’s a woman carrying twins.’”

“I would never play poker against Dr. He,” Dr. Urnov quipped.

Richard Hynes, a cancer researcher at the Massachusetts Institute of Technology, who co-led an advisory group on human gene editing for the National Academy of Sciences and the National Academy of Medicine, said that group and a similar organization in Britain had determined that if human genes were to be edited, the procedure should only be done to address “serious unmet needs in medical treatment, it had to be well monitored, it had to be well followed up, full consent has to be in place.”

It is not clear why altering genes to make people resistant to H.I.V. is “a serious unmet need.” Men with H.I.V. do not infect embryos. …

Dr. He got his Ph.D., from Rice University, in physics and his postdoctoral training, at Stanford, was with Stephen Quake, a professor of bioengineering and applied physics who works on sequencing DNA, not editing it.

Experts said that using Crispr would actually be quite easy for someone like Dr. He.

After coming to Shenzhen in 2012, Dr. He, at age 28, established a DNA sequencing company, Direct Genomics, and listed Dr. Quake on its advisory board. But, in a telephone interview on Monday, Dr. Quake said he was never associated with the company.

Deem, the US scientist who worked in China with He, is currently being investigated (from a Nov. 26, 2018 article by Andrew Joseph in STAT),

Rice University said Monday that it had opened a “full investigation” into the involvement of one of its faculty members in a study that purportedly resulted in the creation of the world’s first babies born with edited DNA.

Michael Deem, a bioengineering professor at Rice, told the Associated Press in a story published Sunday that he helped work on the research in China.

Deem told the AP that he was in China when participants in the study consented to join the research. Deem also said that he had “a small stake” in and is on the scientific advisory boards of He’s two companies.

Megan Molteni in a Nov. 27, 2018 article for Wired admits she and her colleagues at the magazine may have dismissed CRISPR concerns about designer babies prematurely while shedding more light on this latest development (Note: Links have been removed),

We said “don’t freak out,” when scientists first used Crispr to edit DNA in non-viable human embryos. When they tried it in embryos that could theoretically produce babies, we said “don’t panic.” Many years and years of boring bench science remain before anyone could even think about putting it near a woman’s uterus. Well, we might have been wrong. Permission to push the panic button granted.

Late Sunday night, a Chinese researcher stunned the world by claiming to have created the first human babies, a set of twins, with Crispr-edited DNA….

What’s perhaps most strange is not that He ignored global recommendations on conducting responsible Crispr research in humans. He also ignored his own advice to the world—guidelines that were published within hours of his transgression becoming public.

On Monday, He and his colleagues at Southern University of Science and Technology, in Shenzhen, published a set of draft ethical principles “to frame, guide, and restrict clinical applications that communities around the world can share and localize based on religious beliefs, culture, and public-health challenges.” Those principles included transparency and only performing the procedure when the risks are outweighed by serious medical need.

The piece appeared in The Crispr Journal, a young publication dedicated to Crispr research, commentary, and debate. Rodolphe Barrangou, the journal’s editor in chief, where the peer-reviewed perspective appeared, says that the article was one of two that it had published recently addressing the ethical concerns of human germline editing, the other by a bioethicist at the University of North Carolina. Both papers’ authors had requested that their writing come out ahead of a major gene editing summit taking place this week in Hong Kong. When half-rumors of He’s covert work reached Barrangou over the weekend, his team discussed pulling the paper, but ultimately decided that there was nothing too solid to discredit it, based on the information available at the time.

Now Barrangou and his team are rethinking that decision. For one thing, He did not disclose any conflicts of interest, which is standard practice among respectable journals. It’s since become clear that not only is He at the helm of several genetics companies in China, He was actively pursuing controversial human research long before writing up a scientific and moral code to guide it. “We’re currently assessing whether the omission was a matter of ill-management or ill-intent,” says Barrangou, who added that the journal is now conducting an audit to see if a retraction might be warranted. …

“There are all sorts of questions these issues raise, but the most fundamental is the risk-benefit ratio for the babies who are going to be born,” says Hank Greely, an ethicist at Stanford University. “And the risk-benefit ratio on this stinks. Any institutional review board that approved it should be disbanded if not jailed.”

Reporting by Stat indicates that He may have just gotten in over his head and tried to cram a self-guided ethics education into a few short months. The young scientist—records indicate He is just 34—has a background in biophysics, with stints studying in the US at Rice University and in bioengineer Stephen Quake’s lab at Stanford. His resume doesn’t read like someone steeped deeply in the nuances and ethics of human research. Barrangou says that came across in the many rounds of edits He’s framework went through.

… China’s central government in Beijing has yet to come down one way or another. Condemnation would make He a rogue and a scientific outcast. Anything else opens the door for a Crispr IVF cottage industry to emerge in China and potentially elsewhere. “It’s hard to imagine this was the only group in the world doing this,” says Paul Knoepfler, a stem cell researcher at UC Davis who wrote a book on the future of designer babies called GMO Sapiens. “Some might say this broke the ice. Will others forge ahead and go public with their results or stop what they’re doing and see how this plays out?”

Here’s some of the very latest information with the researcher attempting to explain himself.

What does He have to say?

After He’s appearance at the Second International Summit on Human Genome Editing today, Nov. 27, 2018, David Cyranoski produced this article for Nature,

He Jiankui, the Chinese scientist who claims to have helped produce the first people born with edited genomes — twin girls — appeared today at a gene-editing summit in Hong Kong to explain his experiment. He gave his talk amid threats of legal action and mounting questions, from the scientific community and beyond, about the ethics of his work and the way in which he released the results.

He had never before presented his work publicly outside of a handful of videos he posted on YouTube. Scientists welcomed the fact that he appeared at all — but his talk left many hungry for more answers, and still not completely certain that He has achieved what he claims.

“There’s no reason not to believe him,” says Robin Lovell-Badge, a developmental biologist at the Francis Crick Institute in London. “I’m just not completely convinced.”

Lovell-Badge, like others at the conference, says that an independent body should confirm the test results by performing an in-depth comparison of the parents’ and children’s genes.

Many scientists faulted He for a lack of transparency and the seemingly cavalier nature in which he embarked on such a landmark, and potentially risky, project.

“I’m happy he came but I was really horrified and stunned when he described the process he used,” says Jennifer Doudna, a biochemist at the University of California, Berkeley and a pioneer of the CRISPR/Cas9 gene-editing technique that He used. “It was so inappropriate on so many levels.”

He seemed shaky approaching the stage and nervous during the talk. “I think he was scared,” says Matthew Porteus, who researches genome-editing at Stanford University in California and co-hosted a question-and-answer session with He after his presentation. Porteus attributes this either to the legal pressures that He faces or the mounting criticism from the scientists and media he was about to address.

He’s talk leaves a host of other questions unanswered, including whether the prospective parents were properly informed of the risks; why He selected CCR5 when there are other, proven ways to prevent HIV; why he chose to do the experiment with couples in which the fathers have HIV, rather than mothers who have a higher chance of passing the virus on to their children; and whether the risks of knocking out CCR5 — a gene normally present in people, which could have necessary but still unknown functions — outweighed the benefits in this case.

In the discussion following He’s talk, one scientist asked why He proceeded with the experiments despite the clear consensus among scientists worldwide that such research shouldn’t be done. He didn’t answer the question.

He’s attempts to justify his actions mainly fell flat. In response to questions about why the science community had not been informed of the experiments before the first women were impregnated, he cited presentations that he gave last year at meetings at the University of California, Berkeley, and at the Cold Spring Harbor Laboratory in New York. But Doudna, who organized the Berkeley meeting, says He did not present anything that showed he was ready to experiment in people. She called his defence “disingenuous at best”.

He also said he discussed the human experiment with unnamed scientists in the United States. But Porteus says that’s not enough for such an extraordinary experiment: “You need feedback not from your two closest friends but from the whole community.” …

Pressure was mounting on He ahead of the presentation. On 27 November, the Chinese national health commission ordered the Guangdong health commission, in the province where He’s university is located, to investigate.

On the same day, the Chinese Academy of Sciences issued a statement condemning his work, and the Genetics Society of China and the Chinese Society for Stem Cell Research jointly issued a statement saying the experiment “violates internationally accepted ethical principles regulating human experimentation and human rights law”.

The hospital cited in China’s clinical-trial registry as the one that gave ethical approval for He’s work posted a press release on 27 November saying it did not give any approval. It questioned the signatures on the approval form and said that the hospital’s medical-ethics committee never held a meeting related to He’s research. The hospital, which itself is under investigation by the Shenzhen health authorities following He’s revelations, wrote: “The Company does not condone the means of the Claimed Project, and has reservations as to the accuracy, reliability and truthfulness of its contents and results.”

He has not yet responded to requests for comment on these statements and investigations, nor on why the hospital was listed in the registry and the claim of apparent forged signatures.

Alice Park’s Nov. 26, 2018 article for Time magazine includes an embedded video of He’s Nov. 27, 2018 presentation at the summit meeting.

What about the politics?

Mara Hvistendahl’s Nov. 27, 2018 article about this research for Slate.com poses some geopolitical questions (Note: Links have been removed),

The informed consent agreement for He Jiankui’s experiment describes it as an “AIDS vaccine development project” and used highly technical language to describe the procedure that patients would undergo. If the reality for some Chinese patients is that such agreements are glossed over, densely written, or never read, the reality for some researchers working in the country is that the appeal of cutting-edge trials is too great to resist. It is not just Chinese scientists who can be blinded by the lure of quick breakthroughs. Several of the most notable breaches of informed consent on the mainland have involved Western researchers or co-authors. … When people say that the usual rules don’t apply in China, they are really referring to authoritarian science, not some alternative communitarian ethics.

For the many scientists in China who adhere to recognized international standards, the incident comes as a disgrace. He Jiankui now faces an ethics investigation from provincial health authorities, and his institution, Southern University of Science and Technology, was quick to issue a statement noting that He was on unpaid leave. …

It would seem that US [and from elsewhere]* scientists wanting to avoid pesky ethics requirements in the US have found that going to China could be the answer to their problems. I gather it’s not just big business that prefers deregulated environments.

Guillaume Levrier’s (he’s studying for a PhD at the Université Sorbonne Paris Cité) November 16, 2018 essay for The Conversation sheds some light on political will and its impact on science (Note: Links have been removed),

… China has entered a “genome editing” race among great scientific nations and its progress didn’t come out of nowhere. China has invested heavily in the natural-sciences sector over the past 20 years. The Ninth Five-Year Plan (1996-2001) mentioned the crucial importance of biotechnologies. The current Thirteenth Five-Year Plan is even more explicit. It contains a section dedicated to “developing efficient and advanced biotechnologies” and lists key sectors such as “genome-editing technologies” intended to “put China at the bleeding edge of biotechnology innovation and become the leader in the international competition in this sector”.

Chinese embryo research is regulated by a legal framework, the “technical norms on human-assisted reproductive technologies”, published by the Science and Health Ministries. The guidelines theoretically forbid using sperm or eggs whose genome have been manipulated for procreative purposes. However, it’s hard to know how much value is actually placed on this rule in practice, especially in China’s intricate institutional and political context.

In theory, three major actors have authority on biomedical research in China: the Science and Technology Ministry, the Health Ministry, and the Chinese Food and Drug Administration. In reality, other agents also play a significant role. Local governments interpret and enforce the ministries’ “recommendations”, and their own interpretations can lead to significant variations in what researchers can and cannot do on the ground. The Chinese National Academy of Medicine is also a powerful institution that has its own network of hospitals, universities and laboratories.

Another prime actor is involved: the health section of the People’s Liberation Army (PLA), which has its own biomedical faculties, hospitals and research labs. The PLA makes its own interpretations of the recommendations and has proven its ability to work with the private sector on gene editing projects. …

One other thing from Levrier’s essay,

… And the media timing is just a bit too perfect, …

Do read the essay; there’s a twist at the end.

Final thoughts and some links

If I read this material rightly, there are suspicions there may be more of this work being done in China and elsewhere. In short, we likely don’t have the whole story.

As for the ethical issues, this is a discussion among experts only, so far. The great unwashed (thee and me) are being left at the wayside. Sure, we’ll be invited to public consultations, one day, after the big decisions have been made.

Anyone who’s read up on the history of science will tell you this kind of breach is very common at the beginning. Richard Holmes’ 2008 book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, recounts stories of early scientists (European science) who did crazy things. Some died, some shortened their life spans, and some irreversibly damaged their health. They also experimented on other people. Informed consent had not yet been dreamed up.

In fact, I remember reading somewhere that the largest human clinical trial in history was held in Canada. The smallpox vaccine was highly contested in the US, but the Canadian government thought it was a good idea, so it offered US scientists the option of coming here to vaccinate Canadian babies. This was in the 1950s, and the vaccine seems to have been administered almost universally. That was a lot of Canadian babies. Thankfully, it seems to have worked out, but it does seem mind-boggling today.

For all the indignation and shock we’re seeing, this is not the first time nor will it be the last time someone steps over a line in order to conduct scientific research. And, that is the eternal problem.

Meanwhile I think some of the real action regarding CRISPR and germline editing is taking place in the field (pun!) of agriculture:

My Nov. 27, 2018 posting titled: ‘Designer groundcherries by CRISPR (clustered regularly interspaced short palindromic repeats)‘ and a more disturbing Nov. 27, 2018 post titled: ‘Agriculture and gene editing … shades of the AquAdvantage salmon‘. That second posting features a company which is trying to sell its gene-editing services to farmers who would like cows that never grow horns and pigs that never reach puberty.

Then there’s this,

The Genetic Revolution‘, a documentary that offers relatively up-to-date information about gene editing, which was broadcast on Nov. 11, 2018 as part of The Nature of Things series on CBC (Canadian Broadcasting Corporation).

My July 17, 2018 posting about research suggesting that scientists hadn’t done enough research on possible effects of CRISPR editing titled: ‘The CRISPR ((clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle’.

My 2017 three-part series on CRISPR and germline editing:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

There you have it.

Added on November 30, 2018: David Cyranoski has written one final article (Nov. 30, 2018 for Nature) about He and the Second International Summit on Human Genome Editing. He did not make his second scheduled appearance at the summit, returning to China before the summit concluded. He was rebuked in a statement produced by the summit’s organizing committee at the end of the three-day meeting. The situation with regard to his professional status in China is ambiguous. Cyranoski ends his piece with the information that the third summit will take place in London (likely in the UK) in 2021. I encourage you to read Cyranoski’s Nov. 30, 2018 article in its entirety; it’s not long.

Added on Dec. 3, 2018: The story continues. Ed Yong has written a summary of the issues to date in a Dec. 3, 2018 article for The Atlantic (even if you know the story, it’s eye-opening to see all the parts put together).

J. Benjamin Hurlbut, Associate Professor of Life Sciences at Arizona State University (ASU) and Jason Scott Robert, Director of the Lincoln Center for Applied Ethics at Arizona State University have written a provocative (and true) Dec. 3, 2018 essay titled, CRISPR babies raise an uncomfortable reality – abiding by scientific standards doesn’t guarantee ethical research, for The Conversation. h/t phys.org

*[and from elsewhere] added January 17, 2019.

Added on January 23, 2019: He has been fired by his university (Southern University of Science and Technology in Shenzhen) as announced on January 21, 2019. David Cyranoski provides a detailed accounting in his January 22, 2019 article for Nature.

Wooden supercapacitors: a cellulose nanofibril story

A May 24, 2018 news item on Nanowerk announces a technique for making sustainable electrodes (Note: A link has been removed),

Carbon aerogels are ultralight, conductive materials, which are extensively investigated for applications in supercapacitor electrodes in electrical cars and cell phones. Chinese scientists have now found a way to make these electrodes sustainably. The aerogels can be obtained directly from cellulose nanofibrils, the abundant cell-wall material in wood, finds the study reported in the journal Angewandte Chemie (“Wood-Derived Ultrathin Carbon Nanofiber Aerogels”).

A May 24, 2018 Wiley Publications press release, which originated the news item, explains further,

Supercapacitors are capacitors that can take up and release a very large amount of energy in a very short time. Key requirements for supercapacitor electrodes are a large surface area and conductivity, combined with a simple production method. Another growing issue in supercapacitor production–mainly for smartphone and electric car technologies–is sustainability. However, sustainable and economical production of carbon aerogels as supercapacitor electrode materials is possible, propose Shu-Hong Yu and colleagues from the University of Science and Technology of China, Hefei, China.

Carbon aerogels are ultralight conductive materials with a very large surface area. They can be prepared by two production routes: the first and cheapest starts from mostly phenolic components and produces aerogels with improvable conductivity, while the second route is based on graphene- and carbon-nanotube precursors. The latter method delivers high-performance aerogels but is expensive and non-environmentally friendly. In their search for different precursors, Yu and colleagues have found an abundant, far less expensive, and sustainable source: wood pulp.

Well, not really wood pulp, but its major ingredient, nanocellulose. Plant cell walls are stabilized by fibrous nanocellulose, and this extractable material has very recently stimulated substantial research and technological development. It forms a highly porous, but very stable transparent network, and, with the help of a recent technique–oxidation with a radical scavenger called TEMPO–it forms a microporous hydrogel of highly oriented cellulose nanofibrils with a uniform width and length. As organic aerogels are produced from hydrogels by drying and pyrolysis, the authors attempted pyrolysis of supercritically or freeze-dried nanofibrillated cellulose hydrogel.

As it turns out, the method was not as straightforward as expected because ice crystal formation and insufficient dehydration hampered carbonization, according to the authors. Here, a trick helped. The scientists pyrolyzed the dried gel in the presence of the organic acid catalyst para-toluenesulfonic acid. The catalyst lowered the decomposition temperature and yielded a “mechanically stable and porous three-dimensional nanofibrous network” featuring a “large specific surface area and high electrical conductivity,” the authors reported.

The authors also demonstrated that their wood-derived carbon aerogel worked well as a binder-free electrode for supercapacitor applications. The material displayed electrochemical properties comparable to commercial electrodes. The method is an interesting and innovative way in which to fabricate sustainable materials suitable for use in high-performance electronic devices.

This is the first time I’ve seen work on wood-based nanocellulose from China. Cellulose, according to its Wikipedia entry, is: ” … the most abundant organic polymer on Earth.” For example, there’s more cellulose in cotton than there is in wood. So, I find it interesting that in a country not known for its forests, nanocellulose (in this project anyway) is being derived from wood.

Here’s a link to and a citation for the paper,

Wood‐Derived Ultrathin Carbon Nanofiber Aerogels by Si‐Cheng Li, Bi‐Cheng Hu, Dr. Yan‐Wei Ding, Prof. Hai‐Wei Liang, Chao Li, Dr. Zi‐You Yu, Dr. Zhen‐Yu Wu, Prof. Wen‐Shuai Chen, Prof. Shu‐Hong Yu. Angewandte Chemie. First published: 23 April 2018. DOI: https://doi.org/10.1002/anie.201802753

This paper is behind a paywall.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots), the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but, not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as, outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen, the designs are as good, and the project does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent posting is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
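
The three backchanneling qualities Kawahara lists — timing (when a response happens), lexical form (what is said), and prosody (how it is said) — can be sketched as a toy decision rule. Everything below is a hypothetical illustration, not ERICA’s actual system: the function name, thresholds, and labels are invented.

```python
# A toy backchannel selector modelling Kawahara's three qualities:
# timing (when), lexical form (what), and prosody (how).
# Names and thresholds are hypothetical, not ERICA's implementation.

def choose_backchannel(pause_ms, last_utterance):
    """Decide whether and how to backchannel after a speaker pause."""
    # Timing: only respond once a noticeable pause has opened up.
    if pause_ms < 400:
        return None
    words = last_utterance.rstrip(".!?").split()
    # Lexical form: after a long pause, repeat the speaker's last word
    # ("attentive listening"); otherwise give a generic acknowledgement.
    if pause_ms > 1200 and words:
        form = words[-1] + "?"
        prosody = "rising"   # a partial repeat invites elaboration
    else:
        form = "uh-huh"
        prosody = "flat"
    return f"{form} [{prosody}]"

print(choose_backchannel(300, "I went to the museum"))   # too soon: None
print(choose_backchannel(600, "I went to the museum"))   # uh-huh [flat]
print(choose_backchannel(1500, "I went to the museum"))  # museum? [rising]
```

The real system learned these choices from a counseling dialogue corpus rather than hand-written rules, but the same three decisions — whether, what, and how — are what the machine-learning model is producing.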

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is as much a philosophical one as a technological one.

Erica is interviewed about her hopes and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it all, let alone get a feeling for where it might be headed. When you add the fact that the terms ‘robot’ and ‘artificial intelligence’ are often used interchangeably, and that the distinction between robots, androids, and cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
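
Wong’s description of a computer that “keeps on modifying its algorithm based on the information provided” can be made concrete with a minimal sketch. This is a toy, not any production system: a single adjustable parameter is nudged, example by example, until the model fits the data.

```python
# A minimal illustration of machine learning as Wong defines it:
# the computer repeatedly modifies its own parameter based on data.
# Here we fit a single weight w so that w * x approximates y,
# for data generated by the rule y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs where y = 2x

w = 0.0      # the model's one adjustable parameter, starting ignorant
lr = 0.05    # learning rate: how large each modification is

for epoch in range(200):
    for x, y in data:
        error = w * x - y      # how wrong the current model is on this example
        w -= lr * error * x    # modify the parameter to reduce that error

print(round(w, 3))  # converges toward 2.0, the rule hidden in the data
```

Deep learning uses exactly this loop, but with millions of parameters arranged in layered networks instead of one weight.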

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week, which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments if you have any insights on the matter.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And evidence keeps mounting, I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website provides more detail (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, becoming the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Yes! Art, genetic modifications, gene editing, and xenotransplantation at the Vancouver Biennale (Canada)

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

Up to this point, I’ve been a little jealous of the Art/Sci Salon’s (Toronto, Canada) January 2018 workshops for artists and discussions about CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 and its social implications. (See my January 10, 2018 posting for more about the events.) Now, it seems Vancouver may be in line for its ‘own’ discussion about CRISPR and the implications of gene editing. The image you saw (above) represents one of the installations being hosted by the 2018 – 2020 edition of the Vancouver Biennale.

While this posting is mostly about the Biennale and Piccinini’s work, there is a ‘science’ subsection featuring the science of CRISPR and xenotransplantation. Getting back to the Biennale and Piccinini: A major public art event since 1988, the Vancouver Biennale has hosted over 91 outdoor sculptures and new media works by more than 78 participating artists from over 25 countries and from 4 continents.

Quickie description of the 2018 – 2020 Vancouver Biennale

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

The Vancouver Biennale will be bringing new —and unusual— works of public art to the city beginning this June.

The theme for this season’s Vancouver Biennale exhibition is “re-IMAGE-n” and it kicks off on June 20 [2018] in Vanier Park with Saudi artist Ajlan Gharem’s Paradise Has Many Gates.

Gharem’s architectural chain-link sculpture resembles a traditional mosque; the piece is meant to challenge the notions of religious orthodoxy and encourages individuals to imagine a space free of Islamophobia.

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

Given that this blog is focused on nanotechnology and other emerging technologies such as CRISPR, I’m focusing on Piccinini’s work and its art/science or sci-art status. This image from the GOMA Gallery where Piccinini’s ‘Curious Affection‘ installation is being shown from March 24 – Aug. 5, 2018 in Brisbane, Queensland, Australia may give you some sense of what one of her installations is like,

Courtesy: Queensland Art Gallery | Gallery of Modern Art (QAGOMA)

I spoke with Serena at the Vancouver Biennale office and asked about the ‘interactive’ aspect of Piccinini’s installation. She suggested the term ‘immersive’ as an alternative. In other words, you won’t be playing with the sculptures or pressing buttons and interacting with computer screens or robots. She also noted that the ticket prices have not been set yet and that events focused on the issues raised by the installation are currently being developed. She knew that 2018 is the 200th anniversary of the publication of Mary Shelley’s Frankenstein but I’m not sure how the Biennale folks plan (or don’t plan) to integrate any recognition of the novel’s impact on the discussions about ‘new’ technologies. They expect Piccinini will visit Vancouver. (Note 1: Piccinini’s work can also be seen in a group exhibition titled Frankenstein’s Birthday Party at the Hosfelt Gallery in San Francisco (California, US) from June 23 – August 11, 2018. Note 2: I featured a number of international events commemorating the 200th anniversary of the publication of Mary Shelley’s novel, Frankenstein, in my Feb. 26, 2018 posting. Note 3: The term ‘Frankenfoods’ helped to shape the discussion of genetically modified organisms and the food supply on this planet. It was a wildly successful campaign for activists, affecting legislation in some areas of research. Scientists have not been as enthusiastic about its effects. My January 15, 2009 posting briefly traces a history of the term.)

The 2018 – 2020 Vancouver Biennale and science

A June 7, 2018 Vancouver Biennale news release provides more detail about the current series of exhibitions,

The Biennale is also committed to presenting artwork at the cutting edge of discussion and in keeping with the STEAM (science, technology, engineering, arts, math[ematics]) approach to integrating the arts and sciences. In August [2018], Colombian/American visual artist Jessica Angel will present her monumental installation Dogethereum Bridge at Hinge Park in Olympic Village. Inspired by blockchain technology, the artwork’s design was created through the integration of scientific algorithms, new developments in technology, and the arts. This installation, which will serve as an immersive space and collaborative hub for artists and technologists, will host a series of activations with blockchain as the inspirational jumping-off point.

In what is expected to become one of North America’s most talked-about exhibitions of the year, Melbourne artist Patricia Piccinini’s Curious Imaginings will see the intersection of art, science, and ethics. For the first time in the Biennale’s fifteen years of creating transformative experiences, and in keeping with the 2018-2020 theme of “re-IMAGE-n,” the Biennale will explore art in unexpected places by exhibiting in unconventional interior spaces.  The hyperrealist “world of oddly captivating, somewhat grotesque, human-animal hybrid creatures” will be the artist’s first exhibit in a non-museum setting, transforming a wing of the 105-year-old Patricia Hotel. Situated in Vancouver’s oldest neighbourhood of Strathcona, Piccinini’s interactive experience will “challenge us to explore the social impacts of emerging bio-technology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.” In this intimate hotel setting located in a neighborhood continually undergoing its own change, Curious Imaginings will empower visitors to personally consider questions posed by the exhibition, including the promises and consequences of genetic research and human interference. …

There are other pieces being presented at the Biennale but my special interest is in the art/sci pieces and, at this point, CRISPR.

Piccinini in more depth

You can find out more about Patricia Piccinini in her biography on the Vancouver Biennale website but I found this Char Larsson April 7, 2018 article for the Independent (UK) more informative (Note: A link has been removed),

Patricia Piccinini’s sculptures are deeply disquieting. Walking through Curious Affection, her new solo exhibition at Brisbane’s Gallery of Modern Art, is akin to entering a science laboratory full of DNA experiments. Made from silicone, fibreglass and even human hair, her sculptures are breathtakingly lifelike, however, we can’t be sure what life they are like. The artist creates an exuberant parallel universe where transgenic experiments flourish and human evolution has given way to genetic engineering and DNA splicing.

Curious Affection is a timely and welcome recognition of Piccinini’s enormous contribution to contemporary art, reaching back to the mid-1990s. Working across a variety of mediums including photography, video and drawing, she is perhaps best known for her hyperreal creations.

As a genre, hyperrealism depends on the skill of the artist to create the illusion of reality. To be truly successful, it must convince the spectator of its realness. Piccinini acknowledges this demand, but with a delightful twist. The excruciating attention to detail deliberately solicits our desire to look, only to generate unease, as her sculptures are imbued with a fascinating otherness. Part human, part animal, the works are uncannily familiar, but also alarmingly “other”.

Inspired by advances in genetically modified pigs to generate replacement organs for humans [also known as xenotransplantation], we are reminded that Piccinini has always been at the forefront of debates concerning the possibilities of science, technology and DNA cloning. She does so, however, with a warm affection and sense of humour, eschewing the hysterical anxiety frequently accompanying these scientific developments.

Beyond the astonishing level of detail achieved by working with silicone and fibreglass, there is an ethics at work here. Piccinini is asking us not to avert our gaze from the other, and in doing so, to develop empathy and understanding through the encounter.

I encourage anyone who’s interested to read Larsson’s entire piece (April 7, 2018 article).

According to her Wikipedia entry, Piccinini works in a variety of media including video, sound, sculpture, and more. She also has her own website.

Gene editing and xenotransplantation

Sarah Zhang’s June 8, 2018 article for The Atlantic provides a peek at the extraordinary degree of interest and competition in the field of gene editing and CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 research (Note: A link has been removed),

China Is Genetically Engineering Monkeys With Brain Disorders

Guoping Feng applied to college the first year that Chinese universities reopened after the Cultural Revolution. It was 1977, and more than a decade’s worth of students—5.7 million—sat for the entrance exams. Feng was the only one in his high school to get in. He was assigned—by chance, essentially—to medical school. Like most of his contemporaries with scientific ambitions, he soon set his sights on graduate studies in the United States. “China was really like 30 to 50 years behind,” he says. “There was no way to do cutting-edge research.” So in 1989, he left for Buffalo, New York, where for the first time he saw snow piled several feet high. He completed his Ph.D. in genetics at the State University of New York at Buffalo.

Feng is short and slim, with a monk-like placidity and a quick smile, and he now holds an endowed chair in neuroscience at MIT, where he focuses on the genetics of brain disorders. His 45-person lab is part of the McGovern Institute for Brain Research, which was established in 2000 with the promise of a $350 million donation, the largest ever received by the university. In short, his lab does not lack for much.

Yet Feng now travels to China several times a year, because there, he can pursue research he has not yet been able to carry out in the United States. [emphasis mine] …

Feng had organized a symposium at SIAT [Shenzhen Institutes of Advanced Technology], and he was not the only scientist who traveled all the way from the United States to attend: He invited several colleagues as symposium speakers, including a fellow MIT neuroscientist interested in tree shrews, a tiny mammal related to primates and native to southern China, and Chinese-born neuroscientists who study addiction at the University of Pittsburgh and SUNY Upstate Medical University. Like Feng, they had left China in the ’80s and ’90s, part of a wave of young scientists in search of better opportunities abroad. Also like Feng, they were back in China to pursue a type of cutting-edge research too expensive and too impractical—and maybe too ethically sensitive—in the United States.

Here’s what precipitated Feng’s work in China (from Zhang’s article; Note: Links have been removed),

At MIT, Feng’s lab worked on genetically engineering a monkey species called marmosets, which are very small and genuinely bizarre-looking. They are cheaper to keep due to their size, but they are a relatively new lab animal, and they can be difficult to train on lab tasks. For this reason, Feng also wanted to study Shank3 on macaques in China. Scientists have been cataloging the social behavior of macaques for decades, making it an obvious model for studies of disorders like autism that have a strong social component. Macaques are also more closely related to humans than marmosets, making their brains a better stand-in for those of humans.

The process of genetically engineering a macaque is not trivial, even with the advanced tools of CRISPR. Researchers begin by dosing female monkeys with the same hormones used in human in vitro fertilization. They then collect and fertilize the eggs, and inject the resulting embryos with CRISPR proteins using a long, thin glass needle. Monkey embryos are far more sensitive than mice embryos, and can be affected by small changes in the pH of the injection or the concentration of CRISPR proteins. Only some of the embryos will have the desired mutation, and only some will survive once implanted in surrogate mothers. It takes dozens of eggs to get to just one live monkey, so making even a few knockout monkeys required the support of a large breeding colony.

The first Shank3 macaque was born in 2015. Four more soon followed, bringing the total to five.

To visit his research animals, Feng now has to fly 8,000 miles across 12 time zones. It would be a lot more convenient to carry out his macaque research in the United States, of course, but so far, he has not been able to.

He originally inquired about making Shank3 macaques at the New England Primate Research Center, one of eight national primate research centers then funded by the National Institutes of Health in partnership with a local institution (Harvard Medical School, in this case). The center was conveniently located in Southborough, Massachusetts, just 20 miles west of the MIT campus. But in 2013, Harvard decided to shutter the center.

The decision came as a shock to the research community, and it was widely interpreted as a sign of waning interest in primate research in the United States. While the national primate centers have been important hubs of research on HIV, Zika, Ebola, and other diseases, they have also come under intense public scrutiny. Animal-rights groups like the Humane Society of the United States have sent investigators to work undercover in the labs, and the media has reported on monkey deaths in grisly detail. Harvard officially made its decision to close for “financial” reasons. But the announcement also came after the high-profile deaths of four monkeys from improper handling between 2010 and 2012. The deaths sparked a backlash; demonstrators showed up at the gates. The university gave itself two years to wind down their primate work, officially closing the center in 2015.

“They screwed themselves,” Michael Halassa, the MIT neuroscientist who spoke at Feng’s symposium, told me in Shenzhen. Wei-Dong Yao, another one of the speakers, chimed in, noting that just two years later CRISPR has created a new wave of interest in primate research. Yao was one of the researchers at Harvard’s primate center before it closed; he now runs a lab at SUNY Upstate Medical University that uses genetically engineered mouse and human stem cells, and he had come to Shenzhen to talk about restarting his addiction research on primates.

Here comes the competition (from Zhang’s article; Note: Links have been removed),

While the U.S. government’s biomedical research budget has been largely flat, both national and local governments in China are eager to raise their international scientific profiles, and they are shoveling money into research. A long-rumored, government-sponsored China Brain Project is supposed to give neuroscience research, and primate models in particular, a big funding boost. Chinese scientists may command larger salaries, too: Thanks to funding from the Shenzhen local government, a new principal investigator returning from overseas can get 3 million yuan—almost half a million U.S. dollars—over his or her first five years. China is even finding success in attracting foreign researchers from top U.S. institutions like Yale.

In the past few years, China has seen a miniature explosion of genetic engineering in monkeys. In Kunming, Shanghai, and Guangzhou, scientists have created monkeys engineered to show signs of Parkinson’s, Duchenne muscular dystrophy, autism, and more. And Feng’s group is not even the only one in China to have created Shank3 monkeys. Another group—a collaboration primarily between researchers at Emory University and scientists in China—has done the same.

Chinese scientists’ enthusiasm for CRISPR also extends to studies of humans, which are moving much more quickly, and in some cases under less oversight, than in the West. The first studies to edit human embryos and first clinical trials for cancer therapies using CRISPR have all happened in China. [emphases mine]

Some ethical issues are also covered (from Zhang’s article),

Parents with severely epileptic children had asked him if it would be possible to study the condition in a monkey. Feng told them what he thought would be technically possible. “But I also said, ‘I’m not sure I want to generate a model like this,’” he recalled. Maybe if there were a drug to control the monkeys’ seizures, he said: “I cannot see them seizure all the time.”

But is it ethical, he continued, to let these babies die without doing anything? Is it ethical to generate thousands or millions of mutant mice for studies of brain disorders, even when you know they will not elucidate much about human conditions?

Primates should only be used if other models do not work, says Feng, and only if a clear path forward is identified. The first step in his work, he says, is to use the Shank3 monkeys to identify the changes the mutations cause in the brain. Then, researchers might use that information to find targets for drugs, which could be tested in the same monkeys. He’s talking with the Oregon National Primate Research Center about carrying out similar work in the United States. ….[Note: I have a three-part series about CRISPR and germline editing* in the US, precipitated by research coming out of Oregon, Part 1, which links to the other parts, is here.]

Zhang’s June 8, 2018 article is excellent and I highly recommend reading it.

I touched on the topic of xenotransplantation in a commentary on a book about the science of the television series, Orphan Black, in a January 31, 2018 posting (Note: A chimera is what you use to incubate a ‘human’ organ for transplantation or, more accurately, xenotransplantation),

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website along with a video,

The end

I am very excited to see Piccinini’s work come to Vancouver. There have been a number of wonderful art and art/science installations and discussions here but this is the first one (I believe) to tackle the emerging gene editing technologies and the issues they raise. (It also fits in rather nicely with the 200th anniversary of the publication of Mary Shelley’s Frankenstein which continues to raise issues and stimulate discussion.)

In addition to the ethical issues raised in Zhang’s article, there are some other philosophical questions:

  • what does it mean to be human?
  • if we are going to edit genes to create human/animal hybrids, what are they and how do they fit into our current animal/human schema?
  • are you still human if you’ve had an organ transplant where the organ was incubated in a pig?

There are also going to be legal issues. In addition to any questions about legal status, there are also fights about intellectual property, such as the one pitting Harvard and MIT’s [Massachusetts Institute of Technology] Broad Institute against the University of California at Berkeley (March 15, 2017 posting).

While I’m thrilled about the Piccinini installation, it should be noted that the issues raised by other artworks hosted in this version of the Biennale are also important. Happily, they have been broached here in Vancouver before and I suspect this will result in more nuanced ‘conversations’ than are possible when a ‘new’ issue is introduced.

Bravo 2018 – 2020 Vancouver Biennale!

* Germline editing is when your gene editing will affect subsequent generations as opposed to editing out a mutated gene for the lifetime of a single individual.

Art/sci and CRISPR links

This art/science posting may prove of some interest:

The connectedness of living things: an art/sci project in Saskatchewan: evolutionary biology (February 16, 2018)

A selection of my CRISPR posts:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning (August 15, 2017)

NOTE: An introductory CRISPR video describing how CRISPR/Cas9 works was embedded in part 1.

Why don’t you CRISPR yourself? (January 25, 2018)

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles (January 26, 2018)

Immune to CRISPR? (April 10, 2018)