Tag Archives: France

The latest math stars: honeybees!

Understanding the concept of zero—I still remember climbing that mountain, so to speak. It took the teacher quite a while to convince me that representing ‘nothing’ as a zero was worthwhile. In fact, it took the combined efforts of both my parents and the teacher to convince me to use zeroes as I was prepared to go without. The battle is long since over and I have learned to embrace zero.

I don’t think bees have to be convinced, but they too may have a concept of zero. More about that later; here’s the latest about bees and math from an October 10, 2019 news item on phys.org,

Start thinking about numbers and they can become large very quickly. The diameter of the universe is about 8.8×10^23 km and the largest known number—googolplex, 10^(10^100)—outranks it enormously. Although that colossal concept was dreamt up by brilliant mathematicians, we’re still pretty limited when it comes to assessing quantities at a glance. ‘Humans have a threshold limit for instantly processing one to four elements accurately’, says Adrian Dyer from RMIT University, Australia; and it seems that we are not alone. Scarlett Howard from RMIT and the Université de Toulouse, France, explains that guppies, angelfish and even honeybees are capable of distinguishing between quantities of three and four, although the trusty insects come unstuck at finer differences; they fail to differentiate between four and five, which made her wonder. According to Howard, honeybees are quite accomplished mathematicians. ‘Recently, honeybees were shown to learn the rules of “less than” and “greater than” and apply these rules to evaluate numbers from zero to six’, she says. Maybe numeracy wasn’t the bees’ problem; was it how the question was posed? The duo publish their discovery that bees can discriminate between four and five, provided the training procedure is right, in the Journal of Experimental Biology.

An October 10, 2019 The Company of Biologists’ press release on EurekAlert, which originated the news item, refines the information with more detail,

Dyer explains that when animals are trained to distinguish between colours and objects, some training procedures simply reward the animals when they make the correct decision. In the case of the honeybees that could distinguish three from four, they received a sip of super-sweet sugar water when they made the correct selection but just a taste of plain water when they got it wrong. However, Dyer, Howard and colleagues Aurore Avarguès-Weber, Jair Garcia and Andrew Greentree knew there was an alternative strategy. This time, the bees would be given a bitter-tasting sip of quinine-flavoured water when they got the answer wrong. Would the unpleasant flavour help the honeybees to focus better and improve their maths?

‘[The] honeybees were very cooperative, especially when I was providing sugar rewards’, says Howard, who moved to France each April to take advantage of the northern summer during the Australian winter, when bees are dormant. Training the bees to enter a Y-shaped maze, Howard presented the insects with a choice: a card featuring four shapes in one arm and a card featuring a different number of shapes (ranging from one to 10) in the other. During the first series of training sessions, Howard rewarded the bees with a sugary sip when they alighted correctly before the card with four shapes, in contrast to a sip of water when they selected the wrong card. However, when Howard trained a second set of bees she reproved them with a bitter-tasting sip of quinine when they chose incorrectly, rewarding the insects with sugar when they selected the card with four shapes. Once the bees had learned to pick out the card with four shapes, Howard tested whether they could distinguish the card with four shapes when offered a choice between it and cards with eight, seven, six or – the most challenging comparison – five shapes.

Not surprisingly, the bees that had only been rewarded during training struggled; they couldn’t even differentiate between four and eight shapes. However, when Howard tested the honeybees that had been trained more rigorously – receiving a quinine reprimand – their performance was considerably better, consistently picking the card with four shapes when offered a choice between it and cards with seven or eight shapes. Even more impressively, the bees succeeded when offered the more subtle choice between four and five shapes.

So, it seems that honeybees are better mathematicians than had been credited. Unlocking their ability was simply a matter of asking the question in the right way and Howard is now keen to find out just how far counting bees can go.

I’ll get to the link to and citation for the paper in a minute but first, I found more about bees and math (including zero) in this February 7, 2019 article by Jason Daley for The Smithsonian (Note: Links have been removed),

Bees are impressive creatures, powering entire ecosystems via pollination and making sweet honey at the same time, one of the most incredible substances in nature. But it turns out the little striped insects are also quite clever. A new study suggests that, despite having tiny brains, bees understand the mathematical concepts of addition and subtraction.

To test the numeracy of the arthropods, researchers set up unique Y-shaped math mazes for the bees to navigate, according to Nicola Davis at The Guardian. Because the insects can’t read, and schooling them to recognize abstract symbols like plus and minus signs would be incredibly difficult, the researchers used color to indicate addition or subtraction. …

Fourteen bees spent between four and seven hours completing 100 trips through the mazes during training exercises with the shapes and numbers chosen at random. All of the bees appeared to learn the concept. Then, the bees were tested 10 times each using two addition and two subtraction scenarios that had not been part of the training runs. The little buzzers got the correct answer between 64 and 72 percent of the time, better than would be expected by chance.
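
The claim that 64–72 percent correct is ‘better than would be expected by chance’ can be sanity-checked with a one-sided binomial test. The 14-bee and 10-test figures come from the article, but treating them as 140 independent coin-flip trials against a 50% chance baseline is a simplifying assumption of mine, not the researchers’ actual statistics:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """One-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 14 bees x 10 test trials each, scored against 50% chance (two-choice maze).
n_trials = 14 * 10
correct = round(0.64 * n_trials)  # the lower bound reported in the article

p_value = p_at_least(correct, n_trials)
print(f"{correct}/{n_trials} correct, p = {p_value:.1e}")
```

Even at the lower 64% figure, the tail probability comes out well below one percent, which is why success rates in the mid-60s can still be meaningful evidence.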

Last year, the same team of researchers published a paper suggesting that bees could understand the concept of zero, which puts them in an elite club of mathematically-minded animals that, at a minimum, have the ability to perceive higher and lower numbers in different groups. Animals with this ability include frogs, lions, spiders, crows, chicken chicks, some fish and other species. And these are not the only higher-level skills that bees appear to possess. A 2010 study that Dyer [Adrian Dyer of RMIT University in Australia] also participated in suggests that bees can remember human faces using the same mechanisms as people. Bees also use a complex type of movement called the waggle dance to communicate geographical information to one another, another sophisticated ability packed into a brain the size of a sesame seed.

If researchers could figure out how bees perform so many complicated tasks with such a limited number of neurons, the research could have implications for both biology and technology, such as machine learning. …

Then again, maybe the honey makers are getting more credit than they deserve. Clint Perry, who studies invertebrate intelligence at the Bee Sensory and Behavioral Ecology Lab at Queen Mary University of London, tells George Dvorsky at Gizmodo that he’s not convinced by the research, and he had similar qualms about the study that suggested bees can understand the concept of zero. He says the bees may not be adding and subtracting, but rather are simply looking for an image that most closely matches the initial one they see, associating it with the sugar reward. …

If you have the time and the interest, definitely check out Daley’s article.

Here’s a link to and a citation for the latest paper about honeybees and math,

Surpassing the subitizing threshold: appetitive–aversive conditioning improves discrimination of numerosities in honeybees by Scarlett R. Howard, Aurore Avarguès-Weber, Jair E. Garcia, Andrew D. Greentree, Adrian G. Dyer. Journal of Experimental Biology 2019 222: jeb205658 doi: 10.1242/jeb.205658 Published 10 October 2019

This paper is behind a paywall.

Of puke, CRISPR, fruit flies, and monarch butterflies

I’ve never seen an educational institution use a somewhat vulgar slang term such as ‘puke’ before. Especially not in a news release. You’ll find that elsewhere online ‘puke’ has been replaced, in the headline, with the more socially acceptable ‘vomit’.

Since I wanted to catch this historic moment amid concerns that the original version of the news release will disappear, I’m including the entire news release as I saw it on EurekAlert.com (from an October 2, 2019 University of California at Berkeley news release),

News Release 2-Oct-2019

CRISPRed fruit flies mimic monarch butterfly — and could make you puke
Scientists recreate in flies the mutations that let monarch butterfly eat toxic milkweed with impunity

University of California – Berkeley

The fruit flies in Noah Whiteman’s lab may be hazardous to your health.

Whiteman and his University of California, Berkeley, colleagues have turned perfectly palatable fruit flies — palatable, at least, to frogs and birds — into potentially poisonous prey that may cause anything that eats them to puke. In large enough quantities, the flies likely would make a human puke, too, much like the emetic effect of ipecac syrup.

That’s because the team genetically engineered the flies, using CRISPR-Cas9 gene editing, to be able to eat milkweed without dying and to sequester its toxins, just as America’s most beloved butterfly, the monarch, does to deter predators.

This is the first time anyone has recreated in a multicellular organism a set of evolutionary mutations leading to a totally new adaptation to the environment — in this case, a new diet and new way of deterring predators.

Like monarch caterpillars, the CRISPRed fruit fly maggots thrive on milkweed, which contains toxins that kill most other animals, humans included. The maggots store the toxins in their bodies and retain them through metamorphosis, after they turn into adult flies, which means the adult “monarch flies” could also make animals upchuck.

The team achieved this feat by making three CRISPR edits in a single gene: modifications identical to the genetic mutations that allow monarch butterflies to dine on milkweed and sequester its poison. These mutations in the monarch have allowed it to eat common poisonous plants other insects could not and are key to the butterfly’s thriving presence throughout North and Central America.

Flies with the triple genetic mutation proved to be 1,000 times less sensitive to milkweed toxin than the wild fruit fly, Drosophila melanogaster.
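
A figure like ‘1,000 times less sensitive’ is usually read off dose-response curves as a ratio of LC50 values (the dose at which half the animals die). Here is a minimal sketch of that idea using a Hill-type curve; the LC50 values, dose and Hill slope below are hypothetical placeholders for illustration, not numbers from the paper:

```python
def survival(dose: float, lc50: float, hill: float = 2.0) -> float:
    """Fraction surviving a given toxin dose on a Hill-type dose-response curve."""
    return 1.0 / (1.0 + (dose / lc50) ** hill)

# Hypothetical LC50s: the triple mutant tolerates a 1,000-fold higher dose.
lc50_wild = 1.0       # arbitrary units
lc50_mutant = 1000.0

fold_resistance = lc50_mutant / lc50_wild
dose = 10.0  # a dose in the range that kills most wild-type flies
print(fold_resistance, survival(dose, lc50_wild), survival(dose, lc50_mutant))
```

At that dose the hypothetical wild-type flies almost all die while the mutants are essentially untouched, which is what a 1,000-fold shift in sensitivity looks like in practice.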

Whiteman and his colleagues will describe their experiment in the Oct. 2 [2019] issue of the journal Nature.

Monarch flies

The UC Berkeley researchers created these monarch flies to establish, beyond a shadow of a doubt, which genetic changes in the genome of monarch butterflies were necessary to allow them to eat milkweed with impunity. They found, surprisingly, that only three single-nucleotide substitutions in one gene are sufficient to give fruit flies the same toxin resistance as monarchs.

“All we did was change three sites, and we made these superflies,” said Whiteman, an associate professor of integrative biology. “But to me, the most amazing thing is that we were able to test evolutionary hypotheses in a way that has never been possible outside of cell lines. It would have been difficult to discover this without having the ability to create mutations with CRISPR.”

Whiteman’s team also showed that 20 other insect groups able to eat milkweed and related toxic plants – including moths, beetles, wasps, flies, aphids, a weevil and a true bug, most of which sport the color orange to warn away predators – independently evolved mutations in one, two or three of the same amino acid positions to overcome, to varying degrees, the toxic effects of these plant poisons.

In fact, his team reconstructed the one, two or three mutations that led to each of the four butterfly and moth lineages, each mutation conferring some resistance to the toxin. All three mutations were necessary to make the monarch butterfly the king of milkweed.

Resistance to milkweed toxin comes at a cost, however. Monarch flies are not as quick to recover from upsets, such as being shaken — a test known as “bang” sensitivity.

“This shows there is a cost to mutations, in terms of recovery of the nervous system and probably other things we don’t know about,” Whiteman said. “But the benefit of being able to escape a predator is so high … if it’s death or toxins, toxins will win, even if there is a cost.”

Plant vs. insect

Whiteman is interested in the evolutionary battle between plants and parasites and was intrigued by the evolutionary adaptations that allowed the monarch to beat the milkweed’s toxic defense. He also wanted to know whether other insects that are resistant — though all less resistant than the monarch — use similar tricks to disable the toxin.

“Since plants and animals first invaded land 400 million years ago, this coevolutionary arms race is thought to have given rise to a lot of the plant and animal diversity that we see, because most animals are insects, and most insects are herbivorous: they eat plants,” he said.

Milkweeds and a variety of other plants, including foxglove, the source of digitoxin and digoxin, contain related toxins — called cardiac glycosides — that can kill an elephant and any creature with a beating heart. Foxglove’s effect on the heart is the reason that an extract of the plant, in the genus Digitalis, has been used for centuries to treat heart conditions, and why digoxin and digitoxin are used today to treat congestive heart failure.

These plants’ bitterness alone is enough to deter most animals, but a small minority of insects, including the monarch (Danaus plexippus) and its relative, the queen butterfly (Danaus gilippus), have learned to love milkweed and use it to repel predators.

Whiteman noted that the monarch is a tropical lineage that invaded North America after the last ice age, in part enabled by the three mutations that allowed it to eat a poisonous plant other animals could not, giving it a survival edge and a natural defense against predators.

“The monarch resists the toxin the best of all the insects, and it has the biggest population size of any of them; it’s all over the world,” he said.

The new paper reveals that the mutations had to occur in the right sequence, or else the flies would never have survived the three separate mutational events.

Thwarting the sodium pump

The poisons in these plants, most of them a type of cardenolide, interfere with the sodium/potassium pump (Na+/K+-ATPase) that most of the body’s cells use to move sodium ions out and potassium ions in. The pump creates an ion imbalance that the cell uses to its favor. Nerve cells, for example, transmit signals along their elongated cell bodies, or axons, by opening sodium and potassium gates in a wave that moves down the axon, allowing ions to flow in and out to equilibrate the imbalance. After the wave passes, the sodium pump re-establishes the ionic imbalance.

Digitoxin, from foxglove, and ouabain, the main toxin in milkweed, block the pump and prevent the cell from establishing the sodium/potassium gradient. This throws the ion concentration in the cell out of whack, causing all sorts of problems. In animals with hearts, like birds and humans, heart cells begin to beat so strongly that the heart fails; the result is death by cardiac arrest.
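
The ‘ion imbalance’ the release describes can be made concrete with the Nernst equation, which gives the equilibrium potential each gradient would sustain on its own. The ion concentrations below are typical textbook mammalian values, used here purely as illustrative assumptions:

```python
from math import log

def nernst_mv(c_out: float, c_in: float, z: int = 1, temp_k: float = 310.0) -> float:
    """Nernst equilibrium potential in millivolts: E = (RT/zF) * ln([out]/[in])."""
    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)
    return 1000.0 * (R * temp_k) / (z * F) * log(c_out / c_in)

# Typical mammalian concentrations in mM (illustrative textbook values).
e_na = nernst_mv(c_out=145.0, c_in=12.0)  # positive: Na+ tends to rush in
e_k = nernst_mv(c_out=5.0, c_in=140.0)    # negative: K+ tends to leak out
print(f"E_Na ~ {e_na:.0f} mV, E_K ~ {e_k:.0f} mV")
```

Blocking the pump lets these two opposing gradients run down, which is the ‘out of whack’ ion concentration that ultimately drives the cardiac effects described above.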

Scientists have known for decades how these toxins interact with the sodium pump: they bind the part of the pump protein that sticks out through the cell membrane, clogging the channel. They’ve even identified two specific amino acid changes or mutations in the protein pump that monarchs and the other insects evolved to prevent the toxin from binding.

But Whiteman and his colleagues weren’t satisfied with this just-so explanation: that insects coincidentally developed the same two identical mutations in the sodium pump 14 separate times, end of story. With the advent of CRISPR-Cas9 gene editing in 2012, coinvented by UC Berkeley’s Jennifer Doudna, Whiteman and colleagues Anurag Agrawal of Cornell University and Susanne Dobler of the University of Hamburg in Germany applied to the Templeton Foundation for a grant to recreate these mutations in fruit flies and to see if they could make the flies immune to the toxic effects of cardenolides.

Seven years, many failed attempts and one new grant from the National Institutes of Health later, along with the dedicated CRISPR work of GenetiVision of Houston, Texas, they finally achieved their goal. In the process, they discovered a third critical, compensatory mutation in the sodium pump that had to occur before the last and most potent resistance mutation would stick. Without this compensatory mutation, the maggots died.

Their detective work required inserting single, double and triple mutations into the fruit fly’s own sodium pump gene, in various orders, to assess which ones were necessary. Insects having only one of the two known amino acid changes in the sodium pump gene were best at resisting the plant poisons, but they also had serious side effects — nervous system problems — consistent with the fact that sodium pump mutations in humans are often associated with seizures. However, the third, compensatory mutation somehow reduces the negative effects of the other two mutations.

“One substitution that evolved confers weak resistance, but it is always present and allows for substitutions that are going to confer the most resistance,” said postdoctoral fellow Marianna Karageorgi, a geneticist and evolutionary biologist. “This substitution in the insect unlocks the resistance substitutions, reducing the neurological costs of resistance. Because this trait has evolved so many times, we have also shown that this is not random.”

The fact that one compensatory mutation is required before insects with the most resistant mutation could survive placed a constraint on how insects could evolve toxin resistance, explaining why all 21 lineages converged on the same solution, Whiteman said. In other situations, such as where the protein involved is not so critical to survival, animals might find different solutions.
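
The ordering constraint — that the mutations ‘had to occur in the right sequence’ — can be sketched as a path-accessibility check over the possible genotypes. The viability rule below (the potent mutation ‘C’ is only tolerated once compensatory mutation ‘A’ is present) is a hypothetical simplification for illustration, not the paper’s measured fitness data:

```python
from itertools import permutations

# A genotype is the set of mutations it carries. Hypothetical rule mirroring
# the press release: the most potent resistance mutation ("C") is lethal
# unless the compensatory mutation ("A") is already in place.
def viable(genotype: frozenset) -> bool:
    return "C" not in genotype or "A" in genotype

def accessible_orders(mutations=("A", "B", "C")):
    """Return mutation orders in which every intermediate genotype is viable."""
    ok = []
    for order in permutations(mutations):
        intermediates = [frozenset(order[: i + 1]) for i in range(len(order))]
        if all(viable(g) for g in intermediates):
            ok.append(order)
    return ok

orders = accessible_orders()
print(len(orders), "of 6 possible orders are accessible:", orders)
```

Under this toy rule only the orders in which A precedes C survive, a miniature version of the constraint that funnels all 21 lineages toward the same solution.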

“This helps answer the question, ‘Why does convergence evolve sometimes, but not other times?'” Whiteman said. “Maybe the constraints vary. That’s a simple answer, but if you think about it, these three mutations turned a Drosophila protein into a monarch one, with respect to cardenolide resistance. That’s kind of remarkable.”

###

The research was funded by the Templeton Foundation and the National Institutes of Health. Co-authors with Whiteman and Agrawal are co-first authors Marianthi Karageorgi of UC Berkeley and Simon Groen, now at New York University; Fidan Sumbul and Felix Rico of Aix-Marseille Université in France; Julianne Pelaez, Kirsten Verster, Jessica Aguilar, Susan Bernstein, Teruyuki Matsunaga and Michael Astourian of UC Berkeley; Amy Hastings of Cornell; and Susanne Dobler of Universität Hamburg in Germany.

Robert Sanders’ Oct. 2, 2019 news release for the University of California at Berkeley (it’s also been republished as an Oct. 2, 2019 news item on ScienceDaily) has had its headline changed to ‘vomit’ but you’ll find the more vulgar word remains in two locations in the second paragraph of the revised news release.

If you have time, go to the news release on the University of California at Berkeley website just to admire the images that have been embedded in the news release. Here’s one,

Caption: A Drosophila melanogaster “monarch fly” with mutations introduced by CRISPR-Cas9 genome editing (V111, S119 and H122) to the sodium potassium pump, on a wing of a monarch butterfly (Danaus plexippus). Credit & Copyright: Julianne Pelaez

Here’s a link to and a citation for the paper,

Genome editing retraces the evolution of toxin resistance in the monarch butterfly by Marianthi Karageorgi, Simon C. Groen, Fidan Sumbul, Julianne N. Pelaez, Kirsten I. Verster, Jessica M. Aguilar, Amy P. Hastings, Susan L. Bernstein, Teruyuki Matsunaga, Michael Astourian, Geno Guerra, Felix Rico, Susanne Dobler, Anurag A. Agrawal & Noah K. Whiteman. Nature (2019) DOI: https://doi.org/10.1038/s41586-019-1610-8 Published 02 October 2019

This paper is behind a paywall.

Words about a word

I’m glad they changed the headline and substituted vomit for puke. I think we need vulgar and/or taboo words to release anger or disgust or other difficult emotions. Incorporating those words into standard language deprives them of that power.

The last word: GenetiVision

The company mentioned in the news release, GenetiVision, is the place to go for transgenic flies. Here’s a sampling from their Testimonials webpage,

“GenetiVision’s service has been excellent in the quality and price. The timeliness of its international service has been a big plus. We are very happy with its consistent service and the flies it generates.”
Kwang-Wook Choi, Ph.D.
Department of Biological Sciences
Korea Advanced Institute of Science and Technology


“We couldn’t be happier with GenetiVision. Great prices on both standard P and PhiC31 transgenics, quick turnaround time, and we’re still batting 1000 with transformant success. We used to do our own injections but your service makes it both faster and more cost-effective. Thanks for your service!”
Thomas Neufeld, Ph.D.
Department of Genetics, Cell Biology and Development
University of Minnesota

You can find out more at the GenetiVision website.

Structure of tunneling nanotubes (TNTs) challenges the dogma of the cell

There is a video that accompanies the news but I strongly advise reading the press release first, unless you already know a lot about cells and tunneling nanotubes.

A January 30, 2019 Institut Pasteur press release (also on EurekAlert but published Jan.31, 2019) announces the work,

Cells in our bodies have the ability to speak with one another, much like humans do. This communication allows the organs in our bodies to work synchronously, which in turn enables us to perform the remarkable range of tasks we meet on a daily basis. One of these means of communication is ‘tunneling nanotubes’, or TNTs. In an article published in Nature Communications, researchers from the Institut Pasteur led by Chiara Zurzolo discovered, thanks to advanced imaging techniques, that the structure of these nanotubes challenges the very concept of the cell.

As their name implies, TNTs are tiny tunnels that link two (or more) cells and allow the transport of a wide variety of cargoes between them, including ions, viruses, and entire organelles. Previous research by the same team (Membrane Traffic and Pathogenesis Unit) at the Institut Pasteur has shown that TNTs are involved in the intercellular spreading of pathogenic amyloid proteins involved in Alzheimer’s and Parkinson’s diseases. This led the researchers to propose that TNTs serve as a major avenue for the spreading of neurodegenerative diseases in the brain and therefore represent a novel therapeutic target to stop the progression of these incurable diseases. TNTs also appear to play a major role in cancer resistance to therapy. But as scientists still know very little about TNTs and how they relate to or differ from other cellular protrusions such as filopodia, they decided to pursue their research into these tiny tubular connections in depth.

The dogma of the cell unit questioned

A better understanding of these tiny tubular connections is therefore required as TNTs might have tremendous implications in human health and disease. Addressing this issue has been very difficult due to the fragile and transitory nature of these structures, which do not survive classical microscopic techniques. In order to overcome these obstacles, researchers combined various state-of-the-art electron microscopy approaches, and imaged TNTs at below-freezing temperatures.

Using this imaging strategy, researchers were able to decipher the structure of TNTs in high detail. Specifically, they show that most TNTs – previously shown to be single connections – are in fact made up of multiple, smaller, individual tunneling nanotubes (iTNTs). Their images also show the existence of thin wires that connect iTNTs, which could serve to increase their mechanical stability. They demonstrate the functionality of iTNTs by showing the transport of organelles using time-lapse imaging. Finally, researchers employed a type of microscopy known as ‘FIB-SEM’ to produce 3D images with sufficient resolution to clearly identify that TNTs are ‘open’ at both ends, and thus create continuity between two cells. “This discovery challenges the dogma of cells as individual units, showing that cells can open up to neighbors and exchange materials without a membrane barrier” explains Chiara Zurzolo, head of the Membrane Traffic and Pathogenesis Unit at the Institut Pasteur.

A new step in decoding cell-to-cell communication

By applying an imaging work-flow that improves upon, and avoids, previous limitations of tools used to study the anatomy of TNTs, researchers provide the first structural description of TNTs. Importantly, they provide the absolute demonstration that these are novel cellular organelles with a defined structure, very different from known cell protrusions. “The description of the structure allows the understanding of the mechanisms involved in their formation and provides a better comprehension of their function in transferring material directly between (the cytosol of) two connected cells” says Chiara Zurzolo. Furthermore, their strategy, which preserves these delicate structures, will be useful for studying the role TNTs play in other physiological and pathological conditions.

This work is an essential step toward understanding cell-to-cell communication via TNTs and lays the groundwork for investigations into their physiological functions and their role in spreading of particles linked to diseases such as viruses, bacteria, and misfolded proteins.

The researchers have kindly produced a version of the video in English,

Here’s a link to and a citation for the paper,

Correlative cryo-electron microscopy reveals the structure of TNTs in neuronal cells by Anna Sartori-Rupp, Diégo Cordero Cervantes, Anna Pepe, Karine Gousset, Elise Delage, Simon Corroyer-Dulmont, Christine Schmitt, Jacomina Krijnse-Locker & Chiara Zurzolo. Nature Communications volume 10, Article number: 342 (2019) DOI https://doi.org/10.1038/s41467-018-08178-7 Published 21 January 2019

This paper is open access.

Human lung enzyme can degrade graphene

Caption: A human lung enzyme can biodegrade graphene. Credit: Fotolia Courtesy: Graphene Flagship

The big European Commission research programme, the Graphene Flagship, has announced some new work with widespread implications if graphene is to be used in biomedical implants. From an August 23, 2018 news item on ScienceDaily,

Myeloperoxidase — an enzyme naturally found in our lungs — can biodegrade pristine graphene, according to the latest discovery of Graphene Flagship partners at CNRS, University of Strasbourg (France), Karolinska Institute (Sweden) and University of Castilla-La Mancha (Spain). Among other projects, the Graphene Flagship is designing graphene-based flexible biomedical electronic devices that will be interfaced with the human body. Such applications require graphene to be biodegradable, so it can be expelled from the body.

An August 23, 2018 Graphene Flagship press release (mildly edited version on EurekAlert), which originated the news item, provides more detail,

To test how graphene behaves within the body, researchers analysed how it was broken down with the addition of a common human enzyme – myeloperoxidase or MPO. If a foreign body or bacteria is detected, neutrophils surround it and secrete MPO, thereby destroying the threat. Previous work by Graphene Flagship partners found that MPO could successfully biodegrade graphene oxide.

However, the structure of non-functionalized graphene was thought to be more resistant to degradation. To test this, the team looked at the effects of MPO ex vivo on two graphene forms: single- and few-layer.

Alberto Bianco, researcher at Graphene Flagship Partner CNRS, explains: “We used two forms of graphene, single- and few-layer, prepared by two different methods in water. They were then taken and put in contact with myeloperoxidase in the presence of hydrogen peroxide. This peroxidase was able to degrade and oxidise them. This was really unexpected, because we thought that non-functionalized graphene was more resistant than graphene oxide.”

Rajendra Kurapati, first author on the study and researcher at Graphene Flagship Partner CNRS, remarks how “the results emphasize that highly dispersible graphene could be degraded in the body by the action of neutrophils. This would open the new avenue for developing graphene-based materials.”

With successful ex-vivo testing, in-vivo testing is the next stage. Bengt Fadeel, professor at Graphene Flagship Partner Karolinska Institute believes that “understanding whether graphene is biodegradable or not is important for biomedical and other applications of this material. The fact that cells of the immune system are capable of handling graphene is very promising.”

Prof. Maurizio Prato, the Graphene Flagship leader for its Health and Environment Work Package said that “the enzymatic degradation of graphene is a very important topic, because in principle, graphene dispersed in the atmosphere could produce some harm. Instead, if there are microorganisms able to degrade graphene and related materials, the persistence of these materials in our environment will be strongly decreased. These types of studies are needed.” “What is also needed is to investigate the nature of degradation products,” adds Prato. “Once graphene is digested by enzymes, it could produce harmful derivatives. We need to know the structure of these derivatives and study their impact on health and environment,” he concludes.

Prof. Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and chair of its management panel added: “The report of a successful avenue for graphene biodegradation is a very important step forward to ensure the safe use of this material in applications. The Graphene Flagship has put the investigation of the health and environment effects of graphene at the centre of its programme since the start. These results strengthen our innovation and technology roadmap.”

Here’s a link to and a citation for the paper,

Degradation of Single‐Layer and Few‐Layer Graphene by Neutrophil Myeloperoxidase by Dr. Rajendra Kurapati, Dr. Sourav P. Mukherjee, Dr. Cristina Martín, Dr. George Bepete, Prof. Ester Vázquez, Dr. Alain Pénicaud, Prof. Dr. Bengt Fadeel, Dr. Alberto Bianco. Angewandte Chemie https://doi.org/10.1002/anie.201806906 First published: 13 July 2018

This paper is behind a paywall.

Carbon nanotube optics and the quantum

A US-France-Germany collaboration has led to some intriguing work with carbon nanotubes. From a June 18, 2018 news item on ScienceDaily,

Researchers at Los Alamos and partners in France and Germany are exploring the enhanced potential of carbon nanotubes as single-photon emitters for quantum information processing. Their analysis of progress in the field is published in this week’s edition of the journal Nature Materials.

“We are particularly interested in advances in nanotube integration into photonic cavities for manipulating and optimizing light-emission properties,” said Stephen Doorn, one of the authors, and a scientist with the Los Alamos National Laboratory site of the Center for Integrated Nanotechnologies (CINT). “In addition, nanotubes integrated into electroluminescent devices can provide greater control over timing of light emission and they can be feasibly integrated into photonic structures. We are highlighting the development and photophysical probing of carbon nanotube defect states as routes to room-temperature single photon emitters at telecom wavelengths.”

A June 18, 2018 Los Alamos National Laboratory (LANL) news release (also on EurekAlert), which originated the news item, expands on the theme,

The team’s overview was produced in collaboration with colleagues in Paris (Christophe Voisin [Ecole Normale Supérieure de Paris (ENS)]) who are advancing the integration of nanotubes into photonic cavities for modifying their emission rates, and at Karlsruhe (Ralph Krupke [Karlsruhe Institute of Technology (KIT)]) where they are integrating nanotube-based electroluminescent devices with photonic waveguide structures. The Los Alamos focus is the analysis of nanotube defects for pushing quantum emission to room temperature and telecom wavelengths, he said.

As the paper notes, “With the advent of high-speed information networks, light has become the main worldwide information carrier. . . . Single-photon sources are a key building block for a variety of technologies, in secure quantum communications metrology or quantum computing schemes.”

The use of single-walled carbon nanotubes in this area has been a focus for the Los Alamos CINT team, where they developed the ability to chemically modify the nanotube structure to create deliberate defects, localizing excitons and controlling their release. Next steps, Doorn notes, involve integration of the nanotubes into photonic resonators, to provide increased source brightness and to generate indistinguishable photons. “We need to create single photons that are indistinguishable from one another, and that relies on our ability to functionalize tubes that are well-suited for device integration and to minimize environmental interactions with the defect sites,” he said.

“In addition to defining the state of the art, we wanted to highlight where the challenges are for future progress and lay out some of what may be the most promising future directions for moving forward in this area. Ultimately, we hope to draw more researchers into this field,” Doorn said.

Here’s a link to and a citation for the paper,

Carbon nanotubes as emerging quantum-light sources by X. He, H. Htoon, S. K. Doorn, W. H. P. Pernice, F. Pyatkov, R. Krupke, A. Jeantet, Y. Chassagneux & C. Voisin. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0109-2 Published online June 18, 2018

This paper is behind a paywall.

Revising history with science and art

Caption: The 2000-year-old pipe sculpture’s bulging neck is evidence of thyroid disease as a result of iodine deficient water and soil in the ancient Ohio Valley. Credit: Kenneth Tankersley

An October 4, 2018 news item on ScienceDaily describes the analytic breakthrough,

Art often imitates life, but when University of Cincinnati anthropologist and geologist Kenneth Tankersley investigated a 2000-year-old carved statue on a tobacco pipe, he exposed a truth he says will rewrite art history.

Since its discovery in 1901, at the Adena Burial Mound in Ross County, Ohio, archaeologists have theorized that the 8-inch pipe statue—carved into the likeness of an Ohio Valley Native American—represented an achondroplastic dwarf (AD). People with achondroplasia typically have short arms and legs, an enlarged head, and an average-sized trunk, the same condition as Emmy Award-winning actor Peter Dinklage from HBO’s “Game of Thrones.”

“During the early turn of the century, this theory was consistent with actual human remains of a Native American excavated in Kentucky, also interpreted by archaeologists as being an achondroplastic dwarf,” says Tankersley.

This theory flourished in the scientific literature until the turn of the 21st century when Tankersley looked closer.

“Here we have a carved statue and human remains, both of achondroplasia from the same time period,” says Tankersley. “But what caught my eye on this pipe statue was an obvious tumor on the neck that looked remarkably like a goiter [or goitre] or thyroid tumor.”

An October 2, 2018 University of Cincinnati (UC) news release (also on EurekAlert but published Oct. 3, 2018), reveals more details,

Tankersley collaborated with Frederic Bauduer, a visiting biological anthropologist and paleopathologist from the University of Bordeaux, UC’s sister university in France, to ultimately dispel previous academic literature claiming the sculpture as portraying achondroplasia.

“In archaeological science, flesh does not survive, so many ancient maladies go unnoticed and are almost always impossible to get at from an archaeological standpoint,” says Tankersley. “So what struck me was how remarkably Bauduer was using ancient art from various periods of antiquity to argue for the paleopathology he presented.”

Using radiocarbon dating on textile and bark samples surrounding the pipe at the site, Tankersley dated the Adena pipe to approximately 2000 years ago, coinciding with the earliest evidence of tobacco.

Traditionally, tobacco is considered a sacred plant by Native Americans in this region, and smoking it played an important role in their ceremonies, but Tankersley notes that tobacco smoking has long been associated with an increased prevalence of goiter in low iodine intake zones worldwide.

From a medical perspective, Bauduer found the physical characteristics, such as the short forehead and long bones of the upper and lower limbs, simply not adding up as an achondroplastic dwarf.

“We found the tumor in the neck, as well as the figure’s squatted stance — not foreshortened legs as was formerly documented in the literature — were both signs and symptoms of thyroid disease,” says Tankersley.

“We already know that iodine deficiencies can lead to thyroid tumors, and the Ohio Valley area, where this artifact was found, has historically had iodine depleted soils and water relative to the advance of an Ice Age glacier about 300,000 years ago.”

Students in a university lab look through microscopes.

Tankersley (top center) teaches archaeology students to date soil, bones and textiles using radiocarbon science.

Profile of ancient tobacco pipe sculpture portraying a Native American wearing ceremonial regalia.

The figure’s bulging neck (goiter) and appearance of short stature are actually results of iodine deficient thyroid disease. The legs are bent in a tilted squat likely during a Native American ceremonial dance.

Tankersley says the Ohio Valley region, before the introduction of iodized salt in the 1920s, was part of the so-called U.S. “goiter belt,” where goiter frequency was relatively high, at five to 15 cases per thousand.

The lower limbs on the statue, previously documented in the literature as short in stature, are actually normal size in bone length, according to Bauduer. Upon closer inspection, both Bauduer and Tankersley agree that the figure is also portrayed in a tilted squat, a common gait anomaly found in people with hypothyroidism.

The figure has what appears to be an abdominal six-pack, but both researchers say the detailed physical features indeed portray a normal physique except for the telltale signs of thyroid disease.

“The fact that the bones of the figure are all normal size leads us to believe the squat portrays more of an abnormal gait while likely in the stance of a typical Native American ritual dance,” says Tankersley, who is one-quarter Native American himself and regularly attends ceremonial events throughout Ohio and Kentucky.

“The regalia the figure is wearing is also strongly indicative of ancient Native Ohio Valley Shawnee, Delaware and Ojibwa to the north and Miami Nation tribes in Indiana.

“The traditional headdress, pierced ears with expanded spool earrings and loincloth with serpentine motif on the front and feathered bustle on back are also still worn by local Native tribes during ceremonial events today.”

Artistic clues

Portrait of Dr. Frederic Bauduer, biological pathologist from University of Bordeaux in France, on an ancient architectural balcony.

Frederic Bauduer, biological anthropologist, paleopathologist and critical collaborator on this research from the University of Bordeaux, UC’s sister university in France. photo/Frederic Bauduer

In addition to figures found in South America and Mesoamerica, Tankersley says the Adena pipe is the first known example of a goiter depicted in ancient Native North American art and one of the oldest from the Western Hemisphere.

“The other real take here is that a lot of people ask, ‘What is the value of ancient art?’” asserts Tankersley. “Well, here’s an example of ancient art that tells a deeper story. And similar indigenous art representations found in South America and Mesoamerica strengthen our hypothesis.”

Tankersley is interested in looking deeper for pathologies and maladies portrayed on other ancient artifacts from Native Americans thousands of years ago here in the Ohio Valley and elsewhere.

“Art history is beginning to help substantiate many scientific hypotheses,” says Tankersley. “Because artists are such keen students of anatomy, artisans such as this ancient Adena pipe sculptor could portray physical maladies with great accuracy, even before they were aware of what the particular disease was.”

Here’s a link to and a citation for the paper,

Evidence of an ancient (2000 years ago) goiter attributed to iodine deficiency in North America by F. Bauduer, K. Barnett Tankersley. Medical Hypotheses Volume 118, September 2018, Pages 6-8 DOI: https://doi.org/10.1016/j.mehy.2018.06.011

This paper looks like it’s behind a paywall.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), a United Nations meeting was held in Geneva, Switzerland from August 27 to 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as, outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country, with their makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robotics, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title for Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga but from the pictures I’ve seen the designs are as good and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has produced an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent posting is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
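As a toy illustration of the three backchannel qualities Kawahara names (timing, lexical form, and prosody), here’s a sketch of my own; it is far simpler than ERICA’s machine-learned system, and every threshold in it is made up for the example,

```python
# Toy backchannel rule of thumb (my own sketch, not ERICA's system).
# Timing: when to respond; lexical form: what to say; the long-pause branch
# stands in for 'attentive listening' by echoing the speaker's last word.
import random

BACKCHANNELS = ["uh-huh", "mm-hm", "I see"]  # possible lexical forms

def backchannel(pause_ms, last_word):
    """Decide whether and how to respond after a pause of pause_ms milliseconds."""
    if pause_ms < 400:                      # timing: speaker is still talking
        return None
    if pause_ms > 1200:                     # long pause: attentive listening,
        return last_word + "?"              # repeat the speaker's last word
    return random.choice(BACKCHANNELS)      # otherwise, a short acknowledgement

print(backchannel(1500, "backchanneling"))  # prints "backchanneling?"
```

The real system, of course, learned these decisions (including prosody, which a text sketch can’t capture) from a counseling dialogue corpus rather than from hand-set thresholds.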

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robot and artificial intelligence are often used interchangeably, and that the distinction between robots, androids, and cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
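Wong’s definition of machine learning, a computer repeatedly modifying its own algorithm based on data to reach a pre-determined goal, can be boiled down to a toy sketch. This one is mine, not hers: the program starts knowing nothing and “teaches” itself the rule y = 2x + 1 from example pairs,

```python
# Toy machine learning (my illustration of Wong's definition, not her code):
# the program adjusts its own parameters from data, with no human hand-coding
# the rule. It learns the line y = 2x + 1 from example (x, y) pairs.

def train(data, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0                 # the model starts knowing nothing
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y          # how wrong the current guess is
            w -= lr * err * x       # nudge the parameters to shrink the error
            b -= lr * err
    return w, b

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # examples of the hidden rule
w, b = train(data)
print(round(w, 2), round(b, 2))     # close to 2 and 1
```

Deep learning works on the same adjust-from-data principle, just with millions of such parameters arranged in brain-inspired layers instead of two.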

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

The joys of an electronic ‘pill’: Could Canadian Olympic athletes’ training be hacked?

Lori Ewing (Canadian Press), in an August 3, 2018 article on the Canadian Broadcasting Corporation news website, heralds a new technology intended for the 2020 Olympics in Tokyo (Japan) but being tested now at the 2018 North American, Central American and Caribbean Athletics Association (NACAC) Track & Field Championships, known as Toronto 2018: Track & Field in the 6ix (Aug. 10-12, 2018).

It’s described as a ‘computerized pill’ that will allow athletes to regulate their body temperature during competition or training workouts, from the August 3, 2018 article,

“We can take someone like Evan [Dunfee, a race walker], have him swallow the little pill, do a full four-hour workout, and then come back and download the whole thing, so we get core temperature data every 30 seconds through that whole workout,” said Trent Stellingwerff, a sport scientist who works with Canada’s Olympic athletes.

“The two biggest factors of core temperature are obviously the outdoor humidex, heat and humidity, but also exercise intensity.”

Bluetooth technology allows Stellingwerff to gather immediate data with a handheld device — think of a tricorder in “Star Trek.” The ingestible device also stores up to 16 hours of measurements when away from the monitor; these can be wirelessly transmitted once back in range.

“That pill is going to change the way that we understand how the body responds to heat, because we just get so much information that wasn’t possible before,” Dunfee said. “Swallow a pill, after the race or after the training session, Trent will come up, and just hold the phone [emphasis mine] to your stomach and download all the information. It’s pretty crazy.”

First off, it’s probably not a pill or tablet but a gelcap and it sounds like the device is a wireless biosensor. As Ewing notes, the device collects data and transmits it.

Here’s how the French company, BodyCap, supplying the technology describes their product, from the company’s e-Celsius Performance webpage, (assuming this is the product being used),

Continuous core body temperature measurement

Main applications are:

Risk reduction for people in extreme situations, such as elite athletes. During exercise in a hot environment, thermal stress is amplified by the external temperature and the environment’s humidity. The saturation of the body’s thermoregulation mechanism can quickly cause hyperthermia to levels that may cause nausea, fainting or death.

Performance optimisation for elite athletes. This ingestible pill leaves the user fully mobile. The device keeps a continuous record of temperature during training sessions, competition and during the recovery phase. The data can then be used to correlate thermoregulation with performances. This enables the development of customised training protocols for each athlete.

e-Celsius Performance® can be used for all sports, including water sports. Its application is best suited to sports that are physically intensive like football, rugby, cycling, long distance running, tennis or those that take place in environments with extreme temperature conditions, like diving or skiing.

e-Celsius Performance®, is a miniaturised ingestible electronic pill that wirelessly transmits a continuous measurement of gastrointestinal temperature. [emphasis mine]

The data are stored on a monitor called e-Viewer Performance®. This device [emphases mine] shows alerts if the measurement is outside the desired range. The activation box is used to turn the pill on from standby mode and connect the e-Celsius Performance pill with the monitor for data collection in either real time or by recovery from the internal memory of e-Celsius Performance®. Each monitor can be used with up to three pills at once to enable extended use.

The monitor’s interface allows the user to download data to a PC/ Mac for storage. The pill is safe, non-invasive and easy to use, leaving the gastric system after one or two days, [emphasis mine] depending on individual transit time.
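Putting the published numbers together in a back-of-the-envelope sketch of my own (this is not BodyCap code, and the temperature thresholds below are assumptions for illustration only): one reading every 30 seconds with 16 hours of on-board memory works out to 1,920 stored samples, and a four-hour workout like Dunfee’s produces 480,

```python
from collections import deque

SAMPLE_INTERVAL_S = 30   # one core-temperature reading every 30 seconds
STORAGE_HOURS = 16       # on-board memory covers 16 hours out of range
CAPACITY = STORAGE_HOURS * 3600 // SAMPLE_INTERVAL_S  # stored samples

# Assumed alert thresholds, for illustration only -- the actual
# e-Viewer range is configurable and not stated in the article.
LOW_C, HIGH_C = 36.0, 39.5

buffer = deque(maxlen=CAPACITY)  # oldest readings drop once memory fills

def record(temp_c):
    buffer.append(temp_c)
    if not (LOW_C <= temp_c <= HIGH_C):
        print(f"ALERT: {temp_c} C is outside {LOW_C}-{HIGH_C} C")

workout_samples = 4 * 3600 // SAMPLE_INTERVAL_S  # a four-hour workout

print(CAPACITY, workout_samples)  # → 1920 480
```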

I found Dunfee’s description mildly confusing but that can be traced to his mention of wireless transmission to a phone. Ewing describes a handheld device which is consistent with the company’s product description. There is no mention of the potential for hacking but I would hope Athletics Canada and BodyCap are keeping up with current concerns over hacking and interference (e.g., Facebook/Cambridge Analytica, Russians and the 2016 US election, Roberto Rocha’s Aug. 3, 2018 article for CBC titled: Data sheds light on how Russian Twitter trolls targeted Canadians, etc.).

Moving on, this type of technology was first featured here in a February 11, 2014 posting (scroll down to the gif where an electronic circuit dissolves in water) and again in a November 23, 2015 posting about wearable and ingestible technologies but this is the first real life application I’ve seen for it.

Coincidentally, an August 2, 2018 Frontiers [Publishing] news release on EurekAlert announced this piece of research (published in June 2018) questioning whether we need this much data and whether these devices work as promoted,

Wearable [and, in the future, ingestible?] devices are increasingly bought to track and measure health and sports performance: [emphasis mine] from the number of steps walked each day to a person’s metabolic efficiency, from the quality of brain function to the quantity of oxygen inhaled while asleep. But the truth is we know very little about how well these sensors and machines work [emphasis mine] – let alone whether they deliver useful information, according to a new review published in Frontiers in Physiology.

“Despite the fact that we live in an era of ‘big data,’ we know surprisingly little about the suitability or effectiveness of these devices,” says lead author Dr Jonathan Peake of the School of Biomedical Sciences and Institute of Health and Biomedical Innovation at the Queensland University of Technology in Australia. “Only five percent of these devices have been formally validated.”

The authors reviewed information on devices used both by everyday people desiring to keep track of their physical and psychological health and by athletes training to achieve certain performance levels. [emphases mine] The devices — ranging from so-called wrist trackers to smart garments and body sensors [emphasis mine] designed to track our body’s vital signs and responses to stress and environmental influences — fall into six categories:

  • devices for monitoring hydration status and metabolism
  • devices, garments and mobile applications for monitoring physical and psychological stress
  • wearable devices that provide physical biofeedback (e.g., muscle stimulation, haptic feedback)
  • devices that provide cognitive feedback and training
  • devices and applications for monitoring and promoting sleep
  • devices and applications for evaluating concussion

The authors investigated key issues, such as: what the technology claims to do; whether the technology has been independently validated against some recognized standards; whether the technology is reliable and what, if any, calibration is needed; and finally, whether the item is commercially available or still under development.

The authors say that technology developed for research purposes generally seems to be more credible than devices created purely for commercial reasons.

“What is critical to understand here is that while most of these technologies are not labeled as ‘medical devices’ per se, their very existence, let alone the accompanying marketing, conveys a sensibility that they can be used to measure a standard of health,” says Peake. “There are ethical issues with this assumption that need to be addressed.” [emphases mine]

For example, self-diagnosis based on self-gathered data could be inconsistent with clinical analysis based on a medical professional’s assessment. And just as body mass index charts of the past really only provided general guidelines and didn’t take into account a person’s genetic predisposition or athletic build, today’s technology is similarly limited.

The authors are particularly concerned about those technologies that seek to confirm or correlate whether someone has sustained or recovered from a concussion, whether from sports or military service.

“We have to be very careful here because there is so much variability,” says Peake. “The technology could be quite useful, but it can’t and should never replace assessment by a trained medical professional.”

Speaking generally again now, Peake says it is important to establish whether using wearable devices affects people’s knowledge and attitude about their own health and whether paying such close attention to our bodies could in fact create a harmful obsession with personal health, either for individuals using the devices, or for family members. Still, self-monitoring may reveal undiagnosed health problems, said Peake, although population data is more likely to point to false positives.

“What we do know is that we need to start studying these devices and the trends they are creating,” says Peake. “This is a booming industry.”

In fact, a March 2018 study by P&S Market Research indicates the wearable market is expected to generate $48.2 billion in revenue by 2023. That’s a mere five years into the future.

The authors highlight a number of areas for investigation in order to develop reasonable consumer policies around this growing industry. These include how rigorously the device/technology has been evaluated and the strength of evidence that the device/technology actually produces the desired outcomes.

“And I’ll add a final question: Is wearing a device that continuously tracks your body’s actions, your brain activity, and your metabolic function — then wirelessly transmits that data to either a cloud-based databank or some other storage — safe for users? Will it help us improve our health?” asked Peake. “We need to ask these questions and research the answers.”

The authors were not examining ingestible biosensors, nor were they examining issues related to core temperature data, but it would seem that some of the same concerns could apply, especially if and when this technology is brought to the consumer market.

Here’s a link to and a citation for the paper,

Critical Review of Consumer Wearables, Mobile Applications, and Equipment for Providing Biofeedback, Monitoring Stress, and Sleep in Physically Active Populations by Jonathan M. Peake, Graham Kerr, and John P. Sullivan. Front. Physiol., 28 June 2018 | https://doi.org/10.3389/fphys.2018.00743

This paper is open access.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International, in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Here’s more from Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making it the first jurisdiction in Canada to use the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Seeing into silicon nanoparticles with ‘mining’ hardware

This was not the mining hardware I expected, and it enters the picture after this paragraph, which has been excerpted from a February 28, 2018 news item on Nanowerk,

For the first time, researchers developed a three-dimensional dynamic model of an interaction between light and nanoparticles. They used a supercomputer with graphic accelerators for calculations. Results showed that silicon particles exposed to short intense laser pulses lose their symmetry temporarily. Their optical properties become strongly heterogeneous. Such a change in properties depends on particle size, therefore it can be used for light control in ultrafast information processing nanoscale devices. …

A March 2, 2018 ITMO University (Russia) press release (also on EurekAlert), which originated the news item, provides more detail and a mention of ‘cryptocurrency mining’ hardware,

Improvement of computing devices today focuses on increasing information processing speeds. Nanophotonics is one of the sciences that can solve this problem by means of optical devices. Although optical signals can be transmitted and processed much faster than electronic ones, first, it is necessary to learn how to quickly control light on a small scale. For this purpose, one could use metal particles. They are efficient at localizing light, but weaken the signal, causing significant losses. However, dielectric and semiconducting materials, such as silicon, can be used instead of metal.

Silicon nanoparticles are now actively studied by researchers all around the world, including those at ITMO University. The long-term goal of such studies is to create ultrafast, compact optical signal modulators. They can serve as a basis for computers of the future. However, this technology will become feasible only once we understand how nanoparticles interact with light.

Silicon nanoparticles

“When a laser pulse hits the particle, a lot of free electrons are formed inside,” explains Sergey Makarov, head of ITMO’s Laboratory of Hybrid Nanophotonics and Optoelectronics. “A region saturated with oppositely charged particles is created. It is usually called electron-hole plasma. Plasma changes optical properties of particles and, up until today, it was believed that it spreads over the whole particle simultaneously, so that the particle’s symmetry is preserved. We demonstrated that this is not entirely true and an even distribution of plasma inside particles is not the only possible scenario.”

Scientists found that the electromagnetic field caused by an interaction between light and particles has a more complex structure. This leads to a light distortion which varies with time. Therefore, the symmetry of particles is disturbed and optical properties become different throughout one particle.

“Using analytical and numerical methods, we were the first to look inside the particle and we proved that the processes taking place there are far more complicated than we thought,” says Konstantin Ladutenko, staff member of ITMO’s International Research Center of Nanophotonics and Metamaterials.  “Moreover, we found that by changing the particle size, we can affect its interaction with the light signal. This means we might be able to predict the signal path in an entire system of nanoparticles.”
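For a feel of the size-to-wavelength scaling Ladutenko describes, here is an illustrative calculation of my own (not from the paper): light scattering from a sphere is commonly characterized by the dimensionless size parameter x = 2πr/λ, and because silicon’s refractive index is high (roughly 3.5 in the near-infrared), the effective size parameter inside the particle is several times larger — which is why even a particle around 100 nm in radius can host strong internal resonances,

```python
import math

def size_parameter(radius_nm, wavelength_nm):
    # dimensionless size parameter used in light-scattering theory
    return 2 * math.pi * radius_nm / wavelength_nm

# Illustrative numbers only: a 100 nm-radius silicon particle
# illuminated at 800 nm, with an approximate refractive index of 3.5.
n_si = 3.5
r, wl = 100.0, 800.0  # nanometres

x = size_parameter(r, wl)
x_inside = x * n_si   # effective size parameter inside the material

print(round(x, 3), round(x_inside, 3))  # → 0.785 2.749
```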

In order to create a tool to study processes inside nanoparticles, scientists from ITMO University joined forces with colleagues from Jean Monnet University in France.

Sergey Makarov

“We developed analytical methods to determine the size range of the particles and their refractive index which would make a change in optical properties likely. Afterwards, we used powerful computational methods to monitor processes inside particles. Our colleagues performed calculations on a computer with graphics accelerators. Such computers are often used for cryptocurrency mining [emphasis mine]. However, we decided to enrich humanity with new knowledge, rather than enrich ourselves. Besides, the bitcoin rate had just started to go down then,” adds Konstantin.

Devices based on these nanoparticles may become basic elements of optical computers, just as transistors are basic elements of electronics today. They will make it possible to distribute and redirect or branch the signal.

“Such asymmetric structures have a variety of applications, but we are focusing on ultra-fast signal processing,” continues Sergey. “We now have a powerful theoretical tool which will help us develop light management systems that will operate on a small scale – in terms of both time and space.”

Here’s a little more about ITMO University from its Wikipedia entry (Note: Links have been removed),

ITMO University (Russian: Университет ИТМО) is a large state university in Saint Petersburg and is one of Russia’s National Research Universities.[1] ITMO University is one of 15 Russian universities that were selected to participate in Russian Academic Excellence Project 5-100[2] by the government of the Russian Federation to improve their international competitiveness among the world’s leading research and educational centers.[3]

Here’s a link to and a citation for the paper,

Photogenerated Free Carrier-Induced Symmetry Breaking in Spherical Silicon Nanoparticle by Anton Rudenko, Konstantin Ladutenko, Sergey Makarov, and Tatiana E. Itina. Advanced Optical Materials Vol. 6, Issue 5. DOI: 10.1002/adom.201701153 Version of Record online: 29 JAN 2018

© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.