Tag Archives: France

Structure of tunneling nanotubes (TNTs) challenges the dogma of the cell

There is a video that accompanies the news but I strongly advise reading the press release first, unless you already know a lot about cells and tunneling nanotubes.

A January 30, 2019 Institut Pasteur press release (also on EurekAlert but published Jan.31, 2019) announces the work,

Cells in our bodies have the ability to speak with one another much like humans do. This communication allows organs in our bodies to work synchronously, which, in turn, enables us to perform the remarkable range of tasks we meet on a daily basis. One of these means of communication is ‘tunneling nanotubes’, or TNTs. In an article published in Nature Communications, researchers from the Institut Pasteur led by Chiara Zurzolo discovered, thanks to advanced imaging techniques, that the structure of these nanotubes challenges the very concept of the cell.

As their name implies, TNTs are tiny tunnels that link two (or more) cells and allow the transport of a wide variety of cargoes between them, including ions, viruses, and entire organelles. Previous research by the same team (Membrane Traffic and Pathogenesis Unit) at the Institut Pasteur has shown that TNTs are involved in the intercellular spreading of pathogenic amyloid proteins involved in Alzheimer’s and Parkinson’s diseases. This led researchers to propose that they serve as a major avenue for the spreading of neurodegenerative diseases in the brain and therefore represent a novel therapeutic target to stop the progression of these incurable diseases. TNTs also appear to play a major role in cancer resistance to therapy. But as scientists still know very little about TNTs and how they relate to, or differ from, other cellular protrusions such as filopodia, they decided to investigate these tiny tubular connections in depth.

The dogma of the cell unit questioned

A better understanding of these tiny tubular connections is therefore required as TNTs might have tremendous implications in human health and disease. Addressing this issue has been very difficult due to the fragile and transitory nature of these structures, which do not survive classical microscopic techniques. In order to overcome these obstacles, researchers combined various state-of-the-art electron microscopy approaches, and imaged TNTs at below-freezing temperatures.

Using this imaging strategy, researchers were able to decipher the structure of TNTs in high detail. Specifically, they show that most TNTs – previously shown to be single connections – are in fact made up of multiple, smaller, individual tunneling nanotubes (iTNTs). Their images also show the existence of thin wires that connect iTNTs, which could serve to increase their mechanical stability. They demonstrate the functionality of iTNTs by showing the transport of organelles using time-lapse imaging. Finally, researchers employed a type of microscopy known as ‘FIB-SEM’ to produce 3D images with sufficient resolution to clearly identify that TNTs are ‘open’ at both ends, and thus create continuity between two cells. “This discovery challenges the dogma of cells as individual units, showing that cells can open up to neighbors and exchange materials without a membrane barrier,” explains Chiara Zurzolo, head of the Membrane Traffic and Pathogenesis Unit at the Institut Pasteur.

A new step in decoding cell-to-cell communication

By applying an imaging work-flow that improves upon, and avoids, previous limitations of tools used to study the anatomy of TNTs, researchers provide the first structural description of TNTs. Importantly, they provide the absolute demonstration that these are novel cellular organelles with a defined structure, very different from known cell protrusions. “The description of the structure allows the understanding of the mechanisms involved in their formation and provides a better comprehension of their function in transferring material directly between (the cytosol of) two connected cells,” says Chiara Zurzolo. Furthermore, their strategy, which preserves these delicate structures, will be useful for studying the role TNTs play in other physiological and pathological conditions.

This work is an essential step toward understanding cell-to-cell communication via TNTs and lays the groundwork for investigations into their physiological functions and their role in spreading of particles linked to diseases such as viruses, bacteria, and misfolded proteins.

The researchers have kindly produced a version of the video in English,

Here’s a link to and a citation for the paper,

Correlative cryo-electron microscopy reveals the structure of TNTs in neuronal cells by Anna Sartori-Rupp, Diégo Cordero Cervantes, Anna Pepe, Karine Gousset, Elise Delage, Simon Corroyer-Dulmont, Christine Schmitt, Jacomina Krijnse-Locker & Chiara Zurzolo. Nature Communications volume 10, Article number: 342 (2019) DOI https://doi.org/10.1038/s41467-018-08178-7 Published 21 January 2019

This paper is open access.

Human lung enzyme can degrade graphene

Caption: A human lung enzyme can biodegrade graphene. Credit: Fotolia Courtesy: Graphene Flagship

The big European Commission research programme, the Graphene Flagship, has announced some new work with widespread implications if graphene is to be used in biomedical implants. From an August 23, 2018 news item on ScienceDaily,

Myeloperoxidase — an enzyme naturally found in our lungs — can biodegrade pristine graphene, according to the latest discovery of Graphene Flagship partners at CNRS, University of Strasbourg (France), Karolinska Institute (Sweden) and University of Castilla-La Mancha (Spain). Among other projects, the Graphene Flagship designs graphene-based flexible biomedical electronic devices that will be interfaced with the human body. Such applications require graphene to be biodegradable, so it can be expelled from the body.

An August 23, 2018 Graphene Flagship press release (mildly edited version on EurekAlert), which originated the news item, provides more detail,

To test how graphene behaves within the body, researchers analysed how it was broken down with the addition of a common human enzyme – myeloperoxidase or MPO. If a foreign body or bacteria is detected, neutrophils surround it and secrete MPO, thereby destroying the threat. Previous work by Graphene Flagship partners found that MPO could successfully biodegrade graphene oxide.

However, the structure of non-functionalized graphene was thought to be more resistant to degradation. To test this, the team looked at the effects of MPO ex vivo on two graphene forms: single- and few-layer.

Alberto Bianco, researcher at Graphene Flagship Partner CNRS, explains: “We used two forms of graphene, single- and few-layer, prepared by two different methods in water. They were then taken and put in contact with myeloperoxidase in the presence of hydrogen peroxide. This peroxidase was able to degrade and oxidise them. This was really unexpected, because we thought that non-functionalized graphene was more resistant than graphene oxide.”

Rajendra Kurapati, first author on the study and researcher at Graphene Flagship Partner CNRS, remarks how “the results emphasize that highly dispersible graphene could be degraded in the body by the action of neutrophils. This would open the new avenue for developing graphene-based materials.”

With successful ex-vivo testing, in-vivo testing is the next stage. Bengt Fadeel, professor at Graphene Flagship Partner Karolinska Institute believes that “understanding whether graphene is biodegradable or not is important for biomedical and other applications of this material. The fact that cells of the immune system are capable of handling graphene is very promising.”

Prof. Maurizio Prato, the Graphene Flagship leader for its Health and Environment Work Package said that “the enzymatic degradation of graphene is a very important topic, because in principle, graphene dispersed in the atmosphere could produce some harm. Instead, if there are microorganisms able to degrade graphene and related materials, the persistence of these materials in our environment will be strongly decreased. These types of studies are needed.” “What is also needed is to investigate the nature of degradation products,” adds Prato. “Once graphene is digested by enzymes, it could produce harmful derivatives. We need to know the structure of these derivatives and study their impact on health and environment,” he concludes.

Prof. Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and chair of its management panel added: “The report of a successful avenue for graphene biodegradation is a very important step forward to ensure the safe use of this material in applications. The Graphene Flagship has put the investigation of the health and environment effects of graphene at the centre of its programme since the start. These results strengthen our innovation and technology roadmap.”

Here’s a link to and a citation for the paper,

Degradation of Single‐Layer and Few‐Layer Graphene by Neutrophil Myeloperoxidase by Dr. Rajendra Kurapati, Dr. Sourav P. Mukherjee, Dr. Cristina Martín, Dr. George Bepete, Prof. Ester Vázquez, Dr. Alain Pénicaud, Prof. Dr. Bengt Fadeel, Dr. Alberto Bianco. Angewandte Chemie https://doi.org/10.1002/anie.201806906 First published: 13 July 2018

This paper is behind a paywall.

Carbon nanotube optics and the quantum

A US-France-Germany collaboration has led to some intriguing work with carbon nanotubes. From a June 18, 2018 news item on ScienceDaily,

Researchers at Los Alamos and partners in France and Germany are exploring the enhanced potential of carbon nanotubes as single-photon emitters for quantum information processing. Their analysis of progress in the field is published in this week’s edition of the journal Nature Materials.

“We are particularly interested in advances in nanotube integration into photonic cavities for manipulating and optimizing light-emission properties,” said Stephen Doorn, one of the authors, and a scientist with the Los Alamos National Laboratory site of the Center for Integrated Nanotechnologies (CINT). “In addition, nanotubes integrated into electroluminescent devices can provide greater control over timing of light emission and they can be feasibly integrated into photonic structures. We are highlighting the development and photophysical probing of carbon nanotube defect states as routes to room-temperature single photon emitters at telecom wavelengths.”

A June 18, 2018 Los Alamos National Laboratory (LANL) news release (also on EurekAlert), which originated the news item, expands on the theme,

The team’s overview was produced in collaboration with colleagues in Paris (Christophe Voisin [Ecole Normale Supérieure de Paris (ENS)]) who are advancing the integration of nanotubes into photonic cavities for modifying their emission rates, and at Karlsruhe (Ralph Krupke [Karlsruhe Institute of Technology (KIT)]) where they are integrating nanotube-based electroluminescent devices with photonic waveguide structures. The Los Alamos focus is the analysis of nanotube defects for pushing quantum emission to room temperature and telecom wavelengths, he said.

As the paper notes, “With the advent of high-speed information networks, light has become the main worldwide information carrier. . . . Single-photon sources are a key building block for a variety of technologies, in secure quantum communications, metrology or quantum computing schemes.”

The use of single-walled carbon nanotubes in this area has been a focus for the Los Alamos CINT team, where they developed the ability to chemically modify the nanotube structure to create deliberate defects, localizing excitons and controlling their release. Next steps, Doorn notes, involve integration of the nanotubes into photonic resonators, to provide increased source brightness and to generate indistinguishable photons. “We need to create single photons that are indistinguishable from one another, and that relies on our ability to functionalize tubes that are well-suited for device integration and to minimize environmental interactions with the defect sites,” he said.

“In addition to defining the state of the art, we wanted to highlight where the challenges are for future progress and lay out some of what may be the most promising future directions for moving forward in this area. Ultimately, we hope to draw more researchers into this field,” Doorn said.

Here’s a link to and a citation for the paper,

Carbon nanotubes as emerging quantum-light sources by X. He, H. Htoon, S. K. Doorn, W. H. P. Pernice, F. Pyatkov, R. Krupke, A. Jeantet, Y. Chassagneux & C. Voisin. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0109-2 Published online June 18, 2018

This paper is behind a paywall.

Revising history with science and art

Caption: The 2000-year-old pipe sculpture’s bulging neck is evidence of thyroid disease as a result of iodine deficient water and soil in the ancient Ohio Valley. Credit: Kenneth Tankersley

An October 4, 2018 news item on ScienceDaily describes the analytic breakthrough,

Art often imitates life, but when University of Cincinnati anthropologist and geologist Kenneth Tankersley investigated a 2000-year-old carved statue on a tobacco pipe, he exposed a truth he says will rewrite art history.

Since its discovery in 1901, at the Adena Burial Mound in Ross County, Ohio, archaeologists have theorized that the 8-inch pipe statue—carved into the likeness of an Ohio Valley Native American—represented an achondroplastic dwarf (AD). People with achondroplasia typically have short arms and legs, an enlarged head, and an average-sized trunk, the same condition as Emmy Award-winning actor Peter Dinklage from HBO’s “Game of Thrones.”

“During the early turn of the century, this theory was consistent with actual human remains of a Native American excavated in Kentucky, also interpreted by archaeologists as being an achondroplastic dwarf,” says Tankersley.

This theory flourished in the scientific literature until the turn of the 21st century when Tankersley looked closer.

“Here we have a carved statue and human remains, both of achondroplasia from the same time period,” says Tankersley. “But what caught my eye on this pipe statue was an obvious tumor on the neck that looked remarkably like a goiter [or goitre] or thyroid tumor.”

An October 2, 2018 University of Cincinnati (UC) news release (also on EurekAlert but published Oct. 3, 2018), reveals more details,

Tankersley collaborated with Frederic Bauduer, a visiting biological anthropologist and paleopathologist from the University of Bordeaux, UC’s sister university in France, to ultimately dispel previous academic literature claiming the sculpture as portraying achondroplasia.

“In archaeological science, flesh does not survive, so many ancient maladies go unnoticed and are almost always impossible to get at from an archaeological standpoint,” says Tankersley. “So what struck me was how remarkably Bauduer was using ancient art from various periods of antiquity to argue for the paleopathology he presented.”

Using radiocarbon dating on textile and bark samples surrounding the pipe at the site, researchers dated the Adena pipe to approximately 2000 years ago, to the earliest evidence of tobacco.
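For readers curious about the arithmetic behind radiocarbon dating, here is a minimal, illustrative Python sketch (not part of the study) that converts a measured fraction of modern carbon-14 into a conventional radiocarbon age using the standard Libby mean life of 8,033 years; the sample value is hypothetical, and conventional ages are subsequently calibrated to calendar years.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; the 5568-year Libby half-life divided by ln(2)

def conventional_radiocarbon_age(fraction_modern: float) -> float:
    """Return the conventional radiocarbon age (years BP) for a measured
    ratio of sample C-14 activity to the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Hypothetical measurement: a textile sample retaining ~78% of modern C-14.
# Calibration against tree-ring curves (a separate step) refines this estimate.
print(round(conventional_radiocarbon_age(0.78)))  # ≈ 1996 years BP
```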

Traditionally, tobacco is considered a sacred plant to Native Americans in this region, and smoking tobacco played an important role in their ceremonies, but Tankersley points to tobacco smoking as having long been associated with an increased prevalence of goiter in low iodine intake zones worldwide.

From a medical perspective, Bauduer found the physical characteristics, such as the short forehead and long bones of the upper and lower limbs, simply not adding up as an achondroplastic dwarf.

“We found the tumor in the neck, as well as the figure’s squatted stance — not foreshortened legs as was formerly documented in the literature — were both signs and symptoms of thyroid disease,” says Tankersley.

“We already know that iodine deficiencies can lead to thyroid tumors, and the Ohio Valley area, where this artifact was found, has historically had iodine-depleted soils and water related to the advance of an Ice Age glacier about 300,000 years ago.”

Caption: Tankersley (top center) teaches archaeology students to date soil, bones and textiles using radiocarbon science.

Caption: Profile of the ancient tobacco pipe sculpture portraying a Native American wearing ceremonial regalia. The figure’s bulging neck (goiter) and appearance of short stature are results of iodine-deficient thyroid disease; the legs are bent in a tilted squat, likely during a Native American ceremonial dance.

Tankersley says the Ohio Valley region, before the introduction of iodized salt in the 1920s, was part of the so-called U.S. “goiter belt” where goiter frequency was relatively high — five to 15 incidences per thousand.

The lower limbs on the statue, previously documented in the literature as short in stature, are actually normal size in bone length, according to Bauduer. Upon closer inspection, both Bauduer and Tankersley agree that the figure is also portrayed in a tilted squat, a common gait anomaly found in people with hypothyroidism.

The figure has what appears to be an abdominal six-pack, but both researchers say the detailed physical features indeed portray a normal physique except for the telltale signs of thyroid disease.

“The fact that the bones of the figure are all normal size leads us to believe the squat portrays more of an abnormal gait while likely in the stance of a typical Native American ritual dance,” says Tankersley, who is one-quarter Native American himself and regularly attends ceremonial events throughout Ohio and Kentucky.

“The regalia the figure is wearing is also strongly indicative of ancient Native Ohio Valley Shawnee, Delaware and Ojibwa to the north and Miami Nation tribes in Indiana.

“The traditional headdress, pierced ears with expanded spool earrings and loincloth with serpentine motif on the front and feathered bustle on back are also still worn by local Native tribes during ceremonial events today.”

Artistic clues

Caption: Frederic Bauduer, biological anthropologist, paleopathologist and critical collaborator on this research from the University of Bordeaux, UC’s sister university in France. Photo: Frederic Bauduer

In addition to figures found in South America and Mesoamerica, Tankersley says the Adena pipe is the first known example of a goiter depicted in ancient Native North American art and one of the oldest from the Western Hemisphere.

“The other real take here is that a lot of people ask, ‘What is the value of ancient art?’” asserts Tankersley. “Well, here’s an example of ancient art that tells a deeper story. And similar indigenous art representations found in South America and Mesoamerica strengthen our hypothesis.”

Tankersley is interested in looking deeper for pathologies and maladies portrayed on other ancient artifacts from Native Americans thousands of years ago here in the Ohio Valley and elsewhere.

“Art history is beginning to help substantiate many scientific hypotheses,” says Tankersley. “Because artists are such keen students of anatomy, artisans such as this ancient Adena pipe sculptor could portray physical maladies with great accuracy, even before they were aware of what the particular disease was.”

Here’s a link to and a citation for the paper,

Evidence of an ancient (2000 years ago) goiter attributed to iodine deficiency in North America by F. Bauduer, K. Barnett Tankersley. Medical Hypotheses Volume 118, September 2018, Pages 6-8 DOI: https://doi.org/10.1016/j.mehy.2018.06.011

This paper looks like it’s behind a paywall.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots), the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but, not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as, outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title for Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga but from the pictures I’ve seen the designs are as good and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.
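For readers who want a feel for what “training the pix2pix neural net” involves, here is a heavily simplified, illustrative PyTorch sketch of the pix2pix idea: a conditional GAN whose generator is trained with an adversarial loss plus an L1 reconstruction loss. This is not Barat’s code; the tiny networks, image size, and random placeholder data are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-ins for pix2pix's U-Net generator and PatchGAN discriminator.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1))  # patch-level scores

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

# Placeholder batch: input images and the target images they pair with
# (in the Balenciaga case, lookbook/runway photos would be the training data).
inputs = torch.rand(4, 3, 64, 64)
targets = torch.rand(4, 3, 64, 64)

# --- Discriminator step: real pairs should score 1, generated pairs 0 ---
fake = G(inputs)
d_real = D(torch.cat([inputs, targets], dim=1))
d_fake = D(torch.cat([inputs, fake.detach()], dim=1))
loss_D = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_D.zero_grad(); loss_D.backward(); opt_D.step()

# --- Generator step: fool the discriminator and stay close to the target ---
d_fake = D(torch.cat([inputs, fake], dim=1))
loss_G = adv_loss(d_fake, torch.ones_like(d_fake)) + 100 * l1_loss(fake, targets)
opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

In the full pix2pix setup this pair of steps is simply repeated over many batches of paired images; the 100x weight on the L1 term is the commonly used default that keeps generated outputs close to the training examples while the adversarial term pushes them to look plausible.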

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent posting is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.
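To make “backchanneling” concrete, here is a small, illustrative Python sketch (not the ERICA system) of how one might train a classifier to decide, from simple prosodic features such as pause length and pitch slope, whether a listening robot should emit a backchannel like “uh-huh.” The feature set and data are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per pause in the speaker's speech.
# Features: [pause duration (s), final pitch slope (Hz/s), speech rate (syll/s)]
X = np.array([[0.8, -20.0, 3.1],   # long pause, falling pitch -> backchannel
              [0.2,  15.0, 4.5],   # short pause, rising pitch  -> keep listening
              [1.1, -35.0, 2.8],
              [0.1,   5.0, 5.0],
              [0.9, -10.0, 3.3],
              [0.3,  25.0, 4.2]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = say "uh-huh" / nod, 0 = stay silent

model = LogisticRegression().fit(X, y)

# At run time, decide whether to backchannel after the current pause (timing);
# choosing the lexical form and prosody of the response would be separate steps.
current_pause = np.array([[0.7, -18.0, 3.0]])
if model.predict(current_pause)[0] == 1:
    print("Robot: uh-huh (nods)")
```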

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than those in, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone get a feeling for where all this might be headed. When you add the fact that the terms robots/artificial intelligence are often used interchangeably and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.
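As a concrete (and deliberately tiny) illustration of the “computer modifying its algorithm based on the information provided” idea, here is a minimal Python sketch of machine learning by gradient descent: a model with one adjustable parameter repeatedly nudges that parameter to reduce its error on example data. The data and learning rate are invented for illustration; deep learning stacks many such adjustable layers.

```python
# Toy machine learning: fit y = w * x to example data by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, desired output)

w = 0.0              # the model's single adjustable parameter
learning_rate = 0.01

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad          # "modify the algorithm" a little

print(round(w, 2))   # ends up close to 2.0, the pattern hidden in the data
```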

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know if you have any insights on the matter in the comments section of this blog.

The joys of an electronic ‘pill’: Could Canadian Olympic athletes’ training be hacked?

Lori Ewing (Canadian Press), in an August 3, 2018 article on the Canadian Broadcasting Corporation news website, heralds a new technology intended for the 2020 Olympics in Tokyo (Japan) but being tested now for the 2018 North American, Central American and Caribbean Athletics Association (NACAC) Track & Field Championships, known as Toronto 2018: Track & Field in the 6ix (Aug. 10-12, 2018) competition.

It’s described as a ‘computerized pill’ that will allow athletes to regulate their body temperature during competition or training workouts, from the August 3, 2018 article,

“We can take someone like Evan [Dunfee, a race walker], have him swallow the little pill, do a full four-hour workout, and then come back and download the whole thing, so we get core temperature data every 30 seconds through that whole workout,” said Trent Stellingwerff, a sport scientist who works with Canada’s Olympic athletes.

“The two biggest factors of core temperature are obviously the outdoor humidex, heat and humidity, but also exercise intensity.”

Bluetooth technology allows Stellingwerff to gather immediate data with a handheld device — think a tricorder in “Star Trek.” The ingestible device also stores measurements for up to 16 hours when away from the monitor which can be wirelessly transmitted when back in range.

“That pill is going to change the way that we understand how the body responds to heat, because we just get so much information that wasn’t possible before,” Dunfee said. “Swallow a pill, after the race or after the training session, Trent will come up, and just hold the phone [emphasis mine] to your stomach and download all the information. It’s pretty crazy.”

First off, it’s probably not a pill or tablet but a gelcap and it sounds like the device is a wireless biosensor. As Ewing notes, the device collects data and transmits it.

Here’s how the French company, BodyCap, supplying the technology describes their product, from the company’s e-Celsius Performance webpage, (assuming this is the product being used),

Continuous core body temperature measurement

Main applications are:

Risk reduction for people in extreme situations, such as elite athletes. During exercise in a hot environment, thermal stress is amplified by the external temperature and the environment’s humidity. The saturation of the body’s thermoregulation mechanism can quickly cause hyperthermia to levels that may cause nausea, fainting or death.

Performance optimisation for elite athletes. This ingestible pill leaves the user fully mobile. The device keeps a continuous record of temperature during training sessions, competition and the recovery phase. The data can then be used to correlate thermoregulation with performance. This enables the development of customised training protocols for each athlete.

e-Celsius Performance® can be used for all sports, including water sports. Its application is best suited to sports that are physically intensive like football, rugby, cycling, long distance running, tennis or those that take place in environments with extreme temperature conditions, like diving or skiing.

e-Celsius Performance®, is a miniaturised ingestible electronic pill that wirelessly transmits a continuous measurement of gastrointestinal temperature. [emphasis mine]

The data are stored on a monitor called e-Viewer Performance®. This device [emphases mine] shows alerts if the measurement is outside the desired range. The activation box is used to turn the pill on from standby mode and connect the e-Celsius Performance pill with the monitor for data collection in either real time or by recovery from the internal memory of e-Celsius Performance®. Each monitor can be used with up to three pills at once to enable extended use.

The monitor’s interface allows the user to download data to a PC/ Mac for storage. The pill is safe, non-invasive and easy to use, leaving the gastric system after one or two days, [emphasis mine] depending on individual transit time.
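As a rough illustration of what the monitor’s “alerts if the measurement is outside the desired range” feature implies on the software side, here is a small, hypothetical Python sketch. The readings, sampling interval and thresholds are invented for illustration and are not BodyCap’s specifications or clinical advice.

```python
from datetime import datetime, timedelta

# Hypothetical core-temperature readings (°C), one every 30 seconds,
# as they might be downloaded from an ingestible sensor after a 4-hour workout.
start = datetime(2018, 8, 10, 9, 0, 0)
readings = [(start + timedelta(seconds=30 * i), 37.0 + 0.002 * i) for i in range(480)]

LOW, HIGH = 36.0, 39.0   # illustrative alert thresholds only

def alerts(samples, low=LOW, high=HIGH):
    """Return the timestamps where core temperature left the desired range."""
    return [(t, temp) for t, temp in samples if temp < low or temp > high]

flagged = alerts(readings)
print(f"{len(readings)} samples over {len(readings) * 30 / 3600:.1f} h, "
      f"{len(flagged)} outside range")
```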

I found Dunfee’s description mildly confusing but that can be traced to his mention of wireless transmission to a phone. Ewing describes a handheld device which is consistent with the company’s product description. There is no mention of the potential for hacking but I would hope Athletics Canada and BodyCap are keeping up with current concerns over hacking and interference (e.g., Facebook/Cambridge Analytica, Russians and the 2016 US election, Roberto Rocha’s Aug. 3, 2018 article for CBC titled: Data sheds light on how Russian Twitter trolls targeted Canadians, etc.).

Moving on, this type of technology was first featured here in a February 11, 2014 posting (scroll down to the gif where an electronic circuit dissolves in water) and again in a November 23, 2015 posting about wearable and ingestible technologies but this is the first real life application I’ve seen for it.

Coincidentally, an August 2, 2018 Frontiers [Publishing] news release on EurekAlert announced this piece of research (published in June 2018) questioning whether we need this much data and whether these devices work as promoted,

Wearable [and, in the future, ingestible?] devices are increasingly bought to track and measure health and sports performance: [emphasis mine] from the number of steps walked each day to a person’s metabolic efficiency, from the quality of brain function to the quantity of oxygen inhaled while asleep. But the truth is we know very little about how well these sensors and machines work [emphasis mine]– let alone whether they deliver useful information, according to a new review published in Frontiers in Physiology.

“Despite the fact that we live in an era of ‘big data,’ we know surprisingly little about the suitability or effectiveness of these devices,” says lead author Dr Jonathan Peake of the School of Biomedical Sciences and Institute of Health and Biomedical Innovation at the Queensland University of Technology in Australia. “Only five percent of these devices have been formally validated.”

The authors reviewed information on devices used both by everyday people desiring to keep track of their physical and psychological health and by athletes training to achieve certain performance levels. [emphases mine] The devices — ranging from so-called wrist trackers to smart garments and body sensors [emphasis mine] designed to track our body’s vital signs and responses to stress and environmental influences — fall into six categories:

  • devices for monitoring hydration status and metabolism
  • devices, garments and mobile applications for monitoring physical and psychological stress
  • wearable devices that provide physical biofeedback (e.g., muscle stimulation, haptic feedback)
  • devices that provide cognitive feedback and training
  • devices and applications for monitoring and promoting sleep
  • devices and applications for evaluating concussion

The authors investigated key issues, such as: what the technology claims to do; whether the technology has been independently validated against some recognized standards; whether the technology is reliable and what, if any, calibration is needed; and finally, whether the item is commercially available or still under development.
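One common way to check whether a device has been “validated against some recognized standards” is to compare its readings with a reference instrument and report the bias and limits of agreement (a Bland-Altman style analysis). Here is a minimal, illustrative Python sketch with made-up numbers; it is not taken from the review.

```python
import numpy as np

# Hypothetical paired measurements: a wrist heart-rate tracker vs. an ECG reference.
device = np.array([72, 85, 90, 101, 115, 128, 140, 151])
reference = np.array([70, 88, 92, 99, 118, 125, 145, 150])

diff = device - reference
bias = diff.mean()                # systematic over/under-reading
loa = 1.96 * diff.std(ddof=1)     # 95% limits of agreement around the bias
mae = np.abs(diff).mean()         # mean absolute error

print(f"bias = {bias:+.1f} bpm, limits of agreement = ±{loa:.1f} bpm, MAE = {mae:.1f} bpm")
```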

The authors say that technology developed for research purposes generally seems to be more credible than devices created purely for commercial reasons.

“What is critical to understand here is that while most of these technologies are not labeled as ‘medical devices’ per se, their very existence, let alone the accompanying marketing, conveys a sensibility that they can be used to measure a standard of health,” says Peake. “There are ethical issues with this assumption that need to be addressed.” [emphases mine]

For example, self-diagnosis based on self-gathered data could be inconsistent with clinical analysis based on a medical professional’s assessment. And just as body mass index charts of the past really only provided general guidelines and didn’t take into account a person’s genetic predisposition or athletic build, today’s technology is similarly limited.

The authors are particularly concerned about those technologies that seek to confirm or correlate whether someone has sustained or recovered from a concussion, whether from sports or military service.

“We have to be very careful here because there is so much variability,” says Peake. “The technology could be quite useful, but it can’t and should never replace assessment by a trained medical professional.”

Speaking generally again now, Peake says it is important to establish whether using wearable devices affects people’s knowledge and attitude about their own health and whether paying such close attention to our bodies could in fact create a harmful obsession with personal health, either for individuals using the devices, or for family members. Still, self-monitoring may reveal undiagnosed health problems, said Peake, although population data is more likely to point to false positives.

“What we do know is that we need to start studying these devices and the trends they are creating,” says Peake. “This is a booming industry.”

In fact, a March 2018 study by P&S Market Research indicates the wearable market is expected to generate $48.2 billion in revenue by 2023. That’s a mere five years into the future.

The authors highlight a number of areas for investigation in order to develop reasonable consumer policies around this growing industry. These include how rigorously the device/technology has been evaluated and the strength of evidence that the device/technology actually produces the desired outcomes.

“And I’ll add a final question: Is wearing a device that continuously tracks your body’s actions, your brain activity, and your metabolic function — then wirelessly transmits that data to either a cloud-based databank or some other storage — safe, for users? Will it help us improve our health?” asked Peake. “We need to ask these questions and research the answers.”

The authors were not examining ingestible biosensors nor were they examining any issues related to data about core temperatures but it would seem that some of the same issues could apply especially if and when this technology is brought to the consumer market.

Here’s a link to and a citation for the paper,

Critical Review of Consumer Wearables, Mobile Applications, and Equipment for Providing Biofeedback, Monitoring Stress, and Sleep in Physically Active Populations by Jonathan M. Peake, Graham Kerr, and John P. Sullivan. Front. Physiol., 28 June 2018 | https://doi.org/10.3389/fphys.2018.00743

This paper is open access.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making Vancouver the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Seeing into silicon nanoparticles with ‘mining’ hardware

This was not the mining hardware I expected, and it enters the picture after this paragraph, which has been excerpted from a February 28, 2018 news item on Nanowerk,

For the first time, researchers developed a three-dimensional dynamic model of an interaction between light and nanoparticles. They used a supercomputer with graphic accelerators for calculations. Results showed that silicon particles exposed to short intense laser pulses lose their symmetry temporarily. Their optical properties become strongly heterogeneous. Such a change in properties depends on particle size, therefore it can be used for light control in ultrafast information processing nanoscale devices. …

A March 2, 2018 ITMO University (Russia) press release (also on EurekAlert), which originated the news item, provides more detail and a mention of ‘cryptocurrency mining’ hardware,

Improvement of computing devices today focuses on increasing information processing speeds. Nanophotonics is one of the sciences that can solve this problem by means of optical devices. Although optical signals can be transmitted and processed much faster than electronic ones, first, it is necessary to learn how to quickly control light on a small scale. For this purpose, one could use metal particles. They are efficient at localizing light, but weaken the signal, causing significant losses. However, dielectric and semiconducting materials, such as silicon, can be used instead of metal.

Silicon nanoparticles are now actively studied by researchers all around the world, including those at ITMO University. The long-term goal of such studies is to create ultrafast, compact optical signal modulators. They can serve as a basis for computers of the future. However, this technology will become feasible only once we understand how nanoparticles interact with light.

Silicon nanoparticles

“When a laser pulse hits the particle, a lot of free electrons are formed inside,” explains Sergey Makarov, head of ITMO’s Laboratory of Hybrid Nanophotonics and Optoelectronics. “A region saturated with oppositely charged particles is created. It is usually called electron-hole plasma. Plasma changes optical properties of particles and, up until today, it was believed that it spreads over the whole particle simultaneously, so that the particle’s symmetry is preserved. We demonstrated that this is not entirely true and an even distribution of plasma inside particles is not the only possible scenario.”

Scientists found that the electromagnetic field caused by an interaction between light and particles has a more complex structure. This leads to a light distortion which varies with time. Therefore, the symmetry of particles is disturbed and optical properties become different throughout one particle.

“Using analytical and numerical methods, we were the first to look inside the particle and we proved that the processes taking place there are far more complicated than we thought,” says Konstantin Ladutenko, staff member of ITMO’s International Research Center of Nanophotonics and Metamaterials.  “Moreover, we found that by changing the particle size, we can affect its interaction with the light signal. This means we might be able to predict the signal path in an entire system of nanoparticles.”

In order to create a tool to study processes inside nanoparticles, scientists from ITMO University joined forces with colleagues from Jean Monnet University in France.

Sergey Makarov

“We developed analytical methods to determine the size range of the particles and their refractive index which would make a change in optical properties likely. Afterwards, we used powerful computational methods to monitor processes inside particles. Our colleagues performed calculations on a computer with graphics accelerators. Such computers are often used for cryptocurrency mining [emphasis mine]. However, we decided to enrich humanity with new knowledge, rather than enrich ourselves. Besides, bitcoin rate had just started to go down then,” adds Konstantin.

Devices based on these nanoparticles may become basic elements of optical computers, just as transistors are basic elements of electronics today. They will make it possible to distribute and redirect or branch the signal.

“Such asymmetric structures have a variety of applications, but we are focusing on ultra-fast signal processing,” continues Sergey. “We now have a powerful theoretical tool which will help us develop light management systems that will operate on a small scale – in terms of both time and space.”
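For readers who want a concrete sense of how an electron-hole plasma changes a particle’s optical response, here is a minimal sketch of the standard Drude-type correction to silicon’s permittivity. This is my own illustration of textbook physics, not the researchers’ code; the probe wavelength, effective mass, collision rate, and carrier densities are hypothetical round numbers chosen only to show the trend.

# Minimal sketch: Drude-type correction to silicon's permittivity from a
# photoexcited electron-hole plasma. Illustration only; the effective mass,
# collision rate, and carrier densities below are assumed values.

import math

e = 1.602e-19        # electron charge (C)
eps0 = 8.854e-12     # vacuum permittivity (F/m)
m_e = 9.109e-31      # electron mass (kg)

eps_si = 13.6 + 0.05j   # approximate permittivity of unexcited Si near 800 nm
m_opt = 0.18 * m_e      # assumed optical effective mass of the e-h plasma
gamma = 1e15            # assumed collision rate (1/s)
wavelength = 800e-9     # assumed probe wavelength (m)
omega = 2 * math.pi * 3e8 / wavelength

def plasma_permittivity(n_eh):
    """Permittivity of Si carrying an electron-hole plasma of density n_eh (1/m^3)."""
    omega_p_sq = n_eh * e**2 / (eps0 * m_opt)
    return eps_si - omega_p_sq / (omega**2 + 1j * gamma * omega)

for n in (1e26, 1e27, 5e27):   # hypothetical carrier densities (1/m^3)
    eps = plasma_permittivity(n)
    print(f"n = {n:.0e} m^-3 -> permittivity = {eps.real:+.2f} {eps.imag:+.2f}j")

At high enough carrier densities the real part of the permittivity can even turn negative, so the excited region briefly behaves more like a metal than a dielectric; if the plasma is distributed unevenly, as the study suggests it can be, the particle’s optical properties become lopsided too.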

Here’s a little more about ITMO University from its Wikipedia entry (Note: Links have been removed),

ITMO University (Russian: Университет ИТМО) is a large state university in Saint Petersburg and is one of Russia’s National Research Universities.[1] ITMO University is one of 15 Russian universities that were selected to participate in Russian Academic Excellence Project 5-100[2] by the government of the Russian Federation to improve their international competitiveness among the world’s leading research and educational centers.[3]

Here’s a link to and a citation for the paper,

Photogenerated Free Carrier-Induced Symmetry Breaking in Spherical Silicon Nanoparticle by Anton Rudenko, Konstantin Ladutenko, Sergey Makarov, and Tatiana E. Itina. Advanced Optical Materials Vol. 6 Issue 5 DOI: 10.1002/adom.201701153 Version of Record online: 29 JAN 2018

© 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Socially responsible AI—it’s time, say University of Manchester (UK) researchers

A May 10, 2018 news item on ScienceDaily describes a report on the ‘fourth industrial revolution’ being released by the University of Manchester,

The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible.

This is according to Dr Barbara Ribeiro of Manchester Institute of Innovation Research at The University of Manchester, in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and Robotics in society, being published today [May 10, 2018].

Interestingly, the US White House is hosting a summit on AI today, May 10, 2018, according to a May 8, 2018 article by Danny Crichton for TechCrunch (Note: Links have been removed),

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. …

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad, and Intel’s CEO Brian Krzanich. While the event has many tech companies present, a total of 38 companies are expected to be in attendance including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor George [sic; Geoffrey] Hinton, who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are outlaying billions of dollars to invest in the AI industry, and it has made leading in artificial intelligence one of the nation’s top priorities through its Made in China 2025 program and other reports. …

In comparison, the United States has been remarkably uncoordinated when it comes to AI. …

That lack of engagement from policymakers has been fine — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research in the space, with particular repercussions for the defense industry.

Interesting contrast: do we take time to consider the implications or do we engage in a race?

While it’s becoming fashionable to dismiss dichotomous questions of this nature, the two approaches (competition and reflection) are not that compatible and it does seem to be an either/or proposition.

A May 10, 2018 University of Manchester press release (also on EurekAlert), which originated the news item, expands on the theme of responsibility and AI,

Dr Ribeiro adds [that] because investment into AI will essentially be paid for by tax-payers in the long-term, policymakers need to make sure that the benefits of such technologies are fairly distributed throughout society.

She says: “Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”

“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”

On AI and Robotics: Developing policy for the Fourth Industrial Revolution is a comprehensive report written, developed and published by Policy@Manchester with leading experts and academics from across the University.

The publication is designed to help employers, regulators and policymakers understand the potential effects of AI in areas such as industry, healthcare, research and international policy.

However, the report doesn’t just focus on AI. It also looks at robotics, explaining the differences and similarities between the two separate areas of research and development (R&D) and the challenges policymakers face with each.

Professor Anna Scaife, Co-Director of the University’s Policy@Manchester team, explains: “Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed.”

One particular area the report highlights where robotics can have a positive impact is hazardous working environments, such as nuclear decommissioning and clean-up.

Professor Barry Lennox, Professor of Applied Control and Head of the UOM Robotics Group, adds: “The transfer of robotics technology into industry, and in particular the nuclear industry, requires cultural and societal changes as well as technological advances.

“It is really important that regulators are aware of what robotic technology is and is not capable of doing today, as well as understanding what the technology might be capable of doing over the next 5 years.”

The report also highlights the importance of big data and AI in healthcare, for example in the fight against antimicrobial resistance (AMR).

Lord Jim O’Neill, Honorary Professor of Economics at The University of Manchester and Chair of the Review on Antimicrobial Resistance explains: “An important example of this is the international effort to limit the spread of antimicrobial resistance (AMR). The AMR Review gave 27 specific recommendations covering 10 broad areas, which became known as the ‘10 Commandments’.

“All 10 are necessary, and none are sufficient on their own, but if there is one that I find myself increasingly believing is a permanent game-changer, it is state of the art diagnostics. We need a ‘Google for doctors’ to reduce the rate of over prescription.”

The versatile nature of AI and robotics is leading many experts to predict that the technologies will have a significant impact on a wide variety of fields in the coming years. Policy@Manchester hopes that the On AI and Robotics report will contribute to helping policymakers, industry stakeholders and regulators better understand the range of issues they will face as the technologies play ever greater roles in our everyday lives.

As far as I can tell, the report has been designed for online viewing only. There are none of the markers (imprint date, publisher, etc.) that I expect to see on a print document. There is no bibliography or list of references but there are links to outside sources throughout the document.

It’s an interesting approach to publishing a report that calls for social justice, especially since the issue of ‘trust’ is increasingly being emphasized where AI is concerned. With regard to this report, I’m not sure I can trust it. With a print document or a PDF, I have markers: I can examine the index, the bibliography, etc., and determine whether the material covers the subject area with reference to well-known authorities. That’s much harder to do with this report. This ‘souped up’ document also looks like it might be easy to change without my knowledge; with a print or PDF version, I could compare copies, but not with this one.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

“Ziegfeld Girl” Hedy Lamarr, 1941, MGM. Image courtesy mptvimages.com [downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor, and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian; Note: Links have been removed),

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave it to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)

Lamarr’s work as an inventor was largely overlooked until the 1990s, when the technology community turned her into a ‘cultish’ favourite. From there, her reputation grew and acknowledgement increased, culminating in Rhodes’ book and Alexandra Dean’s documentary, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).
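Since the ‘secret communication system’ in the Lamarr-Antheil patent was an early form of frequency hopping, here is a minimal sketch of the idea. It is my own toy illustration of the general principle, not the patent’s piano-roll mechanism; the shared seed, message bits, and hop count are invented.

# Toy sketch of frequency hopping: transmitter and receiver share a secret
# seed, so they hop among channels in the same pseudo-random order.

import random

NUM_CHANNELS = 88          # the 1942 patent famously used 88 frequencies, like piano keys
SHARED_SEED = 20180618     # hypothetical shared "key" known to both ends

def hop_sequence(seed, length):
    """Pseudo-random channel sequence reproducible by anyone holding the seed."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

message = [1, 0, 1, 1, 0, 0, 1, 0]  # toy bitstream to send

# Transmitter: send each bit on the next channel in the shared sequence.
tx_hops = hop_sequence(SHARED_SEED, len(message))
transmitted = list(zip(tx_hops, message))  # (channel, bit) pairs "on air"

# Receiver: regenerates the same sequence, so it listens on the right channel
# at the right moment; a listener without the seed sees only scattered fragments.
rx_hops = hop_sequence(SHARED_SEED, len(message))
received = [bit for (channel, bit), expected in zip(transmitted, rx_hops)
            if channel == expected]

print("hop sequence:", tx_hops)
print("received bits:", received)
assert received == message

Because the two ends share the hop schedule, the signal is hard to jam or eavesdrop on from any single channel, which is the same basic trick that later found its way into spread-spectrum systems such as Bluetooth and some early Wi-Fi variants.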

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them: we’re not always appreciated for our brains, not even by people who are supposed to know better, such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

• The State of Science and Technology in Canada (2006) [emphasis mine]
• Innovation and Business Strategy: Why Canada Falls Short (2009)
• Catalyzing Canada’s Digital Economy (2010)
• Informing Research Choices: Indicators and Judgment (2012)
• The State of Science and Technology in Canada (2012) [emphasis mine]
• The State of Industrial R&D in Canada (2013) [emphasis mine]
• Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that work with an assessment of industrial research, which was previously handled separately. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit, from the cover onwards, what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment: I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would impact the size of the expert panel and the size of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii–xiv Print; pp. 15–16 PDF]

The proportion of women among the reviewers jumped up to about 36% (4 of 11 reviewers), and there are two reviewers from the Maritime provinces. As usual, the reviewers external to Canada were from Europe, although this time they came from Dutch institutions rather than UK or German ones. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a  December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is being conducted. (I have more about this from an April 5, 2018 article by James Somers for The Atlantic, but more about that later.) If I understand rightly, addressing those issues may not have been possible due to the nature of the questions posed by the government when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics of scientific accomplishment and progress (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) are not the best, and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, and as a means of addressing some of the problems with metrics, the panel decided to conduct a survey, an approach the panel for the third assessment has also taken (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
•a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
•a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]
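Since so much of the assessment rests on bibliometric indicators, here is a minimal sketch of how a field-normalized citation measure, along the lines of the average relative citations (ARC) indicator quoted later in this post, is typically computed. This is my own toy illustration: the countries, fields, and citation counts are invented, and the report’s actual methodology (normalizing by field and publication year across millions of records) is far more involved.

# Toy sketch of a field-normalized citation indicator in the spirit of ARC.
# Every number below is invented for illustration.

from collections import defaultdict

# Each record: (country, field, citation count for one paper)
papers = [
    ("Canada", "Nanotechnology", 12),
    ("Canada", "Nanotechnology", 3),
    ("Canada", "Psychology", 40),
    ("United States", "Nanotechnology", 10),
    ("United States", "Psychology", 25),
    ("China", "Nanotechnology", 6),
    ("China", "Psychology", 15),
]

# World average citations per paper, computed field by field over all papers.
field_totals = defaultdict(lambda: [0, 0])  # field -> [total citations, paper count]
for _country, field, cites in papers:
    field_totals[field][0] += cites
    field_totals[field][1] += 1
world_avg = {f: total / count for f, (total, count) in field_totals.items()}

def average_relative_citations(country):
    """Average of each paper's citations relative to the world average in its field."""
    ratios = [cites / world_avg[field]
              for c, field, cites in papers if c == country]
    return sum(ratios) / len(ratios)

print(f"Canada ARC on this toy data: {average_relative_citations('Canada'):.2f}")
# A value above 1.0 means the country's papers are cited more than the world
# average for their fields; below 1.0 means less.

The point of normalizing by field is that a psychology paper and a nanotechnology paper are each compared only against the citation norms of their own fields, rather than against each other.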

My reading: there’s a problem and we’re not going to try and fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’ .

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii; pp. 19-20]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. [emphasis mine] [p. xviii Print; p. 20 PDF]

For anyone unfamiliar with Canadian politics, 2009–2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in their research output is predictable.

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking, and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have, or could have, found some way to measure these kinds of efforts, but I’m wondering if there could have been some acknowledgement in the text of the report. I’m thinking of a couple of sentences in a paragraph about the confounding nature of scientific research, where areas that are ignored for years and even decades then become important (e.g., machine learning) but are not measured as part of scientific progress until after they are universally recognized.

Still, point taken about our diminishing returns in ’emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally), and let’s also note that one of the great physics texts, De rerum natura by Lucretius, was a multi-volume poem (from Lucretius’ Wikipedia entry; Note: Links have been removed).

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, the physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population, and according to PISA scores, our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme in which 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little like Hedy Lamarr’s problem, except that instead of being judged on our looks and having our inventions dismissed, we’re being judged for not applying ourselves to the physical sciences and engineering, and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here and open up branches, and that’s how we get our jobs, since we don’t have all that many large companies of our own. Increasingly, multinationals are locating R&D shops here.

While our small- and medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term, serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries (from a June 14, 2017 article by Beckie Smith for The PIE News),

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also the Chinese multinational telecom company Huawei, which has enjoyed success in Canada and continues to invest here (from a Jan. 19, 2018 article about security concerns by Matthew Braga for Canadian Broadcasting Corporation [CBC] online news),

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and say it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of a cultural issue whereby start-ups are designed for sale as soon as possible, and this isn’t new. Years ago, an accounting firm published a series of historical maps (the last one I saw was from 2005) of technology companies in the Vancouver region. Technology companies have been developed here and sold to large foreign companies from the 19th century to the present day.

Part 2