
Canadian researchers develop bone implant material from cellulose nanocrystals (CNC) while Russian scientists restore internal structure of bone with polycaprolactone nanofibers

Two research groups are working toward the same end where bone repair is concerned, encouraging bone cell growth, but they are using different strategies.

University of British Columbia and McMaster University (Canada)

Caption: Researchers treated nanocrystals derived from plant cellulose so that they can link up and form a strong but lightweight sponge (an aerogel) that can compress or expand as needed to completely fill out a bone cavity. Credit: Clare Kiernan, UBC

The samples look a little like teeth, don’t they?

Before diving into the research news, there’s a terminology issue that should be noted, as you’ll see when you read the news/press releases. Nanocrystal cellulose/nanocrystalline cellulose (NCC) is a term coined by Canadian researchers. Since those early days, most researchers, internationally, have adopted the term cellulose nanocrystals (CNC) as the standard term. It fits better with the naming conventions for other nanocellulose materials such as cellulose nanofibrils, etc. By the way, a Canadian company (CelluForce) that produces CNC retained the term nanocrystalline cellulose (NCC) as a trademark for the product, CelluForce NCC®.

For anyone not familiar with aerogels, what the University of British Columbia (UBC) and McMaster University researchers are developing is popularly known as ‘frozen smoke’ (see the Aerogel Wikipedia entry for more).

A March 19, 2019 news item on ScienceDaily announces the research,

Researchers from the University of British Columbia and McMaster University have developed what could be the bone implant material of the future: an airy, foamlike substance that can be injected into the body and provide scaffolding for the growth of new bone.

It’s made by treating nanocrystals derived from plant cellulose so that they link up and form a strong but lightweight sponge — technically speaking, an aerogel — that can compress or expand as needed to completely fill out a bone cavity.

A March 19, 2019 UBC news release (also on EurekAlert), which originated the news item, describes the research in more detail,

“Most bone graft or implants are made of hard, brittle ceramic that doesn’t always conform to the shape of the hole, and those gaps can lead to poor growth of the bone and implant failure,” said study author Daniel Osorio, a PhD student in chemical engineering at McMaster. “We created this cellulose nanocrystal aerogel as a more effective alternative to these synthetic materials.”

For their research, the team worked with two groups of rats, with the first group receiving the aerogel implants and the second group receiving none. Results showed that the group with implants saw 33 per cent more bone growth at the three-week mark and 50 per cent more bone growth at the 12-week mark, compared to the controls.

“These findings show, for the first time in a lab setting, that a cellulose nanocrystal aerogel can support new bone growth,” said study co-author Emily Cranston, a professor of wood science and chemical and biological engineering who holds the President’s Excellence Chair in Forest Bio-products at UBC. She added that the implant should break down into non-toxic components in the body as the bone starts to heal.

The innovation can potentially fill a niche in the $2-billion bone graft market in North America, said study co-author Kathryn Grandfield, a professor of materials science and engineering, and biomedical engineering at McMaster who supervised the work.

“We can see this aerogel being used for a number of applications including dental implants and spinal and joint replacement surgeries,” said Grandfield. “And it will be economical because the raw material, the nanocellulose, is already being produced in commercial quantities.”

The researchers say it will be some time before the aerogel makes it out of the lab and into the operating room.

“This summer, we will study the mechanisms between the bone and implant that lead to bone growth,” said Grandfield. “We’ll also look at how the implant degrades using advanced microscopes. After that, more biological testing will be required before it is ready for clinical trials.”

Here’s a link to and a citation for the paper,

Cross-linked cellulose nanocrystal aerogels as viable bone tissue scaffolds by Daniel A. Osorio, Bryan E. J. Lee, Jacek M. Kwiecien, Xiaoyue Wang, Iflah Shahid, Ariana L. Hurley, Emily D. Cranston and Kathryn Grandfield. Acta Biomaterialia Volume 87, 15 March 2019, Pages 152-165 DOI: https://doi.org/10.1016/j.actbio.2019.01.049

This paper is behind a paywall.

Now for the Russian team.

National University of Science and Technology “MISIS” (formerly part of the Moscow Mining Academy)

These scientists have adopted a different strategy as you’ll see in the March 19, 2019 news item on Nanowerk, which, coincidentally, was published on the same day as the Canadian research,

Scientists from the National University of Science and Technology “MISIS” developed a nanomaterial, which will be able to restore the internal structure of bones damaged due to osteoporosis and osteomyelitis. A special bioactive coating of the material helped to increase the rate of division of bone cells threefold. In the future, it could make it possible to abandon bone marrow transplantation, and patients would no longer need to wait for suitable donor material.

A March 19, 2019 National University of Science and Technology (MISIS) press release (also on EurekAlert), which originated the news item, provides detail about the impetus for the research and the technique being developed,

Diseases such as osteoporosis and osteomyelitis cause irreversible degenerative changes in bone structure. In severe stages, such diseases require serious, complex treatment, surgery, and transplantation to replace the destroyed bone marrow. Donor material must meet a number of compatibility indicators, and even a close relationship with the donor cannot guarantee full compatibility.

A research group from the National University of Science and Technology “MISIS” (NUST MISIS), led by Anton Manakhov (Laboratory for Inorganic Nanomaterials), developed a material that could allow damaged internal bone structure to be restored without bone marrow transplantation.
It is based on nanofibers of polycaprolactone, a biocompatible, self-dissolving material. The same research group has worked with this material before: by adding antibiotics to the nanofibers, the scientists managed to create healing bandages that do not need to be changed.

“If we want the implant to take, not only biocompatibility is needed, but also activation of the natural cell growth on the surface of the material. Polycaprolactone as such is a hydrophobic material, meaning cells feel uncomfortable on its surface. They gather on the smooth surface and divide extremely slowly”, Elizaveta Permyakova, one of the co-authors and researcher at NUST MISIS Laboratory for Inorganic Nanomaterials, explains.

To increase the hydrophilicity of the material, a thin layer of bioactive film consisting of titanium, calcium, phosphorus, carbon, oxygen and nitrogen (TiCaPCON) was deposited on it. The structure of nanofibers identical to the cell surface was preserved. These films, when immersed in a special salt medium whose chemical composition is identical to human blood plasma, are able to form on their surface a special layer of calcium and phosphorus, which in natural conditions forms the main part of the bone. Due to the chemical similarity and the structure of the nanofibers, new bone tissue begins to grow rapidly on this layer. Most importantly, the polycaprolactone nanofibers dissolve, having fulfilled their functions. Only new “native” tissue remains in the bone.

In the experimental part of the study, the researchers compared the rate of division of osteoblastic bone cells on the surface of the modified and unmodified material. It was found that the modified TiCaPCON material has high hydrophilicity. In contrast to the unmodified material, the cells on its surface felt clearly more comfortable and divided three times faster.

According to scientists, such results open up great prospects for further work with modified polycaprolactone nanofibers as an alternative to bone marrow transplantation.

Here’s a link to and a citation for the paper,

Bioactive TiCaPCON-coated PCL nanofibers as a promising material for bone tissue engineering by Anton Manakhov, Elizaveta S. Permyakova, Sergey Ershov, Alexander Sheveyko, Andrey Kovalskii, Josef Polčák, Irina Y. Zhitnyak, Natalia A. Gloushankova, Lenka Zajíčková, Dmitry V. Shtansky. Applied Surface Science Volume 479, 15 June 2019, Pages 796-802 DOI: https://doi.org/10.1016/j.apsusc.2019.02.163

This paper is behind a paywall.

Cooking up a lung one way or the other

I have two stories about lungs and they are entirely different: the older one is a bioengineering story from the US and the more recent one is an artificial tissue story from the University of Toronto and the University of Ottawa (both in Canada).

Lab grown lungs

The Canadian Broadcasting Corporation’s Quirks and Quarks radio programme posted a December 29, 2018 news item (with embedded radio files) about bioengineered lungs,

There are two major components to building an organ: the structure and the right cells on that structure. A team led by Dr. Joan Nichols, a Professor of Internal Medicine, Microbiology and Immunology at the University of Texas Medical Branch in Galveston, was able to tackle both parts of the problem.

In their experiment they used a donor organ for the structure. They took a lung from an unrelated pig, and stripped it of its cells, leaving a scaffold of collagen, a tough, flexible protein.  This provided a pre-made appropriate structure, though in future they think it may be possible to use 3-D printing technology to get the same result.

They then added cultured cells from the animal who would be receiving the transplant – so the lung was made of the animal’s own cells. Cultured lung and blood vessel cells were placed on the scaffold, which was then placed in a tank for 30 days with a cocktail of nutrients to help the cells stick to the scaffold and proliferate. The result was a kind of baby lung.

They then transplanted the bio-engineered, though immature, lung into the recipient animal where they hoped it would continue to develop and mature – growing to become a healthy, functioning organ.

The recipients of the bio-engineered lungs were four adult pigs, which appeared to tolerate the transplants well. In order to study the development of the bio-engineered lungs, they euthanized the animals at different times: 10 hours, two weeks, one month and two months after transplantation.

They found that as early as two weeks, the bio-engineered lung had integrated into the recipient animals’ bodies, building a strong network of blood vessels essential for the lung to survive. There was no evidence of pulmonary edema, the build-up of fluid in the lungs, which is usually a sign of the blood vessels not working efficiently. There was no sign of rejection of the transplanted organs, and the pigs were healthy up to the point where they were euthanized.

One lingering concern is how well the bio-engineered lungs delivered oxygen. The four pigs that received the transplant had one original functioning lung, so they didn’t depend on their new bio-engineered lung for breathing. The scientists were not sure that the bio-engineered lung was mature enough to handle the full load of oxygen on its own.

You can hear Bob McDonald (host of Quirks & Quarks, a Canadian Broadcasting Corporation science radio programme) interview lead scientist Dr. Joan Nichols here. (Note: I find he overmodulates his voice but some may find he has a ‘friendly’ voice.)

This is an image of the lung scaffold produced by the team,

Lung scaffold in the bioreactor chamber on Day 1 of the experiment, before the cells from the study pig were added. (Credit: Joan Nichols) [downloaded from https://www.cbc.ca/radio/quirks/dec-29-2018-water-on-mars-lab-grown-lungs-and-more-the-biggest-science-stories-of-2018-1.4940811/lab-grown-lungs-are-transplanted-in-pigs-today-they-may-help-humans-tomorrow-1.4940822]

Here’s more technical detail in an August 1, 2018 University of Texas Medical Branch (UTMB) news release (also on EurekAlert), which originally announced the research,

A research team at the University of Texas Medical Branch at Galveston has bioengineered lungs and transplanted them into adult pigs with no medical complications.

In 2014, Joan Nichols and Joaquin Cortiella from The University of Texas Medical Branch at Galveston were the first research team to successfully bioengineer human lungs in a lab. In a paper now available in Science Translational Medicine, they provide details of how their work has progressed from 2014 to the point where no complications have occurred in the pigs as part of standard preclinical testing.

“The number of people who have developed severe lung injuries has increased worldwide, while the number of available transplantable organs have decreased,” said Cortiella, professor of pediatric anesthesia. “Our ultimate goal is to eventually provide new options for the many people awaiting a transplant,” said Nichols, professor of internal medicine and associate director of the Galveston National Laboratory at UTMB.

To produce a bioengineered lung, a support scaffold is needed that meets the structural needs of a lung. A support scaffold was created using a lung from an unrelated animal that was treated using a special mixture of sugar and detergent to eliminate all cells and blood in the lung, leaving only the scaffolding proteins or skeleton of the lung behind. This is a lung-shaped scaffold made totally from lung proteins.

The cells used to produce each bioengineered lung came from a single lung removed from each of the study animals. This was the source of the cells used to produce a tissue-matched bioengineered lung for each animal in the study. The lung scaffold was placed into a tank filled with a carefully blended cocktail of nutrients and the animals’ own cells were added to the scaffold following a carefully designed protocol or recipe. The bioengineered lungs were grown in a bioreactor for 30 days prior to transplantation. Animal recipients were survived for 10 hours, two weeks, one month and two months after transplantation, allowing the research team to examine development of the lung tissue following transplantation and how the bioengineered lung would integrate with the body.

All of the pigs that received a bioengineered lung stayed healthy. As early as two weeks post-transplant, the bioengineered lung had established the strong network of blood vessels needed for the lung to survive.

“We saw no signs of pulmonary edema, which is usually a sign of the vasculature not being mature enough,” said Nichols and Cortiella. “The bioengineered lungs continued to develop post-transplant without any infusions of growth factors, the body provided all of the building blocks that the new lungs needed.”

Nichols said that the focus of the study was to learn how well the bioengineered lung adapted and continued to mature within a large, living body. They didn’t evaluate how much the bioengineered lung provided oxygenation to the animal.

“We do know that the animals had 100 percent oxygen saturation, as they had one normal functioning lung,” said Cortiella. “Even after two months, the bioengineered lung was not yet mature enough for us to stop the animal from breathing on the normal lung and switch to just the bioengineered lung.”

For this reason, future studies will look at long-term survival and maturation of the tissues as well as gas exchange capability.

The researchers said that with enough funding, they could grow lungs to transplant into people in compassionate use circumstances within five to 10 years.

“It has taken a lot of heart and 15 years of research to get us this far, our team has done something incredible with a ridiculously small budget and an amazingly dedicated group of people,” Nichols and Cortiella said.

Here’s a citation and another link for the paper,

Production and transplantation of bioengineered lung into a large-animal model by Joan E. Nichols, Saverio La Francesca, Jean A. Niles, Stephanie P. Vega, Lissenya B. Argueta, Luba Frank, David C. Christiani, Richard B. Pyles, Blanca E. Himes, Ruyang Zhang, Su Li, Jason Sakamoto, Jessica Rhudy, Greg Hendricks, Filippo Begarani, Xuewu Liu, Igor Patrikeev, Rahul Pal, Emiliya Usheva, Grace Vargas, Aaron Miller, Lee Woodson, Adam Wacher, Maria Grimaldo, Daniil Weaver, Ron Mlcak, and Joaquin Cortiella. Science Translational Medicine 01 Aug 2018: Vol. 10, Issue 452, eaao3926 DOI: 10.1126/scitranslmed.aao3926

This paper is behind a paywall.

Artificial lung cancer tissue

The research teams at the University of Toronto and the University of Ottawa worked on creating artificial lung tissue but other applications are possible too. First, there’s the announcement in a February 25, 2019 news item on phys.org,

A 3-D hydrogel created by researchers in U of T Engineering Professor Molly Shoichet’s lab is helping University of Ottawa researchers to quickly screen hundreds of potential drugs for their ability to fight highly invasive cancers.

Cell invasion is a critical hallmark of metastatic cancers, such as certain types of lung and brain cancer. Fighting these cancers requires therapies that can both kill cancer cells as well as prevent cell invasion of healthy tissue. Today, most cancer drugs are only screened for their ability to kill cancer cells.

“In highly invasive diseases, there is a crucial need to screen for both of these functions,” says Shoichet. “We now have a way to do this.”

A February 25, 2019 University of Toronto news release (also on EurekAlert), which originated the news item, offers more detail,

In their latest research, the team used hydrogels to mimic the environment of lung cancer, selectively allowing cancer cells, and not healthy cells, to invade. This emulated environment enabled their collaborators in Professor Bill Stanford’s lab at University of Ottawa to screen for both cancer-cell growth and invasion. The study, led by Roger Y. Tam, a research associate in Shoichet’s lab, was recently published in Advanced Materials.

“We can conduct this in a 384-well plate, which is no bigger than your hand. And with image-analysis software, we can automate this method to enable quick, targeted screenings for hundreds of potential cancer treatments,” says Shoichet.

One example is the researchers’ drug screening for lymphangioleiomyomatosis (LAM), a rare lung disease affecting women. Shoichet and her team were inspired by the work of Green Eggs and LAM, a Toronto-based organization raising awareness of the disease.

Using their hydrogels, they were able to automate and screen more than 800 drugs, thereby uncovering treatments that could target disease growth and invasion.

In the ongoing collaboration, the researchers plan to next screen multiple drugs at different doses to gain greater insight into new treatment methods for LAM. The strategies and insights they gain could also help identify new drugs for other invasive cancers.

Shoichet, who was recently named a Distinguished Woman in Chemistry or Chemical Engineering, also plans to patent the hydrogel technology.

“This has, and continues to be, a great collaboration that is advancing knowledge at the intersection of engineering and biology,” says Shoichet.
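Out of curiosity, here is what automating that kind of dual read-out (growth plus invasion) screen might look like at its simplest. This is a minimal sketch in Python under my own assumptions: the well identifiers, normalized measurements, and hit thresholds are hypothetical and are not taken from the Shoichet or Stanford labs’ actual image-analysis pipeline.

# Hypothetical sketch: scoring a 384-well screen for compounds that suppress
# BOTH cancer-cell growth and invasion. All names and thresholds are illustrative.

def find_hits(wells, max_growth=0.5, max_invasion=0.5):
    """wells maps a well ID to measurements normalized so 1.0 equals the
    untreated control; lower values mean more suppression."""
    hits = []
    for well_id, m in wells.items():
        # A useful compound must reduce growth AND invasion, not just kill cells.
        if m["growth"] <= max_growth and m["invasion"] <= max_invasion:
            hits.append(well_id)
    return hits

example_plate = {
    "A01": {"growth": 0.95, "invasion": 0.92},  # inactive compound
    "A02": {"growth": 0.35, "invasion": 0.88},  # kills cells but does not block invasion
    "A03": {"growth": 0.30, "invasion": 0.25},  # suppresses both, so it counts as a hit
}

print(find_hits(example_plate))  # prints ['A03']

The point of the toy example is simply that a dual-criterion screen rejects compounds (like the hypothetical A02) that would pass a kill-only assay.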

I note that Shoichet (pronounced ShoyKet) is getting ready to patent this work. I do have a question about this and it’s not up to Shoichet to answer as she didn’t create the system. Will the taxpayers who funded her work receive any financial benefits should the hydrogel prove to be successful or will we be paying double, both supporting her research and paying for the hydrogel through our healthcare costs?

Getting back to the research, here’s a link to and a citation for the paper,

Rationally Designed 3D Hydrogels Model Invasive Lung Diseases Enabling High‐Content Drug Screening by Roger Y. Tam, Julien Yockell‐Lelièvre, Laura J. Smith, Lisa M. Julian, Alexander E. G. Baker, Chandarong Choey, Mohamed S. Hasim, Jim Dimitroulakos, William L. Stanford, Molly S. Shoichet. Advanced Materials Volume 31, Issue 7 February 15, 2019 1806214 First published online: 27 December 2018 DOI: https://doi.org/10.1002/adma.201806214

This paper is behind a paywall.

Cellulose biosensor heralds new bioimaging approach to tissue engineering

I keep an eye on how nanocellulose is being used in various applications and I’m not sure that this cellulose biosensor quite fits the bill as nanocellulose; nonetheless, it’s interesting and that’s enough for me. From a December 12, 2018 Sechenov University (Russia) press release on EurekAlert,

I.M. Sechenov First Moscow State Medical University teamed up with Irish colleagues to develop a new imaging approach for tissue engineering. The team produced so-called ‘hybrid biosensor’ scaffold materials, which are based on cellulose matrices labeled with pH- and calcium-sensitive fluorescent proteins. These materials enable visualization of the metabolism and other important biomarkers in the engineered artificial tissues by microscopy. The results of the work were published in the Acta Biomaterialia journal.

The success of tissue engineering is based on the use of scaffold matrices – materials that support the viability and direct the growth of cells, tissues, and organoids. Scaffolds are important for basic and applied biomedical research, tissue engineering and regenerative medicine, and are promising for development of new therapeutics. However, the ability ‘to see’ what happens within the scaffolds during tissue growth poses a significant research challenge.

“We developed a new approach allowing visualization of scaffold-grown tissue and cells by using labeling with biosensor fluorescent proteins. Due to the high specificity of labeling and the use of fluorescence microscopy FLIM, we can quantify changes in pH and calcium in the vicinity of cells,” says Dr. Ruslan Dmitriev, Group Leader at the University College Cork and the Institute for Regenerative Medicine (I.M. Sechenov First Moscow State Medical University).

To achieve the specific labeling of cellulose matrices, researchers used well-known cellulose-binding proteins. The use of extracellular pH- and calcium-sensitive biosensors allows for analysis of cell metabolism: indeed, extracellular acidification is directly associated with the balance of cell energy production pathways and the glycolytic flux (release of lactate). It is also a frequent hallmark of cancer and transformed cell types. On the other hand, calcium plays a key role in the extra- and intracellular signaling affecting cell growth and differentiation.

The approach was tested on different types of cellulose matrices (bacterial and produced from decellularised plant tissues) using 3D culture of human colon cancer cells and stem-cell derived mouse small intestinal organoids. The scaffolds informed on changes in the extracellular acidification and were used together with the analysis of real-time oxygenation of intestinal organoids. The resulting data can be presented in the form of colour maps, corresponding to the areas of cell growth within different microenvironments.

“Our results open new prospects in the imaging of tissue-engineered constructs for regenerative medicine. They enable deeper understanding of tissue metabolism in 3D and are also highly promising for commercialisation,” concludes Dr. Dmitriev.

The researchers have provided an image to illustrate their work,

Caption: A 3D reconstruction of a cellulose matrix stained with a pH-sensitive biosensor. Credit: Dr. R. Dmitriev

Here’s a link to and a citation for the paper,

Cellulose-based scaffolds for fluorescence lifetime imaging-assisted tissue engineering by Neil O’Donnell, Irina A. Okkelman, Peter Timashev, Tatyana I. Gromovykh, Dmitri B. Papkovsky, Ruslan I. Dmitriev. Acta Biomaterialia Volume 80, 15 October 2018, Pages 85-96 DOI: https://doi.org/10.1016/j.actbio.2018.09.034

This paper is behind a paywall.

Injectable bandages for internal bleeding and hydrogel for the brain

This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),

While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).

A May 22, 2018 US National Institute of Biomedical Imaging and Bioengineering news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”

Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.

In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”

The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.

When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.

“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”

The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.
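A quick aside on those clotting numbers: taking the reported times of eight minutes (control), six minutes (hydrogel alone) and under three minutes (hydrogel plus nanoparticles), the reductions work out to roughly 25 per cent and at least 62 per cent respectively. That arithmetic is mine, not the researchers’; here is the trivial check in Python.

# Reported clotting times in minutes; "<3" is treated as exactly 3,
# so the second figure is a lower bound on the reduction.
control, hydrogel, composite = 8.0, 6.0, 3.0
print((control - hydrogel) / control * 100)   # 25.0 per cent reduction
print((control - composite) / control * 100)  # 62.5 per cent reduction, at least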

The paper was published back in April 2018 and there was an April 2, 2018 Texas A&M University news release on EurekAlert making the announcement (and providing a few unique details),

A penetrating injury from shrapnel is a serious obstacle in overcoming battlefield wounds that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need to quickly self-administer materials that prevent fatality due to excessive blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, injectable gelatin is obtained. The charged characteristics of clay-based nanoparticles provide hemostatic ability to the hydrogels. Specifically, plasma proteins and platelets from the blood adsorb onto the gel surface and trigger the blood clotting cascade.

“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of nanoparticles enabled electrostatic interactions with therapeutics thus resulting in the slow release of therapeutics.”

Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black) form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound-healing in emergency situations. Credit: Lokhande, et al. 1

Here’s a link to and a citation for the paper,

Nanoengineered injectable hydrogels for wound healing application by Giriraj Lokhande, James K. Carrow, Teena Thakur, Janet R. Xavier, Madasamy Parani, Kayla J. Bayless, Akhilesh K. Gaharwar. Acta Biomaterialia Volume 70, 1 April 2018, Pages 35-47 DOI: https://doi.org/10.1016/j.actbio.2018.01.045

This paper is behind a paywall.

Hydrogel and the brain

It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),

In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.

After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.

Remarkable stuff.

Bone regeneration with a mix of 21st century techniques and an age-old natural cure

Curry was how I was introduced to turmeric. My father, who came from Mauritius, loved curry and we had it at least once a week. Nobody mentioned healing properties, which I was to discover only after I started this blog. Usually, turmeric is mentioned in cancer cures but not this time.

Turmeric Courtesy: Washington State University

From a May 2, 2018 Washington State University news release by Tina Hilding (also on EurekAlert but dated May 3, 2018),

A WSU research team is bringing together natural medical cures with modern biomedical devices in hopes of bringing about better health outcomes for people with bone diseases.

In this first-ever effort, the team improved bone-growing capabilities on 3D-printed, ceramic bone scaffolds by 30-45 percent when coated with curcumin, a compound found in the spice, turmeric. They have published their work in the journal, Materials Today Chemistry.

The work could be important for the millions of Americans who suffer from injuries or bone diseases like osteoporosis.

Human bone includes bone forming and resorbing cells that constantly remodel throughout our lives. As people age, the bone cell cycling process often doesn’t work as well. Bones become weaker and likely to fracture. Many of the medicines used for osteoporosis work by slowing down or stopping the destruction of old bone or by forming new bone. While they may increase bone density, they also create an imbalance in the natural bone remodeling cycle and may create poorer quality bone.

Turmeric has been used as medicine for centuries in Asian countries, and curcumin has been shown to have antioxidant, anti-inflammatory and bone-building capabilities. It can also prevent various forms of cancers. However, when taken orally as medicine, the compound can’t be absorbed well in the body. It is metabolized and eliminated too quickly.

Led by Susmita Bose, Herman and Brita Lindholm Endowed Chair Professor in the School of Mechanical and Materials Engineering, the researchers encased the curcumin within a water-loving polymer, a large molecule, so that it could be gradually released from their ceramic scaffolds. The curcumin increased the viability and proliferation of new bone cells and blood vessels in surrounding tissue as well as accelerated the healing process.

Bose hopes that the work will lead to medicines that naturally create healthier bone without affecting the bone remodeling cycle.

“In the end, it’s the bone quality that matters,” she said.

The researchers are continuing the studies, looking at the protein and cellular level to gain better understanding of exactly how the natural compound works. They are also working to improve the process’ efficiency and control. The challenge with the natural compounds, said Bose, is that they are often large organic molecules.

“You have to use the right vehicle for delivery,” she said. “We need to load and get it released in a controlled and sustained way. The chemistry of vehicle delivery is very important.”

In addition to curcumin, the researchers are studying other natural remedies, including compounds from aloe vera, saffron, Vitamin D, garlic, oregano and ginger. Bose is focused on compounds that might help with bone disorders, including those that encourage bone growth or that have anti-inflammatory, infection control, or anti-cancer properties.

Starting with her own health issues, Bose has had a longtime interest in bridging natural medicinal compounds with modern medicine. That interest increased after she had her children.

“As a mother and having a chemistry background, I realized I didn’t want my children to be exposed to so many chemicals for every illness,” Bose said. “I started looking at home remedies.”

To her students, she always emphasizes healthy living as the best way to guarantee the best health outcomes, including healthy eating, proper sleep, interesting hobbies, and exercise.

Courtesy Washington State University

Here’s a link to and a citation for the paper,

Effects of PCL, PEG and PLGA polymers on curcumin release from calcium phosphate matrix for in vitro and in vivo bone regeneration by Susmita Bose, Naboneeta Sarkar, Dishary Banerjee. Materials Today Chemistry Vol. 8 June 2018, pp. 110-130 [Published online May 2, 2018] https://doi.org/10.1016/j.mtchem.2018.03.005

This paper is behind a paywall.

New wound dressings with nanofibres for tissue regeneration

The Rotary Jet-Spinning manufacturing system was developed specifically as a therapeutic for the wounds of war. The dressings could be a good option for large wounds, such as burns, as well as smaller wounds on the face and hands, where preventing scarring is important. Illustration courtesy of Michael Rosnach/Harvard University

This image really gets the idea of regeneration across to the viewer while also informing you that this is medicine that comes from the military. A March 19, 2018 news item on phys.org announces the work,

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed new wound dressings that dramatically accelerate healing and improve tissue regeneration. The two different types of nanofiber dressings, described in separate papers, use naturally-occurring proteins in plants and animals to promote healing and regrow tissue.

“Our fiber manufacturing system was developed specifically for the purpose of developing therapeutics for the wounds of war,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and senior author of the research. “As a soldier in Afghanistan, I witnessed horrible wounds and, at times, the healing process for those wounds was a horror unto itself. This research is a years-long effort by many people on my team to help with these problems.”

Parker is also a Core Faculty Member of the Wyss Institute.

The most recent paper, published in Biomaterials, describes a wound dressing inspired by fetal tissue.

A March 19, 2018 Harvard University John A. Paulson School of Engineering and Applied Science news release by Leah Burrows (also on EurekAlert), which originated the news item, provides some background information before launching into more detail about this latest work,

In the late 1970s, when scientists first started studying the wound-healing process early in development, they discovered something unexpected: Wounds incurred before the third trimester left no scars. This opened a range of possibilities for regenerative medicine. But for decades, researchers have struggled to replicate those unique properties of fetal skin.

Unlike adult skin, fetal skin has high levels of a protein called fibronectin, which assembles into the extracellular matrix and promotes cell binding and adhesion. Fibronectin has two structures: globular, which is found in blood, and fibrous, which is found in tissue. Even though fibrous fibronectin holds the most promise for wound healing, previous research focused on the globular structure, in part because manufacturing fibrous fibronectin was a major engineering challenge.

But Parker and his team are pioneers in the field of nanofiber engineering.

The researchers made fibrous fibronectin using a fiber-manufacturing platform called Rotary Jet-Spinning (RJS), developed by Parker’s Disease Biophysics Group. RJS works like a cotton-candy machine — a liquid polymer solution, in this case globular fibronectin dissolved in a solvent, is loaded into a reservoir and pushed out through a tiny opening by centrifugal force as the device spins. As the solution leaves the reservoir, the solvent evaporates and the polymers solidify. The centrifugal force unfolds the globular protein into small, thin fibers. These fibers — less than one micrometer in diameter — can be collected to form a large-scale wound dressing or bandage.

“The dressing integrates into the wound and acts like an instructive scaffold, recruiting different stem cells that are relevant for regeneration and assisting in the healing process before being absorbed into the body,” said Christophe Chantre, a graduate student in the Disease Biophysics Group and first author of the paper.

In in vivo testing, the researchers found that wounds treated with the fibronectin dressing showed 84 percent tissue restoration within 20 days, compared with 55.6 percent restoration in wounds treated with a standard dressing.

The researchers also demonstrated that wounds treated with the fibronectin dressing had almost normal epidermal thickness and dermal architecture, and even regrew hair follicles — often considered one of the biggest challenges in the field of wound healing.

“This is an important step forward,” said Chantre. “Most work done on skin regeneration to date involves complex treatments combining scaffolds, cells, and even growth factors. Here we were able to demonstrate tissue repair and hair follicle regeneration using an entirely material approach. This has clear advantages for clinical translation.”

In another paper published in Advanced Healthcare Materials, the Disease Biophysics Group demonstrated a soy-based nanofiber that also enhances and promotes wound healing.

Soy protein contains both estrogen-like molecules — which have been shown to accelerate wound healing — and bioactive molecules similar to those that build and support human cells.

“Both the soy- and fibronectin-fiber technologies owe their success to keen observations in reproductive medicine,” said Parker. “During a woman’s cycle, when her estrogen levels go high, a cut will heal faster. If you do a surgery on a baby still in the womb, they have scar-less wound healing. Both of these new technologies are rooted in the most fascinating of all the topics in human biology — how we reproduce.”

In a similar way to fibronectin fibers, the research team used RJS to spin ultrathin soy fibers into wound dressings. In experiments, the soy- and cellulose-based dressing demonstrated a 72 percent increase in healing over wounds with no dressing and a 21 percent increase in healing over wounds dressed without soy protein.

“These findings show the great promise of soy-based nanofibers for wound healing,” said Seungkuk Ahn, a graduate student in the Disease Biophysics Group and first author of the paper. “These one-step, cost-effective scaffolds could be the next generation of regenerative dressings and push the envelope of nanofiber technology and the wound-care market.”

Both kinds of dressing, according to researchers, have advantages in the wound-healing space. The soy-based nanofibers — consisting of cellulose acetate and soy protein hydrolysate — are inexpensive, making them a good option for large-scale use, such as on burns. The fibronectin dressings, on the other hand, could be used for smaller wounds on the face and hands, where preventing scarring is important.

Here are links to and citations for both papers mentioned in the news release,

Soy Protein/Cellulose Nanofiber Scaffolds Mimicking Skin Extracellular Matrix for Enhanced Wound Healing by Seungkuk Ahn, Christophe O. Chantre, Alanna R. Gannon, Johan U. Lind, Patrick H. Campbell, Thomas Grevesse, Blakely B. O’Connor, Kevin Kit Parker. Advanced Healthcare Materials https://doi.org/10.1002/adhm.201701175 First published: 23 January 2018

Production-scale fibronectin nanofibers promote wound closure and tissue repair in a dermal mouse model by Christophe O. Chantre, Patrick H. Campbell, Holly M. Golecki, Adrian T. Buganza, Andrew K. Capulli, Leila F. Deravi, Stephanie Dauth, Sean P. Sheehy, Jeffrey A. Paten, Karl Gledhill, Yanne S. Doucet, Hasan E. Abaci, Seungkuk Ahn, Benjamin D. Pope, Jeffrey W. Ruberti, Simon P. Hoerstrup, Angela M. Christiano, Kevin Kit Parker. Biomaterials Volume 166, June 2018, Pages 96-108 DOI: https://doi.org/10.1016/j.biomaterials.2018.03.006 Available online 5 March 2018

Both papers are behind paywalls although you may want to check with ResearchGate where many researchers make their papers available for free.

One last comment, I noticed this at the end of Burrows’ news release,

The Harvard Office of Technology Development has protected the intellectual property relating to these projects and is exploring commercialization opportunities.

It reminded me of the patent battle between the Broad Institute (a Harvard University and Massachusetts Institute of Technology joint venture) and the University of California at Berkeley over CRISPR (clustered regularly interspaced short palindromic repeats) technology. (My March 15, 2017 posting describes the battle’s outcome.)

Lest we forget, there could be major financial rewards from this work.

“Living” bandages made from biocompatible anti-burn nanofibers

A February 16, 2018 news item on Nanowerk announces research from a Russian team about their work on “living” bandages,

In regenerative medicine, and particularly in burn therapy, the effective regeneration of damaged skin tissue and the prevention of scarring are usually the main goals. Scars form when skin is badly damaged, whether through a cut, burn, or a skin problem such as acne or fungal infection.

Scar tissue mainly consists of irreversible collagen and significantly differs from the tissue it replaces, having reduced functional properties. For example, scars on skin are more sensitive to ultraviolet radiation, are not elastic, and the sweat glands and hair follicles are not restored in the area.

The solution to this medical problem was proposed by researchers from the NUST MISIS [National University of Science and Technology (formerly Moscow Institute of Steel and Alloys State Technological University)] Inorganic Nanomaterials Laboratory, led by senior researcher Anton Manakhov, PhD. The team of nanotechnology scientists has managed to create multi-layer ‘bandages’ made of biodegradable fibers and multifunctional bioactive nanofilms, which [the bandages] prevent scarring and accelerate tissue regeneration.

A February 14, 2018 NUST MISIS press release, which originated the news item, provides more detail,

The addition of an antibacterial effect through the introduction of silver nanoparticles or antibiotics, as well as the increase in biological activity from adding hydrophilic groups (-COOH) and blood plasma proteins to the surface, has provided the material with unique healing properties.

A significant acceleration of the healing process, the successful regeneration of normal skin covering tissue, and the prevention of scarring on the site of burnt or damaged skin have been observed when applying these bandages made of the developed material to an injured area. The antibacterial components of multifunctional nanofibers decrease inflammation, and the blood plasma with an increased platelet level — vital and multi-purposed for every element in the healing process — stimulates the regeneration of tissues. The bandages should not be removed or changed during treatment as it may cause additional pain to the patient. After a certain period of time, the biodegradable fiber simply “dissolves” without any side effects.

“With the help of chemical bonds, we were able to create a stable layer containing blood plasma components (growth factors, fibrinogens, and other important proteins that promote cell growth) on a polycaprolactone base. The base fibers were synthesized by electroforming. Then, with the help of plasma treatment, to increase the material’s hydrophilic properties, a polymer layer containing carboxyl groups was applied to the surface. The resulting layer was enriched with antibacterial and protein components”, noted Elizabeth Permyakova, one of the project members and laboratory scientists.

The researchers have made images of their work available including this one,

Courtesy NUST MISIS [downloaded from http://en.misis.ru/university/news/science/2018-02/5219/]

There doesn’t appear to be an accompanying published paper.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it).

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40 as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found and still do find that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked.

I appreciate that the panel was constrained by the questions given by the government but given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said, where science is concerned, it’s the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating and I would have liked some information on where (geographically and by area of specialization) most of their answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-70 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
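The report does not spell out the formula behind the Growth Index, but from that description a plausible reconstruction (mine, not the CCA’s) is the country’s publication growth over the period divided by the world’s growth over the same period,

GI_c = \frac{P_c(2014)/P_c(2003)}{P_{world}(2014)/P_{world}(2003)}

where P_c(t) is country c’s publication output in year t. Read that way, China’s GI of 1.50 means its output grew 50% faster than the world average between 2003 and 2014, while Canada’s 0.88 and the United States’ 0.80 mean both countries’ outputs grew more slowly than the world average.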

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history) as well as GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and when it’s over, other people get access and can innovate further. It’s not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that Indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
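For anyone trying to picture how a patent flow of −0.26 translates into a ‘26% deficit of patent ownership’, here's a minimal sketch (my own reading of the metric, not a formula taken from the report); it assumes the flow is the difference between patents owned by Canadian assignees and patents invented in Canada, divided by the number invented in Canada. The counts are hypothetical, chosen only to reproduce the −0.26 figure.

```python
# A minimal sketch of the patent flow reading described above, assuming
# flow = (patents owned by Canadian assignees - patents invented in Canada)
#        / (patents invented in Canada)
# The counts are hypothetical placeholders, not data from the report.

def patent_flow(owned_by_canadian_assignees: int, invented_in_canada: int) -> float:
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

print(patent_flow(owned_by_canadian_assignees=740, invented_in_canada=1000))  # -0.26
```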

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked with answering certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain. AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section, and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has somehow been hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention, but they seem to forget them and don’t carry that information into their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.” Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm. Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….
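Since the Watts-Strogatz paper is, at heart, about an algorithm, here's a minimal, static sketch of the ‘small-world’ construction it describes (my own plain-Python illustration, not Victor's interactive version): build a ring lattice, then rewire each edge to a random new endpoint with a small probability p.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Build a small-world graph: a ring lattice plus random rewiring.

    Each of the n nodes starts linked to its k nearest neighbours; every
    lattice edge is then rewired to a random new endpoint with probability p.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # ring lattice
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # rewiring pass
        for j in range(1, k // 2 + 1):
            v = (i + j) % n
            if rng.random() < p and len(adj[i]) < n - 1:
                w = rng.randrange(n)
                while w == i or w in adj[i]:  # no self-loops or duplicate edges
                    w = rng.randrange(n)
                adj[i].discard(v); adj[v].discard(i)
                adj[i].add(w); adj[w].add(i)
    return adj

graph = watts_strogatz(n=20, k=4, p=0.1)
print(sum(len(nbrs) for nbrs in graph.values()) // 2, "edges")  # n*k/2 = 40
```

Victor's point, as I read it, is that a reader shouldn't have to ‘play computer’ with code or prose like this; an interactive figure can run the rewiring step by step in front of you.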

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as last time, which may be because there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene through a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention of British Columbia’s (BC) leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work with and play with others who are better than you are. For example, both India and Italy surpassed us in the number of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talk about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that they set the foundation for work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch of “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment, as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’, to be broadcast on May 18, 2018, as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about the Hedy Lamarr and Corel Draw lawsuit. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

Caption: “Ziegfeld Girl” Hedy Lamarr, 1941, MGM. Credit: Image courtesy mptvimages.com [downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian), Note: Links have been removed,

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave it to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)

Lamarr’s work as an inventor was largely overlooked until the 1990s, when the technology community turned her into a ‘cultish’ favourite. From there, her reputation grew and acknowledgement increased, culminating in Rhodes’ book and Alexandra Dean’s documentary, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them: we’re not always appreciated for our brains, not even by people who are supposed to know better, such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

• The State of Science and Technology in Canada (2006) [emphasis mine]
• Innovation and Business Strategy: Why Canada Falls Short (2009)
• Catalyzing Canada’s Digital Economy (2010)
• Informing Research Choices: Indicators and Judgment (2012)
• The State of Science and Technology in Canada (2012) [emphasis mine]
• The State of Industrial R&D in Canada (2013) [emphasis mine]
• Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that work with an assessment of industrial research, which was previously treated separately. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit from the cover onwards what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment: I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would affect the size of the expert panel and the size of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii-xiv Print; pp. 15-16 PDF]

The proportion of women to men among reviewers jumped up to about 36% (4 of 11 reviewers) and there are two reviewers from the Maritime provinces. As usual, reviewers external to Canada were from Europe, although this time they came from Dutch institutions rather than UK or German ones. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is conducted. (I have more about this, from an April 5, 2018 article by James Somers for The Atlantic, later in this post.) If I understand rightly, addressing those issues may not have been possible due to the nature of the questions posed by the government when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics of scientific accomplishment and progress (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) are not the best and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, as a means of addressing some of the problems with metrics, the panel decided to conduct a survey, something the panel for the third assessment has also done (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
•a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
•a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]

My reading: there’s a problem and we’re not going to try to fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’.

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii; pp. 19-20]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. (emphasis mine) [p. xviii Print; p. 20 PDF]
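To make the ‘more than double expenditures’ point in that quotation concrete, here's a minimal back-of-the-envelope sketch (my own, with illustrative numbers rather than figures from the report or Statistics Canada); it simply treats R&D intensity as gross R&D expenditure divided by GDP.

```python
# A minimal sketch of R&D intensity arithmetic. All numbers are illustrative
# placeholders, not data from the report or from Statistics Canada.

def rd_intensity(rd_spending, gdp):
    """R&D intensity = gross domestic expenditure on R&D / GDP."""
    return rd_spending / gdp

gdp = 2_000                # hypothetical GDP, in billions of dollars
current_rd = 32            # hypothetical R&D spending, giving 1.6% intensity
target_intensity = 0.035   # hypothetical 'leading country' intensity of 3.5%

current = rd_intensity(current_rd, gdp)
needed = target_intensity * gdp
print(f"current intensity: {current:.1%}")                                   # 1.6%
print(f"spending needed: {needed:.0f}B ({needed / current_rd:.1f}x today)")  # ~2.2x
```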

For anyone unfamiliar with Canadian politics, 2009–2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in its research output is predictable.

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking, and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have or could have found some way to measure these kinds of efforts, but I’m wondering if there could have been some acknowledgement in the text of the report. I’m thinking of a couple of sentences in a paragraph about the confounding nature of scientific research, where areas that are ignored for years and even decades then become important (e.g., machine learning) but are not measured as part of scientific progress until after they are universally recognized.

Still, point taken about our diminishing returns in ’emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally) and let’s also note that one of the great physics texts (De rerum natura by Lucretius) was a long poem in six books (from Lucretius’ Wikipedia entry; Note: Links have been removed).

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population, and according to PISA scores our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme in which 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little bit like Hedy Lamarr’s problem except instead of being judged for our looks and having our inventions dismissed, we’re being judged for not applying ourselves to physical sciences and engineering and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here and open up branches; that’s how we get many of our jobs, since we don’t have all that many large companies of our own. Increasingly, multinationals are locating R&D shops here.

While our small and medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term, serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries, from a June 14, 2017 article by Beckie Smith for The PIE News,

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also Huawei Canada, the Canadian arm of the Chinese multinational telecom company, which has enjoyed success here and continues to invest (from a Jan. 19, 2018 article about security concerns by Matthew Braga for the Canadian Broadcasting Corporation (CBC) online news),

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and says it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of the cultural issue where start-ups are designed for sale as soon as possible, and this isn’t new. Years ago, an accounting firm published a series of historical maps (the last one I saw was from 2005) of technology companies in the Vancouver region; technology companies were being developed and then sold to large foreign companies from the 19th century to the present day.

Part 2

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add any new experts if the authors planned to have their paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or at least acknowledge (e.g., ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a project in New Zealand, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety, as it provides some insight into gene drives and, with some imagination on the reader’s part, into the potential of the other technologies discussed in the report.
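To make the inheritance arithmetic concrete, here’s a minimal Python sketch of my own (an illustration I put together; it is not taken from Yong’s article or the eLife report). It compares ordinary Mendelian inheritance with a simplified CRISPR-style gene drive in a toy rat population; the population size, the number of generations, and the assumption that the drive converts every offspring carrying the ‘brown’ allele are arbitrary choices made for the sake of the example.

import random

def breed(population, generations, drive):
    # population: list of genotypes, each a pair of alleles ('B' = brown, 'w' = white).
    # If drive is True, any offspring carrying a 'B' allele is converted to ('B', 'B'),
    # mimicking a drive allele that copies itself onto the partner chromosome.
    for _ in range(generations):
        next_generation = []
        for _ in range(len(population)):
            mum, dad = random.sample(population, 2)
            child = (random.choice(mum), random.choice(dad))
            if drive and 'B' in child:
                child = ('B', 'B')
            next_generation.append(child)
        population = next_generation
    return population

def brown_allele_frequency(population):
    alleles = [allele for genotype in population for allele in genotype]
    return alleles.count('B') / len(alleles)

random.seed(1)
start = [('B', 'B')] * 50 + [('w', 'w')] * 50  # half brown, half white

for label, drive in (("Mendelian inheritance", False), ("Gene drive", True)):
    final = breed(list(start), generations=10, drive=drive)
    print(label, "- brown allele frequency after 10 generations:",
          round(brown_allele_frequency(final), 2))

Run it and the Mendelian case hovers around the starting 50 per cent (give or take some drift), while the gene drive case pushes the brown allele to, or very near, 100 per cent within a handful of generations, which is exactly the dynamic Yong describes.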

One last comment: I notice that Eric Drexler is listed as one of the report’s authors. He’s familiar to me as K. Eric Drexler, the author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).