Tag Archives: UK

‘Hunting’ pharmaceuticals and removing them from water

Pharmaceuticals are not the first pollutants that come to mind when discussing water pollution but, for those who don’t know, they are a significant problem, and scientists at the University of Surrey (UK) have developed a technology they believe will help relieve the contamination. From an April 10, 2017 University of Surrey press release (also on EurekAlert),

The research involves the detection and removal of pharmaceuticals in water. Pharmaceutical contamination can enter the aquatic environment as a result of the treatment of humans and animals: drugs can be excreted unchanged or as metabolites, discarded unused, or discharged by manufacturers.

The research has found that a new type of ‘supermolecule’, calix[4], actively seeks certain pharmaceuticals and removes them from water.

Contamination of water is a serious concern for environmental scientists around the world, as contaminants include hormones from the contraceptive pill, and pesticides and herbicides from allotments. Contamination can also include toxic metals such as mercury, arsenic, or cadmium, which was previously used in paint, or substances that endanger vital species such as bees.

Professor Danil de Namor, University of Surrey Emeritus Professor and leader of the research, said: “Preliminary extraction data are encouraging as far as the use of this receptor for the selective removal of these drugs from water and the possibility of constructing calix[4]-based sensing devices.

“From here, we can design receptors so that they can bind selectively with pollutants in the water so the pollutants can be effectively removed. This research will allow us to know exactly what is in the water, and from here it will be tested in industrial water supplies, so there will be cleaner water for everyone.

“The research also creates the possibility of using these materials for on-site monitoring of water, without having to transport samples to the laboratory.”

Dr Brendan Howlin, University of Surrey co-investigator, said: “This study allows us to visualise the specific receptor-drug interactions leading to the selective behaviour of the receptor. As well as the health benefits of this research, molecular simulation is a powerful technique that is applicable to a wide range of materials.

“We were very proud that the work was carried out with PhD students and a final year project student, and research activities are already taking place with the Department of Chemical and Process Engineering and the Advanced Technology Institute (ATI).

“We are also very pleased to see that as soon as the paper was published online by the European Journal of Pharmaceutical Sciences, we received invitations to give keynote lectures at two international conferences on pharmaceuticals in Europe later this year.”

That last paragraph is intriguing and it marks the first time I’ve seen that claim in a press release announcing the publication of a piece of research.

Here’s a link to and a citation for the paper,

A calix[4]arene derivative and its selective interaction with drugs (clofibric acid, diclofenac and aspirin) by Angela F Danil de Namor, Maan Al Nuaim, Jose A Villanueva Salas, Sophie Bryant, Brendan Howlin. European Journal of Pharmaceutical Sciences Volume 100, 30 March 2017, Pages 1–8 https://doi.org/10.1016/j.ejps.2016.12.027

This paper is behind a paywall.

Nanoparticles and strange forces

An April 10, 2017 news item on Nanowerk announces work from the University of New Mexico (UNM), Note: A link has been removed,

A new scientific paper published, in part, by a University of New Mexico physicist is shedding light on a strange force impacting particles at the smallest level of the material world.

The discovery, published in Physical Review Letters (“Lateral Casimir Force on a Rotating Particle near a Planar Surface”), was made by an international team of researchers led by UNM Assistant Professor Alejandro Manjavacas in the Department of Physics & Astronomy. Collaborators on the project include Francisco Rodríguez-Fortuño (King’s College London, U.K.), F. Javier García de Abajo (The Institute of Photonic Sciences, Spain) and Anatoly Zayats (King’s College London, U.K.).

An April 7, 2017 UNM news release by Aaron Hill, which originated the news item, expands on the theme,

The findings relate to an area of theoretical nanophotonics and quantum theory known as the Casimir Effect, a measurable force that exists between objects inside a vacuum caused by the fluctuations of electromagnetic waves. When studied using classical physics, the vacuum would not produce any force on the objects. However, when looked at using quantum field theory, the vacuum is filled with photons, creating a small but potentially significant force on the objects.
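For context, the canonical result (from the standard physics literature, not the UNM release) is the attractive Casimir pressure between two perfectly conducting parallel plates in vacuum, which grows steeply as the plate separation $d$ shrinks:

```latex
\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}}
```

The $1/d^{4}$ dependence is why such forces are negligible at everyday scales but can dominate everything else at nanometre separations.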

“These studies are important because we are developing nanotechnologies where we’re getting into distances and sizes that are so small that these types of forces can dominate everything else,” said Manjavacas. “We know these Casimir forces exist, so, what we’re trying to do is figure out the overall impact they have on very small particles.”

Manjavacas’ research expands on the Casimir effect by developing an analytical expression for the lateral Casimir force experienced by nanoparticles rotating near a flat surface.

Imagine a tiny sphere (nanoparticle) rotating over a surface. While the sphere slows down due to photons colliding with it, that rotation also causes the sphere to move in a lateral direction. In our physical world, friction between the sphere and the surface would be needed to achieve lateral movement. However, the nano-world does not follow the same set of rules, eliminating the need for contact between the sphere and the surface for movement to occur.

“The nanoparticle experiences a lateral force as if it were in contact with the surface, even though it is actually separated from it,” said Manjavacas. “It’s a strange reaction but one that may prove to have significant impact for engineers.”

While the discovery may seem somewhat obscure, it is also extremely useful for researchers working in the always evolving nanotechnology industry. As part of their work, Manjavacas says they’ve also learned the direction of the force can be controlled by changing the distance between the particle and surface, an understanding that may help nanotech engineers develop better nanoscale objects for healthcare, computing or a variety of other areas.

For Manjavacas, the project and this latest publication are just another step forward in his research into these Casimir forces, which he has been studying throughout his scientific career. After receiving his Ph.D. from Complutense University of Madrid (UCM) in 2013, Manjavacas worked as a postdoctoral research fellow at Rice University before coming to UNM in 2015.

Currently, Manjavacas heads UNM’s Theoretical Nanophotonics research group, collaborating with scientists around the world and locally in New Mexico. In fact, Manjavacas credits Los Alamos National Laboratory Researcher Diego Dalvit, a leading expert on Casimir forces, for helping much of his work progress.

“If I had to name the person who knows the most about Casimir forces, I’d say it was him,” said Manjavacas. “He published a book that’s considered one of the big references on the topic. So, having him nearby and being able to collaborate with other UNM faculty is a big advantage for our research.”

Here’s a link to and a citation for the paper,

Lateral Casimir Force on a Rotating Particle near a Planar Surface by Alejandro Manjavacas, Francisco J. Rodríguez-Fortuño, F. Javier García de Abajo, and Anatoly V. Zayats. Phys. Rev. Lett. 118, 133605 (Vol. 118, Iss. 13). Published 31 March 2017. DOI: https://doi.org/10.1103/PhysRevLett.118.133605

This paper is behind a paywall.

Explaining the link between air pollution and heart disease?

An April 26, 2017 news item on Nanowerk announces research that may explain the link between heart disease and air pollution (Note: A link has been removed),

Tiny particles in air pollution have been associated with cardiovascular disease, which can lead to premature death. But how particles inhaled into the lungs can affect blood vessels and the heart has remained a mystery.

Now, scientists have found evidence in human and animal studies that inhaled nanoparticles can travel from the lungs into the bloodstream, potentially explaining the link between air pollution and cardiovascular disease. Their results appear in the journal ACS Nano (“Inhaled Nanoparticles Accumulate at Sites of Vascular Disease”).

An April 26, 2017 American Chemical Society news release on EurekAlert, which originated the news item, expands on the theme,

The World Health Organization estimates that in 2012, about 72 percent of premature deaths related to outdoor air pollution were due to ischemic heart disease and strokes. Pulmonary disease, respiratory infections and lung cancer were linked to the other 28 percent. Many scientists have suspected that fine particles travel from the lungs into the bloodstream, but evidence supporting this assumption in humans has been challenging to collect. So Mark Miller and colleagues at the University of Edinburgh in the United Kingdom and the National Institute for Public Health and the Environment in the Netherlands used a selection of specialized techniques to track the fate of inhaled gold nanoparticles.

In the new study, 14 healthy volunteers, 12 surgical patients and several mouse models inhaled gold nanoparticles, which have been safely used in medical imaging and drug delivery. Soon after exposure, the nanoparticles were detected in blood and urine. Importantly, the nanoparticles appeared to preferentially accumulate at inflamed vascular sites, including carotid plaques in patients at risk of a stroke. The findings suggest that nanoparticles can travel from the lungs into the bloodstream and reach susceptible areas of the cardiovascular system where they could possibly increase the likelihood of a heart attack or stroke, the researchers say.

Here’s a link to and a citation for the paper,

Inhaled Nanoparticles Accumulate at Sites of Vascular Disease by Mark R. Miller, Jennifer B. Raftis, Jeremy P. Langrish, Steven G. McLean, Pawitrabhorn Samutrtai, Shea P. Connell, Simon Wilson, Alex T. Vesey, Paul H. B. Fokkens, A. John F. Boere, Petra Krystek, Colin J. Campbell, Patrick W. F. Hadoke, Ken Donaldson, Flemming R. Cassee, David E. Newby, Rodger Duffin, and Nicholas L. Mills. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b08551 Publication Date (Web): April 26, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Gold spring-shaped coils for detecting twisted molecules

An April 3, 2017 news item on ScienceDaily describes a technique that could improve nanorobotics and more,

University of Bath scientists have used gold spring-shaped coils 5,000 times thinner than a human hair, together with powerful lasers, to enable the detection of twisted molecules, and the applications could improve pharmaceutical design, telecommunications and nanorobotics.

An April 3, 2017 University of Bath press release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

Molecules, including many pharmaceuticals, twist in certain ways and can exist in left or right ‘handed’ forms depending on how they twist. This twisting, called chirality, is crucial to understand because it changes the way a molecule behaves, for example within our bodies.

Scientists can study chiral molecules using particular laser light, which itself twists as it travels. Such studies get especially difficult for small amounts of molecules. This is where the minuscule gold springs can be helpful. Their shape twists the light and could better fit it to the molecules, making it easier to detect minute amounts.

Using some of the smallest springs ever created, the researchers from the University of Bath Department of Physics, working with colleagues from the Max Planck Institute for Intelligent Systems, examined how effective the gold springs could be at enhancing interactions between light and chiral molecules. They based their study on a colour-conversion method for light, known as Second Harmonic Generation (SHG), whereby the better the performance of the spring, the more red laser light converts into blue laser light.
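For readers unfamiliar with SHG, the underlying relation is standard nonlinear optics (not specific to the Bath release): the material’s second-order susceptibility $\chi^{(2)}$ combines two pump photons at frequency $\omega$ into one photon at $2\omega$, halving the wavelength,

```latex
P(2\omega) \propto \chi^{(2)}\,E(\omega)^{2}, \qquad \lambda_{2\omega} = \frac{\lambda_{\omega}}{2}
```

so, for example, an 800 nm (red) pump yields 400 nm (blue) light, and the more the spring enhances the light–molecule interaction, the more blue light is generated.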

They found that the springs were indeed very promising but that how well they performed depended on the direction they were facing.

Physics PhD student David Hooper who is the first author of the study, said: “It is like using a kaleidoscope to look at a picture; the picture becomes distorted when you rotate the kaleidoscope. We need to minimise the distortion.”

In order to reduce the distortions, the team is now working on ways to optimise the springs, which are known as chiral nanostructures.

“Closely observing the chirality of molecules has lots of potential applications, for example it could help improve the design and purity of pharmaceuticals and fine chemicals, help develop motion controls for nanorobotics and miniaturise components in telecommunications,” said Dr Ventsislav Valev who led the study and the University of Bath research team.

Gold spring shaped coils help reveal information about chiral molecules. Credit Ventsi Valev.

Here’s a link to and a citation for the paper,

Strong Rotational Anisotropies Affect Nonlinear Chiral Metamaterials by David C. Hooper, Andrew G. Mark, Christian Kuppe, Joel T. Collins, Peer Fischer, Ventsislav K. Valev. Advanced Materials DOI: 10.1002/adma.201605110 First published: 31 January 2017

This is an open access paper.

Game-changing ultrafast, flexible, and transparent electronics

There are two news bits about game-changing electronics, one from the UK and the other from the US.

United Kingdom (UK)

An April 3, 2017 news item on Azonano announces the possibility of a future golden age of electronics courtesy of the University of Exeter,

Engineering experts from the University of Exeter have come up with a breakthrough way to create the smallest, quickest, highest-capacity memories for transparent and flexible applications that could lead to a future golden age of electronics.

A March 31, 2017 University of Exeter press release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

Engineering experts from the University of Exeter have developed innovative new memory using a hybrid of graphene oxide and titanium oxide. Their devices are low cost and eco-friendly to produce, are also perfectly suited for use in flexible electronic devices such as ‘bendable’ mobile phone, computer and television screens, and even ‘intelligent’ clothing.

Crucially, these devices may also have the potential to offer a cheaper and more adaptable alternative to ‘flash memory’, which is currently used in many common devices such as memory cards, graphics cards and USB computer drives.

The research team insist that these innovative new devices have the potential not only to revolutionise how data is stored, but also to take flexible electronics to a new age in terms of speed, efficiency and power.

Professor David Wright, an Electronic Engineering expert from the University of Exeter and lead author of the paper said: “Using graphene oxide to produce memory devices has been reported before, but they were typically very large, slow, and aimed at the ‘cheap and cheerful’ end of the electronics goods market.

“Our hybrid graphene oxide-titanium oxide memory is, in contrast, just 50 nanometres long and 8 nanometres thick and can be written to and read from in less than five nanoseconds – with one nanometre being one billionth of a metre and one nanosecond a billionth of a second.”

Professor Craciun, a co-author of the work, added: “Being able to improve data storage is the backbone of tomorrow’s knowledge economy, as well as industry on a global scale. Our work offers the opportunity to completely transform graphene-oxide memory technology, and the potential and possibilities it offers.”

Here’s a link to and a citation for the paper,

Multilevel Ultrafast Flexible Nanoscale Nonvolatile Hybrid Graphene Oxide–Titanium Oxide Memories by V. Karthik Nagareddy, Matthew D. Barnes, Federico Zipoli, Khue T. Lai, Arseny M. Alexeev, Monica Felicia Craciun, and C. David Wright. ACS Nano, 2017, 11 (3), pp 3010–3021 DOI: 10.1021/acsnano.6b08668 Publication Date (Web): February 21, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

United States (US)

Researchers from Stanford University have developed flexible, biodegradable electronics.

A newly developed flexible, biodegradable semiconductor developed by Stanford engineers shown on a human hair. (Image credit: Bao lab)

A human hair? That’s amazing and this May 3, 2017 news item on Nanowerk reveals more,

As electronics become increasingly pervasive in our lives – from smart phones to wearable sensors – so too does the ever-rising amount of electronic waste they create. A United Nations Environment Program report found that almost 50 million tons of electronic waste were expected to be thrown out in 2017 – more than 20 percent higher than the waste generated in 2015.

Troubled by this mounting waste, Stanford engineer Zhenan Bao and her team are rethinking electronics. “In my group, we have been trying to mimic the function of human skin to think about how to develop future electronic devices,” Bao said. She described how skin is stretchable, self-healable and also biodegradable – an attractive list of characteristics for electronics. “We have achieved the first two [flexible and self-healing], so the biodegradability was something we wanted to tackle.”

The team created a flexible electronic device that can easily degrade just by adding a weak acid like vinegar. The results were published in the Proceedings of the National Academy of Sciences (“Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics”).

“This is the first example of a semiconductive polymer that can decompose,” said lead author Ting Lei, a postdoctoral fellow working with Bao.

A May 1, 2017 Stanford University news release by Sarah Derouin, which originated the news item, provides more detail,

In addition to the polymer – essentially a flexible, conductive plastic – the team developed a degradable electronic circuit and a new biodegradable substrate material for mounting the electrical components. This substrate supports the electrical components, flexing and molding to rough and smooth surfaces alike. When the electronic device is no longer needed, the whole thing can biodegrade into nontoxic components.

Biodegradable bits

Bao, a professor of chemical engineering and materials science and engineering, had previously created a stretchable electrode modeled on human skin. That material could bend and twist in a way that could allow it to interface with the skin or brain, but it couldn’t degrade. That limited its application for implantable devices and – important to Bao – contributed to waste.

Flexible, biodegradable semiconductor on an avocado

The flexible semiconductor can adhere to smooth or rough surfaces and biodegrade to nontoxic products. (Image credit: Bao lab)

Bao said that creating a robust material that is both a good electrical conductor and biodegradable was a challenge, considering traditional polymer chemistry. “We have been trying to think how we can achieve both great electronic property but also have the biodegradability,” Bao said.

Eventually, the team found that by tweaking the chemical structure of the flexible material it would break apart under mild stressors. “We came up with an idea of making these molecules using a special type of chemical linkage that can retain the ability for the electron to smoothly transport along the molecule,” Bao said. “But also this chemical bond is sensitive to weak acid – even weaker than pure vinegar.” The result was a material that could carry an electronic signal but break down without requiring extreme measures.

In addition to the biodegradable polymer, the team developed a new type of electrical component and a substrate material that attaches to the entire electronic component. Electronic components are usually made of gold. But for this device, the researchers crafted components from iron. Bao noted that iron is a very environmentally friendly product and is nontoxic to humans.

The researchers created the substrate, which carries the electronic circuit and the polymer, from cellulose. Cellulose is the same substance that makes up paper. But unlike paper, the team altered cellulose fibers so the “paper” is transparent and flexible, while still breaking down easily. The thin film substrate allows the electronics to be worn on the skin or even implanted inside the body.

From implants to plants

The combination of a biodegradable conductive polymer and substrate makes the electronic device useful in a plethora of settings – from wearable electronics to large-scale environmental surveys with sensor dusts.

“We envision these soft patches that are very thin and conformable to the skin that can measure blood pressure, glucose value, sweat content,” Bao said. A person could wear a specifically designed patch for a day or week, then download the data. According to Bao, this short-term use of disposable electronics seems a perfect fit for a degradable, flexible design.

And it’s not just for skin surveys: the biodegradable substrate, polymers and iron electrodes make the entire component compatible with insertion into the human body. The polymer breaks down to product concentrations much lower than the published acceptable levels found in drinking water. Although the polymer was found to be biocompatible, Bao said that more studies would need to be done before implants are a regular occurrence.

Biodegradable electronics have the potential to go far beyond collecting heart disease and glucose data. These components could be used in places where surveys cover large areas in remote locations. Lei described a research scenario where biodegradable electronics are dropped by airplane over a forest to survey the landscape. “It’s a very large area and very hard for people to spread the sensors,” he said. “Also, if you spread the sensors, it’s very hard to gather them back. You don’t want to contaminate the environment so we need something that can be decomposed.” Instead of plastic littering the forest floor, the sensors would biodegrade away.

As the number of electronic devices increases, biodegradability will become more important. Lei is excited by their advancements and wants to keep improving the performance of biodegradable electronics. “We currently have computers and cell phones and we generate millions and billions of cell phones, and it’s hard to decompose,” he said. “We hope we can develop some materials that can be decomposed so there is less waste.”

Other authors on the study include Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister McGuire, and Jeffrey Tok of Stanford University; Tsung-Ching Huang of Hewlett Packard Enterprise; and Lei-Lai Shao and Kwang-Ting Cheng of University of California, Santa Barbara.

The research was funded by the Air Force Office for Scientific Research; BASF; Marie Curie Cofund; Beatriu de Pinós fellowship; and the Kodak Graduate Fellowship.

Here’s a link to and a citation for the team’s latest paper,

Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics by Ting Lei, Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister F. McGuire, Tsung-Ching Huang, Leilai Shao, Kwang-Ting Cheng, Jeffrey B.-H. Tok, and Zhenan Bao. PNAS 2017 doi: 10.1073/pnas.1701478114 published ahead of print May 1, 2017

This paper is behind a paywall.

The mention of cellulose in the second item piqued my interest so I checked to see if they’d used nanocellulose. No, they did not. Microcrystalline cellulose powder was used to constitute a cellulose film but they found a way to render this film at the nanoscale. From the Stanford paper (Note: Links have been removed),

… Moreover, cellulose films have been previously used as biodegradable substrates in electronics (28⇓–30). However, these cellulose films are typically made with thicknesses well over 10 μm and thus cannot be used to fabricate ultrathin electronics with substrate thicknesses below 1–2 μm (7, 18, 19). To the best of our knowledge, there have been no reports on ultrathin (1–2 μm) biodegradable substrates for electronics. Thus, to realize them, we subsequently developed a method described herein to obtain ultrathin (800 nm) cellulose films (Fig. 1B and SI Appendix, Fig. S8). First, microcrystalline cellulose powders were dissolved in LiCl/N,N-dimethylacetamide (DMAc) and reacted with hexamethyldisilazane (HMDS) (31, 32), providing trimethylsilyl-functionalized cellulose (TMSC) (Fig. 1B). To fabricate films or devices, TMSC in chlorobenzene (CB) (70 mg/mL) was spin-coated on a thin dextran sacrificial layer. The TMSC film was measured to be 1.2 μm. After hydrolyzing the film in 95% acetic acid vapor for 2 h, the trimethylsilyl groups were removed, giving a 400-nm-thick cellulose film. The film thickness significantly decreased to one-third of the original film thickness, largely due to the removal of the bulky trimethylsilyl groups. The hydrolyzed cellulose film is insoluble in most organic solvents, for example, toluene, THF, chloroform, CB, and water. Thus, we can sequentially repeat the above steps to obtain an 800-nm-thick film, which is robust enough for further device fabrication and peel-off. By soaking the device in water, the dextran layer is dissolved, starting from the edges of the device to the center. This process ultimately releases the ultrathin substrate and leaves it floating on water surface (Fig. 3A, Inset).

Finally, I don’t have any grand thoughts; it’s just interesting to see different approaches to flexible electronics.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
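The training loop described in the last few paragraphs (feed inputs forward through layers, compare actual outputs to expected ones, and correct the predictive error through repetition) can be sketched in a few lines. This is a generic toy illustration, not code from Deltorn’s paper or from any DNN art platform; the two-layer network, the XOR task and the training length are my own choices for demonstration.

```python
import numpy as np

# Toy two-layer network learning XOR: a minimal sketch of the
# "compare actual outputs to expected ones and correct the
# predictive error through repetition" loop described above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1 = rng.normal(size=(2, 8))  # first layer: picks up low-level patterns
W2 = rng.normal(size=(8, 1))  # deeper layer: higher level of abstraction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)       # forward pass through the hidden layer
    out = sigmoid(h @ W2)     # the network's actual output
    err = out - y             # predictive error vs. the expected output
    losses.append(float(np.mean(err ** 2)))
    # backpropagate the error and nudge both weight matrices
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out
    W1 -= X.T @ grad_h

print(losses[0], losses[-1])  # the predictive error shrinks with repetition
```

Running it prints the error at the start and end of training; the steady shrinkage of that error is the “optimizing their learning curve” the release describes.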

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his or her work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claims to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – protection must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

Those are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence, (Note:  A link has been removed)

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centred in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)
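As an aside of my own (not from any of the articles quoted above), the idea of a neural network “learning” can be made concrete with a toy sketch: a single artificial neuron nudging its weights toward correct answers until it reproduces the logical AND function. Deep learning stacks many layers of such units, but the weight-adjustment idea is the same.

```python
# Minimal sketch: one artificial neuron "learns" the AND function
# by repeatedly nudging its weights toward the right answers.
# Illustrative only; real deep learning uses many layers of such units.

def step(x):
    """Fire (1) if the weighted sum is positive, else stay quiet (0)."""
    return 1 if x > 0 else 0

# Training data: pairs of inputs and the desired AND output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
rate = 0.1      # learning rate: how big each nudge is

for _ in range(20):                      # sweep over the data a few times
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out               # how wrong was the neuron?
        w[0] += rate * err * x1          # nudge weights toward the answer
        w[1] += rate * err * x2
        b += rate * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

After a handful of sweeps the neuron classifies all four cases correctly; the “knowledge” lives entirely in the learned weights.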

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to frighten people away from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s earlier (March 31, 2017) posting: China, US, and the race for artificial intelligence research domination.

Revisiting the scientific past for new breakthroughs

A March 2, 2017 article on phys.org features a thought-provoking (and, for some of us, confirming) take on scientific progress  (Note: Links have been removed),

The idea that science isn’t a process of constant progress might make some modern scientists feel a bit twitchy. Surely we know more now than we did 100 years ago? We’ve sequenced the genome, explored space and considerably lengthened the average human lifespan. We’ve invented aircraft, computers and nuclear energy. We’ve developed theories of relativity and quantum mechanics to explain how the universe works.

However, treating the history of science as a linear story of progression doesn’t reflect wholly how ideas emerge and are adapted, forgotten, rediscovered or ignored. While we are happy with the notion that the arts can return to old ideas, for example in neoclassicism, this idea is not commonly recognised in science. Is this constraint really present in principle? Or is it more a comment on received practice or, worse, on the general ignorance of the scientific community of its own intellectual history?

For one thing, not all lines of scientific enquiry are pursued to conclusion. For example, a few years ago, historian of science Hasok Chang undertook a careful examination of notebooks from scientists working in the 19th century. He unearthed notes from experiments in electrochemistry whose results received no explanation at the time. After repeating the experiments himself, Chang showed the results still don’t have a full explanation today. These research programmes had not been completed, simply put to one side and forgotten.

A March 1, 2017 essay by Giles Gasper (Durham University), Hannah Smithson (University of Oxford) and Tom Mcleish (Durham University) for The Conversation, which originated the article, expands on the theme (Note: Links have been removed),

… looping back into forgotten scientific history might also provide an alternative, regenerative way of thinking that doesn’t rely on what has come immediately before it.

Collaborating with an international team of colleagues, we have taken this hypothesis further by bringing scientists into close contact with scientific treatises from the early 13th century. The treatises were composed by the English polymath Robert Grosseteste – who later became Bishop of Lincoln – between 1195 and 1230. They cover a wide range of topics we would recognise as key to modern physics, including sound, light, colour, comets, the planets, the origin of the cosmos and more.

We have worked with paleographers (handwriting experts) and Latinists to decipher Grosseteste’s manuscripts, and with philosophers, theologians, historians and scientists to provide intellectual interpretation and context to his work. As a result, we’ve discovered that scientific and mathematical minds today still resonate with Grosseteste’s deeply physical and structured thinking.

Our first intuition and hope was that the scientists might bring a new analytic perspective to these very technical texts. And so it proved: the deep mathematical structure of a small treatise on colour, the De colore, was shown to describe what we would now call a three-dimensional abstract co-ordinate space for colour.

But more was true. During the examination of each treatise, at some point one of the group would say: “Did anyone ever try doing …?” or “What would happen if we followed through with this calculation, supposing he meant …”. Responding to this thinker from eight centuries ago has, to our delight and surprise, inspired new scientific work of a rather fresh cut. It isn’t connected in a linear way to current research programmes, but sheds light on them from new directions.
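For readers curious about the “three-dimensional abstract co-ordinate space for colour” mentioned in the quoted essay: the same idea underlies modern colour models such as RGB and HSV, where every colour is a point with three coordinates. Here’s a minimal illustration of my own (not Grosseteste’s scheme, nor the researchers’ analysis of it), using Python’s standard colorsys module:

```python
import colorsys

# Any colour can be treated as a point in a 3D coordinate space.
# Here: the familiar RGB cube, and HSV as a re-parameterisation of it.
red = (1.0, 0.0, 0.0)      # (R, G, B), each axis running 0..1

# colorsys converts between two such 3D spaces (stdlib, nothing to install).
h, s, v = colorsys.rgb_to_hsv(*red)
print(h, s, v)             # → 0.0 1.0 1.0  (hue, saturation, value)

# Distance between two colours is ordinary 3D Euclidean distance.
def dist(c1, c2):
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

print(round(dist((1, 0, 0), (0, 1, 0)), 3))  # → 1.414  (red vs. green)
```

Once colours are coordinates, geometric notions like distance and axes become available, which is what makes the mathematical structure of the De colore recognisable to modern eyes.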

I encourage you to read the essay in its entirety.

The Canadian science scene and the 2017 Canadian federal budget

There’s not much happening in the 2017-18 budget in terms of new spending according to Paul Wells’ March 22, 2017 article for TheStar.com,

This is the 22nd or 23rd federal budget I’ve covered. And I’ve never seen the like of the one Bill Morneau introduced on Wednesday [March 22, 2017].

Not even in the last days of the Harper Conservatives did a budget provide for so little new spending — $1.3 billion in the current budget year, total, in all fields of government. That’s a little less than half of one per cent of all federal program spending for this year.

But times are tight. The future is a place where we can dream. So the dollars flow more freely in later years. In 2021-22, the budget’s fifth planning year, new spending peaks at $8.2 billion. Which will be about 2.4 per cent of all program spending.

He’s not alone in this 2017 federal budget analysis; CBC (Canadian Broadcasting Corporation) pundits, Chantal Hébert, Andrew Coyne, and Jennifer Ditchburn said much the same during their ‘At Issue’ segment of the March 22, 2017 broadcast of The National (news).

Before I focus on the science and technology budget, here are some general highlights from the CBC’s March 22, 2017 article on the 2017-18 budget announcement (Note: Links have been removed),

Here are highlights from the 2017 federal budget:

  • Deficit: $28.5 billion, up from $25.4 billion projected in the fall.
  • Trend: Deficits gradually decline over next five years — but still at $18.8 billion in 2021-22.
  • Housing: $11.2 billion over 11 years, already budgeted, will go to a national housing strategy.
  • Child care: $7 billion over 10 years, already budgeted, for new spaces, starting 2018-19.
  • Indigenous: $3.4 billion in new money over five years for infrastructure, health and education.
  • Defence: $8.4 billion in capital spending for equipment pushed forward to 2035.
  • Care givers: New care-giving benefit up to 15 weeks, starting next year.
  • Skills: New agency to research and measure skills development, starting 2018-19.
  • Innovation: $950 million over five years to support business-led “superclusters.”
  • Startups: $400 million over three years for a new venture capital catalyst initiative.
  • AI: $125 million to launch a pan-Canadian Artificial Intelligence Strategy.
  • Coding kids: $50 million over two years for initiatives to teach children to code.
  • Families: Option to extend parental leave up to 18 months.
  • Uber tax: GST to be collected on ride-sharing services.
  • Sin taxes: One cent more on a bottle of wine, five cents on a case of 24 beers.
  • Bye-bye: No more Canada Savings Bonds.
  • Transit credit killed: 15 per cent non-refundable public transit tax credit phased out this year.

You can find the entire 2017-18 budget here.

Science and the 2017-18 budget

For anyone interested in the science news, you’ll find most of that in the 2017 budget’s Chapter 1 — Skills, Innovation and Middle Class jobs. As well, Wayne Kondro has written up a précis in his March 22, 2017 article for Science (magazine),

Finance officials, who speak on condition of anonymity during the budget lock-up, indicated the budgets of the granting councils, the main source of operational grants for university researchers, will be “static” until the government can assess recommendations that emerge from an expert panel formed in 2015 and headed by former University of Toronto President David Naylor to review basic science in Canada [highlighted in my June 15, 2016 posting; $2M has been allocated for the advisor and associated secretariat]. Until then, the officials said, funding for the Natural Sciences and Engineering Research Council of Canada (NSERC) will remain at roughly $848 million, whereas that for the Canadian Institutes of Health Research (CIHR) will remain at $773 million, and for the Social Sciences and Humanities Research Council [SSHRC] at $547 million.

NSERC, though, will receive $8.1 million over 5 years to administer a PromoScience Program that introduces youth, particularly underrepresented groups like Aboriginal people and women, to science, technology, engineering, and mathematics through measures like “space camps and conservation projects.” CIHR, meanwhile, could receive modest amounts from separate plans to identify climate change health risks and to reduce drug and substance abuse, the officials added.

… Canada’s Innovation and Skills Plan, would funnel $600 million over 5 years allocated in 2016, and $112.5 million slated for public transit and green infrastructure, to create Silicon Valley–like “super clusters,” which the budget defined as “dense areas of business activity that contain large and small companies, post-secondary institutions and specialized talent and infrastructure.” …

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

… Among more specific measures are vows to: Use $87.7 million in previous allocations to the Canada Research Chairs program to create 25 “Canada 150 Research Chairs” honoring the nation’s 150th year of existence, provide $1.5 million per year to support the operations of the office of the as-yet-unappointed national science adviser [see my Dec. 7, 2016 post for information about the job posting, which is now closed]; provide $165.7 million [emphasis mine] over 5 years for the nonprofit organization Mitacs to create roughly 6300 more co-op positions for university students and grads, and provide $60.7 million over five years for new Canadian Space Agency projects, particularly for Canadian participation in the National Aeronautics and Space Administration’s next Mars Orbiter Mission.

Kondro was either reading an earlier version of the budget or made an error regarding Mitacs (from the budget in the “A New, Ambitious Approach to Work-Integrated Learning” subsection),

Mitacs has set an ambitious goal of providing 10,000 work-integrated learning placements for Canadian post-secondary students and graduates each year—up from the current level of around 3,750 placements. Budget 2017 proposes to provide $221 million [emphasis mine] over five years, starting in 2017–18, to achieve this goal and provide relevant work experience to Canadian students.

As well, the budget item for the Pan-Canadian Artificial Intelligence Strategy is $125M.

Moving on from Kondro’s précis, the budget (in the “Positioning National Research Council Canada Within the Innovation and Skills Plan” subsection) announces support for these specific areas of science,

Stem Cell Research

The Stem Cell Network, established in 2001, is a national not-for-profit organization that helps translate stem cell research into clinical applications, commercial products and public policy. Its research holds great promise, offering the potential for new therapies and medical treatments for respiratory and heart diseases, cancer, diabetes, spinal cord injury, multiple sclerosis, Crohn’s disease, auto-immune disorders and Parkinson’s disease. To support this important work, Budget 2017 proposes to provide the Stem Cell Network with renewed funding of $6 million in 2018–19.

Space Exploration

Canada has a long and proud history as a space-faring nation. As our international partners prepare to chart new missions, Budget 2017 proposes investments that will underscore Canada’s commitment to innovation and leadership in space. Budget 2017 proposes to provide $80.9 million on a cash basis over five years, starting in 2017–18, for new projects through the Canadian Space Agency that will demonstrate and utilize Canadian innovations in space, including in the field of quantum technology as well as for Mars surface observation. The latter project will enable Canada to join the National Aeronautics and Space Administration’s (NASA’s) next Mars Orbiter Mission.

Quantum Information

The development of new quantum technologies has the potential to transform markets, create new industries and produce leading-edge jobs. The Institute for Quantum Computing is a world-leading Canadian research facility that furthers our understanding of these innovative technologies. Budget 2017 proposes to provide the Institute with renewed funding of $10 million over two years, starting in 2017–18.

Social Innovation

Through community-college partnerships, the Community and College Social Innovation Fund fosters positive social outcomes, such as the integration of vulnerable populations into Canadian communities. Following the success of this pilot program, Budget 2017 proposes to invest $10 million over two years, starting in 2017–18, to continue this work.

International Research Collaborations

The Canadian Institute for Advanced Research (CIFAR) connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics that touch all humanity. Past collaborations facilitated by CIFAR are credited with fostering Canada’s leadership in artificial intelligence and deep learning. Budget 2017 proposes to provide renewed and enhanced funding of $35 million over five years, starting in 2017–18.

Earlier this week, in a March 21, 2017 posting, I highlighted Canada’s strength in the field of regenerative medicine, specifically stem cells. The $6M in the current budget doesn’t look like increased funding but rather a one-year extension. I’m sure they’re happy to receive it, but I imagine it’s a little hard to plan major research projects when you’re not sure how long your funding will last.

As for Canadian leadership in artificial intelligence, that was news to me. Here’s more from the budget,

Canada a Pioneer in Deep Learning in Machines and Brains

CIFAR’s Learning in Machines & Brains program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” a computer technique inspired by the human brain and neural networks, which is now routinely used by the likes of Google and Facebook. The program brings together computer scientists, biologists, neuroscientists, psychologists and others, and the result is rich collaborations that have propelled artificial intelligence research forward. The program is co-directed by one of Canada’s foremost experts in artificial intelligence, the Université de Montréal’s Yoshua Bengio, and for his many contributions to the program, the University of Toronto’s Geoffrey Hinton, another Canadian leader in this field, was awarded the title of Distinguished Fellow by CIFAR in 2014.

Meanwhile, from chapter 1 of the budget in the subsection titled “Preparing for the Digital Economy,” there is this provision for children,

Providing educational opportunities for digital skills development to Canadian girls and boys—from kindergarten to grade 12—will give them the head start they need to find and keep good, well-paying, in-demand jobs. To help provide coding and digital skills education to more young Canadians, the Government intends to launch a competitive process through which digital skills training organizations can apply for funding. Budget 2017 proposes to provide $50 million over two years, starting in 2017–18, to support these teaching initiatives.

I wonder if BC Premier Christy Clark is heaving a sigh of relief. At the 2016 #BCTECH Summit, she announced that students in BC would learn to code at school and in newly enhanced coding camp programmes (see my Jan. 19, 2016 posting). Interestingly, there was no mention of additional funding to support her initiative. I guess this money from the federal government comes at a good time as we will have a provincial election later this spring where she can announce the initiative again and, this time, mention there’s money for it.

Attracting brains from afar

Ivan Semeniuk in his March 23, 2017 article (for the Globe and Mail) reads between the lines to analyze the budget’s possible impact on Canadian science,

But a between-the-lines reading of the budget document suggests the government also has another audience in mind: uneasy scientists from the United States and Britain.

The federal government showed its hand at the 2017 #BCTECH Summit. From a March 16, 2017 article by Meera Bains for the CBC news online,

At the B.C. tech summit, Navdeep Bains, Canada’s minister of innovation, said the government will act quickly to fast track work permits to attract highly skilled talent from other countries.

“We’re taking the processing time, which takes months, and reducing it to two weeks for immigration processing for individuals [who] need to come here to help companies grow and scale up,” Bains said.

“So this is a big deal. It’s a game changer.”

That change will happen through the Global Talent Stream, a new program under the federal government’s temporary foreign worker program.  It’s scheduled to begin on June 12, 2017.

U.S. companies are taking notice and a Canadian firm, True North, is offering to help them set up shop.

“What we suggest is that they think about moving their operations, or at least a chunk of their operations, to Vancouver, set up a Canadian subsidiary,” said the company’s founder, Michael Tippett.

“And that subsidiary would be able to house and accommodate those employees.”

Industry experts say that while the future is unclear for the tech sector in the U.S., it’s clear high tech in B.C. is gearing up to take advantage.

US business attempts to take advantage of Canada’s relative stability and openness to immigration would seem to be the motive for at least one cross border initiative, the Cascadia Urban Analytics Cooperative. From my Feb. 28, 2017 posting,

There was some big news about the smallest version of the Cascadia region on Thursday, Feb. 23, 2017 when the University of British Columbia (UBC) , the University of Washington (state; UW), and Microsoft announced the launch of the Cascadia Urban Analytics Cooperative. From the joint Feb. 23, 2017 news release (read on the UBC website or read on the UW website),

In an expansion of regional cooperation, the University of British Columbia and the University of Washington today announced the establishment of the Cascadia Urban Analytics Cooperative to use data to help cities and communities address challenges from traffic to homelessness. The largest industry-funded research partnership between UBC and the UW, the collaborative will bring faculty, students and community stakeholders together to solve problems, and is made possible thanks to a $1-million gift from Microsoft.

Today’s announcement follows last September’s [2016] Emerging Cascadia Innovation Corridor Conference in Vancouver, B.C. The forum brought together regional leaders for the first time to identify concrete opportunities for partnerships in education, transportation, university research, human capital and other areas.

A Boston Consulting Group study unveiled at the conference showed the region between Seattle and Vancouver has “high potential to cultivate an innovation corridor” that competes on an international scale, but only if regional leaders work together. The study says that could be possible through sustained collaboration aided by an educated and skilled workforce, a vibrant network of research universities and a dynamic policy environment.

It gets better, it seems Microsoft has been positioning itself for a while if Matt Day’s analysis is correct (from my Feb. 28, 2017 posting),

Matt Day in a Feb. 23, 2017 article for the The Seattle Times provides additional perspective (Note: Links have been removed),

Microsoft’s effort to nudge Seattle and Vancouver, B.C., a bit closer together got an endorsement Thursday [Feb. 23, 2017] from the leading university in each city.

The partnership has its roots in a September [2016] conference in Vancouver organized by Microsoft’s public affairs and lobbying unit [emphasis mine.] That gathering was aimed at tying business, government and educational institutions in Microsoft’s home region in the Seattle area closer to its Canadian neighbor.

Microsoft last year [2016] opened an expanded office in downtown Vancouver with space for 750 employees, an outpost partly designed to draw to the Northwest more engineers than the company can get through the U.S. guest worker system [emphasis mine].

This was all prior to President Trump’s legislative moves in the US, which have at least one Canadian observer a little more gleeful than I’m comfortable with. From a March 21, 2017 article by Susan Lum  for CBC News online,

U.S. President Donald Trump’s efforts to limit travel into his country while simultaneously cutting money from science-based programs provides an opportunity for Canada’s science sector, says a leading Canadian researcher.

“This is Canada’s moment. I think it’s a time we should be bold,” said Alan Bernstein, president of CIFAR [which on March 22, 2017 was awarded $125M to launch the Pan Canada Artificial Intelligence Strategy in the Canadian federal budget announcement], a global research network that funds hundreds of scientists in 16 countries.

Bernstein believes there are many reasons why Canada has become increasingly attractive to scientists around the world, including the political climate in the United States and the Trump administration’s travel bans.

Thankfully, Bernstein calms down a bit,

“It used to be if you were a bright young person anywhere in the world, you would want to go to Harvard or Berkeley or Stanford, or what have you. Now I think you should give pause to that,” he said. “We have pretty good universities here [emphasis mine]. We speak English. We’re a welcoming society for immigrants.”​

Bernstein cautions that Canada should not be seen to be poaching scientists from the United States — but there is an opportunity.

“It’s as if we’ve been in a choir of an opera in the back of the stage and all of a sudden the stars all left the stage. And the audience is expecting us to sing an aria. So we should sing,” Bernstein said.

Bernstein said the federal government, with this week’s so-called innovation budget, can help Canada hit the right notes.

“Innovation is built on fundamental science, so I’m looking to see if the government is willing to support, in a big way, fundamental science in the country.”

Pretty good universities, eh? Thank you, Dr. Bernstein, for keeping some of the boosterism in check. Let’s leave the chest thumping to President Trump and his cronies.

Ivan Semeniuk’s March 23, 2017 article (for the Globe and Mail) provides more details about the situation in the US and in Britain,

Last week, Donald Trump’s first budget request made clear the U.S. President would significantly reduce or entirely eliminate research funding in areas such as climate science and renewable energy if permitted by Congress. Even the National Institutes of Health, which spearheads medical research in the United States and is historically supported across party lines, was unexpectedly targeted for a $6-billion (U.S.) cut that the White House said could be achieved through “efficiencies.”

In Britain, a recent survey found that 42 per cent of academics were considering leaving the country over worries about a less welcoming environment and the loss of research money that a split with the European Union is expected to bring.

In contrast, Canada’s upbeat language about science in the budget makes a not-so-subtle pitch for diversity and talent from abroad, including $117.6-million to establish 25 research chairs with the aim of attracting “top-tier international scholars.”

For good measure, the budget also includes funding for science promotion and $2-million annually for Canada’s yet-to-be-hired Chief Science Advisor, whose duties will include ensuring that government researchers can speak freely about their work.

“What we’ve been hearing over the last few months is that Canada is seen as a beacon, for its openness and for its commitment to science,” said Ms. Duncan [Kirsty Duncan, Minister of Science], who did not refer directly to either the United States or Britain in her comments.

Providing a less optimistic note, Erica Alini in her March 22, 2017 online article for Global News mentions a perennial problem, the Canadian brain drain,

The budget includes a slew of proposed reforms and boosted funding for existing training programs, as well as new skills-development resources for unemployed and underemployed Canadians not covered under current EI-funded programs.

There are initiatives to help women and indigenous people get degrees or training in science, technology, engineering and mathematics (the so-called STEM subjects) and even to teach kids as young as kindergarten-age to code.

But there was no mention of how to make sure Canadians with the right skills remain in Canada, TD Economics’ DePratto [TD: Toronto-Dominion Bank, which is currently experiencing a scandal (March 13, 2017 Huffington Post news item)] told Global News.

Canada ranks in the middle of the pack compared to other advanced economies when it comes to its share of graduates in STEM fields, but the U.S. doesn’t shine either, said DePratto [Brian DePratto, senior economist at TD].

The key difference between Canada and the U.S. is the ability to retain domestic talent and attract brains from all over the world, he noted.

To be blunt, there may be some opportunities for Canadian science, but we would do well to remember that (a) US businesses have no particular loyalty to Canada and (b) all it takes is an election to change any perceived advantages into disadvantages.

Digital policy and intellectual property issues

Dubbed by some as the ‘innovation’ budget (official title: Building a Strong Middle Class), the budget attempts to address a longstanding innovation issue (from a March 22, 2017 posting by Michael Geist on his eponymous blog; Note: Links have been removed),

The release of today’s [March 22, 2017] federal budget is expected to include a significant emphasis on innovation, with the government revealing how it plans to spend (or re-allocate) hundreds of millions of dollars that is intended to support innovation. Canada’s dismal innovation record needs attention, but spending our way to a more innovative economy is unlikely to yield the desired results. While Navdeep Bains, the Innovation, Science and Economic Development Minister, has talked for months about the importance of innovation, Toronto Star columnist Paul Wells today delivers a cutting but accurate assessment of those efforts:

“This government is the first with a minister for innovation! He’s Navdeep Bains. He frequently posts photos of his meetings on Twitter, with the hashtag “#innovation.” That’s how you know there is innovation going on. A year and a half after he became the minister for #innovation, it’s not clear what Bains’s plans are. It’s pretty clear that within the government he has less than complete control over #innovation. There’s an advisory council on economic growth, chaired by the McKinsey guru Dominic Barton, which periodically reports to the government urging more #innovation.

There’s a science advisory panel, chaired by former University of Toronto president David Naylor, that delivered a report to Science Minister Kirsty Duncan more than three months ago. That report has vanished. One presumes that’s because it offered some advice. Whatever Bains proposes, it will have company.”

Wells is right. Bains has been very visible with plenty of meetings and public photo shoots but no obvious innovation policy direction. This represents a missed opportunity since Bains has plenty of policy tools at his disposal that could advance Canada’s innovation framework without focusing on government spending.

For example, Canada’s communications system – wireless and broadband Internet access – falls directly within his portfolio and is crucial for both business and consumers. Yet Bains has been largely missing in action on the file. He gave approval for the Bell – MTS merger that virtually everyone concedes will increase prices in the province and make the communications market less competitive. There are potential policy measures that could bring new competitors into the market (MVNOs [mobile virtual network operators] and municipal broadband) and that could make it easier for consumers to switch providers (ban on unlocking devices). Some of this falls to the CRTC, but government direction and emphasis would make a difference.

Even more troubling has been his near total invisibility on issues relating to new fees or taxes on Internet access and digital services. Canadian Heritage Minister Mélanie Joly has taken control of the issue with the possibility that Canadians could face increased costs for their Internet access or digital services through mandatory fees to contribute to Canadian content.  Leaving aside the policy objections to such an approach (reducing affordable access and the fact that foreign sources now contribute more toward Canadian English language TV production than Canadian broadcasters and distributors), Internet access and e-commerce are supposed to be Bains’ issue and they have a direct connection to the innovation file. How is it possible for the Innovation, Science and Economic Development Minister to have remained silent for months on the issue?

Bains has been largely missing on trade related innovation issues as well. My Globe and Mail column today focuses on a digital-era NAFTA, pointing to likely U.S. demands on data localization, data transfers, e-commerce rules, and net neutrality.  These are all issues that fall under Bains’ portfolio and will impact investment in Canadian networks and digital services. There are innovation opportunities for Canada here, but Bains has been content to leave the policy issues to others, who will be willing to sacrifice potential gains in those areas.

Intellectual property policy is yet another area that falls directly under Bains’ mandate with an obvious link to innovation, but he has done little on the file. Canada won a huge NAFTA victory late last week involving the Canadian patent system, which was challenged by pharmaceutical giant Eli Lilly. Why has Bains not promoted the decision as an affirmation of Canada’s intellectual property rules?

On the copyright front, the government is scheduled to conduct a review of the Copyright Act later this year, but it is not clear whether Bains will take the lead or again cede responsibility to Joly. The Copyright Act is statutorily under the Industry Minister and reform offers the chance to kickstart innovation. …

For anyone who’s not familiar with this area, innovation is often code for commercialization of science and technology research efforts. These days, digital service and access policies and intellectual property policies are all key to research and innovation efforts.

The country that’s most often (except in mainstream Canadian news media) held up as an example of leadership in innovation is Estonia. The Economist profiled the country in a July 31, 2013 article, and a July 7, 2016 article on apolitical.co provides an update.

Conclusions

Science monies for the tri-council science funding agencies (NSERC, SSHRC, and CIHR) are more or less flat, but there were a number of line items in the federal budget which qualify as science funding. The $221M over five years for Mitacs, the $125M for the Pan-Canadian Artificial Intelligence Strategy, additional funding for the Canada Research Chairs, and some of the digital funding could also be included as part of the overall haul. This is in line with the former government’s (Stephen Harper’s Conservatives) penchant for keeping the tri-councils’ budgets under control while spreading largesse elsewhere (notably the Perimeter Institute; TRIUMF [Canada’s national laboratory for particle and nuclear physics]; and, in the 2015 budget, $243.5 million towards the Thirty Metre Telescope [TMT], a massive astronomical observatory to be constructed on the summit of Mauna Kea, Hawaii, as part of a $1.5-billion project). This has led to some hard feelings in the past, with ‘big science’ projects getting what some have felt is an undeserved boost in finances while the ‘small fish’ are left scrabbling for the ever-diminishing pittances (diminished by budget cuts in years past and by inflation) available from the tri-council agencies.

Mitacs, which started life as a federally funded Network Centre for Excellence focused on mathematics, has since shifted focus to become an innovation ‘champion’. You can find Mitacs here and you can find the organization’s March 2016 budget submission to the House of Commons Standing Committee on Finance here. At the time, they did not request a specific amount of money; they just asked for more.

The amount Mitacs expects to receive this year is over $40M, which represents more than double what they received from the federal government and almost half of their total income in the 2015-16 fiscal year, according to their 2015-16 annual report (see p. 327 for the Mitacs Statement of Operations to March 31, 2016). In fact, the federal government forked over $39,900,189 in the 2015-16 fiscal year, making it Mitacs’ largest supporter, while Mitacs’ total income (receipts) was $81,993,390.
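To put those figures side by side, here’s a quick back-of-the-envelope check in Python (all numbers are the ones reported above; nothing new is assumed):

```python
# Quick arithmetic check of the Mitacs figures cited above.
new_commitment = 221_000_000         # Budget 2017: $221M over five years
years = 5
annual_new = new_commitment / years  # new federal money per year

federal_2015_16 = 39_900_189         # federal contribution, 2015-16 annual report
total_income_2015_16 = 81_993_390    # total receipts (income), 2015-16

print(f"New federal funding per year: ${annual_new:,.0f}")  # $44,200,000
print(f"Federal share of 2015-16 income: "
      f"{federal_2015_16 / total_income_2015_16:.1%}")      # 48.7%
```

So the new commitment works out to roughly $44.2M a year, and the 2015-16 federal contribution was indeed just shy of half of Mitacs’ total receipts.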

It’s a strange thing, but too much money can be as bad as too little. I wish the folks at Mitacs nothing but good luck with their windfall.

I don’t see anything in the budget that encourages innovation and investment from the industrial sector in Canada.

Finally, innovation is a cultural issue as much as it is a financial one. Having worked with a number of developers and start-up companies, I’ve found the most popular business model is to build a successful business that gets acquired by a large enterprise, thereby allowing the entrepreneurs to retire before the age of 30 (or 40 at the latest). I don’t see anything from the government acknowledging the problem, let alone any attempt to tackle it.

All in all, it was a decent budget with nothing in it to seriously offend anyone.

Brown recluse spider, one of the world’s most venomous spiders, shows off unique spinning technique

Caption: American Brown Recluse Spider is pictured. Credit: Oxford University

According to scientists from Oxford University this deadly spider could teach us a thing or two about strength. From a Feb. 15, 2017 news item on ScienceDaily,

Brown recluse spiders use a unique micro looping technique to make their threads stronger than that of any other spider, a newly published UK-US collaboration has discovered.

One of the most feared and venomous arachnids in the world, the American brown recluse spider has long been known for its signature necro-toxic venom, as well as its unusual silk. Now, new research offers an explanation for how the spider is able to make its silk uncommonly strong.

Researchers suggest that if applied to synthetic materials, the technique could inspire scientific developments and improve impact absorbing structures used in space travel.

The study, published in the journal Materials Horizons, was produced by scientists from Oxford University’s Department of Zoology, together with a team from the Applied Science Department at Virginia’s College of William & Mary. Their surveillance of the brown recluse spider’s spinning behaviour shows how, and to what extent, the spider manages to strengthen the silk it makes.

A Feb. 15, 2017 University of Oxford press release, which originated the news item, provides more detail about the research,

From observing the arachnid, the team discovered that unlike other spiders, who produce round ribbons of thread, recluse silk is thin and flat. This structural difference is key to the thread’s strength, providing the flexibility needed to prevent premature breakage and withstand the knots created during spinning which give each strand additional strength.

Professor Hannes Schniepp from William & Mary explains: “The theory of knots adding strength is well proven. But adding loops to synthetic filaments always seems to lead to premature fibre failure. Observation of the recluse spider provided the breakthrough solution; unlike all spiders its silk is not round, but a thin, nano-scale flat ribbon. The ribbon shape adds the flexibility needed to prevent premature failure, so that all the microloops can provide additional strength to the strand.”

By using computer simulations to apply this technique to synthetic fibres, the team were able to test and prove that adding even a single loop significantly enhances the strength of the material.

William & Mary PhD student Sean Koebley adds: “We were able to prove that adding even a single loop significantly enhances the toughness of a simple synthetic sticky tape. Our observations open the door to new fibre technology inspired by the brown recluse.”

Speaking on how the recluse’s technique could be applied more broadly in the future, Professor Fritz Vollrath, of the Department of Zoology at Oxford University, expands: “Computer simulations demonstrate that fibres with many loops would be much, much tougher than those without loops. This right away suggests possible applications. For example carbon filaments could be looped to make them less brittle, and thus allow their use in novel impact absorbing structures. One example would be spider-like webs of carbon-filaments floating in outer space, to capture the drifting space debris that endangers astronauts’ lives and satellite integrity.”
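The mechanics behind that result can be illustrated with a toy energy model. Toughness is the energy a fibre absorbs before failure (the area under its force-extension curve); each sacrificial loop that peels open dissipates extra energy before the core strand is finally stretched to breaking. The numbers below are purely illustrative assumptions, not values from the paper:

```python
# Toy model: a fibre with n sacrificial loops. Each loop peels open at a
# modest adhesion force, paying energy to release hidden length, before
# the core strand is stretched to its breaking point.

def toughness(n_loops, peel_force=0.2, loop_length=1.0,
              break_force=1.0, break_stretch=0.5):
    """Energy absorbed before failure, in arbitrary illustrative units.

    Peeling each loop dissipates peel_force * loop_length; the final
    stretch to break adds 0.5 * break_force * break_stretch (a simple
    triangular force-extension ramp, i.e. linear elasticity).
    """
    peel_energy = n_loops * peel_force * loop_length
    stretch_energy = 0.5 * break_force * break_stretch
    return peel_energy + stretch_energy

no_loops = toughness(0)   # 0.25: the strand alone
one_loop = toughness(1)   # 0.45: a single loop adds ~80% more energy
twenty = toughness(20)    # 4.25: many loops dominate the energy budget
```

Even in this crude sketch a single loop nearly doubles the absorbed energy, which mirrors the researchers’ qualitative finding that one loop significantly enhances toughness.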

Here’s a link to and a citation for the paper,

Toughness-enhancing metastructure in the recluse spider’s looped ribbon silk by S. R. Koebley, F. Vollrath, and H. C. Schniepp. Mater. Horiz., 2017, Advance Article. DOI: 10.1039/C6MH00473C First published online 15 Feb 2017

This paper is open access although you may need to register with the Royal Society of Chemistry’s publishing site to get access.