Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This one came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from the NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada, or as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still find, that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now but I think it’s a useful question to ask, and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said where science is concerned, it’s all about the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) most of their answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
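For readers who like to see the arithmetic, here is a minimal sketch of how a Growth Index along the lines described above could be computed. It assumes GI is simply a country’s publication growth ratio divided by the world’s growth ratio over the same period; the function name and the sample figures are mine, not the report’s (only China’s reported score of roughly 1.5 comes from the text above).

def growth_index(country_2003: float, country_2014: float,
                 world_2003: float, world_2014: float) -> float:
    """GI > 1.0 means the country's output grew faster than the world average."""
    country_growth = country_2014 / country_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Invented numbers for illustration: world output doubles while the country's triples,
# giving a GI of 1.5, i.e., 50% above the world average (China's reported score).
print(round(growth_index(100, 300, 1_000, 2_000), 2))  # 1.5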

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), as well as GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period, and when it’s over, other people get access and can innovate further. It’s not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
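To make those ‘patent flow’ numbers concrete, here is a minimal sketch based on my own reading of the definition quoted above: the difference between patents owned by Canadian assignees and patents invented in Canada, divided by patents invented in Canada, so that a flow of −0.26 corresponds to a 26% ownership deficit. The function and the sample figures are hypothetical, not taken from the report.

def patent_flow(owned_by_canadian_assignees: float, invented_in_canada: float) -> float:
    """Negative values mean fewer patents are owned in Canada than are invented there."""
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Hypothetical figures: 7,400 patents owned versus 10,000 invented gives -0.26,
# matching the 26% deficit the report describes for 2014.
print(round(patent_flow(7_400, 10_000), 2))  # -0.26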

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain, while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of treating the terms as synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention, but they seem to forget them and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress; open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time; that may be because there’s less information and because the 2012 report was the first to examine the Canadian R&D scene through a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos to their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians, but we have to acknowledge that many countries have stepped up their research game and that’s good for all of us. You don’t get better at anything unless you work with and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers. We slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that their signature work set the foundation for later work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on its Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada,” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about the Hedy Lamarr and Corel Draw suit. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

Hedy Lamarr in “Ziegfeld Girl” (1941, MGM). Image courtesy mptvimages.com [downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian), Note: Links have been removed,

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave it to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)

Lamarr’s work as an inventor was largely overlooked until the 1990s when the technology community turned her into a ‘cultish’ favourite and from there her reputation grew and acknowledgement increased, culminating in Rhodes’ book and the documentary by Alexandra Dean, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them, we’re not always appreciated for our brains. Not even by people who are supposed to know better such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

• The State of Science and Technology in Canada (2006) [emphasis mine]
• Innovation and Business Strategy: Why Canada Falls Short (2009)
• Catalyzing Canada’s Digital Economy (2010)
• Informing Research Choices: Indicators and Judgment (2012)
• The State of Science and Technology in Canada (2012) [emphasis mine]
• The State of Industrial R&D in Canada (2013) [emphasis mine]
• Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that work with an assessment of industrial research, which was previously separate. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit from the cover onwards what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment: I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would impact the size of the expert panel and the size of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii-xiv Print; pp. 15-16 PDF]

The proportion of women to men as reviewers jumped up to about 36% (4 of 11 reviewers) and there are two reviewers from the Maritime provinces. As usual, reviewers external to Canada were from Europe. Although this time, they came from Dutch institutions rather than UK or German institutions. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is being conducted. (I have more about this from an April 5, 2018 article by James Somers for The Atlantic but more about that later.) If I understand rightly, that may not have been possible due to the nature of the questions posed by the government when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) of scientific accomplishment and progress are not the best and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, and as a means of addressing some of the problems with metrics, the panel decided to conduct a survey, which the panel for the third assessment has also done (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
• a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
• a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]

My reading: there’s a problem and we’re not going to try and fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’.

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii; pp. 19-20]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. [emphasis mine] [p. xviii Print; p. 20 PDF]

For anyone unfamiliar with Canadian politics, 2009–2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in their research output is predictable.

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking, and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have, or could have, found some way to measure those efforts, but I wonder if the report could have acknowledged them, perhaps with a couple of sentences about the confounding nature of scientific research: areas that are ignored for years, even decades, can suddenly become important (e.g., machine learning), yet they aren’t counted as part of scientific progress until after they’re universally recognized.

Still, point taken about our diminished standing in ‘emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the
humanities and social sciences (plus health and the environment), and does
not specialize as much as others in areas traditionally referred to as the physical
sciences and engineering. Specifically, Canada has comparatively high levels
of research output in Psychology and Cognitive Sciences, Public Health and
Health Services, Philosophy and Theology, Earth and Environmental Sciences,
and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected
in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies,
Engineering, and Mathematics and Statistics. The comparatively low research
output in core areas of the natural sciences and engineering is concerning,
and could impair the flexibility of Canada’s research base, preventing research
institutions and researchers from being able to pivot to tomorrow’s emerging
research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally) and let’s also note that one of the great physics texts, De rerum natura by Lucretius, was a multi-volume poem (from Lucretius’ Wikipedia entry; Note: Links have been removed).

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population and, according to PISA scores, our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme where 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little bit like Hedy Lamarr’s problem except instead of being judged for our looks and having our inventions dismissed, we’re being judged for not applying ourselves to physical sciences and engineering and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime for as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here and open branches, and that’s largely how we get our jobs, since we don’t have all that many large companies of our own. Increasingly, multinationals are also locating R&D shops here.

While our small- to medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term and serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries. From a June 14, 2017 article by Beckie Smith for The PIE News,

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also Huawei Canada, the Canadian arm of the Chinese multinational telecom company, which has enjoyed success here and continues to invest in the country. From a Jan. 19, 2018 article about security concerns by Matthew Braga for the Canadian Broadcasting Corporation (CBC) online news,

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and say it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of a cultural issue where start-ups are designed for sale as soon as possible, and this isn’t new. Years ago, there was an accounting firm that published a series of historical maps (the last one I saw was in 2005) of technology companies in the Vancouver region. Technology companies were being developed and sold to large foreign companies from the 19th century to the present day.

Part 2

Immune to CRISPR?

I guess if you’re going to use bacteria as part of your gene editing technology (CRISPR [clustered regularly interspaced short palindromic repeats]/Cas9), then you might half expect that the body’s immune system has developed some defenses. A Jan. 9, 2018 article by Sarah Zhang for The Atlantic provides some insight into what the new research suggests (Note: Links have been removed),

2018 is supposed to be the year of CRISPR in humans. The first U.S. and European clinical trials that test the gene-editing tool’s ability to treat diseases—such as sickle-cell anemia, beta thalassemia, and a type of inherited blindness—are slated to begin this year.

But the year has begun on a cautionary note. On Friday [January 5, 2018], Stanford researchers posted a preprint (which has not been peer reviewed) to the website biorXiv highlighting a potential obstacle to using CRISPR in humans: Many of us may already be immune to it. That’s because CRISPR actually comes from bacteria that often live on or infect humans, and we have built up immunity to the proteins from these bacteria over our lives.

Not all CRISPR therapies in humans will be doomed. “We don’t think this is the end of the story. This is the start of the story,” says Porteus [Matthew Porteus, a pediatrician and stem-cell researcher at Stanford]. There are likely ways around the problem of immunity to CRISPR proteins, and many of the early clinical trials appear to be designed around this problem.

Porteus and his colleagues focused on two versions of Cas9, the bacterial protein most commonly used in CRISPR gene editing. One comes from Staphylococcus aureus, which often harmlessly lives on skin but can sometimes cause staph infections, and another from Streptococcus pyogenes, which causes strep throat but can also become “flesh-eating bacteria” when it spreads to other parts of the body. So yeah, you want your immune system to be on guard against these bacteria.

The human immune system has a couple different ways of recognizing foreign proteins, and the team tested for both. First, they looked to see if people have molecules in their blood called antibodies that can specifically bind to Cas9. Among 34 people they tested, 79 percent had antibodies against the staph Cas9 and 65 percent against the strep Cas9.

The Stanford team only tested for preexisting immunity against Cas9, but anytime you inject a large bacterial protein into the human body, it can provoke an immune response. After all, that’s how the immune system learns to fight off bacteria it’s never seen before. (Preexisting immunity can make the response faster and more robust, though.)

The danger of the immune system turning on a patient’s body hangs over a lot of research into correcting genes. In the late 1990s and 2000s, research into gene therapy was derailed by the death of 18-year-old Jesse Gelsinger, who died from an immune reaction to the virus used to deliver the corrected gene. This is the worst-case scenario that the CRISPR world hopes to avoid.

Here’s a link to and a citation for the preprint,

Identification of Pre-Existing Adaptive Immunity to Cas9 Proteins in Humans by Carsten Trevor Charlesworth, Priyanka S Deshpande, Daniel P Dever, Beruh Dejene, Natalia Gomez-Ospina, Sruthi Mantri, Mara Pavel-Dinu, Joab Camarena, Kenneth I Weinberg, Matthew H Porteus. bioRxiv posted January 5, 2018 doi: https://doi.org/10.1101/243345

This article is a preprint and has not been peer-reviewed …

This preprint (not yet published paper) is open access and open for feedback.

Meanwhile, the year of CRISPR takes off (from a January 10, 2018 American Chemical Society news release on EurekAlert),

This year could be a defining one for CRISPR, the gene editing technique, which has been hailed as an important breakthrough in laboratory research. That’s because the first company-sponsored clinical studies will be conducted to see if it can help treat diseases in humans, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.

C&EN Assistant Editor Ryan Cross reports that a big push is coming from industry, specifically from three companies that are each partly founded by one of the three inventors of the method. They are zeroing in on the blood diseases called sickle-cell anemia and β-thalassemia, mostly because their precise cause is known. In these diseases, hemoglobin doesn’t function properly, leading to severe health issues in some people. Crispr Therapeutics and Intellia Therapeutics plan to test the technique to boost levels of an alternative version of healthy hemoglobin. Editas Medicine, however, will also use CRISPR to correct mutations in the faulty hemoglobin gene. Labs led by university researchers are also joining the mix, starting or continuing clinical trials with the approach in 2018.

Because CRISPR is being used to cut a cell’s DNA and insert a new sequence, concerns have been raised about the potential for accidents. A cut in the wrong place could mean introducing a new mutation that could be benign — or cancerous. But according to proponents of the method, researchers are conducting extensive computer predictions and in vitro tests to help avoid this outcome.

The January 8, 2018 Chemical and Engineering News (C&EN) open access article by Ryan Cross is here.

Finally, if you are interested in how this affects research as it’s being developed, there’s University of British Columbia researcher Rosie Redfield’s January 16, 2018 posting on RRResearch blog,

Thursday’s [January 11, 2018] post described the hypothesis that bacteria might use gene transfer agent particles to inoculate other cells in the population with fragments of phage DNA, and outlined an experiment to test this.  Now I’m realizing that I need to know a lot more about the kind of immunity I should expect to see if this GTA-as-vaccine hypothesis is correct.

That should give you some idea of what I meant by “research as it’s being developed.” Redfield’s blog is not for the mildly interested.

Redfield is well known internationally as one of the first to refute research which suggested the existence of an ‘arsenic bacterium’ (see my Dec. 8, 2010 posting, “My apologies for arsenic blooper”; she’s first mentioned in the second excerpt, second paragraph). The affair was known online as #arseniclife. There’s a May 27, 2011 essay by Carl Zimmer on Slate titled: The Discovery of Arsenic-Based Twitter: How #arseniclife changed science.

How small can a carbon nanotube get before it stops being ‘electrical’?

Research, which began as an attempt to get reproducible electronic measurements, yielded some unexpected results according to a January 3, 2018 news item on phys.org,

Carbon nanotubes bound for electronics not only need to be as clean as possible to maximize their utility in next-generation nanoscale devices, but contact effects may limit how small a nano device can be, according to researchers at the Energy Safety Research Institute (ESRI) at Swansea University [UK] in collaboration with researchers at Rice University [US].

ESRI Director Andrew Barron, also a professor at Rice University in the USA, and his team have figured out how to get nanotubes clean enough to obtain reproducible electronic measurements and in the process not only explained why the electrical properties of nanotubes have historically been so difficult to measure consistently, but have shown that there may be a limit to how “nano” future electronic devices can be using carbon nanotubes.

Swansea University issued a January 3, 2018 press release (also on EurekAlert), which originated the news item and explains the work in more detail,

Like any normal wire, semiconducting nanotubes are progressively more resistant to current along their length. But conductivity measurements of nanotubes over the years have been anything but consistent. The ESRI team wanted to know why.

“We are interested in the creation of nanotube-based conductors, and while people have been able to make wires, their conduction has not met expectations. We were interested in determining the basic science behind the variability observed by other researchers.”

They discovered that hard-to-remove contaminants — leftover iron catalyst, carbon and water — could easily skew the results of conductivity tests. Burning them away, Barron said, creates new possibilities for carbon nanotubes in nanoscale electronics.

The new study appears in the American Chemical Society journal Nano Letters.

The researchers first made multiwalled carbon nanotubes between 40 and 200 nanometers in diameter and up to 30 microns long. They then either heated the nanotubes in a vacuum or bombarded them with argon ions to clean their surfaces.

They tested individual nanotubes the same way one would test any electrical conductor: By touching them with two probes to see how much current passes through the material from one tip to the other. In this case, their tungsten probes were attached to a scanning tunneling microscope.

In clean nanotubes, resistance got progressively stronger as the distance increased, as it should. But the results were skewed when the probes encountered surface contaminants, which increased the electric field strength at the tip. And when measurements were taken within 4 microns of each other, regions of depleted conductivity caused by contaminants overlapped, further scrambling the results.

“We think this is why there’s such inconsistency in the literature,” Barron said.

“If nanotubes are to be the next generation lightweight conductor, then consistent results, batch-to-batch, and sample-to-sample, is needed for devices such as motors and generators as well as power systems.”

Annealing the nanotubes in a vacuum above 200 degrees Celsius (392 degrees Fahrenheit) reduced surface contamination, but not enough to eliminate inconsistent results, they found. Argon ion bombardment also cleaned the tubes, but led to an increase in defects that degrade conductivity.

Ultimately they discovered vacuum annealing nanotubes at 500 degrees Celsius (932 Fahrenheit) reduced contamination enough to accurately measure resistance, they reported.

To now, Barron said, engineers who use nanotube fibers or films in devices modify the material through doping or other means to get the conductive properties they require. But if the source nanotubes are sufficiently decontaminated, they should be able to get the right conductivity by simply putting their contacts in the right spot.

“A key result of our work was that if contacts on a nanotube are less than 1 micron apart, the electronic properties of the nanotube changes from conductor to semiconductor, due to the presence of overlapping depletion zones” said Barron, “this has a potential limiting factor on the size of nanotube based electronic devices – this would limit the application of Moore’s law to nanotube devices.”

Chris Barnett of Swansea is lead author of the paper. Co-authors are Cathren Gowenlock and Kathryn Welsby, and Rice alumnus Alvin Orbaek White of Swansea. Barron is the Sêr Cymru Chair of Low Carbon Energy and Environment at Swansea and the Charles W. Duncan Jr.–Welch Professor of Chemistry and a professor of materials science and nanoengineering at Rice.

The Welsh Government Sêr Cymru National Research Network in Advanced Engineering and Materials, the Sêr Cymru Chair Program, the Office of Naval Research and the Robert A. Welch Foundation supported the research.
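Since I always find these contact effects easier to grasp with numbers, here’s a quick toy model of the depletion-zone picture. To be clear, this is my own illustration and not the paper’s analysis; the contact resistance, resistance per micron, and depletion length below are made-up placeholder values. It simply shows how a depleted region of roughly 0.5 micron around each contact would make any measurement with contacts less than about 1 micron apart behave differently from the ordinary, length-proportional (ohmic) regime.

```python
# Toy two-probe model of nanotube resistance vs. contact separation.
# All numbers here are illustrative placeholders, not values from the Nano Letters paper.

def two_probe_resistance(separation_um,
                         r_contact_kohm=20.0,   # assumed resistance at each probe contact
                         r_per_um_kohm=5.0,     # assumed intrinsic resistance per micron of tube
                         depletion_um=0.5):     # assumed depleted region around each contact
    """Return (total resistance in kOhm, True if the two depletion zones overlap)."""
    zones_overlap = separation_um < 2 * depletion_um
    resistance = 2 * r_contact_kohm + r_per_um_kohm * separation_um
    return resistance, zones_overlap

for sep in (0.5, 1.0, 2.0, 5.0, 10.0):
    r, overlap = two_probe_resistance(sep)
    regime = "depletion zones overlap -> semiconductor-like" if overlap else "ohmic"
    print(f"{sep:4.1f} um : {r:6.1f} kOhm ({regime})")
```

In the actual measurements, the crossover also depended on how clean the tube was, which this little model ignores entirely.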

Rice University published its own January 4, 2018 news release (also on EurekAlert), which is almost (95%) identical to the press release from Swansea. That’s a bit unusual, as collaborating institutions usually like to focus on their unique contributions to the research, hence multiple news/press releases.

Dexter Johnson, in a January 11, 2018 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), adds a detail or two while writing in an accessible style.

Here’s a link to and a citation for the paper,

Spatial and Contamination-Dependent Electrical Properties of Carbon Nanotubes by Chris J. Barnett, Cathren E. Gowenlock, Kathryn Welsby, Alvin Orbaek White, and Andrew R. Barron. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b03390 Publication Date (Web): December 19, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

EuroScience Open Forum in Toulouse, France from July 9 to July 14, 2018

A March 22, 2018 EuroScience Open Forum (ESOF) 2018 announcement (received via email) trumpets some of the latest news for this event being held July 9 to July 14, 2018 in Toulouse, France. (Located in the south, in the region known as Occitanie, Toulouse is the fourth-largest city in France and is situated on the River Garonne. See more in its Wikipedia entry.) Here’s the latest from the announcement,

ESOF 2018 Plenary Sessions

Top speakers and hot topics confirmed for the Plenary Sessions at ESOF 2018

Lorna Hughes, Professor at the University of Glasgow, Chair of the Europeana Research Advisory Board, will give a plenary keynote on “Digital humanities”. John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University, famous for his PLoS Medicine paper on “Why most Published Research Findings are False”, will talk about “Reproducibility”. A third plenary will involve María Teresa Ruiz, a Chilean astronomer and winner of the 2017 L’Oréal-UNESCO award for Women in Science: she will talk about exoplanets.

 

ESOF under the spotlights

French President’s high patronage: ESOF is at the top of the institutional agendas in 2018.

“Sharing science”. But also putting science at the highest level making it a real political and societal issue in a changing world. ESOF 2018 has officially received the “High Patronage” from the President of the French Republic Emmanuel Macron. ESOF 2018 has also been listed by the French Minister for Europe and Foreign Affairs among the 27 priority events for France.

A constellation of satellites around the ESOF planet!

Second focus on Satellite events:
– 4th GEO Blue Planet Symposium, organised 4-6 July by Mercator Ocean.
– ECSJ 2018, 5th European Conference of Science Journalists, co-organised by the French Association of Science Journalists in the News Press (AJSPI) and the Union of European Science Journalists’ Associations (EUSJA), on 8 July.
– Esprit de Découvertes (Discovery spirit), organised by the Académie des Sciences, Inscriptions et Belles Lettres de Toulouse on 8 July.

More Satellite events to come! Don’t forget to stay long enough in order to participate in these focused Satellite Events and … to discover the city.

The programme for ESOF 2018 can be found here.

Science meets poetry

As has become usual, there is a European City of Science event being held in Toulouse in concert (more or less) with and in celebration of the ESOF event. The City of Science event is being held from July 7 – July 16, 2018.

Organizers have not announced much in the way of programming for the City of Science other than a ‘Science meets Poetry’ meeting,

A unique feature of ESOF is the Science meets Poetry day, which is held at every Forum and brings poets and scientists together.

Indeed, there is today a real artistic movement of poets connected with ESOF. Famous participants from earlier meetings include contributors such as the late Seamus Heaney, Roald Hoffmann [sic], Jean-Pierre Luminet and Prince Henrik of Denmark, but many young and aspiring poets are also involved.

The meeting is in two parts:

  • lectures on subjects involving science with poetry
  • a poster session for contributed poems

There are competitions associated with the event and every Science meets Poetry day gives rise to the publication of Proceedings in book form.

In Toulouse, the event will be staged by EuroScience in collaboration with the Académie des Jeux Floraux of Toulouse, the Société des Poètes Français and the European Academy of Sciences Arts and Letters, under patronage of UNESCO. The full programme will be announced later, but includes such themes as a celebration of the number 7 in honour of the seven Troubadours of Toulouse, who held the first Jeux Floraux in the year 1323, Space Travel and the first poets and scientists who wrote about it (including Cyrano de Bergerac and Johannes Kepler), from Metrodorus and Diophantes of Alexandria to Fermat’s Last Theorem, the Poetry of Ecology, Lafayette’s ship the Hermione seen from America and many other thought-provoking subjects.

The meeting will be held in the Hôtel d’Assézat, one of the finest old buildings of the ancient city of Toulouse.

Exceptionally, it will be open to registered participants from ESOF and also to some members of the public within the limits of available space.

Tentative Programme for the Science meets Poetry day on the 12th of July 2018

(some Speakers are still to be confirmed)

  • 09:00 – 09:30 A welcome for the poets: The legendary Troubadours of Toulouse and the poetry of the number 7 (Philippe Dazet-Brun, Académie des Jeux Floraux)
  • 09:30 – 10:00 The science and the poetry of violets from Toulouse (Marie-Thérèse Esquerré-Tugayé, Laboratoire de Recherche en Sciences Végétales, Université Toulouse III-CNRS)
  • 10:00 – 10:30 The true Cyrano de Bergerac, Gascon poet, and his celebrated travels to the Moon (Jean-Charles Dorge, Société des Poètes Français)
  • 10:30 – 11:00 Coffee break (with poems as posters)
  • 11:00 – 11:30 Kepler the author and the imaginary travels of the famous astronomer to the Moon (Uli Rothfuss, die Kogge International Society of German-language authors)
  • 11:30 – 12:00 Sputnik and Space in Russian Literature (Alla-Valeria Mikhalevitch, Laboratory of the Russian Academy of Sciences, Saint-Petersburg)
  • 12:00 – 12:30 Poems for the planet Mars (James Philip Kotsybar, the ‘Bard of Mars’, California and NASA, USA)
  • 12:30 – 14:00 Lunch and meetings of the juries of poetry competitions
  • 14:00 – 14:30 The voyage of the Hermione and “Lafayette, here we come!” seen by an American poet (Nick Norwood, University of Columbus Ohio)
  • 14:30 – 15:00 Alexandria, Toulouse and Oxford: the poem rendered by Eutrope and Fermat’s Last Theorem (Chaunes [Jean-Patrick Connerade], European Academy of Sciences, Arts and Letters, UNESCO)
  • 15:00 – 15:30 How biology is celebrated in contemporary poetry (Assumpcio Forcada, biologist and poet from Barcelona)
  • 15:30 – 16:00 A book of poems around ecology: a central subject in modern poetry (Sam Illingworth, Metropolitan University of Manchester)
  • 16:00 – 16:30 Coffee break (with poems as posters)
  • 16:30 – 17:00 Toulouse and Europe: poetry at the crossroads of European languages (Stefka Hrusanova, Bulgarian Academy and Linguaggi-Di-Versi)
  • 17:00 – 17:30 Round table: seven poets from Toulouse give their views on the theme: Languages, invisible frontiers within both science and poetry
  • 17:30 – 18:00 The winners of the poetry competitions are announced
  • 18:00 – 18:15 Chaunes. Closing remarks

I’m fascinated, as in all the years I’ve covered European City of Science events I’ve never before tripped across a ‘Science meets Poetry’ meeting. Sadly, there’s no contact information for those organizers. However, you can sign up for a newsletter, and there are contacts for the larger event, the European City of Science or, as they are calling it in Toulouse, the Science in the City Festival,

Contact

Camille Rossignol (Toulouse Métropole)

camille.rossignol@toulouse-metropole.fr

+33 (0)5 36 25 27 83

François Lafont (ESOF 2018 / So Toulouse)

francois.lafont@toulouse2018.esof.eu

+33 (0)5 61 14 58 47

Travel grants for media types

One last note, and this one is for journalists: it’s still possible to apply for a travel grant, which helps ease, but doesn’t remove, the pain of travel expenses. From the ESOF 2018 Media Travel Grants webpage,

ESOF 2018 – ECSJ 2018 Travel Grants

The 5th European Conference of Science Journalists (ECSJ2018) is offering 50 travel + accommodation grants of up to 400€ to international journalists interested in attending ECSJ and ESOF.

We are looking for active professional journalists who cover science or science policy regularly (not necessarily exclusively), with an interest in reflecting on their professional practices and ethics. Applicants can be freelancers or staff, and can work for print, web, or broadcast media.

More information

ESOF 2018 Nature Travel Grants

Springer Nature is a leading research, educational and professional publisher, providing quality content to its communities through a range of innovative platforms, products and services and is home of trusted brands including Nature Research.

Nature Research has supported ESOF since its very first meeting in 2004 and is funding the Nature Travel Grant Scheme for journalists to attend ESOF2018 with the aim of increasing the impact of ESOF. The Nature Travel Grant Scheme offers a lump sum of £400 for journalists based in Europe and £800 for journalists based outside of Europe, to help cover the costs of travel and accommodation to attend ESOF2018.

More information

Good luck!

(My previous posting about ESOF 2018 was on Sept. 4, 2017 [scroll down about 50% of the way], should you be curious.)

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.
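A quick aside from me: to put those fold-increases on a common footing, here’s a back-of-the-envelope conversion into implied compound annual growth rates. The time spans are my own reading of the release (2000 through the 2017 report for startups and venture capital, 30 years for the Stanford course), not figures supplied by the AI Index.

```python
# Convert a fold-increase over an assumed number of years into an implied
# compound annual growth rate (CAGR). The spans are my assumptions, not AI Index data.

def implied_cagr(fold_increase, years):
    return fold_increase ** (1 / years) - 1

figures = [
    ("active AI startups", 14, 17),             # "14-fold since 2000", read against a 2017 report
    ("venture capital investment", 6, 17),      # "six times in the same period"
    ("Stanford ML course enrollment", 45, 30),  # "45-fold in the last 30 years"
]
for label, fold, years in figures:
    print(f"{label}: ~{implied_cagr(fold, years):.1%} per year over {years} years")
```

That works out to roughly 17%, 11%, and 14% a year, respectively: steep, sustained growth, though less eye-popping than the raw multiples suggest.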

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions, and classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoose.

You can find the AI Index here. They’re featuring their 2017 report, but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can develop blind spots, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

Australian peacock spiders, photonic nanostructures, and making money

Researcher Bor-Kai Hsiung’s work has graced this blog before, but the topic was tarantulas and their structural colour. This time, it’s all about Australian peacock spiders and their structural colour, according to a December 22, 2017 news item on ScienceDaily,

Even if you are arachnophobic, you probably have seen pictures or videos of Australian peacock spiders (Maratus spp.). These tiny spiders are only 1-5 mm long but are famous for their flamboyant courtship displays featuring diverse and intricate body colorations, patterns, and movements.

The spiders’ extremely large anterior median eyes have excellent color vision and combine with their bright colors to make peacock spiders cute enough to cure most people of their arachnophobia. But these displays aren’t just pretty to look at; they also inspire new ways for humans to produce color in technology.

One species of peacock spider — the rainbow peacock spider (Maratus robinsoni) is particularly neat, because it showcases an intense rainbow iridescent signal in males’ courtship displays to the females. This is the first known instance in nature of males using an entire rainbow of colors to entice females. Dr. Bor-Kai Hsiung led an international team of researchers from the US (UAkron, Cal Tech, UC San Diego, UNL [University of Nebraska-Lincoln]), Belgium (Ghent University), Netherlands (UGroningen), and Australia to discover how rainbow peacock spiders produce this unique multi-color iridescent signal.

A December 22, 2017 Ghent University (Belgium) press release on Alpha Galileo, which originated the news item, provides more technical detail,

Using a diverse array of research techniques, including light and electron microscopy, hyperspectral imaging, imaging scatterometry, nano 3D printing and optical modeling, the team found the origin of this intense rainbow iridescence emerged from specialized abdominal scales of the spiders. These scales have an airfoil-like microscopic 3D contour with nanoscale diffraction grating structures on the surface.

The interaction between the surface nano-diffraction grating and the microscopic curvature of the scales enables separation and isolation of light into its component wavelengths at finer angles and smaller distances than are possible with current manmade engineering technologies.

Inspiration from these super iridescent scales can be used to overcome current limitations in spectral manipulation, and to further reduce the size of optical spectrometers for applications where fine-scale spectral resolution is required in a very small package, notably instruments on space missions, or wearable chemical detection systems. And it could have a wide array of implications to fields ranging from life sciences and biotechnologies to material sciences and engineering.
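For readers who like to see the optics spelled out, the relevant physics is the ordinary grating equation, d·sinθ = m·λ (for light arriving at normal incidence). The little script below only illustrates the press release’s point that a finer grating period spreads the visible wavelengths over a wider range of angles; the periods are arbitrary examples rather than the measured geometry of the spider’s scales, and it ignores the scales’ microscopic curvature, which is the other half of the mechanism.

```python
import math

# First-order diffraction angles from the grating equation d*sin(theta) = m*lambda
# (normal incidence). The periods are arbitrary examples, not the spider's measured geometry.

def diffraction_angle_deg(wavelength_nm, period_nm, order=1):
    s = order * wavelength_nm / period_nm
    return math.degrees(math.asin(s)) if abs(s) <= 1 else None  # None: that order is not diffracted

wavelengths_nm = {"violet": 400, "green": 550, "red": 700}
for period_nm in (2000, 1500, 1000):  # coarser to finer grating
    angles = {colour: round(diffraction_angle_deg(w, period_nm), 1)
              for colour, w in wavelengths_nm.items()}
    print(f"grating period {period_nm} nm -> first-order angles (degrees): {angles}")
```

Running it, the violet-to-red angular spread more than doubles as the period shrinks from 2,000 nm to 1,000 nm, which is the sense in which a nanoscale grating separates colours at finer angles over smaller distances.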

Here’s a video of an Australian rainbow peacock spider,

Here’s more from the YouTube description published on April 13, 2017 by Peacockspiderman,

Scenes of Maratus robinsoni, a spider Peter Robinson discovered and David Hill and I named it after him in 2012. You can read our description on pages 36-41 in Peckhamia 103.2, which can be downloaded from the Peckhamia website http://peckhamia.com/peckhamia_number…. This is one of the two smallest species of peacock spider (2.5 mm long) and the only spider we know of in which colour changes occur every time it moves, this video was created to document this. Music: ‘Be Still’ by Johannes Bornlöf licensed through my MCN ‘Brave Bison’ from ‘Epidemic Sound’ For licensing inquiries please contact Brave Bison licensing@bravebison.io

The University of California at San Diego also published a December 22, 2017 news release about this work, which covers some of the same ground while providing a few new tidbits of information,

Brightly colored Australian peacock spiders (Maratus spp.) captivate even the most arachnophobic viewers with their flamboyant courtship displays featuring diverse and intricate body colorations, patterns, and movements – all packed into miniature bodies measuring less than five millimeters in size for many species. However, these displays are not just pretty to look at. They also inspire new ways for humans to produce color in technology.

One species of peacock spider – the rainbow peacock spider (Maratus robinsoni) – is particularly impressive, because it showcases an intense rainbow iridescent signal in males’ courtship displays to females. This is the first known instance in nature of males using an entire rainbow of colors to entice females to mate. But how do males make their rainbows? A new study published in Nature Communications looked to answer that question.

Figuring out the answers was inherently interdisciplinary so Bor-Kai Hsiung, a postdoctoral scholar at Scripps Institution of Oceanography at the University of California San Diego, assembled an international team that included biologists, physicists and engineers. Starting while he was a Ph.D. student at The University of Akron under the mentorship of Todd Blackledge and Matthew Shawkey, the team included researchers from UA, Scripps Oceanography, California Institute of Technology, and University of Nebraska-Lincoln, the University of Ghent in Belgium, University of Groningen in Netherlands, and Australia to discover how rainbow peacock spiders produce this unique iridescent signal.

The team investigated the spider’s photonic structures using techniques that included light and electron microscopy, hyperspectral imaging, imaging scatterometry and optical modeling to generate hypotheses about how the spider’s scale generate such intense rainbows. The team then used cutting-edge nano 3D printing to fabricate different prototypes to test and validate their hypotheses. In the end, they found that the intense rainbow iridescence emerged from specialized abdominal scales on the spiders. These scales combine an airfoil-like microscopic 3D contour with nanoscale diffraction grating structures on the surface. It is the interaction between the surface nano-diffraction grating and the microscopic curvature of the scales that enables separation and isolation of light into its component wavelengths at finer angles and smaller distances than are possible with current engineering technologies.

“Who knew that such a small critter would create such an intense iridescence using extremely sophisticated mechanisms that will inspire optical engineers,” said Dimitri Deheyn, Hsiung’s advisor at Scripps Oceanography and a coauthor of the study.

For Hsiung, the finding wasn’t quite so unexpected.

“One of the main questions that I wanted to address in my Ph.D. dissertation was ‘how does nature modulate iridescence?’ From a biomimicry perspective, to fully understand and address a question, one has to take extremes from both ends into consideration. I purposefully chose to study these tiny spiders with intense iridescence after having investigated the non-iridescent blue tarantulas,” said Hsiung.

The mechanism behind these tiny rainbows may inspire new color technology, but would not have been discovered without research combining basic natural history with physics and engineering, the researchers said.

“Nanoscale 3D printing allowed us to experimentally validate our models, which was really exciting,” said Shawkey. “We hope that these techniques will become common in the future.”

“As an engineer, what I found fascinating about these spider structural colors is how these long evolved complex structures can still outperform human engineering,” said Radwanul Hasan Siddique, a postdoctoral scholar at Caltech and study coauthor. “Even with high-end fabrication techniques, we could not replicate the exact structures. I wonder how the spiders assemble these fancy structural patterns in the first place!”

Inspiration from these super iridescent spider scales can be used to overcome current limitations in spectral manipulation, and to reduce the size of optical spectrometers for applications where fine-scale spectral resolution is required in a very small package, notably instruments on space missions, or wearable chemical detection systems.

In the end, peacock spiders don’t just produce nature’s smallest rainbows. They could also have implications for a wide array of fields ranging from life sciences and biotechnologies to material sciences and engineering.

Before citing the paper and providing a link, here’s a story by Robert F. Service for Science magazine about attempts to capitalize on ‘spider technology’, in this case spider silk,

The hype over spider silk has been building since 1710. That was the year François Xavier Bon de Saint Hilaire, president of the Royal Society of Sciences in Montpellier, France, wrote to his colleagues, “You will be surpriz’d to hear, that Spiders make a Silk, as beautiful, strong and glossy, as common Silk.” Modern pitches boast that spider silk is five times stronger than steel yet more flexible than rubber. If it could be made into ropes, a macroscale web would be able to snare a jetliner.

The key word is “if.” Researchers first cloned a spider silk gene in 1990, in hopes of incorporating it into other organisms to produce the silk. (Spiders can’t be farmed like silkworms because they are territorial and cannibalistic.) Today, Escherichia coli bacteria, yeasts, plants, silkworms, and even goats have been genetically engineered to churn out spider silk proteins, though the proteins are often shorter and simpler than the spiders’ own. Companies have managed to spin those proteins into enough high-strength thread to produce a few prototype garments, including a running shoe by Adidas and a lightweight parka by The North Face. But so far, companies have struggled to mass produce these supersilks.

Some executives say that may finally be about to change. One Emeryville, California-based startup, Bolt Threads, says it has perfected growing spider silk proteins in yeast and is poised to turn out tons of spider silk thread per year. In Lansing, Michigan, Kraig Biocraft Laboratories says it needs only to finalize negotiations with silkworm farms in Vietnam to produce mass quantities of a combination spider/silkworm silk, which the U.S. Army is now testing for ballistics protection. …

I encourage you to read Service’s article in its entirety if the commercialization prospects for spider silk interest you as it includes gems such as this,

Spider silk proteins are already making their retail debut—but in cosmetics and medical devices, not high-strength fibers. AMSilk grows spider silk proteins in E. coli and dries the purified protein into powders or mixes it into gels, for use as additives for personal care products, such as moisture-retaining skin lotions. The silk proteins supposedly help the lotions form a very smooth, but breathable, layer over the skin. Römer says the company now sells tons of its purified silk protein ingredients every year.

Finally, here’s a citation for and a link to the paper about Australian peacock spiders and nanophotonics,

Rainbow peacock spiders inspire miniature super-iridescent optics by Bor-Kai Hsiung, Radwanul Hasan Siddique, Doekele G. Stavenga, Jürgen C. Otto, Michael C. Allen, Ying Liu, Yong-Feng Lu, Dimitri D. Deheyn, Matthew D. Shawkey, & Todd A. Blackledge. Nature Communications 8, Article number: 2278 (2017) doi:10.1038/s41467-017-02451-x Published online: 22 December 2017

This paper is open access.

As for Bor-Kai Hsiung’s other mentions here:

How tarantulas get blue (December 7, 2015 posting)

Noniridescent photonics inspired by tarantulas (October 19, 2016 posting)

More on the blue tarantula noniridescent photonics (December 28, 2016 posting)

Quantum computing and more at SXSW (South by Southwest) 2018

It’s that time of year again. The annual entertainment conference South by Southwest (SXSW) is being held from March 9-18, 2018. The science portion of the conference can be found in the Intelligent Future sessions; from the description,

AI and new technologies embody the realm of possibilities where intelligence empowers and enables technology while sparking legitimate concerns about its uses. Highlighted Intelligent Future sessions include New Mobility and the Future of Our Cities, Mental Work: Moving Beyond Our Carbon Based Minds, Can We Create Consciousness in a Machine?, and more.

Intelligent Future Track sessions are held March 9-15 at the Fairmont.

Last year I focused on the conference sessions on robots, Hiroshi Ishiguro’s work, and artificial intelligence in a March 27, 2017 posting. This year I’m featuring one of the conference’s quantum computing sessions, from a March 9, 2018 University of Texas at Austin news release (also on EurekAlert),

Imagine a new kind of computer that can quickly solve problems that would stump even the world’s most powerful supercomputers. Quantum computers are fundamentally different. They can store information not just as ones and zeros, but in all the shades of gray in between. Several companies and government agencies are investing billions of dollars in the field of quantum information. But what will quantum computers be used for?
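To make the “shades of gray” claim a little more concrete, here is a minimal Python sketch (my own illustration, not anything from the news release or the panel): a single qubit is described by two complex amplitudes, and a measurement still returns only a 0 or a 1, with probabilities set by those amplitudes.

import math
import random

# A single qubit in an equal superposition of |0> and |1>.
# The amplitudes must satisfy |alpha|^2 + |beta|^2 = 1.
alpha = 1 / math.sqrt(2)  # amplitude for |0>
beta = 1 / math.sqrt(2)   # amplitude for |1>

p0 = abs(alpha) ** 2      # probability of measuring 0
p1 = abs(beta) ** 2       # probability of measuring 1

def measure():
    # Measurement collapses the superposition to a definite 0 or 1.
    return 0 if random.random() < p0 else 1

counts = {0: 0, 1: 0}
for _ in range(1000):
    counts[measure()] += 1

print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}, observed: {counts}")

The “shades of gray” live in the amplitudes (and their relative phase); what a quantum computer exploits is the ability to manipulate many such amplitudes at once before the final measurement.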

South by Southwest 2018 hosts a panel on March 10th [2018] called Quantum Computing: Science Fiction to Science Fact. Experts on quantum computing make up the panel, including Jerry Chow of IBM; Bo Ewald of D-Wave Systems; Andrew Fursman of 1QBit; and Antia Lamas-Linares of the Texas Advanced Computing Center at UT Austin.

Antia Lamas-Linares is a Research Associate in the High Performance Computing group at TACC. Her background is in experimental quantum computing systems, including work at the Centre for Quantum Technologies in Singapore. She joins podcast host Jorge Salazar to talk about her South by Southwest panel and about some of her latest research on quantum information.

Lamas-Linares co-authored a study (doi: 10.1117/12.2290561) in the Proceedings of the SPIE, The International Society for Optical Engineering, which was published in February 2018. The study, “Secure Quantum Clock Synchronization,” proposed a protocol to verify and secure time synchronization of distant atomic clocks, such as those used for GPS signals in cell phone towers and other places. “It’s important work,” explained Lamas-Linares, “because people are worried about malicious parties messing with the channels of GPS. What James Troupe (Applied Research Laboratories, UT Austin) and I looked at was whether we can use techniques from quantum cryptography and quantum information to make something that is inherently unspoofable.”
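The quantity being protected here is the familiar clock offset from two-way time transfer. As a rough, classical illustration only (my own sketch, not the protocol in the SPIE paper), here is how an offset and path delay are estimated from four timestamps; the vulnerability is that anyone who can tamper with those timing signals can shift the estimate, which is what the quantum approach is meant to make detectable.

# Classical two-way time transfer, for illustration only (not the quantum
# protocol from the SPIE paper). A sends at t1, B receives at t2, B replies
# at t3, A receives at t4; the path delay is assumed symmetric.

def clock_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # B's clock minus A's clock
    delay = ((t4 - t1) - (t3 - t2)) / 2.0   # one-way propagation delay
    return offset, delay

# Example: B's clock runs 5 ms ahead of A's; the one-way delay is 20 ms.
t1 = 0.000   # A sends (A's clock)
t2 = 0.025   # B receives (B's clock: 0.020 s delay + 0.005 s offset)
t3 = 0.035   # B replies (B's clock)
t4 = 0.050   # A receives (A's clock)

print(clock_offset_and_delay(t1, t2, t3, t4))  # -> (0.005, 0.020)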

Antia Lamas-Linares: The most important thing is that quantum technologies is a really exciting field. And it’s exciting in a fundamental sense. We don’t quite know what we’re going to get out of it. We know a few things, and that’s good enough to drive research. But the things we don’t know are much broader than the things we know, and it’s going to be really interesting. Keep your eyes open for this.

Quantum Computing: Science Fiction to Science Fact, March 10, 2018 | 11:00AM – 12:00PM, Fairmont Manchester EFG, SXSW 2018, Austin, TX.

If you look up the session, you will find,

Quantum Computing: Science Fiction to Science Fact

Speakers

Bo Ewald, D-Wave Systems

Antia Lamas-Linares, Texas Advanced Computing Center at University of Texas

Startups and established players have sold 2000 Qubit systems, made freely available cloud access to quantum computer processors, and created large scale open source initiatives, all taking quantum computing from science fiction to science fact. Government labs and others like IBM, Microsoft, Google are developing software for quantum computers. What problems will be solved with this quantum leap in computing power that cannot be solved today with the world’s most powerful supercomputers?

[Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.]

Primary Entry: Platinum Badge, Interactive Badge

Secondary Entry: Music Badge, Film Badge

Format: Panel

Event Type: Session

Track: Intelligent Future

Level: Intermediate

 

I wonder what ‘level’ means? I was not able to find an answer (quickly).

It was a bit surprising to find someone from D-Wave Systems (a Vancouver-based quantum computing enterprise) at an entertainment conference. Still, it shouldn’t have been. Two other examples immediately come to mind: the TED (technology, entertainment, and design) conferences have been melding technology, if not science, with creative activities of all kinds for many years (TED 2018: The Age of Amazement, April 10-14, 2018 in Vancouver [Canada]), and Beakerhead (2018 dates: Sept. 19-23) has been melding art, science, and engineering in a festival held in Calgary (Canada) since 2013. One comment about TED: it was held for several years in California (1984, 1990-2013) before moving to Vancouver in 2014.

For anyone wanting to browse the 2018 SXSW Intelligent Future sessions online, go here, or, for anyone wanting to hear Antia Lamas-Linares talk about quantum computing, there’s the interview with Jorge Salazar (mentioned in the news release),