The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still do find, that work was/is hard to come by after that age. You may be able to, but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that, technically speaking, were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it’s all about the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) most of the answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
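
For readers who like to see how a metric such as the Growth Index works, here’s a minimal sketch of the arithmetic as I understand it from the report’s description. The CCA’s exact formula isn’t given in the text, so this assumes growth is simply 2014 output divided by 2003 output; the function name and all the publication counts below are mine, invented for illustration.

```python
# Sketch of a Growth Index (GI) calculation as described in the report:
# a country's growth in research output, normalized by the world growth rate.
# Assumption (mine, not the report's): growth = output in 2014 / output in 2003.
# The publication counts below are invented for illustration only.

def growth_index(country_2003, country_2014, world_2003, world_2014):
    """Country growth in publication output divided by world growth over the same period."""
    country_growth = country_2014 / country_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Hypothetical counts (not from the report):
world_2003, world_2014 = 1_000_000, 2_000_000    # world output doubles
china_2003, china_2014 = 100_000, 300_000        # a country's output triples

print(round(growth_index(china_2003, china_2014, world_2003, world_2014), 2))
# 1.5 -> growing 50% faster than the world average, which matches how the
# report reads a GI score of 1.50
```

A GI below 1.0 (Canada’s 0.88, for example) works the same way: output can still be growing in absolute terms while losing ground relative to the world average.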

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history) and, eventually, to GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and, when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner, so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that Indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
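
If I read that definition correctly, ‘patent flow’ amounts to the difference between patents owned by Canadian assignees and patents invented by Canadians, expressed as a fraction of the latter. Here’s a minimal sketch under that assumption; the report doesn’t spell out its formula, and the function name and patent counts below are mine, invented for illustration.

```python
# Sketch of the 'patent flow' idea: a negative value means fewer patents are
# owned by Canadian assignees than are invented by Canadian inventors.
# Assumption (mine, not the report's): flow = (owned - invented) / invented.
# The counts below are hypothetical.

def patent_flow(owned_by_canadian_assignees: int, invented_in_canada: int) -> float:
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Hypothetical example: 7,400 Canadian-invented patents, 5,476 owned by Canadian assignees
print(round(patent_flow(5_476, 7_400), 2))  # -0.26, i.e., a 26% ownership deficit
```

On that reading, the drop from −0.04 to −0.26 means the ownership gap widened from roughly 4% to 26% of Canadian-invented patents over the decade.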

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note: Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked with answering certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this one (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain. As for AI (artificial intelligence), it’s an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000–2005 to 2% in 2010–2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [pp. 97-98 Print; pp. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of treating the terms as synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention, but they seem to forget that information and don’t use it in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time; that may be because there’s less information and because the 2012 report was the first to examine the Canadian R&D scene through a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that British Columbia (BC) has its own leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print; p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians, but we have to acknowledge that many countries have stepped up their research game and that’s good for all of us. You don’t get better at anything unless you work with and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in the day’s inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than for previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr: not really appreciated in all of our complexities, although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that their signature work set the foundation for later work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’, to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about the Hedy Lamarr and Corel Draw affair. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

[“Ziegfeld Girl” Hedy Lamarr, 1941, MGM. Image courtesy mptvimages.com; downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian; Note: Links have been removed),

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave it to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)

Lamarr’s work as an inventor was largely overlooked until the 1990s, when the technology community turned her into a ‘cultish’ favourite. From there, her reputation grew and acknowledgement increased, culminating in Rhodes’ book and Alexandra Dean’s documentary, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them, we’re not always appreciated for our brains. Not even by people who are supposed to know better such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

• The State of Science and Technology in Canada (2006) [emphasis mine]
• Innovation and Business Strategy: Why Canada Falls Short (2009)
• Catalyzing Canada’s Digital Economy (2010)
• Informing Research Choices: Indicators and Judgment (2012)
• The State of Science and Technology in Canada (2012) [emphasis mine]
• The State of Industrial R&D in Canada (2013) [emphasis mine]
• Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that work with an assessment of industrial research, which was previously treated separately. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit, from the cover onwards, what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment: I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would impact the size of the expert panel and the size of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii-xiv Print; pp. 15-16 PDF]

The proportion of women to men as reviewers jumped up to about 36% (4 of 11 reviewers) and there are two reviewers from the Maritime provinces. As usual, reviewers external to Canada were from Europe, although this time they came from Dutch institutions rather than UK or German ones. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is being conducted. (I have more about this from an April 5, 2018 article by James Somers for The Atlantic, but more about that later.) If I understand rightly, addressing those issues may not have been possible due to the nature of the questions posed by the government when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) of scientific accomplishment and progress are not the best and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, and as a means of addressing some of the problems with metrics, the panel decided to conduct a survey, which the panel for the third assessment has also done (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
•a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
•a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]

My reading: there’s a problem and we’re not going to try and fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’.

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii Print; pp. 19-20 PDF]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. [emphasis mine] [p. xviii Print; p. 20 PDF]

For anyone unfamiliar with Canadian politics, 2009 to 2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in their research output is predictable.

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have or could have found some way to measure these kinds of efforts, but I’m wondering if there could have been some acknowledgement in the text of the report; a couple of sentences, say, about the confounding nature of scientific research, where areas that are ignored for years, even decades, later become important (e.g., machine learning) but are not measured as part of scientific progress until after they are universally recognized.

Still, point taken about our diminishing returns in ’emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for the visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally) and let’s also note that one of the great physics texts, De rerum natura by Lucretius, was a multi-volume poem (from Lucretius’ Wikipedia entry; Note: Links have been removed),

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population and, according to PISA scores, our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme where 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little bit like Hedy Lamarr’s problem except instead of being judged for our looks and having our inventions dismissed, we’re being judged for not applying ourselves to physical sciences and engineering and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here and open up branches, and that’s how we get our jobs, since we don’t have all that many large companies of our own. Increasingly, multinationals are locating R&D shops here.

While our small and medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term, serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries, from a June 14, 2017 article by Beckie Smith for The PIE News,

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also Chinese multinational telecom company Huawei, which has enjoyed success in Canada and continues to invest here (from a Jan. 19, 2018 article about security concerns by Matthew Braga for the Canadian Broadcasting Corporation (CBC) online news),

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and say it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of a cultural issue where start-ups are designed for sale as soon as possible and this isn’t new. Years ago, there was an accounting firm that published a series of historical maps (the last one I saw was in 2005) of technology companies in the Vancouver region. Technology companies were being developed and sold to large foreign companies from the 19th century to present day.

Part 2

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it; which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits as well as the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article; for those who need more information, there’s the full article and, below, a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091-17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.
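
Since the excerpt leans heavily on the idea of spike-timing-dependent plasticity, here is a minimal Python sketch of a pair-based STDP rule applied to a single memristive ‘synapse’. It is my own illustration rather than code from the Nanoscale paper: the class, the optical_gain knob (standing in for the polarization control Kemp describes), and all of the numbers are assumptions.

```python
# A pair-based STDP update for a single memristive synapse (illustrative only).
import math

class MemristorSynapse:
    def __init__(self, conductance=0.5, optical_gain=1.0):
        self.g = conductance              # normalized conductance, i.e. the synaptic "weight" (0..1)
        self.optical_gain = optical_gain  # hypothetical stand-in for the light-polarization control

    def stdp_update(self, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
        """Change conductance according to the relative timing (ms) of pre- and post-synaptic spikes."""
        dt = t_post - t_pre
        if dt >= 0:                       # pre fires before post: potentiation (strengthen)
            dg = a_plus * math.exp(-dt / tau)
        else:                             # post fires before pre: depression (weaken)
            dg = -a_minus * math.exp(dt / tau)
        # The optical term scales how strongly the device learns (or forgets)
        self.g = min(1.0, max(0.0, self.g + self.optical_gain * dg))
        return self.g

syn = MemristorSynapse()
print(syn.stdp_update(t_pre=10.0, t_post=15.0))  # causal pairing: conductance rises
syn.optical_gain = 0.2                           # 'dim' the plasticity optically
print(syn.stdp_update(t_pre=15.0, t_post=10.0))  # anti-causal pairing: small decrease
```

Reversing the spike order flips the sign of the conductance change, and scaling the update is one simple way to picture the tunable, light-controlled learning the researchers report.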

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.

IMAGE: Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.

IMAGE: Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan, holds a memristor he created. Photo: Marcin Szczepanski.

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.
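
To make the division of labour described in the news release concrete (an untrained reservoir plus a small trained readout), here is a minimal software sketch in the spirit of an echo state network. It is not Lu’s memristor hardware or the paper’s actual experiment; the network sizes, the 0-to-1 ‘voltage’ inputs (echoing the pixel encoding mentioned above), and the toy target function are all assumptions.

```python
# Reservoir computing in software: fixed random reservoir, trained linear readout (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 1000

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))        # fixed random input weights (never trained)
W_res = rng.normal(0.0, 1.0, size=(n_res, n_res))        # fixed random recurrent weights (never trained)
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep the reservoir dynamics stable

u = rng.uniform(0.0, 1.0, size=(T, n_in))    # input stream, e.g. pixel 'voltages' between 0 and 1
target = np.roll(u[:, 0], 3) * u[:, 0]       # a time-dependent function the readout should learn

# Run the untrained reservoir and record its internal states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W_res @ x)
    states[t] = x

# Train only the readout layer (ridge regression), which is the cheap part
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ W_out
print("readout RMS error:", np.sqrt(np.mean((pred - target) ** 2)))
```

The only fitted parameters are in W_out; the reservoir’s weights are set once at random, which is the sense in which, as Lu puts it, “while we design it, we don’t have to train it.”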

Machine learning software and quantum computers that think

A Sept. 14, 2017 news item on phys.org sets the stage for quantum machine learning by explaining a few basics first,

Language acquisition in young children is apparently connected with their ability to detect patterns. In their learning process, they search for patterns in the data set that help them identify and optimize grammar structures in order to properly acquire the language. Likewise, online translators use algorithms through machine learning techniques to optimize their translation engines to produce well-rounded and understandable outcomes. Even though many translations did not make much sense at all at the beginning, in these past years we have been able to see major improvements thanks to machine learning.

Machine learning techniques use mathematical algorithms and tools to search for patterns in data. These techniques have become powerful tools for many different applications, which can range from biomedical uses such as in cancer reconnaissance, in genetics and genomics, in autism monitoring and diagnosis and even plastic surgery, to pure applied physics, for studying the nature of materials, matter or even complex quantum systems.

Capable of adapting and changing when exposed to a new set of data, machine learning can identify patterns, often outperforming humans in accuracy. Although machine learning is a powerful tool, certain application domains remain out of reach due to complexity or other aspects that rule out the use of the predictions that learning algorithms provide.

Thus, in recent years, quantum machine learning has become a matter of interest because of its vast potential as a possible solution to these unresolvable challenges, and quantum computers appear to be the right tool for its solution.

A Sept. 14, 2017 Institute of Photonic Sciences (in Catalan, Institut de Ciències Fotòniques; ICFO) press release, which originated the news item, goes on to detail a recently published overview of the state of quantum machine learning,

In a recent study, published in Nature, an international team of researchers integrated by Jacob Biamonte from Skoltech/IQC, Peter Wittek from ICFO, Nicola Pancotti from MPQ, Patrick Rebentrost from MIT, Nathan Wiebe from Microsoft Research, and Seth Lloyd from MIT have reviewed the actual status of classical machine learning and quantum machine learning. In their review, they have thoroughly addressed different scenarios dealing with classical and quantum machine learning. In their study, they have considered different possible combinations: the conventional method of using classical machine learning to analyse classical data, using quantum machine learning to analyse both classical and quantum data, and finally, using classical machine learning to analyse quantum data.

Firstly, they set out to give an in-depth view of the status of current supervised and unsupervised learning protocols in classical machine learning by stating all applied methods. They introduce quantum machine learning and provide an extensive approach on how this technique could be used to analyse both classical and quantum data, emphasizing that quantum machines could accelerate processing timescales thanks to the use of quantum annealers and universal quantum computers. Quantum annealing technology has better scalability, but more limited use cases. For instance, the latest iteration of D-Wave’s [emphasis mine] superconducting chip integrates two thousand qubits, and it is used for solving certain hard optimization problems and for efficient sampling. On the other hand, universal (also called gate-based) quantum computers are harder to scale up, but they are able to perform arbitrary unitary operations on qubits by sequences of quantum logic gates. This resembles how digital computers can perform arbitrary logical operations on classical bits.

However, they address the fact that controlling a quantum system is very complex and analyzing classical data with quantum resources is not as straightforward as one may think, mainly due to the challenge of building quantum interface devices that allow classical information to be encoded into a quantum mechanical form. Difficulties, such as the “input” or “output” problems appear to be the major technical challenge that needs to be overcome.

The ultimate goal is to find the most optimized method that is able to read, comprehend and obtain the best outcomes of a data set, be it classical or quantum. Quantum machine learning is definitely aimed at revolutionizing the field of computer sciences, not only because it will be able to control quantum computers, speed up the information processing rates far beyond current classical velocities, but also because it is capable of carrying out innovative functions, such as quantum deep learning, that could not only recognize counter-intuitive patterns in data, invisible to both classical machine learning and to the human eye, but also reproduce them.

As Peter Wittek [emphasis mine] finally states, “Writing this paper was quite a challenge: we had a committee of six co-authors with different ideas about what the field is, where it is now, and where it is going. We rewrote the paper from scratch three times. The final version could not have been completed without the dedication of our editor, to whom we are indebted.”

It was a bit of a surprise to see local (Vancouver, Canada) company D-Wave Systems mentioned but I notice that one of the paper’s authors (Peter Wittek) features in a May 22, 2017 D-Wave news release announcing a new partnership to foster quantum machine learning,

Today [May 22, 2017] D-Wave Systems Inc., the leader in quantum computing systems and software, announced a new initiative with the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management. D-Wave will work with CDL, as a CDL Partner, to create a new track to foster startups focused on quantum machine learning. The new track will complement CDL’s successful existing track in machine learning. Applicants selected for the intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek [emphasis mine], author of Quantum Machine Learning: What Quantum Computing means to Data Mining, with instruction and technical support from D-Wave experts, access to a D-Wave 2000Q™ quantum computer, and the opportunity to use a D-Wave sampling service to enable machine learning computations and applications. D-Wave staff will be a part of the committee selecting up to 40 individuals for the program, which begins in September 2017.

For anyone interested in the paper, here’s a link to and a citation,

Quantum machine learning by Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, & Seth Lloyd. Nature 549, 195–202 (14 September 2017) doi:10.1038/nature23474 Published online 13 September 2017

This paper is behind a paywall.

Robot artists—should they get copyright protection?

Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.

This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

It could have been someone involved in the technology but nobody with that background would write “… something called machine learning … .”  Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),

That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.

Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.


Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

The team that developed a ‘new’ Rembrandt produced a video about the process.

Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail such as this,

It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.

Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”

The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.

The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.

Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.

It sounds like it was a fascinating project but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or an example of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered and unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent while others are described as being “in the manner of xxx.”

Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

Geigner goes on (July 7, 2017 posting) to elucidate other issues with the ideas expressed in the general debates of AI and ‘rights’ and the EU’s solution.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates on more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.
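
For readers who want to see what “a number for each connection” and “mathematically forgetting the hidden neurons” look like in practice, here is a minimal sketch of a restricted-Boltzmann-machine wavefunction of the kind the news release describes. The weights are random placeholders (a real calculation would fit them, and would typically allow complex values), so treat this as an illustration of the bookkeeping, not of the paper’s results.

```python
# A neural-network quantum state: visible neurons are spins, hidden neurons are summed out.
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 6, 4                      # 6 spins (visible neurons), 4 hidden neurons
a = rng.normal(0, 0.1, n_vis)            # visible biases
b = rng.normal(0, 0.1, n_hid)            # hidden biases
W = rng.normal(0, 0.1, (n_hid, n_vis))   # one number for each visible-hidden connection

def amplitude(spins):
    """Unnormalized amplitude psi(s) after summing out the hidden neurons:
       psi(s) = exp(a . s) * prod_j 2*cosh(b_j + W_j . s), for spins s_i in {-1, +1}."""
    s = np.asarray(spins, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

print(amplitude([1, -1, 1, 1, -1, 1]))
# A state of n_vis spins has 2**n_vis amplitudes, but this description needs only
# n_vis + n_hid + n_vis*n_hid numbers, which is the compact representation the excerpt mentions.
```

Keeping the number of hidden connections small is also what produces the entanglement ‘area law’ limit mentioned at the end of the news release.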

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

###

About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.
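
As a rough guide to the vocabulary, the ‘dimensions’ come from treating a clique of k+1 all-to-all connected neurons as a k-dimensional simplex. The sketch below counts cliques in a small random graph using the networkx library (assumed to be installed); the actual study works with directed cliques in the reconstructed cortical microcircuit, so this undirected toy only illustrates the counting, not the neuroscience.

```python
# Count cliques by size and report them as simplices of dimension (size - 1). Illustrative only.
from collections import Counter
import networkx as nx

G = nx.gnp_random_graph(60, 0.25, seed=42)    # stand-in network, not cortical connectivity data

dim_counts = Counter()
for clique in nx.enumerate_all_cliques(G):    # yields cliques of size 1, 2, 3, ...
    dim_counts[len(clique) - 1] += 1          # a clique of k+1 neurons -> a k-dimensional simplex

for dim in sorted(dim_counts):
    print(f"{dim}-dimensional simplices (cliques of {dim + 1} neurons): {dim_counts[dim]}")
```

In the Blue Brain work, the same bookkeeping, applied to directed cliques and to the ‘cavities’ they enclose, is what yields the structures of up to eleven dimensions quoted above.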

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take Matthew Braga’s word for the situation (his CBC article is excerpted further down).

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy, and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga, article, Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives, but at that point the article’s focus shifts exclusively to Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before about other technologies: “xxx is like no other business before” and “it is a technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’, which, according to his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Nanoparticle behaviour in the environment unpredictable

These Swiss researchers took on a fairly massive project according to an April 19, 2017 news item on ScienceDaily,

The nanotech industry is booming. Every year, several thousands of tonnes of man-made nanoparticles are produced worldwide; sooner or later, a certain part of them will end up in bodies of water or soil. But even experts find it difficult to say exactly what happens to them there. It is a complex question, not only because there are many different types of man-made (engineered) nanoparticles, but also because the particles behave differently in the environment depending on the prevailing conditions.

Researchers led by Martin Scheringer, Senior Scientist at the Department of Chemistry and Applied Biosciences, wanted to bring some clarity to this issue. They reviewed 270 scientific studies, and the nearly 1,000 laboratory experiments described in them, looking for patterns in the behaviour of engineered nanoparticles. The goal was to make universal predictions about the behaviour of the particles.

An April 19, 2017 ETH Zurich press release by Fabio Bergamin (also on EurekAlert), which originated the news item, elaborates,

Particles attach themselves to everything

However, the researchers found a very mixed picture when they looked at the data. “The situation is more complex than many scientists would previously have predicted,” says Scheringer. “We need to recognise that we can’t draw a uniform picture with the data available to us today.”

Nicole Sani-Kast, a doctoral student in Scheringer’s group and first author of the analysis published in the journal PNAS [Proceedings of the National Academy of Sciences], adds: “Engineered nanoparticles behave very dynamically and are highly reactive. They attach themselves to everything they find: to other nanoparticles in order to form agglomerates, or to other molecules present in the environment.”

Network analysis

To what exactly the particles react, and how quickly, depends on various factors such as the acidity of the water or soil, the concentration of the existing minerals and salts, and above all, the composition of the organic substances dissolved in the water or present in the soil. The fact that the engineered nanoparticles often have a surface coating makes things even more complicated. Depending on the environmental conditions, the particles retain or lose their coating, which in turn influences their reaction behaviour.

To evaluate the results available in the literature, Sani-Kast used a network analysis for the first time in this research field. It is a technique familiar in social research for measuring networks of social relations, and allowed her to show that the data available on engineered nanoparticles is inconsistent, insufficiently diverse and poorly structured.
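
To make the network-analysis idea more concrete, here is a minimal sketch, with entirely invented data, of the kind of bipartite network such a literature review could build: nanoparticle types on one side, environmental conditions on the other, and an edge for every reported experiment. It uses the Python networkx library; none of the node names or counts come from the actual study.

# Rough sketch (invented data) of a bipartite literature network:
# nanoparticle types vs. environmental conditions, one edge per experiment.
import networkx as nx

experiments = [
    ("TiO2", "dissolved organic matter"),
    ("TiO2", "high ionic strength"),
    ("Ag",   "dissolved organic matter"),
    ("Ag",   "dissolved organic matter"),
    ("CeO2", "low pH"),
]

G = nx.Graph()
for particle, condition in experiments:
    G.add_node(particle, kind="nanoparticle")
    G.add_node(condition, kind="condition")
    if G.has_edge(particle, condition):
        G[particle][condition]["weight"] += 1
    else:
        G.add_edge(particle, condition, weight=1)

# Node degree shows which particles and conditions dominate the literature,
# a quick way to see how consistent and diverse the experimental coverage is.
print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))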

More method for machine learning

“If more structured, consistent and sufficiently diverse data were available, it may be possible to discover universal patterns using machine learning methods,” says Scheringer, “but we’re not there yet.” Enough structured experimental data must first be available.

“In order for the scientific community to carry out such experiments in a systematic and standardised manner, some kind of coordination is necessary,” adds Sani-Kast, but she is aware that such work is difficult to coordinate. Scientists are generally well known for preferring to explore new methods and conditions rather than routinely performing standardized experiments.
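
Scheringer’s caveat about structured data is easy to picture in code. If every experiment reported the same handful of variables (acidity, mineral and salt concentrations, dissolved organic matter, surface coating), they could feed a standard supervised model. The sketch below is purely speculative, with invented numbers and scikit-learn standing in for whatever tools the researchers might eventually use; it shows the shape such an analysis could take, not anything that was actually run.

# Speculative sketch: predict whether nanoparticles aggregate from
# experimental conditions. Features, labels and values are all invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: pH, ionic strength (mM), dissolved organic carbon (mg/L), coated (0/1)
X = np.array([
    [7.0,  10.0, 1.0, 1],
    [5.5, 100.0, 0.1, 0],
    [8.0,  50.0, 5.0, 1],
    [6.0, 200.0, 0.5, 0],
])
y = np.array([0, 1, 0, 1])  # 1 = particles aggregated in that experiment

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.predict([[7.5, 20.0, 2.0, 1]]))  # prediction for a new condition set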

[additional material]

Distinguishing man-made and natural nanoparticles

In addition to the lack of systematic research, there is also a second tangible problem in researching the behaviour of engineered nanoparticles: many engineered nanoparticles consist of chemical compounds that occur naturally in the soil. So far it has been difficult to measure the engineered particles in the environment since it is hard to distinguish them from naturally occurring particles with the same chemical composition.

However, researchers at ETH Zurich’s Department of Chemistry and Applied Biosciences, under the direction of ETH Professor Detlef Günther, have recently established an effective method that makes such a distinction possible in routine investigations. They used a state-of-the-art and highly sensitive mass spectrometry technique (called spICP-TOF mass spectrometry) to determine which chemical elements make up individual nanoparticles in a sample.

In collaboration with scientists from the University of Vienna, the ETH researchers applied the method to soil samples with natural cerium-containing particles, into which they mixed engineered cerium dioxide nanoparticles. Using machine learning methods, which were ideally suited to this particular issue, the researchers were able to identify differences in the chemical fingerprints of the two particle classes. “While artificially produced nanoparticles often consist of a single compound, natural nanoparticles usually still contain a number of additional chemical elements,” explains Alexander Gundlach-Graham, a postdoc in Günther’s group.

The new measuring method is very sensitive: the scientists were able to measure engineered particles in samples with up to one hundred times more natural particles.
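
The chemical-fingerprint idea is simple to illustrate, even without the actual spICP-TOF data or the machine learning model the ETH and Vienna researchers used. In the toy example below, with invented intensity values, an engineered cerium dioxide particle shows an almost pure cerium signal while a natural cerium-bearing particle carries companion elements; the 5% threshold is arbitrary and only stands in for the real classifier.

# Toy illustration only: the real study used spICP-TOF spectra and machine
# learning, not this simple threshold. All numbers are invented.
import numpy as np

# per-particle elemental intensities, columns: [Ce, La, Nd, Th]
particles = np.array([
    [950.0,   2.0,  1.0,  0.0],  # almost pure Ce signal: looks engineered
    [400.0, 120.0, 80.0, 15.0],  # Ce plus companion elements: looks natural
])

for counts in particles:
    companion_fraction = counts[1:].sum() / counts.sum()
    label = "natural" if companion_fraction > 0.05 else "engineered"
    print(label, round(float(companion_fraction), 3))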

The researchers have produced a visualization of their network analysis,

The researchers evaluated the experimental data published in the scientific literature using a network analysis. This analysis reveals which types of nanoparticles (blue) have been studied under which environmental conditions (red). (Visualisations: Thomas Kast)

Here are links to and citations for the two papers associated with this research,

A network perspective reveals decreasing material diversity in studies on nanoparticle interactions with dissolved organic matter by Nicole Sani-Kast, Jérôme Labille, Patrick Ollivier, Danielle Slomberg, Konrad Hungerbühler, and Martin Scheringer. PNAS 2017, 114: E1756-E1765, DOI: 10.1073/pnas.1608106114

Single-particle multi-element fingerprinting (spMEF) using inductively-coupled plasma time-of-flight mass spectrometry (ICP-TOFMS) to identify engineered nanoparticles against the elevated natural background in soils by Antonia Praetorius, Alexander Gundlach-Graham, Eli Goldberg, Willi Fabienke, Jana Navratilova, Andreas Gondikas, Ralf Kaegi, Detlef Günther, Thilo Hofmann, and Frank von der Kammer. Environmental Science: Nano 2017, 4: 307-314, DOI: 10.1039/c6en00455e

Both papers are behind a paywall.

Health technology and the Canadian Broadcasting Corporation’s (CBC) two-tier health system ‘Viewpoint’

There’s a lot of talk and handwringing about Canada’s health care system, which ebbs and flows in almost predictable cycles. Jesse Hirsh, in a May 16, 2017 ‘Viewpoints’ segment (an occasional series run as part of the CBC’s [Canadian Broadcasting Corporation] flagship daily news programme, The National), dared to reframe the discussion as one about technology and ‘those who get it’ [the technologically literate] and ‘those who don’t’, a state Hirsh described as being illiterate, as you can see and hear in the following video.

I don’t know about you but I’m getting tired of being called illiterate when I don’t know something. To be illiterate means you can’t read and write and, as it turns out, I do both of those things on a daily basis (sometimes even in two languages). Despite my efforts, I’m ignorant about any number of things and those numbers keep increasing day by day. BTW, is there anyone who isn’t having trouble keeping up?

Moving on from my rhetorical question, Hirsh has a point about the tech divide and about the need for discussion. It’s a point that hadn’t occurred to me (although I think he’s taking it in the wrong direction). In fact, this business of a tech divide already exists if you consider that people who live in rural environments and need the latest lifesaving techniques, complex procedures, or access to highly specialized experts have to travel to urban centres. I gather that Hirsh feels this divide isn’t necessarily going to be an urban/rural split so much as an issue of how technically literate you and your doctor are. That’s intriguing, but then his argument gets muddled. Confusingly, he seems to be suggesting that the key to the split is your access (not your technical literacy) to artificial intelligence (AI) and algorithms (presumably he’s referring to big data and data analytics). I expect access will come down more to money than technological literacy.

For example, money is likely to be a key issue when you consider his big pitch is for access to IBM’s Watson computer. (My Feb. 28, 2011 posting titled: Engineering, entertainment, IBM’s Watson, and product placement focuses largely on Watson, its winning appearances on the US television game show, Jeopardy, and its subsequent adoption into the University of Maryland’s School of Medicine in a project to bring Watson into the examining room with patients.)

Hirsh’s choice of IBM’s Watson is particularly interesting for a number of reasons. (1) Presumably there are companies other than IBM in this sector; why do they not rate a mention? (2) Given the current situation with IBM and the Canadian federal government’s introduction of the Phoenix payroll system (a PeopleSoft product customized by IBM), which is a failure of monumental proportions (see a Feb. 23, 2017 article by David Reevely for the Ottawa Citizen and a May 25, 2017 article by Jordan Press for the National Post), there may be a little hesitation, if not downright resistance, to a large-scale implementation of any IBM product or service, regardless of where the blame lies. (3) Hirsh notes on the home page for his eponymous website,

I’m presently spending time at the IBM Innovation Space in Toronto Canada, investigating the impact of artificial intelligence and cognitive computing on all sectors and industries.

Yes, it would seem he has some sort of relationship with IBM not referenced in his Viewpoints segment on The National. Also, his description of the relationship isn’t especially illuminating, but perhaps it’s this (from the IBM Innovation Space – Toronto Incubator Application webpage),

Our incubator

The IBM Innovation Space is a Toronto-based incubator that provides startups with a collaborative space to innovate and disrupt the market. Our goal is to provide you with the tools needed to take your idea to the next level, introduce you to the right networks and help you acquire new clients. Our unique approach, specifically around client engagement, positions your company for optimal growth and revenue at an accelerated pace.

OUR SERVICES

IBM Bluemix
IBM Global Entrepreneur
Softlayer – an IBM Company
Watson

Startups partnered with the IBM Innovation Space can receive up to $120,000 in IBM credits at no charge for up to 12 months through the Global Entrepreneurship Program (GEP). These credits can be used in our products such our IBM Bluemix developer platform, Softlayer cloud services, and our world-renowned IBM Watson ‘cognitive thinking’ APIs. We provide you with enterprise grade technology to meet your clients’ needs, large or small.

Collaborative workspace in the heart of Downtown Toronto
Mentorship opportunities available with leading experts
Access to large clients to scale your startup quickly and effectively
Weekly programming ranging from guest speakers to collaborative activities
Help with funding and access to local VCs and investors​

Final comments

While I have some issues with Hirsh’s presentation, I agree that we should be discussing the issues around increased automation of our health care system. A friend of mine’s husband is a doctor and, according to him, those prescriptions and orders you get when leaving the hospital are not made up by a doctor so much as spit out by a computer based on the data that the doctors and nurses have supplied.

GIGO, bias, and de-skilling

Leaving aside the wonders that Hirsh describes, there’s an oldish saying in the computer business: garbage in, garbage out (GIGO). At its simplest, who’s going to catch a mistake? (There are lots of mistakes made in hospitals and other health care settings.)

There are also issues around the quality of research. Are all the research papers included in the data used by the algorithms going to be considered equal? There’s more than one case where a piece of problematic research has been accepted uncritically, even after getting through peer review, and subsequently cited many times over. One of the ways to measure impact, i.e., importance, is to track the number of citations. There’s also the matter of where the research is published. A ‘high impact’ journal, such as Nature, Science, or Cell, automatically gives a piece of research a boost.

There are other kinds of bias as well. Increasingly, there’s discussion about algorithms being biased and about how machine learning (AI) can become biased. (See my May 24, 2017 posting: Machine learning programs learn bias, which highlights the issues and cites other FrogHeart posts on that and other related topics.)

These problems are to a large extent already present. Doctors have biases, and research can be wrong and take a long time to correct. However, the advent of an automated health diagnosis and treatment system is likely to exacerbate the problems. For example, if you don’t agree with your doctor’s diagnosis or treatment, you can seek other opinions. What happens when your diagnosis and treatment have become data? Will the system give you another opinion? Who will you talk to? The doctor who got an answer from ‘Watson’? Is she or he going to debate Watson? Are you?

This leads to another issue and that’s automated systems getting more credit than they deserve. Futurists such as Hirsh tend to underestimate people and overestimate the positive impact that automation will have. A computer, data analytics, or an AI system is a tool, not a god. You’ll have as much luck petitioning one of those tools as you would Zeus.

The unasked question is how your doctor or other health professional will gain experience and skills if they never have to practice the basic, boring aspects of health care (asking questions for a history, reading medical journals to keep up with the research, etc.) and leave them to the computers. There had to be a reason for calling it a medical ‘practice’.

There are definitely going to be advantages to these technological innovations but thoughtful adoption of these practices (pun intended) should be our goal.

Who owns your data?

Another issue which is increasingly making itself felt is ownership of data. Jacob Brogan has written a provocative May 23, 2017 piece for slate.com asking that question about the data Ancestry.com gathers for DNA testing (Note: Links have been removed),

AncestryDNA’s pitch to consumers is simple enough. For $99 (US), the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.

After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting Ancestry.com the broadest possible rights to own and exploit their genetic information.”

Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used Ancestry.com, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” [emphasis mine] Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.

Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.

Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.

I’m glad to see the company’s representatives are open to discussion and, later in the article, you’ll see there’ve already been some changes made. Still, there is no guarantee that the situation won’t change again, this time for ill.

What data do they have and what can they do with it?

Not everybody thinks data collection and data analytics constitute a problem. While some people might balk at the thought of their genetic data being traded around and possibly used against them, e.g., while hunting for a job, or turned into a source of revenue, there tends to be a more laissez-faire attitude toward other types of data. Andrew MacLeod’s May 24, 2017 article for thetyee.ca highlights political implications and privacy issues (Note: Links have been removed),

After a small Victoria [British Columbia, Canada] company played an outsized role in the Brexit vote, government information and privacy watchdogs in British Columbia and Britain have been consulting each other about the use of social media to target voters based on their personal data.

The U.K.’s information commissioner, Elizabeth Denham [Note: Denham was formerly B.C.’s Office of the Information and Privacy Commissioner], announced last week [May 17, 2017] that she is launching an investigation into “the use of data analytics for political purposes.”

The investigation will look at whether political parties or advocacy groups are gathering personal information from Facebook and other social media and using it to target individuals with messages, Denham said.

B.C.’s Office of the Information and Privacy Commissioner confirmed it has been contacted by Denham.

MacLeod’s March 6, 2017 article for thetyee.ca provides more details about the company’s role (Note: Links have been removed),

The “tiny” and “secretive” British Columbia technology company [AggregateIQ; AIQ] that played a key role in the Brexit referendum was until recently listed as the Canadian office of a much larger firm that has 25 years of experience using behavioural research to shape public opinion around the world.

The larger firm, SCL Group, says it has worked to influence election outcomes in 19 countries. Its associated company in the U.S., Cambridge Analytica, has worked on a wide range of campaigns, including Donald Trump’s presidential bid.

In late February [2017], the Telegraph reported that campaign disclosures showed that Vote Leave campaigners had spent £3.5 million — about C$5.75 million [emphasis mine] — with a company called AggregateIQ, run by CEO Zack Massingham in downtown Victoria.

That was more than the Leave side paid any other company or individual during the campaign and about 40 per cent of its spending ahead of the June referendum that saw Britons narrowly vote to exit the European Union.

According to media reports, Aggregate develops advertising to be used on sites including Facebook, Twitter and YouTube, then targets messages to audiences who are likely to be receptive.

The Telegraph story described Victoria as “provincial” and “picturesque” and AggregateIQ as “secretive” and “low-profile.”

Canadian media also expressed surprise at AggregateIQ’s outsized role in the Brexit vote.

The Globe and Mail’s Paul Waldie wrote “It’s quite a coup for Mr. Massingham, who has only been involved in politics for six years and started AggregateIQ in 2013.”

Victoria Times Colonist columnist Jack Knox wrote “If you have never heard of AIQ, join the club.”

The Victoria company, however, appears to be connected to the much larger SCL Group, which describes itself on its website as “the global leader in data-driven communications.”

In the United States it works through related company Cambridge Analytica and has been involved in elections since 2012. Politico reported in 2015 that the firm was working on Ted Cruz’s presidential primary campaign.

And NBC and other media outlets reported that the Trump campaign paid Cambridge Analytica millions to crunch data on 230 million U.S. adults, using information from loyalty cards, club and gym memberships and charity donations [emphasis mine] to predict how an individual might vote and to shape targeted political messages.

That’s quite a chunk of change and I don’t believe that gym memberships, charity donations, etc. were the only sources of information (in the US, there’s voter registration, credit card information, and more) but the list did raise my eyebrows. It would seem we are under surveillance at all times, even in the gym.

In any event, I hope that Hirsh’s call for discussion is successful and that the discussion includes more critical thinking about the implications of Hirsh’s ‘Brave New World’.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning, where an artificial intelligence system trains itself on ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at University of Bath, and CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program where it essentially functioned like a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.
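
For anyone unfamiliar with co-occurrence statistics, here is a minimal sketch of the raw counting step (not the GloVe model itself): slide a window over the text and tally which words appear near which. The window size and toy sentence are arbitrary.

# Minimal co-occurrence counting within a fixed window; this is only the raw
# statistic that models like GloVe build on, not GloVe itself.
from collections import defaultdict

def cooccurrence_counts(tokens, window=10):
    counts = defaultdict(int)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(word, tokens[j])] += 1
    return counts

tokens = "the nurse spoke to the doctor about the patient".split()
counts = cooccurrence_counts(tokens, window=10)
print(counts[("nurse", "doctor")])  # 1 in this toy sentence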

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
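
The test the authors built on top of those vectors, the Word Embedding Association Test (WEAT), swaps human response times for cosine similarity between word vectors. The sketch below follows that general recipe but uses tiny random vectors as stand-ins for real pretrained GloVe embeddings, so its output is meaningless; with actual embeddings, the same calculation surfaces the associations described here.

# A hedged sketch of a WEAT-style effect size. The random vectors are
# placeholders for real pretrained embeddings; word lists are truncated.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, A, B, vec):
    # mean similarity to attribute set A minus mean similarity to attribute set B
    return (np.mean([cosine(vec[word], vec[a]) for a in A])
            - np.mean([cosine(vec[word], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    # Cohen's-d-style effect size comparing target sets X and Y
    x = [association(w, A, B, vec) for w in X]
    y = [association(w, A, B, vec) for w in Y]
    return (np.mean(x) - np.mean(y)) / np.std(x + y, ddof=1)

# stand-in vocabulary and 50-dimensional random vectors
rng = np.random.default_rng(0)
words = ["rose", "daisy", "ant", "moth", "love", "caress", "filth", "ugly"]
vec = {w: rng.normal(size=50) for w in words}

print(weat_effect_size(["rose", "daisy"], ["ant", "moth"],
                       ["love", "caress"], ["filth", "ugly"], vec))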

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Science, 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016 Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.