Tag Archives: British Columbia

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada or, as I prefer to call it, the Third assessment of Canada's S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don't spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses (Hedy among them) who found, and still find, that work is hard to come by after that age. You may be able to get past Hedy but I'm not sure I can.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant' economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living, that we've been running things this way for over a century, and that it seems to work for us. Is it that bad? I didn't have an answer for him then and I don't have one now, but I think it's a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I'm thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you've got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it all comes down to the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) most of the answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
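The Growth Index arithmetic is simple enough to sketch. Here's a minimal Python illustration of my reading of it; the report doesn't spell out its exact normalization, so the formula, function name, and numbers below are my assumptions rather than anything taken from the document:

```python
def growth_index(country_start, country_end, world_start, world_end):
    """Growth in a country's publication output over a period,
    normalized by world growth over the same period.

    GI > 1.0 means the country grew faster than the world average;
    GI < 1.0 means slower. (Illustrative reconstruction only.)
    """
    country_growth = country_end / country_start
    world_growth = world_end / world_start
    return country_growth / world_growth

# Hypothetical publication counts for 2003 and 2014: a country that
# tripled its output while world output doubled scores 1.5, i.e.,
# 50% above the world average, matching the China example above.
print(growth_index(100, 300, 1000, 2000))  # → 1.5
```

On this reading, Canada's 0.88 simply means our output grew about 12% more slowly than the world's did over 2003-2014, not that it shrank.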

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and, when it's over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr's case is that the navy developed the technology during the patent's term without telling either her or her partner so, of course, it didn't need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
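As I read it, the 'patent flow' figure is just the relative gap between patents owned by Canadian assignees and patents invented in Canada. A quick sketch of that reading; the report doesn't give its exact formula, so the denominator, function name, and sample counts here are my assumptions:

```python
def patent_flow(owned, invented):
    """Net patent flow: (owned - invented) / invented.

    Negative values mean fewer patents are owned by domestic
    assignees than are created by domestic inventors; -0.26
    corresponds to the 26% ownership deficit the report cites
    for 2014. (My reconstruction of the metric.)
    """
    return (owned - invented) / invented

# Hypothetical counts: 74 Canadian-owned patents against
# 100 Canadian-invented patents gives a flow of -0.26.
print(patent_flow(74, 100))  # → -0.26
```

That framing makes the Nortel sale's effect easy to see: thousands of Canadian-invented patents changed hands to foreign assignees in one transaction, pushing the numerator down without touching the denominator.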

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has somehow been hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues I mention, but they seem to forget that information and don’t use it in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress. Open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, which may be because there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in the day’s inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs without underlining the point: Antheil was a musician, Lamarr was an actress, and together they laid the foundation for electrical engineers’ (or people with that specialty) signature work leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article in The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

Graphite ‘gold’ rush?

Someone in Germany (I think) is very excited about graphite, more specifically, there’s excitement around graphite flakes located in the province of Québec, Canada. Although, the person who wrote this news release might have wanted to run a search for ‘graphite’ and ‘gold rush’. The last graphite gold rush seems to have taken place in 2013.

Here’s the March 1, 2018 news release on PR Newswire (Cision). (Note: Some links have been removed),

PALM BEACH, Florida, March 1, 2018 /PRNewswire/ —

MarketNewsUpdates.com News Commentary

Much like the gold rush in North America in the 1800s, people are going out in droves searching for a different kind of precious metal, graphite. The thing your third grade pencils were made of is now one of the hottest commodities on the market. This graphite is not being mined by your run-of-the-mill old-timey soot covered prospectors anymore. Big mining companies are all looking for this important resource integral to the production of lithium ion batteries due to the rise in popularity of electric cars. These players include Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), Teck Resources Limited (NYSE: TECK), Nemaska Lithium (TSX: NMX), Lithium Americas Corp. (TSX: LAC), and Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF).

These companies looking to manufacturer their graphite-based products, have seen steady positive growth over the past year. Their development of cutting-edge new products seems to be paying off. But in order to continue innovating, these companies need the graphite to do it. One junior miner looking to capitalize on the growing demand for this commodity is Graphite Energy Corp.

Graphite Energy is a mining company, that is focused on developing graphite resources. Graphite Energy’s state-of-the-art mining technology is friendly to the environment and has indicate graphite carbon (Cg) in the range of 2.20% to 22.30% with average 10.50% Cg from their Lac Aux Bouleaux Graphite Property in Southern Quebec [Canada].

Not Just Any Graphite Will Do

Graphite is one of the most in demand technology metals that is required for a green and sustainable world. Demand is only set to increase as the need for lithium ion batteries grows, fueled by the popularity of electric vehicles. However, not all graphite is created equal. The price of natural graphite has more than doubled since 2013 as companies look to maintain environmental standards which the use of synthetic graphite cannot provide due to its pollutant manufacturing process. Synthetic graphite is also very expensive to produce, deriving from petroleum and costing up to ten times as much as natural graphite. Therefore manufacturers are interested in increasing the proportion of natural graphite in their products in order to lower their costs.

High-grade large flake graphite is the solution to the environmental issues these companies are facing. But there is only so much supply to go around. Recent news by Graphite Energy Corp. on February 26th [2018] showed promising exploratory results. The announcement of the commencement of drilling is a positive step forward to meeting this increased demand.

Everything from batteries to solar panels need to be made with this natural high-grade flake graphite because what is the point of powering your home with the sun or charging your car if the products themselves do more harm than good to the environment when produced. However, supply consistency remains an issue since mines have different raw material impurities which vary from mine to mine. Certain types of battery technology already require graphite to be almost 100% pure. It is very possible that the purity requirements will increase in the future.

Natural graphite is also the basis of graphene, the uses of which seem limited only by scientists’ imaginations, given the host of new applications announced daily. In a recent study by ResearchSEA, a team from the Ocean University of China and Yunnan Normal University developed a highly efficient dye-sensitized solar cell using a graphene layer. This thin layer of graphene will allow solar panels to generate electricity when it rains.

Graphite Energy Is Keeping It Green

Whether it’s the graphite for the solar panels that will power the homes of tomorrow, or the lithium ion batteries that will fuel the latest cars, these advancements need to made in an environmentally conscious way. Mining companies like Graphite Energy Corp. specialize in the production of environmentally friendly graphite. The company will be producing its supply of natural graphite with the lowest environmental footprint possible.

From Saltwater To Clean Water Using Graphite

The world’s freshwater supply is at risk of running out. In order to mitigate this global disaster, worldwide spending on desalination technology was an estimated $16.6 billion in 2016. Due to the recent intense droughts in California, the state has accelerated the construction of desalination plants. However, the operating costs and the impact on the environment due to energy requirements for the process, is hindering any real progress in the space, until now.

Jeffrey Grossman, a professor at MIT’s [Massachusetts Institute of Technology, United States] Department of Materials Science and Engineering (DMSE), has been looking into whether graphite/graphene might reduce the cost of desalination.

“A billion people around the world lack regular access to clean water, and that’s expected to more than double in the next 25 years,” Grossman says. “Desalinated water costs five to 10 times more than regular municipal water, yet we’re not investing nearly enough money into research. If we don’t have clean energy we’re in serious trouble, but if we don’t have water we die.”

Grossman’s lab has demonstrated strong results showing that new filters made from graphene could greatly improve the energy efficiency of desalination plants while potentially reducing other costs as well.

Graphite/Graphene producers like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are moving quickly to provide the materials necessary to develop this new generation of desalination plants.

Potential Comparables

Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF) Cruz Cobalt Corp. is cobalt mining company involved in the identification, acquisition and exploration of mineral properties. The company’s geographical segments include the United States and Canada. They are focused on acquiring and developing high-grade Cobalt projects in politically stable, environmentally responsible and ethical mining jurisdictions, essential for the rapidly growing rechargeable battery and renewable energy.

Nemaska Lithium (TSE: NMX.TO)

Nemaska Lithium is lithium mining company. The company is a supplier of lithium hydroxide and lithium carbonate to the emerging lithium battery market that is largely driven by electric vehicles. Nemaska mining operations are located in the mining friendly jurisdiction of Quebec, Canada. Nemaska Lithium has received a notice of allowance of a main patent application on its proprietary process to produce lithium hydroxide and lithium carbonate.

Lithium Americas Corp. (TSX: LAC.TO)

Lithium Americas is developing one of North America’s largest lithium deposits in northern Nevada. It operates nearly two lithium projects namely Cauchari-Olaroz project which is located in Argentina, and the Lithium Nevada project located in Nevada. The company manufactures specialty organoclay products, derived from clays, for sale to the oil and gas and other sectors.

Teck Resources Limited (NYSE: TECK)

Teck Resources Limited is a Canadian metals and mining company. Teck’s principal products include coal, copper, zinc, with secondary products including lead, silver, gold, molybdenum, germanium, indium and cadmium. Teck’s diverse resources focuses on providing products that are essential to building a better quality of life for people around the globe.

Graphite Mining Today For A Better Tomorrow

Graphite mining will forever be intertwined with the latest advancements in science and technology. Graphite deserves attention for its various use cases in automotive, energy, aerospace and robotics industries. In order for these and other industries to become sustainable and environmentally friendly, a reliance on graphite is necessary. Therefore, this rapidly growing sector has the potential to fuel investor interest in the mining space throughout 2018. The near limitless uses of graphite has the potential to impact every facet of our lives. Companies like Graphite Energy Corp. (OTC: GRXXF); (CSE: GRE) is at the forefront in this technological revolution.

For more information on Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), please visit streetsignals.com for a free research report.

Streetsignals.com (SS) is the source of the Article and content set forth above. References to any issuer other than the profiled issuer are intended solely to identify industry participants and do not constitute an endorsement of any issuer and do not constitute a comparison to the profiled issuer. FN Media Group (FNM) is a third-party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. FNM is NOT affiliated with SS or any company mentioned herein. The commentary, views and opinions expressed in this release by SS are solely those of SS and are not shared by and do not reflect in any manner the views or opinions of FNM. Readers of this Article and content agree that they cannot and will not seek to hold liable SS and FNM for any investment decisions by their readers or subscribers. SS and FNM and their respective affiliated companies are a news dissemination and financial marketing solutions provider and are NOT registered broker-dealers/analysts/investment advisers, hold no investment licenses and may NOT sell, offer to sell or offer to buy any security.

The Article and content related to the profiled company represent the personal and subjective views of the Author (SS), and are subject to change at any time without notice. The information provided in the Article and the content has been obtained from sources which the Author believes to be reliable. However, the Author (SS) has not independently verified or otherwise investigated all such information. None of the Author, SS, FNM, or any of their respective affiliates, guarantee the accuracy or completeness of any such information. This Article and content are not, and should not be regarded as investment advice or as a recommendation regarding any particular security or course of action; readers are strongly urged to speak with their own investment advisor and review all of the profiled issuer’s filings made with the Securities and Exchange Commission before making any investment decisions and should understand the risks associated with an investment in the profiled issuer’s securities, including, but not limited to, the complete loss of your investment. FNM was not compensated by any public company mentioned herein to disseminate this press release but was compensated seventy six hundred dollars by SS, a non-affiliated third party to distribute this release on behalf of Graphite Energy Corp.

FNM HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE.

This release contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E the Securities Exchange Act of 1934, as amended and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. “Forward-looking statements” describe future expectations, plans, results, or strategies and are generally preceded by words such as “may”, “future”, “plan” or “planned”, “will” or “should”, “expected,” “anticipates”, “draft”, “eventually” or “projected”. You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company’s annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and SS and FNM undertake no obligation to update such statements.

Media Contact:

FN Media Group, LLC
info@marketnewsupdates.com
+1(561)325-8757

SOURCE MarketNewsUpdates.com

Hopefully my insertions of ‘Canada’ and the ‘United States’ help to clarify matters. North America and the United States are not synonyms although they are sometimes used synonymously.

There is another copy of this news release on Wall Street Online (Deutschland), both in English and German. By the way, that was my first clue that there might be some German interest. The second clue was the Graphite Energy Corp. homepage. Unusually for a company with ‘headquarters’ in the Canadian province of British Columbia, there’s an option to read the text in German.

Graphite Energy Corp. seems to be a relatively new player in the ‘rush’ to mine graphite flakes for use in graphene-based applications. One of my first posts about mining for graphite flakes was a July 26, 2011 posting concerning Northern Graphite and their mining operation (Bissett Creek) in Ontario. I don’t write about them often but they are still active if their news releases are to be believed. The latest was issued February 28, 2018 and offers “financial metrics for the Preliminary Economic Assessment (the “PEA”) on the Company’s 100% owned Bissett Creek graphite project.”

The other graphite mining company mentioned here is Lomiko Metals. The latest posting here about Lomiko is a December 23, 2015 piece regarding an analysis and stock price recommendation by a company known as SeeThruEquity. Like Graphite Energy Corp., Lomiko’s mines are located in Québec and their business headquarters in British Columbia. Lomiko has a March 16, 2018 news release announcing its reinstatement for trading on the TSX (Toronto Stock Exchange),

(Vancouver, B.C.) Lomiko Metals Inc. (“Lomiko”) (TSX-V: LMR, OTC: LMRMF, FSE: DH8C) announces it has been successful in its reinstatement application with the TSX Venture Exchange and trading will begin at the opening on Tuesday, March 20, 2018.

Getting back to the flakes, here’s more about Graphite Energy Corp.’s mine (from the About Lac Aux Bouleaux webpage),

Lac Aux Bouleaux

The Lac Aux Bouleaux Property is comprised of 14 mineral claims in one contiguous block totaling 738.12 hectares land on NTS 31J05, near the town of Mont-Laurier in southern Québec. Lac Aux Bouleaux “LAB” is a world class graphite property that borders the only producing graphite in North America [Note: There are three countries in North America, Canada, the United States, and Mexico. Québec is in Canada.]. On the property we have a full production facility already built which includes an open pit mine, processing facility, tailings pond, power and easy access to roads.

High Purity Levels

An important asset of LAB is its metallurgy. The property contains a high proportion of large and jumbo flakes from which a high purity concentrate was proven to be produced across all flakes by a simple flotation process. The concentrate can then be further purified using the province’s green and affordable hydro-electricity to be used in lithium-ion batteries.

The geological work performed in order to verify the existing data consisted of visiting approachable graphite outcrops, historical exploration and development work on the property. Large flake graphite showings located on the property were confirmed with flake size in the range of 0.5 to 2 millimeters, typically present in shear zones at the contact of gneisses and marbles where the graphite content usually ranges from 2% to 20%. The results of the property are outstanding showing to have jumbo flake natural graphite.

An onsite mill structure, a tailing dam facility, and a historical open mining pit is already present and constructed on the property. The property is ready to be put into production based on the existing infrastructure already built. The company would hope to be able to ship by rail its mined graphite directly to Teslas Gigafactory being built in Nevada [United States] which will produce 35GWh of batteries annually by 2020.

Adjacent Properties

The property is located in a very active graphite exploration and production area, adjacent to the south of TIMCAL’s Lac des Iles graphite mine in Quebec which is a world class deposit producing 25,000 tonnes of graphite annually. There are several graphite showings and past producing mines in its vicinity, including a historic deposit located on the property.

The open pit mine in operation since 1989 with an onsite plant ranked 5th in the world production of graphite. The mine is operated by TIMCAL Graphite & Carbon which is a subsidiary of Imerys S.A., a French multinational company. The mine has an average grade of 7.5% Cg (graphite carbon) and has been producing 50 different graphite products for various graphite end users around the globe.

Canadians! We have great flakes!

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field it helped to create was moving in a new direction. If memory serves, the company was trying to keep its technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough  technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor-General award winning designed building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts for $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop companies and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
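As an aside, the demonstration problems Semeniuk mentions (Sudoku, wedding seating plans) are classic combinatorial optimization tasks, which is exactly the class of problem annealing targets. For anyone curious about the idea, here is a purely classical sketch — simulated annealing on a toy seating-plan problem. The guest indices and “conflict” scores are invented for illustration; a D-Wave machine would attack an analogous problem formulated for its hardware rather than run Python.

```python
import math
import random

# Toy wedding-seating problem: split 6 guests across 2 tables of 3,
# minimizing total "conflict" between guests seated together.
# (Guest indices and conflict scores are invented for illustration.)
CONFLICT = {
    frozenset((0, 1)): 5, frozenset((0, 2)): 1, frozenset((1, 2)): 4,
    frozenset((3, 4)): 3, frozenset((2, 5)): 2,
}

def cost(tables):
    # Sum conflict over every pair of guests sharing a table
    return sum(CONFLICT.get(frozenset((a, b)), 0)
               for table in tables
               for i, a in enumerate(table)
               for b in table[i + 1:])

def anneal(steps=5000, temp=5.0, cooling=0.999):
    guests = list(range(6))
    random.shuffle(guests)
    tables = [guests[:3], guests[3:]]
    best, best_cost = [t[:] for t in tables], cost(tables)
    for _ in range(steps):
        old = cost(tables)
        # Propose swapping one guest between the two tables
        i, j = random.randrange(3), random.randrange(3)
        tables[0][i], tables[1][j] = tables[1][j], tables[0][i]
        delta = cost(tables) - old
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temp), which shrinks as temp cools
        if delta > 0 and random.random() >= math.exp(-delta / temp):
            tables[0][i], tables[1][j] = tables[1][j], tables[0][i]  # undo
        elif cost(tables) < best_cost:
            best, best_cost = [t[:] for t in tables], cost(tables)
        temp *= cooling
    return best, best_cost

random.seed(42)
plan, conflict = anneal()
print(plan, conflict)  # best seating found and its total conflict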

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined report, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

London gets its first Chief Digital Officer (CDO)

A report commissioned from 2thinknow by Business Insider, ranking the 25 most high-tech cities in the world (Vancouver, Canada rates 14th on the list), is featured in an Aug. 25, 2017 news item on the Daily Hive Vancouver,

The ranking was selected on 10 factors related to technological advancement, which included the number of patents filed per capita, startups, tech venture capitalists, ranking in other innovation datasets, and level of smartphone use.

Topping the list, which was released this month, is San Francisco’s “Silicon Valley,” which “wins in just about every category.” New York comes in second place, followed by London [UK; emphasis mine], Los Angeles, and Seoul.

Intriguingly, London’s Mayor Sadiq Khan announced a new Chief Digital Officer for the city just a few days later. From an August 29, 2017 news item by Michael Moore for Beta News,

Theo Blackwell, a former cabinet member at Camden Council, will take responsibility for helping London continue to be the technology powerhouse it has become over the past few years.

Mr Blackwell will work closely with the Mayor’s office, particularly the Smart London Board, to create a new “Smart London Plan” that looks to outline how the capital can benefit from embracing new technologies, with cybersecurity, open data and connectivity all at the forefront.

He will also look to build collaboration across London’s boroughs when it comes to public technology schemes, and encourage the digital transformation of public services.

“The new chief digital officer post is an amazing opportunity to make our capital even more open to innovation, support jobs and investment and make our public services more effective,” he said in a statement.

An August 25, 2017 Mayor of London press release, which originated the news item, provides a more detailed look at the position and the motives for creating it,

The Mayor of London, Sadiq Khan, has today (25 August [2017]) appointed Theo Blackwell as the capital’s first ever Chief Digital Officer (CDO).

As London’s first CDO, Theo will play a leading role in realising the Mayor’s ambition to make London the world’s smartest city, ensuring that the capital’s status as a global tech hub helps transform the way public services are designed and delivered, making them more accessible, efficient and responsive to the needs of Londoners. The appointment fulfils a key manifesto commitment made by the Mayor.

He joins the Mayor’s team following work at GovTech accelerator Public Group, advising start-ups on the growing market in local public services, and was previously Head of Policy & Public Affairs for the video games industry’s trade body, Ukie – where he ran a ‘Next Gen Skills’ campaign to get coding back on the curriculum.

Theo brings more than 20 years of experience in technology and digital transformation in both the public and private sector.  In his role as cabinet member for finance, technology and growth at Camden Council, Theo has established Camden as London’s leading digital borough through its use of public data – and this year they received national recognition as Digital Leaders ‘Council of the year’.

Theo also sits on the Advisory Board of Digital Leaders and is a director of Camden Town Unlimited, a Business Improvement District which pioneered new start-up incubation in ‘meanwhile’ space.

Theo will work closely with the Mayor’s Smart London Board to develop a new Smart London Plan, and will play a central role in building collaboration across London’s boroughs, and businesses, to drive the digital transformation of public services, as well as supporting the spread of innovation through common technology standards and better data-sharing.

Theo will also promote manifesto ambitions around pan-London collaboration on connectivity, digital inclusion, cyber-security and open data. He will also focus on scoping work for the London Office for Technology & Innovation that was announced by the Mayor at London Tech Week.

London already has more than 47,000 digital technology companies, employing approximately 240,000 people. It is forecast that the number of tech companies will increase by a third and a further 44,500 jobs will have been created by 2026.

The capital is also racing ahead with new technologies, using it for ticketing and contactless on the transport network, while the London Datastore is an open resource with vast amounts of data about all areas of the city, and tech start-ups have used this open data to create innovative new apps.

The Mayor of London, Sadiq Khan, said:

I am determined to make London the world’s leading ‘smart city’ with digital technology and data at the heart of making our capital a better place to live, work and visit. We already lead in digital technology, data science and innovation and I want us to make full use of this in transforming our public services for Londoners and the millions of visitors to our great city.

I am delighted to appoint Theo Blackwell as London’s first Chief Digital Officer, and I know he will use his experience working in the technology sector and developing public services to improve the lives of all Londoners.

Theo Blackwell said:

The new Chief Digital Officer post is an amazing opportunity to make our capital even more open to innovation, support jobs and investment and make our public services more effective. The pace of change over the next decade requires public services to develop a stronger relationship with the tech sector.  Our purpose is to fully harness London’s world-class potential to make our public services faster and more reliable at doing things we expect online, but also adaptable enough to overcome the capital’s most complex challenges.

Antony Walker, Deputy CEO of techUK, said:

techUK has long argued that London needed a Chief Digital Officer to ensure that London makes the best possible use of new digital technologies. The appointment of Theo Blackwell is good news for Londoners. The smart use of new digital technologies can improve the lives of people living in or visiting London. Theo Blackwell brings a deep understanding of both the opportunities ahead and the challenges of implementing new digital technologies to address the city’s most pressing problems. This appointment is an important step forward to London being at the forefront of tech innovation to create smart places and communities where citizens want to live, work and thrive.

Councillor Claire Kober, Chair of London Councils, said:

The appointment of London’s first Chief Digital Officer fills an important role providing needed digital leadership for London’s public services.  Theo will bring his longstanding experience working with other borough leaders, which I think is critical as we develop new approaches to developing, procuring and scaling the best digital solutions across the capital.

Robin Knowles, Founder and CEO of Digital Leaders, said:

Theo Blackwell has huge experience and is a fabulous appointment as the capital’s first Chief Digital Officer.  He will do a great job for London.

Doteveryone founder, Baroness Martha Lane Fox, said:

Digital leadership is a major challenge for the public sector. As the new Chief Digital Officer for London, Theo’s track record delivering real change in local government and his work in the tech sector bring real experience to this role.

Mike Flowers, First Chief Analytics Officer for New York City and Chief Analytics Officer at Enigma Technologies, said:

Theo is a pragmatic visionary with that rare combination of tech savvy and human focus that the task ahead of him requires. I congratulate Mayor Khan on his decision to trust him with this critical role, and I’m very happy for the residents of London whose lives will be improved by the better use of data and technology by their government. Theo gets results.

It’s always possible that there’s a mastermind involved in the timing of these announcements but sometimes they’re just a reflection of a trend. Cities have their moments just like people do and it seems like London may be on an upswing. From an August 18 (?), 2017 opinion piece by Gavin Poole (Chief Executive Officer, Here East) for ITProPortal,

Recently released data from London & Partners indicates that record levels of venture capital investment are flooding into the London tech sector, with a record £1.1 billion pounds being invested since the start of the year. Strikingly, 2017 has seen a fourfold increase in investment compared with 2013. This indicates that, despite Brexit fears, London retains its crown as Europe’s number one tech hub for global investors but we must make sure that we keep that place by protecting access to the world’s best talent.

As the tech sector continues to outperform the rest of the UK economy, London’s place in it will become all the more important. When London does well, so too does the rest of the UK. Mega-deals from challenger brands like Monzo and Improbable, and the recent opening of Europe’s newest technology innovation destination, Plexal, at Here East have helped to cement the tech sector’s future in the medium-term. Government too has recognised the strength of the sector; earlier this month the Department for Culture, Media and Sport rebranded as the Department for Digital, Culture, Media and Sport. This name change, 25 years after the department’s creation, signifies how much things have developed. There is now also a Minister of State for Digital who covers everything from broadband and mobile connectivity to the creative industries. This visible commitment by the Government to put digital at the heart of its agenda should be welcomed.

There are lots of reasons for London’s tech success: start-ups and major corporates look to London for its digital and geographical connectivity, the entrepreneurialism of its tech talent and the vibrancy of its urban life. We continue to lead Europe on all of these fronts and Sadiq Khan’s #LondonIsOpen campaign has made clear that the city remains welcoming and accessible. In fact, there’s no shortage of start-ups proclaiming the great things about London. Melissa Morris, CEO and Founder, Lantum, a company that recently secured £5.3 in funding in London said “London is the world’s coolest city – it attracts some of the most interesting people from across the world… We’ve just closed a round of funding, and our plans are very much about growth”.

As for Vancouver, we don’t have any science officers or technology officers or anything of that ilk. Our current mayor, Gregor Robertson, who pledged to reduce homelessness almost 10 years ago has experienced a resounding failure with regard to that pledge but his greenest city pledge has enjoyed more success. As far as I’m aware the mayor and the current city council remain blissfully uninvolved in major initiatives to encourage science and technology efforts although there was a ‘sweetheart’ real estate deal for local technology company, Hootsuite. A Feb. 18, 2014 news item on the CBC (Canadian Broadcasting Corporation) website provides a written description of the deal but there is also this video,

Robertson went on to win his election despite the hint of financial misdoings in the video but there is another election* coming in 2018. The city official in the video, Penny Ballem, was terminated in September 2015 *due to what seemed to be her attempts to implement policy at a pace some found disconcerting*. In the meantime, the Liberal party, which made up our provincial government until recently (July 2017), was excoriated for its eagerness to accept political money and pledged to ‘change the rules’ as did the parties running in opposition. As far as I’m aware, there have been no changes that will impact provincial or municipal politicians in the near future.

Getting back to government initiatives that encourage science and technology efforts in Vancouver, there is the Cascadia Innovation Corridor. Calling it governmental is a bit of a stretch as it seems to be a Microsoft initiative that found favour with the governments of Washington state and the province of British Columbia; Vancouver will be one of the happy recipients. See my Feb. 28, 2017 posting and August 28, 2017 posting for more details about the proposed Corridor.

In any event, I’d like to see a science policy and at this point I don’t care if it’s a city policy or a provincial policy.

*’elections’ corrected to ‘election’ and ‘due to what seemed to be her attempts to implement policy at a pace some found disconcerting’ added for clarity on August 31, 2017.

High speed rail link for Cascadia Innovation Corridor

In a Feb. 28, 2017 posting I featured an announcement about what I believe is the first project from the British Columbia (province of Canada) and Washington State (US) governments’ joint Cascadia Innovation Corridor initiative: the Cascadia Urban Analytics Cooperative. During the telephone press conference, a couple of the participants joked about hyperloop (transportation pods in vacuum tubes) and being able to travel between Vancouver (Canada) and Seattle (US) in minutes. It seems that might not have been quite the joke I assumed. Kenneth Chan in an Aug. 14, 2017 posting for the Daily Hive announced that a high-speed rail feasibility study is underway (Note: Links have been removed),

According to KUOW public radio, the study began in late-July and will be conducted by a consultant at a cost of US$300,000 – down from the budgeted USD$1 million when the study was first announced earlier this year in Governor Jay Inslee’s proposed state budget. The budget bill proposed Washington State stations at locations such as Bellingham, Everett, SeaTac International Airport, Tacoma, Olympia, and Vancouver, Washington.

The idea has received the full backing of Washington State-based Microsoft, which supported the study with an additional $50,000 contribution. [emphasis mine] Engineering consultancy firm CH2M, which has offices in Vancouver, Seattle, and Portland, has been contracted to perform the study.

Interest in such a rail link is spurred from the Cascadia Innovation Corridor agreement signed by the government leaders of BC and Washington State last fall. The agreement committed both jurisdictions to growing the Vancouver-Seattle corridor into a tech corridor and innovation hub and improving transportation connections, such as high-speed rail.

“Why not a high speed train from Vancouver to Seattle to Portland? If we lived in Europe it would already be there,” said Brad Smith, Microsoft President and Chief Legal Officer, at a recent Portland conference on regional policy. “We need to raise our sights and our ambition level as a region.”

Microsoft is very interested in facilitating greater ease of movement, a development which causes me to feel some unease as mentioned in my February 28, 2017 posting,

I look forward to hearing more about the Cascadia Urban Analytics Cooperative and the Cascadia Innovation Corridor as they develop. This has the potential to be very exciting although I do have some concerns such as Microsoft and its agendas, both stated and unstated. After all, the Sept. 2016 meeting was convened by Microsoft and its public affairs/lobbying group and the topic was innovation, which is code for business and as hinted earlier, business is not synonymous with social good. Having said that, I’m not about to demonize business either. I just think a healthy dose of skepticism is called for. Good things can happen but we need to ensure they do.

Since February 2017, the government in British Columbia has changed hands and is now led by John Horgan of the New Democratic Party. Like Christy Clark and the Liberals before them, this provincial government does not have any science policy, a ministry of science (senior or junior), or any evidence of independent science advice. There has been (and may still be, it’s hard to tell) a Premier’s Technology Council, a BC Innovation Council (formerly the Science Council of BC), and a #BCTECH Strategy, all of which hew more to business and applied science than to an inclusive ‘science strategy’ with attendant government agencies.

Canadian children to learn computer coding from kindergarten through to high school

Government officials are calling the new $50M programme to teach computer coding skills to approximately 500,000 Canadian children from kindergarten to grade 12 ‘CanCode’ (h/t June 14, 2017 news item on phys.org). Here’s more from the June 14, 2017 Innovation, Science and Economic Development Canada news release,

Young Canadians will get the skills they need for the well-paying jobs of the future as a result of a $50-million program that gives them the opportunity to learn coding and other digital skills.

The Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development, together with the Honourable Kirsty Duncan, Minister of Science, today launched CanCode, a new program that, over the next two years, will give 500,000 students from kindergarten to grade 12 the opportunity to learn the in-demand skills that will prepare them for future jobs.

The program also aims to encourage more young women, Indigenous Canadians and other under-represented groups to pursue careers in science, technology, engineering and math. In addition, it will equip 500 teachers across the country with the training and tools to teach digital skills and coding.

Many jobs today rely on the ability of Canadian workers to solve problems using digital skills. The demand for such skills will only intensify as the number of software and data companies increases—whether they sell music online or design self-driving cars, for example. That’s why the government is investing in the skills that prepare young Canadians for the jobs of tomorrow.

This program is part of the Innovation and Skills Plan, a multi-year strategy to create well-paying jobs for the middle class and those working hard to join it.

 

Quotes

“Our government is investing in a program that will equip young Canadians with the skills they need for a future in which every job will require some level of digital ability. Coding teaches our young people how to work as a team to solve difficult problems in creative ways. That’s how they will become the next great innovators and entrepreneurs that Canada needs to succeed.”

– The Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development

“Coding skills are highly relevant in today’s scientific and technological careers, and they will only become more important in the future. That’s why it is essential that we teach these skills to young Canadians today so they have an advantage when they choose to pursue a career as a scientist, researcher or engineer. Our government is proud to support their curiosity, their ambition and their desire to build a bolder, brighter future for all Canadians.”

– The Honourable Kirsty Duncan, Minister of Science

Quick Facts

  • Funding applicants must be not-for-profit organizations incorporated in Canada. They must have a minimum of three years of experience delivering education-related programs to young Canadians.
  • The deadline for applications for project funding is July 26, 2017 [emphasis mine].

Exciting stuff, eh?
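As for what “coding and digital skills” look like at the kindergarten-to-grade-12 level, the programme materials mention computational thinking and coding concepts. A typical early exercise — this one is my own invented example, not taken from any CanCode materials — might look something like this in Python:

```python
# A beginner exercise of the sort K-12 coding programmes teach:
# count how many times each letter appears in a word.
def letter_counts(word):
    counts = {}
    for letter in word.lower():
        counts[letter] = counts.get(letter, 0) + 1
    return counts

print(letter_counts("Canada"))  # {'c': 1, 'a': 3, 'n': 1, 'd': 1}
```

Simple as it is, an exercise like this touches loops, dictionaries, and string handling — the building blocks of the ‘computational thinking’ the programme aims to develop.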

I was a bit curious about how the initiative will be executed since education is a provincial responsibility. The answers are on the ‘CanCode funding application‘ page,

The CanCode program aims to provide coding and digital skills learning opportunities to a diverse set of participants, principally students from kindergarten to grade 12 (K-12) across Canada, including traditionally underrepresented groups, as well as their teachers. The program will consider proposals for initiatives that run until the program end date of March 31, 2019.

Funding

Maximum contribution funding to any one recipient cannot exceed $5 million per year, and the need for the contribution must be clearly demonstrated by the applicant. The level of funding provided by the program will be contingent upon the assessment of the proposal and the availability of program funds.

Proposals may include funding from other levels of government, private sector or non-profit partners, however, total funding from all federal, provincial/territorial and municipal sources cannot exceed 100%.

Eligible costs

Eligible costs are the costs directly related to the proposal that respect all conditions and limitations of the program and that will be eligible for claim as set out in the Contribution Agreement (CA) if the proposal is approved for funding.

Eligible costs include:

  • Administrative operating costs, including travel related to delivery of training (limited to no more than 10% of total eligible costs except for approved recipients delivering initiatives in Canada’s Far North due to high costs associated with travel, inclement weather, costs of accommodation and food)
  • Direct costs to deliver training (including for training delivery personnel, space rental, materials, etc.)
  • Costs for required equipment limited to no more than 20% of total eligible costs
  • Costs to develop and administer online training

Eligibility details

Essential criteria for assessment

To qualify for funding, your organization:

  • Must be a not-for-profit organization incorporated in Canada; and
  • Must have a minimum of three years’ experience in the delivery of coding and digital education programs to K-12 youth and/or their teachers.

Your funding proposal must also clearly demonstrate that:

  • Your proposed initiative meets the objectives of the program in terms of target participants and content (e.g. computational thinking, coding concepts, programming robotics, internet safety, teacher training);
  • Your initiative will be delivered at no cost to participants;
  • With program funding, your organization will have the resource capacity and expertise, either internally or through partnerships, to successfully deliver the proposed initiative; and
  • You can deliver the proposed initiative within the program timeframe.

Asset criteria for assessment

While not essential requirements, proposals will also be assessed on the degree to which they include one or more of the following elements:

  • Content that maps to provincial/territorial educational curricula (e.g. lessons for teachers on how to integrate coding/digital skills into the classroom; topics/content that support current curricula);
  • Development of tools and resources that will be made available to students and teachers following a learning opportunity, and which could reinforce or continue learning, and/or reach a broader audience;
  • Partnerships with other organizations, such as school boards, teacher associations, community organizations, and other organizations delivering coding/digital skills;
  • Private sector funding or partnerships that can leverage federal contributions to deliver programming to a wider audience or to enhance or expand initiatives and content;
  • A demonstrated ability to reach traditionally underrepresented groups such as girls, Indigenous youth, disabled, and at-risk youth;
  • A demonstrated ability to deliver services on First Nations Reserves; or
  • A demonstrated ability to reach underserved locations in Canada, such as rural, remote and northern communities.

Eligibility self-assessment

Before you get started, take the following self-assessment to ensure your proposed initiative/project is eligible for funding. If you answer yes to all of the questions below, you are eligible to apply:

  • Are you a not-for-profit organization incorporated in Canada? Are you able to provide articles of incorporation?
  • Has your organization been delivering coding/digital skills education to youth within the range of kindergarten to grade 12 and/or teachers for at least three years?
  • Can your proposed initiative/project be delivered by March 31, 2019?
  • Does your proposed initiative/project provide any of the following: development and delivery of training and educational initiatives for K-12 students to learn digital skills, coding and related concepts (e.g. in-class instruction, after-school programs, summer camps, etc.); development and delivery of training and professional development initiatives for teachers to develop the skills and confidence to introduce digital skills, coding and related concepts into the classroom (e.g. teacher training courses, workshops, etc.); development of online resources/tools to support and enhance coding and digital skills learning initiatives for youth and/or teachers.

How to apply

When you click “Apply now”, you will be prompted to submit a basic form to collect your contact information. We will then contact you to provide you with the application package.

[Go here to Apply now]

Contact information

For general questions and comments, please contact the CanCode program.

Telephone (toll-free in Canada): 1-800-328-6189
Telephone (Ottawa): 613-954-5031
Fax: 343-291-1913
TTY (for hearing-impaired): 1-866-694-8389
By email
Business hours: 8:30 a.m. to 5:00 p.m. (Eastern Time)
By mail: CanCode
C.D. Howe Building
235 Queen Street, 1st floor, West Tower
Ottawa, ON  K1A 0H5
Canada

For anyone curious about just how much work is involved (from the Apply for CanCode funding page; Note: contact form not included),

Please complete and submit the form below and we will contact you within 2 business days to provide you with an application package.

Application package

A complete application package, consisting of a completed Application Form, a Project Work Plan, a Budget, and such additional supporting documentation as required by the program to fully assess the proposal’s merit to be funded, must be submitted on or before July 26, 2017 to be considered.

Supporting documentation includes, but is not limited to, the following:

  • Corporate documents, e.g. articles of corporation;
  • Financial statements from the last three years;
  • Information on any contributors/partners and their roles and resources in support of the project;
  • A detailed budget outlining forecasted total costs and per participant cost of delivering the proposed initiative;
  • A detailed work plan providing a description of all project activities and timelines, as well as overall expected results and benefits;
  • Information on experience/skills of key personnel;
  • Copies of any funding or partnership agreements relevant to the proposal;
  • Letters of support from partners, previous clientele, other relevant stakeholders;

Application intake

The program will accept proposals until July 26, 2017 [emphasis mine], whereupon the call for proposals will be closed. Should funding remain available following the assessment and funding decisions regarding proposals received during this intake period, further calls for proposals may be issued.

If you keep scrolling down you’ll find the contact form.

Applicants sure don’t have much time to prepare their submissions, from which I infer that interested parties have already been contacted or apprised that this programme was in the works.

Also, for those of us in British Columbia, this is not the first government initiative directed at children’s computer coding skills. In January 2016, Premier Christy Clark* announced a provincial programme (my Jan. 19, 2016 posting; scroll down about 55% of the way for the discussion about ‘talent’) and several months later announced there would be funding for the programme (June 10, 2016 Office of the Premier news release about funding). I wonder if these federal and provincial efforts are going to be coordinated?

For more insight into the BC government’s funding, there’s Tracy Sherlock’s Sept. 3, 2016 article for the Vancouver Sun.

For anyone wanting to keep up with Canadian government science-related announcements, there are the two ministers’ separate Twitter feeds:

@ministerISED

@ScienceMin

*As of June 16, 2017, Premier Clark appears to be on her way out of government after her party failed by one seat to win a majority in the Legislative Assembly. However, there is a great deal of wrangling. Presumably the funding for computer coding programmes in the schools was locked in.

Health technology and the Canadian Broadcasting Corporation’s (CBC) two-tier health system ‘Viewpoint’

There’s a lot of talk and handwringing about Canada’s health care system, which ebbs and flows in almost predictable cycles. Jesse Hirsh in a May 16, 2017 ‘Viewpoints’ segment (an occasional series run as part of the CBC’s [Canadian Broadcasting Corporation] flagship, daily news programme, The National) dared to reframe the discussion as one about technology and ‘those who get it’ [the technologically literate] and ‘those who don’t’, a state Hirsh described as being illiterate, as you can see and hear in the following video.

I don’t know about you but I’m getting tired of being called illiterate when I don’t know something. To be illiterate means you can’t read and write and, as it turns out, I do both of those things on a daily basis (sometimes even in two languages). Despite my efforts, I’m ignorant about any number of things and those numbers keep increasing day by day. BTW, is there anyone who isn’t having trouble keeping up?

Moving on from my rhetorical question, Hirsh has a point about the tech divide and about the need for discussion. It’s a point that hadn’t occurred to me (although I think he’s taking it in the wrong direction). In fact, this business of a tech divide already exists if you consider that people who live in rural environments and need the latest lifesaving techniques or complex procedures or access to highly specialized experts have to travel to urban centres. I gather that Hirsh feels that this divide isn’t necessarily going to be an urban/rural split so much as an issue of how technically literate you and your doctor are.  That’s intriguing but then his argumentation gets muddled. Confusingly, he seems to be suggesting that the key to the split is your access (not your technical literacy) to artificial intelligence (AI) and algorithms (presumably he’s referring to big data and data analytics). I expect access will come down more to money than technological literacy.

For example, money is likely to be a key issue when you consider his big pitch is for access to IBM’s Watson computer. (My Feb. 28, 2011 posting titled: Engineering, entertainment, IBM’s Watson, and product placement focuses largely on Watson, its winning appearances on the US television game show, Jeopardy, and its subsequent adoption into the University of Maryland’s School of Medicine in a project to bring Watson into the examining room with patients.)

Hirsh’s choice of IBM’s Watson is particularly interesting for a number of reasons. (1) Presumably there are companies other than IBM in this sector. Why do they not rate a mention?  (2) Given the current situation with IBM and the Canadian federal government’s introduction of the Phoenix payroll system (a PeopleSoft product customized by IBM), which is a failure of monumental proportions (a Feb. 23, 2017 article by David Reevely for the Ottawa Citizen and a May 25, 2017 article by Jordan Press for the National Post), there may be a little hesitation, if not downright resistance, to a large scale implementation of any IBM product or service, regardless of where the blame lies. (3) Hirsh notes on the home page for his eponymous website,

I’m presently spending time at the IBM Innovation Space in Toronto Canada, investigating the impact of artificial intelligence and cognitive computing on all sectors and industries.

Yes, it would seem he has some sort of relationship with IBM not referenced in his Viewpoints segment on The National. Also, his description of the relationship isn’t especially illuminating but perhaps it’s this? (from the IBM Innovation Space – Toronto Incubator Application webpage),

Our incubator

The IBM Innovation Space is a Toronto-based incubator that provides startups with a collaborative space to innovate and disrupt the market. Our goal is to provide you with the tools needed to take your idea to the next level, introduce you to the right networks and help you acquire new clients. Our unique approach, specifically around client engagement, positions your company for optimal growth and revenue at an accelerated pace.

OUR SERVICES

IBM Bluemix
IBM Global Entrepreneur
Softlayer – an IBM Company
Watson

Startups partnered with the IBM Innovation Space can receive up to $120,000 in IBM credits at no charge for up to 12 months through the Global Entrepreneurship Program (GEP). These credits can be used in our products such our IBM Bluemix developer platform, Softlayer cloud services, and our world-renowned IBM Watson ‘cognitive thinking’ APIs. We provide you with enterprise grade technology to meet your clients’ needs, large or small.

Collaborative workspace in the heart of Downtown Toronto
Mentorship opportunities available with leading experts
Access to large clients to scale your startup quickly and effectively
Weekly programming ranging from guest speakers to collaborative activities
Help with funding and access to local VCs and investors​

Final comments

While I have some issues with Hirsh’s presentation, I agree that we should be discussing the issues around increased automation of our health care system. A friend of mine’s husband is a doctor and according to him those prescriptions and orders you get when leaving the hospital? They are not made up by a doctor so much as they are spit up by a computer based on the data that the doctors and nurses have supplied.

GIGO, bias, and de-skilling

Leaving aside the wonders that Hirsh describes, there’s an oldish saying in the computer business: garbage in, garbage out (GIGO). At its simplest, who’s going to catch a mistake? (There are lots of mistakes made in hospitals and other health care settings.)

There are also issues around the quality of research. Are all the research papers included in the data used by the algorithms going to be considered equal? There’s more than one case where a piece of problematic research has been accepted uncritically, even though it got through peer review, and subsequently cited many times over. One of the ways to measure impact, i.e., importance, is to track the number of citations. There’s also the matter of where the research is published. A ‘high impact’ journal, such as Nature, Science, or Cell, automatically gives a piece of research a boost.

There are other kinds of bias as well. Increasingly, there’s discussion about algorithms being biased and about how machine learning (AI) can become biased. (See my May 24, 2017 posting: Machine learning programs learn bias, which highlights the issues and cites other FrogHeart posts on that and other related topics.)
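A toy illustration of the mechanism (the data here is entirely hypothetical and has nothing to do with any real system): if the records used for training are skewed, even a perfectly ‘correct’ learning procedure will faithfully reproduce the skew.

```python
from collections import Counter

# Hypothetical training records: 90% of past cases were labelled "benign",
# perhaps because one patient group was rarely tested for anything else.
training_labels = ["benign"] * 90 + ["serious"] * 10

def naive_predict(labels):
    """Predict the label that dominated the training data."""
    return Counter(labels).most_common(1)[0][0]

# The 'model' learns the data's skew and predicts "benign" for every
# new case, including cases from the under-tested group.
print(naive_predict(training_labels))
```

Real machine learning models are far more sophisticated than this frequency count, but the principle is the same: the output can only be as representative as the data going in, which is GIGO wearing a new hat.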

These problems are to a large extent already present. Doctors have biases and research can be wrong and it can take a long time before there are corrections. However, the advent of an automated health diagnosis and treatment system is likely to exacerbate the problems. For example, if you don’t agree with your doctor’s diagnosis or treatment, you can seek other opinions. What happens when your diagnosis and treatment have become data? Will the system give you another opinion? Who will you talk to? The doctor who got an answer from ‘Watson’? Is she or he going to debate Watson? Are you?

This leads to another issue and that’s automated systems getting more credit than they deserve. Futurists such as Hirsh tend to underestimate people and overestimate the positive impact that automation will have. A computer, data analytics, or an AI system is a tool, not a god. You’ll have as much luck petitioning one of those tools as you would Zeus.

The unasked question is how your doctor or other health professional will gain experience and skills if they never have to practice the basic, boring aspects of health care (asking questions for a history, reading medical journals to keep up with the research, etc.), leaving them to the computers instead. There had to be a reason for calling it a medical ‘practice’.

There are definitely going to be advantages to these technological innovations but thoughtful adoption of these practices (pun intended) should be our goal.

Who owns your data?

Another issue which is increasingly making itself felt is ownership of data. Jacob Brogan has written a provocative May 23, 2017 piece for slate.com asking that question about the data Ancestry.com gathers for DNA testing (Note: Links have been removed),

AncestryDNA’s pitch to consumers is simple enough. For $99 (US), the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.

After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting Ancestry.com the broadest possible rights to own and exploit their genetic information.”

Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used Ancestry.com, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” [emphasis mine] Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.

Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.

Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.

I’m glad to see the company’s representatives are open to discussion and, later in the article, you’ll see there’ve already been some changes made. Still, there is no guarantee that the situation won’t again change, for ill this time.

What data do they have and what can they do with it?

It’s not everybody who thinks data collection and data analytics constitute problems. While some people might balk at the thought of their genetic data being traded around and possibly used against them, e.g., while hunting for a job, or turned into a source of revenue, there tends to be a more laissez-faire attitude to other types of data. Andrew MacLeod’s May 24, 2017 article for thetyee.ca highlights political implications and privacy issues (Note: Links have been removed),

After a small Victoria [British Columbia, Canada] company played an outsized role in the Brexit vote, government information and privacy watchdogs in British Columbia and Britain have been consulting each other about the use of social media to target voters based on their personal data.

The U.K.’s information commissioner, Elizabeth Denham [Note: Denham was formerly B.C.’s Office of the Information and Privacy Commissioner], announced last week [May 17, 2017] that she is launching an investigation into “the use of data analytics for political purposes.”

The investigation will look at whether political parties or advocacy groups are gathering personal information from Facebook and other social media and using it to target individuals with messages, Denham said.

B.C.’s Office of the Information and Privacy Commissioner confirmed it has been contacted by Denham.

MacLeod’s March 6, 2017 article for thetyee.ca provides more details about the company’s role (Note: Links have been removed),

The “tiny” and “secretive” British Columbia technology company [AggregateIQ; AIQ] that played a key role in the Brexit referendum was until recently listed as the Canadian office of a much larger firm that has 25 years of experience using behavioural research to shape public opinion around the world.

The larger firm, SCL Group, says it has worked to influence election outcomes in 19 countries. Its associated company in the U.S., Cambridge Analytica, has worked on a wide range of campaigns, including Donald Trump’s presidential bid.

In late February [2017], the Telegraph reported that campaign disclosures showed that Vote Leave campaigners had spent £3.5 million — about C$5.75 million [emphasis mine] — with a company called AggregateIQ, run by CEO Zack Massingham in downtown Victoria.

That was more than the Leave side paid any other company or individual during the campaign and about 40 per cent of its spending ahead of the June referendum that saw Britons narrowly vote to exit the European Union.

According to media reports, Aggregate develops advertising to be used on sites including Facebook, Twitter and YouTube, then targets messages to audiences who are likely to be receptive.

The Telegraph story described Victoria as “provincial” and “picturesque” and AggregateIQ as “secretive” and “low-profile.”

Canadian media also expressed surprise at AggregateIQ’s outsized role in the Brexit vote.

The Globe and Mail’s Paul Waldie wrote “It’s quite a coup for Mr. Massingham, who has only been involved in politics for six years and started AggregateIQ in 2013.”

Victoria Times Colonist columnist Jack Knox wrote “If you have never heard of AIQ, join the club.”

The Victoria company, however, appears to be connected to the much larger SCL Group, which describes itself on its website as “the global leader in data-driven communications.”

In the United States it works through related company Cambridge Analytica and has been involved in elections since 2012. Politico reported in 2015 that the firm was working on Ted Cruz’s presidential primary campaign.

And NBC and other media outlets reported that the Trump campaign paid Cambridge Analytica millions to crunch data on 230 million U.S. adults, using information from loyalty cards, club and gym memberships and charity donations [emphasis mine] to predict how an individual might vote and to shape targeted political messages.

That’s quite a chunk of change and I don’t believe that gym memberships, charity donations, etc. were the only sources of information (in the US, there’s voter registration, credit card information, and more) but the list did raise my eyebrows. It would seem we are under surveillance at all times, even in the gym.

In any event, I hope that Hirsh’s call for discussion is successful and that the discussion includes more critical thinking about the implications of Hirsh’s ‘Brave New World’.

Nanotech Security Corp. stock declining but Cantor Fitzgerald Canada analyst Ralph Garcea gives the stock a buy rating

Linda Rogers has written a Feb. 29, 2016 article for Small Cap Wired about a Vancouver-based company; it’s rather perturbingly titled ‘What’s Propelling Nanotech Security Corp to Decline So Much?‘,

The stock of Nanotech Security Corp (CVE:NTS) is a huge mover today! The stock is down 3.23% or $0.04 after the news [Nanotech Security announced its first quarter fiscal 2016 results in a Feb. 29, 2016 news release], hitting $1.2 per share. … The move comes after 7 months negative chart setup for the $68.48M company. It was reported on Feb, 29 [2016] by Barchart.com. We have $1.06 PT which if reached, will make CVE:NTS worth $8.22 million less.

The Feb. 29, 2016 Nanotech Security news release (summary version) highlights the good news first,

  • Revenue of $1.5 million consistent with the same period last year.  Security Features contributed revenues of $569,000 largely from development contracts and Surveillance delivered $940,000.
  • Gross margin improved to 50% up from 34% in the same period last year.  The improvement reflects the increased mix of higher margin Security Features revenue.
  • Renewed a $1.0 million banknote security feature development contract. The Company successfully renewed the third and final phase of a banknote development contract with a top ten issuing authority to develop a unique Optically Variable Device (“OVD”) security feature for incorporation into future banknotes.  The final phase is expected to generate revenues of approximately $1.0 million.
  • Signed new $3.0 million KolourOptik banknote development contract. The Company signed a new three phase development contract to use the KolourOptik™ nanotechnology to develop a unique OVD security features with another G8 country for incorporation into future banknotes.
  • Strategic meetings with large international banknote issuing authority.  The Company continues to work with a large international banknote issuing authority to deliver a significant volume of colour shifting Optical Thin Film (“OTF”), and partner with our KolourOptik™ technology.  Management continues to devote a significant amount of time and resources in advancing these opportunities.
  • Signed a Memorandum of Understanding (“MOU”) with Hueck Folien, a European manufacturer to supply OTF to the banknote market.  The MOU contemplates an operational agreement to collaborate in the volume production of a colour shifting OTF security feature.  The OTF product is anticipated to initially be used in banknotes as threads and then expand into other markets in the future.

Doug Blakeway, Nanotech’s Chairman and CEO commented, “These two development contracts are material achievements.  Issuing authorities are paying us – something not common in the industry – to design unique banknote security features with our OTF and KolourOptik™ technologies.”  He further added, “Nanotech’s team has scaled the Hueck Folien production facility to where we believe together we can provide the initial volumes demanded by a top-ten issuing authority.  Our relationship with Hueck Folien continues to funnel security feature opportunities to Nanotech.”

The company’s sadder news can be found in their seven-page Feb. 29, 2016 news release (PDF). Their net earnings for the final quarter of 2015 and 2014 were both losses: $931,271 in 2014 versus $1,746,335 in 2015. Still, the company’s gross profit from revenue for the same time periods was 50% in 2015 as opposed to 34% in 2014, despite slightly less revenue in 2015.
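As a rough check on those percentages (a back-of-envelope sketch using the round figures quoted above, not the exact line items from the PDF), gross margin is simply gross profit divided by revenue:

```python
# Back-of-envelope gross profit from the quoted revenue and margins.
# Revenue was roughly $1.5 million in both periods (per the news release).
revenue = 1_500_000
gross_profit_2015 = revenue * 0.50  # 50% gross margin
gross_profit_2014 = revenue * 0.34  # 34% gross margin

print(f"2015: ${gross_profit_2015:,.0f}")  # $750,000
print(f"2014: ${gross_profit_2014:,.0f}")  # $510,000
```

So margins improved markedly even with flat-to-slightly-lower revenue, which squares with the company’s claim about a richer mix of Security Features revenue.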

Assuming I’ve read this information correctly, Nanotech Security does seem to be in a fragile situation but that can change. After all, IBM was in serious trouble for a number of years during the 1990s when there was even talk the company might go bankrupt. As far as I’m aware, IBM is no longer in imminent danger of disappearing from the scene. *ETA March 9, 2016: It seems I used the wrong example if Robert X. Cringely’s March 9, 2016 article ‘What’s happening at IBM? (It’s dying)‘ for Beta News is to be believed.* Getting back to my point, companies do go through cycles and it can be difficult to determine exactly what’s happening at some of the earlier stages.

Certainly, Cantor Fitzgerald Canada analyst Ralph Garcea has an optimistic view of Nanotech Security’s prospects according to a March 1, 2016 article by Nick Waddell for cantech letter,

Nanotech Security (TSXV:NTS) offers a better and more secure solution in multiple market segments that together are worth billions of dollars per year, says Cantor Fitzgerald Canada analyst Ralph Garcea.

This morning [March 1, 2016], Garcea initiated coverage of Nanotech with a “Buy” rating and a one-year price target of $2.50, implying a return of 110 per cent at the time of publication.

Garcea notes that Nanotech has already created solutions for the consumer electronics, brand identification and currency segments. He points out that one of the company’s biggest differentiators is that its solution can be embedded onto almost any material. This is important, he says, because it means that security can be embedded into places it previously could not go, such as directly onto a pharmaceutical pill.

Shares of Nanotech Security closed today [March 1, 2016] up 2.5 per cent to $1.22.
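For anyone who wants to check the arithmetic behind that ‘110 per cent’ figure, here’s a quick sketch; the ~$1.19 share price is my back-calculation from the stated return, not a number given in the article:

```python
# Implied return from an analyst price target: (target - price) / price.
price_target = 2.50
share_price = 1.19  # approximate price at time of publication (assumed)

implied_return = (price_target - share_price) / share_price
print(f"implied return: {implied_return:.0%}")  # about 110%
```

At the $1.22 close mentioned above, the same calculation gives roughly 105 per cent, which is presumably why the article qualifies the figure with ‘at the time of publication’.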

I have written about Nanotech Security frequently and believe the most recent is a Dec. 29, 2015 posting. For those unfamiliar with the company’s technology, it’s based on the structures found on the blue morpho butterfly. The holes in the butterfly’s wings lend it certain optical properties which the company mimics for its anti-counterfeiting technology.

One final comment, I am not endorsing the company or any of the analysis of the company’s financial situation and prospects.

A study in contrasts: innovation and education strategies in US and British Columbia (Canada)

It’s always interesting to contrast two approaches to the same issue, in this case, innovation and education strategies designed to improve the economies of the United States and of British Columbia, a province in Canada.

One of the major differences regarding education in the US and in Canada is that the Canadian federal government, unlike the US federal government, has no jurisdiction over the matter. Education is strictly a provincial responsibility.

I recently wrote a commentary (a Jan. 19, 2016 posting) about the BC government’s Jan. 18, 2016 announcement of its innovation strategy in a special emphasis on the education aspect. Premier Christy Clark focused largely on the notion of embedding courses on computer coding in schools from K-12 (kindergarten through grade 12) as Jonathon Narvey noted in his Jan. 19, 2016 event recap for Betakit,

While many in the tech sector will be focused on the short-term benefits of a quick injection of large capital [a $100M BC Tech Fund as part of a new strategy was announced in Dec. 2015 but details about the new #BCTECH Strategy were not shared until Jan. 18, 2016], the long-term benefits for the local tech sector are being seeded in local schools. More than 600,000 BC students will be getting basic skills in the K-12 curriculum, with coding academies, more work experience electives and partnerships between high school and post-secondary institutions.

Here’s what I had to say in my commentary (from the Jan. 19, 2016 posting),

… the government wants to embed computer coding into the education system for K-12 (kindergarten to grade 12). One determined reporter (Canadian Press if memory serves) attempted to find out how much this would cost. No answer was forthcoming although there were many words expended. Whether this failure was due to ignorance (disturbing!) or a reluctance to share (also disturbing!) was impossible to tell. Another reporter (Georgia Straight) asked about equipment (coding can be taught with pen and paper but hardware is better). … Getting back to the reporter’s question, no answer was forthcoming although the speaker was loquacious.

Another reporter asked if the government had found any jurisdictions doing anything similar regarding computer coding. It seems they did consider other jurisdictions although it was claimed that BC is the first to strike out in this direction. Oddly, no one mentioned Estonia, known in some circles as E-stonia, where the entire school system was online by the late 1990s in an initiative known as the ‘Tiger Leap Foundation’ which also supported computer coding classes in secondary school (there’s more in Tim Mansel’s May 16, 2013 article about Estonia’s then latest initiative to embed computer coding into grade school.) …

Aside from the BC government’s failure to provide details, I am uncomfortable with what I see as an overemphasis on computer coding that suggests a narrow focus on what constitutes a science and technology strategy for education. I find the US approach closer to what I favour although I may be biased since they are building their strategy around nanotechnology education.

The US approach had been announced in dribs and drabs until recently when a Jan. 26, 2016 news item on Nanotechnology Now indicated a broad-based plan for nanotechnology education (and computer coding),

Over the past 15 years, the Federal Government has invested over $22 billion in R&D under the auspices of the National Nanotechnology Initiative (NNI) to understand and control matter at the nanoscale and develop applications that benefit society. As these nanotechnology-enabled applications become a part of everyday life, it is important for students to have a basic understanding of material behavior at the nanoscale, and some states have even incorporated nanotechnology concepts into their K-12 science standards. Furthermore, application of the novel properties that exist at the nanoscale, from gecko-inspired climbing gloves and invisibility cloaks, to water-repellent coatings on clothes or cellphones, can spark students’ excitement about science, technology, engineering, and mathematics (STEM).

An earlier Jan. 25, 2016 White House blog posting by Lisa Friedersdorf and Lloyd Whitman introduced the notion that nanotechnology is viewed as foundational and a springboard for encouraging interest in STEM (science, technology, engineering, and mathematics) careers while outlining several formal and informal education efforts,

The Administration’s updated Strategy for American Innovation, released in October 2015, identifies nanotechnology as one of the emerging “general-purpose technologies”—a technology that, like the steam engine, electricity, and the Internet, will have a pervasive impact on our economy and our society, with the ability to create entirely new industries, create jobs, and increase productivity. To reap these benefits, we must train our Nation’s students for these high-tech jobs of the future. Fortunately, the multidisciplinary nature of nanotechnology and the unique and fascinating phenomena that occur at the nanoscale mean that nanotechnology is a perfect topic to inspire students to pursue careers in science, technology, engineering, and mathematics (STEM).

The Nanotechnology: Super Small Science series [mentioned in my Jan. 21, 2016 posting] is just the latest example of the National Nanotechnology Initiative (NNI)’s efforts to educate and inspire our Nation’s students. Other examples include:

The announcement about computer coding courses being integrated into US K-12 education curricula was made in US President Barack Obama’s 2016 State of the Union speech and covered in a Jan. 30, 2016 article by Jessica Hullinger for Fast Company,

In his final State Of The Union address earlier this month, President Obama called for providing hands-on computer science classes for all students to make them “job ready on day one.” Today, he is unveiling how he plans to do that with his upcoming budget.

The President’s Computer Science for All Initiative seeks to provide $4 billion in funding for states and an additional $100 million directly to school districts in a push to provide access to computer science training in K-12 public schools. The money would go toward things like training teachers, providing instructional materials, and getting kids involved in computer science early in elementary and middle school.

There are more details in Hullinger’s article and in a Jan. 30, 2016 White House blog posting by Megan Smith,

Computer Science for All is the President’s bold new initiative to empower all American students from kindergarten through high school to learn computer science and be equipped with the computational thinking skills they need to be creators in the digital economy, not just consumers, and to be active citizens in our technology-driven world. Our economy is rapidly shifting, and both educators and business leaders are increasingly recognizing that computer science (CS) is a “new basic” skill necessary for economic opportunity and social mobility.

CS for All builds on efforts already being led by parents, teachers, school districts, states, and private sector leaders from across the country.

Nothing says one approach has to be better than the other as there’s usually more than one way to accomplish a set of goals. As well, it’s unfair to expect a provincial government to emulate the federal government of a larger country with more money to spend. I just wish the BC government (a) had shared details such as the budget allotment for their initiative and (b) would hint at a more imaginative, long range view of STEM education.

Going back to Estonia one last time, in addition to the country’s recent introduction of computer coding classes in grade school, it has also embarked on a nanotechnology/nanoscience educational and entrepreneurial programme as noted in my Sept. 30, 2014 posting,

The University of Tartu (Estonia) announced in a Sept. 29, 2014 press release an educational and entrepreneurial programme about nanotechnology/nanoscience for teachers and students,

To bring nanoscience closer to pupils, educational researchers of the University of Tartu decided to implement the European Union LLP Comenius project “Quantum Spin-Off – connecting schools with high-tech research and entrepreneurship”. The objective of the project is to build a kind of a bridge: at one end, pupils can familiarise themselves with modern science, and at the other, experience its application opportunities at high-tech enterprises. “We also wish to inspire these young people to choose a specialisation related to science and technology in the future,” added Lukk [Maarika Lukk, Coordinator of the project].

The pupils can choose between seven topics of nanotechnology: the creation of artificial muscles, microbiological fuel elements, manipulation of nanoparticles, nanoparticles and ionic liquids as oil additives, materials used in regenerative medicine, deposition and 3D-characterisation of atomically designed structures and a topic covered in English, “Artificial robotic fish with EAP elements”.

Learning is based on study modules in the field of nanotechnology. In addition, each team of pupils will read a scientific publication, selected for them by an expert of that particular field. In that way, pupils will develop an understanding of the field and of scientific texts. On the basis of the scientific publication, the pupils prepare their own research project and a business plan suitable for applying the results of the project.

In each field, experts of the University of Tartu will help to understand the topics. Participants will visit a nanotechnology research laboratory and enterprises using nanotechnologies.

The project lasts for two years and it is also implemented in Belgium, Switzerland and Greece.

As they say, time will tell.