The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still do find, that work was/is hard to come by after that age. You may be able to, but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said where science is concerned, it’s all about the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) they got most of their answers. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-70 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
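The Growth Index arithmetic as the report describes it can be sketched in a few lines of Python. This is my reading of the definition, not the report’s own method, and the publication counts below are invented for illustration:

```python
def growth_index(country_2003, country_2014, world_2003, world_2014):
    """Ratio of a country's growth in publication output (2003 to 2014)
    to world growth over the same period. A score above 1.0 means the
    country grew faster than the world average."""
    country_growth = country_2014 / country_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Hypothetical counts: a country tripling its output while world output
# merely doubles yields a GI of 1.5, the way the report reads China's score.
print(growth_index(100, 300, 1000, 2000))  # → 1.5
```

On this reading, Canada’s 0.88 doesn’t mean output shrank; it means output grew more slowly than the world average, which is exactly the effect the report attributes to fast-rising producers such as China and India.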

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and, when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, it didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
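As I read it, the report’s patent flow figure is a normalized ownership balance. The report doesn’t spell out the formula, so the following Python sketch is my guess at it, using invented numbers rather than the report’s data:

```python
def patent_flow(owned_by_domestic_assignees, invented_domestically):
    """Balance of patent ownership versus invention for a country.
    Negative values indicate a deficit: fewer patents owned at home
    than invented at home. (My reading of the report's description;
    the exact normalization isn't stated there.)"""
    return (owned_by_domestic_assignees - invented_domestically) / invented_domestically

# Invented example: owning 74 patents for every 100 invented gives a flow
# of -0.26, i.e., the 26% ownership deficit the report cites for 2014.
print(patent_flow(74, 100))  # → -0.26
```

If that reading is right, the slide from −0.04 to −0.26 means the gap between inventing in Canada and owning in Canada widened more than sixfold over the decade.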

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention but they seem to forget and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….
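For anyone who’d rather not “play computer” in their head, the procedure at the heart of the Watts-Strogatz paper is short enough to sketch in code. Here’s a rough, pure-Python simplification of the published ring-rewiring idea (my own sketch, not Victor’s interactive version and not the paper’s exact implementation):

```python
import random

def small_world_edges(n, k, p, seed=0):
    """Watts-Strogatz-style small-world graph (simplified sketch).
    Start from a ring of n nodes, each joined to its k nearest
    neighbours, then rewire each edge to a random endpoint with
    probability p."""
    rng = random.Random(seed)
    edges = []
    for node in range(n):
        for offset in range(1, k // 2 + 1):       # k/2 neighbours clockwise
            edges.append((node, (node + offset) % n))
    rewired = set()
    for u, v in edges:
        if rng.random() < p:
            # choose a new endpoint, avoiding self-loops and repeat edges
            choices = [w for w in range(n)
                       if w != u and (u, w) not in rewired and (w, u) not in rewired]
            v = rng.choice(choices)
        rewired.add((u, v))
    return rewired
```

The paper’s punchline is visible even in this toy: at small p the ring keeps its high local clustering, but the few rewired “shortcut” edges dramatically shrink the average path length between nodes.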

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, which may be due to the fact that there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp. 132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that they set the foundation for work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on its Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

“Ziegfeld Girl” Hedy Lamarr, 1941, MGM. Image courtesy mptvimages.com [downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian), Note: Links have been removed,

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave [it] to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)
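For the technically curious, the core idea behind Lamarr and Antheil’s patent, frequency hopping, is simple enough to sketch in a few lines of code. This is my own illustrative toy (the shared-seed scheme and channel numbers are invented for the example), not a rendering of their actual patent, which famously used 88 frequencies, the number of keys on a piano:

```python
import random

# Toy frequency-hopping sketch: sender and receiver share a secret seed,
# so both generate the same pseudo-random hop sequence. An eavesdropper
# without the seed sees only one channel at a time.
CHANNELS = 88  # a nod to the 88 'frequencies' in the 1942 patent

def hop_sequence(seed, hops):
    """Deterministic pseudo-random channel schedule from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

def transmit(message, seed):
    """Pair each symbol of the message with the channel it is sent on."""
    return list(zip(hop_sequence(seed, len(message)), message))

def receive(bursts, seed):
    """Reassemble the message by following the same hop schedule,
    keeping only bursts heard on the expected channel."""
    expected = hop_sequence(seed, len(bursts))
    return "".join(sym for (chan, sym), exp in zip(bursts, expected)
                   if chan == exp)

bursts = transmit("TORPEDO", seed=42)
print(receive(bursts, seed=42))  # TORPEDO
```

In the 1942 design, the synchronization was mechanical, via matching piano-roll-like tapes in the transmitter and the torpedo, rather than a software seed; the principle of hopping in lockstep is the same.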

Lamarr’s work as an inventor was largely overlooked until the 1990s, when the technology community turned her into a ‘cultish’ favourite; from there her reputation grew and acknowledgement increased, culminating in Rhodes’ book and Alexandra Dean’s documentary, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them, we’re not always appreciated for our brains. Not even by people who are supposed to know better such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

•The State of Science and Technology in Canada (2006) [emphasis mine]
•Innovation and Business Strategy: Why Canada Falls Short (2009)
•Catalyzing Canada’s Digital Economy (2010)
•Informing Research Choices: Indicators and Judgment (2012)
•The State of Science and Technology in Canada (2012) [emphasis mine]
•The State of Industrial R&D in Canada (2013) [emphasis mine]
•Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that line of inquiry with an assessment of industrial research, which was previously separate. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit, from the cover onwards, what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment: I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would affect the size of the expert panel and of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii-xiv Print; pp. 15-16 PDF]

The proportion of women to men among the reviewers jumped up to about 36% (4 of 11 reviewers), and there are two reviewers from the Maritime provinces. As usual, the reviewers external to Canada were from Europe, although this time they came from Dutch institutions rather than UK or German ones. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a  December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is conducted. (I have more about this from an April 5, 2018 article by James Somers for The Atlantic, but more about that later.) If I understand rightly, such considerations may not have been possible given the nature of the questions the government posed when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics of scientific accomplishment and progress (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) are not the best and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, and as a means of addressing some of the problems with metrics, the panel conducted a survey, as has the panel for the third assessment (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
•a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
•a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]

My reading: there’s a problem and we’re not going to try and fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’ .

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii; pp. 19-20]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. [emphasis mine] [p. xviii Print; p. 20 PDF]

For anyone unfamiliar with Canadian politics, 2009–2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in their research output is predictable.
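To make the report’s “more than double” line concrete: R&D intensity is simply gross domestic expenditure on R&D (GERD) divided by GDP. Here’s a back-of-the-envelope sketch using round, hypothetical figures of my own (not the report’s data), just to show the arithmetic:

```python
def rd_intensity(gerd_billions, gdp_billions):
    """R&D intensity = gross domestic expenditure on R&D / GDP."""
    return gerd_billions / gdp_billions

# Hypothetical round numbers for illustration only.
gdp = 2000.0              # a $2-trillion economy
current_spend = 32.0      # $32B of R&D spending -> 1.6% intensity
target_intensity = 0.035  # roughly where 'leading countries' sit

needed_spend = target_intensity * gdp
print(rd_intensity(current_spend, gdp))  # 0.016, i.e. 1.6%
print(needed_spend)                      # 70.0, i.e. $70B needed
print(needed_spend / current_spend)      # ~2.2x current spending
```

With numbers in that ballpark, closing the gap really does mean more than doubling total expenditures, which gives a sense of the scale of the problem the panel is describing.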

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking, and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have or could have found some way to measure those efforts, but I wonder if there could have been some acknowledgement in the text of the report. I’m thinking of a couple of sentences about the confounding nature of scientific research, where areas ignored for years or even decades suddenly become important (e.g., machine learning) but are not measured as part of scientific progress until after they are universally recognized.
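For readers wondering how a measure like the ARC rank in the excerpt works: as I understand it (this is my simplified sketch, not the report’s exact methodology), ARC stands for ‘average of relative citations’, where each paper’s citation count is divided by the world average for papers in the same field, and those ratios are then averaged; a score above 1 means above the world average. The field names and counts below are invented toy data:

```python
def arc(papers, world_avg_by_field):
    """Average of Relative Citations: each paper's citations are
    normalized by the world-average citation count for its field,
    then the ratios are averaged. ARC > 1.0 means above world average."""
    ratios = [cites / world_avg_by_field[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

# Invented toy data: three papers across two fields.
world_avg = {"nanotech": 10.0, "ai": 20.0}
papers = [("nanotech", 5), ("nanotech", 15), ("ai", 30)]
print(arc(papers, world_avg))  # (0.5 + 1.5 + 1.5) / 3 = 1.1666...
```

The real indicator also normalizes by publication year and uses far larger samples, but the normalize-then-average idea is the part worth keeping in mind when reading the rankings.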

Still, point taken about our diminishing returns in ’emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally) and let’s also note that one of the great physics texts, De rerum natura by Lucretius, was a multi-volume poem (from Lucretius’ Wikipedia entry; Note: Links have been removed),

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population, and according to PISA scores our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores, from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme in which 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little bit like Hedy Lamarr’s problem except instead of being judged for our looks and having our inventions dismissed, we’re being judged for not applying ourselves to physical sciences and engineering and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here, open up a branch and that’s how we get our jobs as we don’t have all that many large companies here. Increasingly, multinationals are locating R&D shops here.

While our small to medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term, serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries, from a June 14, 2017 article by Beckie Smith for The PIE News,

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also Huawei Canada, arm of the Chinese multinational telecom company, which has enjoyed success in Canada and continues to invest here (from a Jan. 19, 2018 article about security concerns by Matthew Braga for the Canadian Broadcasting Corporation (CBC) online news),

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and say it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of a cultural issue whereby start-ups are designed for sale as soon as possible, and this isn’t new. Years ago, an accounting firm published a series of historical maps (the last one I saw was in 2005) of technology companies in the Vancouver region; technology companies had been developed and sold to large foreign companies from the 19th century to the present day.

Part 2

Nanosafety Cluster newsletter—excerpts from the Spring 2016 issue

The European Commission’s NanoSafety Cluster Newsletter (no. 7) Spring 2016 edition is some 50 pages long and provides a roundup of activities and forthcoming events. Here are a few excerpts,

“Closer to the Market” Roadmap (CTTM) now finalised

Hot off the press! The Cluster’s “Closer to the Market” Roadmap (CTTM) is a multi-dimensional, stepwise plan targeting a framework to deliver safe nano-enabled products to the market. After some years of discussions, several consultations of a huge number of experts in the nanosafety field, conferences at which the issue of market implementation of nanotechnologies was talked about, writing hours/days, and finally two public consultation rounds, the CTTM is now finalized.

As stated in the Executive Summary: “Nano-products and nano-enabled applications need a clear and easy-to-follow human and environmental safety framework for the development along the innovation chain from initial idea to market and beyond that facilitates navigation through the complex regulatory and approval processes under which different product categories fall.”

Download it here, and get involved in its implementation through the Cluster!
Authors: Andreas Falk*, Christa Schimpel, Andrea Haase, Benoît Hazebrouck, Carlos Fito López, Adriele Prina-Mello, Kai Savolainen, Adriënne Sips, Jesús M. Lopez de Ipiña, Iseult Lynch, Costas Charitidis, Germ Visser

NanoDefine hosts Synergy Workshop with NSC projects

NanoDefine organised the 2nd Nanosafety Cluster (NSC) Synergy Workshop at the Netherlands House for Education and Research in Brussels on 2nd February 2016. The aim was to identify overlaps and synergies existing between different projects that could develop into outstanding cooperation opportunities.

One central issue was the building of a common ontology and a European framework for data management and analysis, as planned within eNanoMapper, to facilitate a closer interdisciplinary collaboration between  NSC projects and to better address the need for proper data storage, analysis and sharing (Open Access).

Unexpectedly, there’s a Canadian connection,

Discovering protocols for nanoparticles: the soils case
NanoFASE WP7 & NanoSafety Cluster WG3 Exposure

In NanoFASE, of course, we focus on the exposure to nanomaterials. Having consistent and meaningful protocols to characterize the fate of nanomaterials in different environments is therefore of great interest to us. Soils and sediments are in this respect very cumbersome. Also in the case of conventional chemicals has the development of protocols for fate description in terrestrial systems been a long route.

The special considerations of nanomaterials make this job even harder. For instance, how does one handle the fact that the interaction between soils and nanoparticles is always out of equilibrium? How does one distinguish between the nanoparticles that are still mobile and those that are attached to soil?

In the case of conventional chemicals, a single measurement of a filtered soil suspension often suffices to find the mobile fraction, as long as one is sure that equilibrium has been attained. Equilibrium never occurs in the case of nanoparticles, and the distinction between attached/suspended particles is analytically less clear to do.

Current activity in NanoFASE is focusing at finding protocols to characterize this interaction. Not only does the protocol have to provide meaningful parameters that can be used, e.g. in modelling, but also the method itself should be fast and cheap enough so that a lot of data can be collected in a reasonable amount of time. NanoFASE is in a good position to do this, because of its focus on fate and because of the many international collaborators.

For instance, the Swedish Agricultural University (Uppsala) is collaborating with McGill University (Montreal, Canada [emphasis mine]), an advisory partner to NanoFASE, in developing the OECD [Organization for Economic Cooperation and Development] protocol for column tests (OECD test nr 312: “Leaching in soil columns”). The effort is led by Yasir Sultan from Environment Canada and by Karlheinz Weinfurtner from the Fraunhofer institute in Germany. Initial results show the transport of nanomaterials in soil columns to be very limited.

The OECD protocol therefore does not often lead to measurable breakthrough curves that can be modelled to provide information about nanomaterial mobility in soils and most likely requires adaptations to account for the relatively low mobility of typical pristine nanomaterials.

OECD 312 prescribes to use 40 cm columns, which is most likely too long to show a breakthrough in the case of nanoparticles. Testing in NanoFASE will therefore focus on working with shorter columns and also investigating the effect of the flow speed.
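The column-length point can be sketched with classic colloid filtration theory, where the fraction of particles emerging from a column falls off exponentially with column length. This is not from the newsletter; the attachment rate and pore-water velocity below are values I’ve assumed purely for illustration.

```python
import math

def breakthrough_fraction(column_length_m, attachment_rate_per_s, pore_velocity_m_per_s):
    """Steady-state C/C0 from colloid filtration theory: C/C0 = exp(-k * L / v).

    A higher attachment rate k or a longer column L means fewer
    particles make it out the bottom of the column.
    """
    return math.exp(-attachment_rate_per_s * column_length_m / pore_velocity_m_per_s)

k = 1e-4  # assumed first-order attachment rate (1/s), illustrative only
v = 1e-5  # assumed pore-water velocity (m/s), illustrative only

# Compare the OECD 312 column (40 cm) with a shorter test column (10 cm)
for L in (0.40, 0.10):
    print(f"L = {L:.2f} m -> C/C0 = {breakthrough_fraction(L, k, v):.3f}")
# L = 0.40 m -> C/C0 = 0.018
# L = 0.10 m -> C/C0 = 0.368
```

With these (assumed) numbers, only about 2% of particles exit the 40 cm column versus roughly 37% for a 10 cm one, which is consistent with the newsletter’s rationale for testing shorter columns.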

The progress and the results of this action will be reported on our website (www.nanofase.eu).

Now, wastewater,

ENM [engineered nanomaterial] Transformation in and Release from Managed Waste Streams (WP5): The NanoFASE pilot Wastewater Treatment Plant is up and running and producing sludge – soon we’ll be dosing with nanoparticles to test “real world” aging.

WP5 led by Ralf Kaegi of EAWAG [Swiss Federal Institute of Aquatic Science and Technology] (Switzerland) will establish transformation and release rates of ENM during their passage through different reactors. We are focusing on wastewater treatment plants (WWTPs), solid waste and dedicated sewage sludge incinerators as well as landfills (see figure below). Additionally, lab-scale experiments using pristine and well characterized materials, representing the realistic fate relevant forms at each stage, will allow us to obtain a mechanistic understanding of the transformation processes in waste treatment reactors. Our experimental results will feed directly into the development of a mathematical model describing the transformation and transfer of ENMs through the investigated reactors.

I’m including this since I’ve been following the ‘silver nanoparticle story’ for some time,

NanoMILE publication update: NanoMILE on the air and on the cover

Dramatic differences in behavior of nano-silver during the initial wash cycle and for its further dissolution/transformation potential over time depending on detergent composition and form.

In an effort to better relate nanomaterial aging procedures to those which they are most likely to undergo during the life cycle of nano-enhanced products, in this paper we describe the various transformations which are possible when exposing Ag engineered nanoparticles (ENPs) to a suite of commercially available washing detergents (Figure 1). While Ag ENP transformation and washing of textiles has received considerable attention in recent years, our study is novel in that we (1) used several commercially available detergents allowing us to estimate the various changes possible in individual homes and commercial washing settings; (2) we have continued method development of state of the art nanometrology techniques, including single particle ICP-MS, for the detection and characterization of ENPs in complex media; and (3) we were able to provide novel additions to the knowledge base of the environmental nanotechnology research community both in terms of the analytical methods (e.g. the first time ENP aggregates have been definitively analyzed via single particle ICP-MS) and broadening the scope of “real world” conditions that should be considered when understanding AgENP through their life cycle.

Our findings, which were recently published in Environmental Science & Technology (2015, 49: 9665), indicate that the washing detergent chemistry causes dramatic differences in ENP behavior during the initial wash cycle and has ramifications for the dissolution/transformation potential of the Ag ENPs over time (see Figure 2). The use of silver as an antimicrobial treatment in textiles continues to garner considerable attention. Last year we published a manuscript in ACS Nano that considered how various silver treatments to textiles (conventional and nano) both release nano-sized material after the wash cycle with similar chemical characteristics. That study essentially conveyed that multiple silver treatments would become more similar through the product life cycle. Our newest work expands this by investigating one silver ENP under various washing conditions thereby creating more varied silver products as an end result.

Fascinating stuff if you’ve been following the issues around nanotechnology and safety.

Towards the end of the newsletter on pp. 46-48, they list opportunities for partnerships, collaboration, and research posts and they list websites where you can check out job opportunities. Good Luck!

Green chemistry and zinc oxide nanoparticles from Iran (plus some unhappy scoop about Elsevier and access)

It’s been a while since I’ve featured any research from Iran, partly because I find the information disappointingly scant. While the Dec. 22, 2013 news item on Nanowerk doesn’t provide quite as much detail as I’d like, it does shine a light on an aspect of Iranian nanotechnology research that I haven’t previously encountered: green chemistry (Note: A link has been removed),

Researchers used a simple and eco-friendly method to produce homogenous zinc oxide (ZnO) nanoparticles with various applications in medical industries due to their photocatalytic and antibacterial properties (“Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth”).

Zinc oxide nanoparticles have numerous applications, among which mention can be made of photocatalytic issues, piezoelectric devices, synthesis of pigments, chemical sensors, drug carriers in targeted drug delivery, and the production of cosmetics such as sunscreen lotions.

The Dec. 22, 2013 Iran Nanotechnology Initiative Council (INIC) news release, which originated the news item, provides a bit more detail (Note: Links have been removed),

By using natural materials found in the geography of Iran and through sol-gel technique, the researchers synthesized zinc oxide nanoparticles in various sizes. To this end, they used zinc nitrate hexahydrate and gum tragacanth obtained from the Northern parts of Khorassan Razavi Province as the zinc-providing source and the agent to control the size of particles in aqueous solution, respectively.

Among the most important characteristics of the synthesis method, mention can be made of its simplicity, the use of cost-effective materials, conservation of green chemistry principals to prevent the use of hazardous materials to human safety and environment, production of nanoparticles in homogeneous size and with high efficiency, and most important of all, the use of native materials that are only found in Iran and its introduction to the world.

Here’s a link to and a citation for the paper,

Sol–gel synthesis, characterization, and neurotoxicity effect of zinc oxide nanoparticles using gum tragacanth by Majid Darroudi, Zahra Sabouri, Reza Kazemi Oskuee, Ali Khorsand Zak, Hadi Kargar, and Mohamad Hasnul Naim Abd Hamid. Ceramics International, Volume 39, Issue 8, December 2013, Pages 9195–9199

There’s a bit more technical information in the paper’s abstract,

The use of plant extract in the synthesis of nanomaterials can be a cost effective and eco-friendly approach. In this work we report the “green” and biosynthesis of zinc oxide nanoparticles (ZnO-NPs) using gum tragacanth. Spherical ZnO-NPs were synthesized at different calcination temperatures. Transmission electron microscopy (TEM) imaging showed the formation most of nanoparticles in the size range of below 50 nm. The powder X-ray diffraction (PXRD) analysis revealed wurtzite hexagonal ZnO with preferential orientation in (101) reflection plane. In vitro cytotoxicity studies on neuro2A cells showed a dose dependent toxicity with non-toxic effect of concentration below 2 µg/mL. The synthesized ZnO-NPs using gum tragacanth were found to be comparable to those obtained from conventional reduction methods using hazardous polymers or surfactants and this method can be an excellent alternative for the synthesis of ZnO-NPs using biomaterials.

I was not able to find the DOI (digital object identifier) and this paper is behind a paywall.

Elsevier and access

On a final note, Elsevier, the company that publishes Ceramics International and many other journals, is arousing some ire with what appear to be its latest policies concerning access, according to a Dec. 20, 2013 posting by Mike Masnick for Techdirt (Note: Links have been removed),

We just recently wrote about the terrible anti-science/anti-knowledge/anti-learning decision by publishing giant Elsevier to demand that Academia.edu take down copies of journal articles that were submitted directly by the authors, as Elsevier wished to lock all that knowledge (much of it taxpayer funded) in its ridiculously expensive journals. Mike Taylor now alerts us that Elsevier is actually going even further in its war on access to knowledge. Some might argue that Elsevier was okay in going after a “central repository” like Academia.edu, but at least it wasn’t going directly after academics who were posting pdfs of their own research on their own websites. While some more enlightened publishers explicitly allow this, many (including Elsevier) technically do not allow it, but have always looked the other way when authors post their own papers.

That’s now changed. As Taylor highlights, the University of Calgary sent a letter to its staff saying that a company “representing” Elsevier, was demanding that they take down all such articles on the University’s network.

While I do feature the topic of open access and other issues with intellectual property from time to time, Masnick and his colleagues are more intimately familiar with these issues (albeit firmly committed to open access), should you choose to read his Dec. 20, 2013 posting in its entirety.

Memories, science, archiving, and authenticity

This is going to be one of my more freewheeling excursions into archiving and memory. I’ll start with a movement afoot in the US government to give citizens open access to science research, move on to a network dedicated to archiving nanoscience- and nanotechnology-oriented information, examine the notion of authenticity in regard to the Tiananmen Square incident of June 4, 1989, and finish with the Council of Canadian Academies’ Expert Panel on Memory Institutions and the Digital Revolution.

In his June 4, 2013 posting on the Pasco Phronesis blog, David Bruggeman features information and an overview of the US Office of Science and Technology Policy’s efforts to introduce open access to science research for citizens (Note: Links have been removed),

Back in February, the Office of Science and Technology Policy (OSTP) issued a memorandum to federal science agencies on public access for research results.  Federal agencies with over $100 million in research funding have until August 22 to submit their access plans to OSTP.  This access includes research publications, metadata on those publications, and underlying research data (in a digital format).

A collection of academic publishers, including the Association of American Publishers and the organization formerly known as the American Association for the Advancement of Science (publisher of Science), has offered a proposal for a publishing industry repository for public access to federally funded research that they publish.

David provides a somewhat caustic perspective on the publishers’ proposal, while Jocelyn Kaiser’s June 4, 2013 article for ScienceInsider examines it in more detail (Note: Links have been removed),

Organized in part by the Association of American Publishers (AAP), which represents many commercial and nonprofit journals, the group calls its project the Clearinghouse for the Open Research of the United States (CHORUS). In a fact sheet that AAP gave to reporters, the publishers describe CHORUS as a “framework” that would “provide a full solution for agencies to comply with the OSTP memo.”

As a starting point, the publishers have begun to index papers by the federal grant numbers that supported the work. That index, called FundRef, debuted in beta form last week. You can search by agency and get a list of papers linked to the journal’s own websites through digital object identifiers (DOIs), widely used ID codes for individual papers. The pilot project involved just a few agencies and publishers, but many more will soon join FundRef, says Fred Dylla, executive director of the American Institute of Physics. (AAAS, which publishes ScienceInsider, is among them and has also signed on to CHORUS.)

The next step is to make the full-text papers freely available after agencies decide on embargo dates, Dylla says. (The OSTP memo suggests 12 months but says that this may need to be adjusted for some fields and journals.) Eventually, the full CHORUS project will also allow searches of the full-text articles. “We will make the corpus available for anybody’s search tool,” says Dylla, who adds that search agreements will be similar to those that publishers already have with Google Scholar and Microsoft Academic Search.

I couldn’t find any mention in Kaiser’s article as to how long the materials would be available. Is this supposed to be an archive as well as a repository? Regardless, I found the beta project, FundRef, a little confusing. The link from the ScienceInsider article takes you to this May 28, 2013 news release,

FundRef, the funder identification service from CrossRef [crossref.org], is now available for publishers to contribute funding data and for retrieval of that information. FundRef is the result of collaboration between funding agencies and publishers that correlates grants and other funding with the scholarly output of that support.

Publishers participating in FundRef add funding data to the bibliographic metadata they already provide to CrossRef for reference linking. FundRef data includes the name of the funder and a grant or award number. Manuscript tracking systems can incorporate a taxonomy of 4000 global funder names, which includes alternate names, aliases, and abbreviations enabling authors to choose from a standard list of funding names. Then the tagged funding data will travel through publishers’ production systems to be stored at CrossRef.
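To make the tagged funding data a little more concrete, here is a minimal sketch of what a funder entry attached to an article’s bibliographic metadata might look like. The field names (“funder”, “DOI”, “award”) follow the present-day Crossref REST API and are my assumption, not something described in the news release; the article DOI and grant number are hypothetical.

```python
# A FundRef-style funding record as it might appear in Crossref
# bibliographic metadata. Field names follow the present-day Crossref
# REST API ("funder", "DOI", "award") -- an assumption, since the 2013
# beta schema isn't shown in the news release.

article_metadata = {
    "DOI": "10.1000/example.2013.001",    # hypothetical article DOI
    "title": ["Example of federally funded research"],
    "funder": [
        {
            "name": "National Science Foundation",
            "DOI": "10.13039/100000001",   # Open Funder Registry ID for the NSF
            "award": ["CHE-1234567"],      # hypothetical grant number
        }
    ],
}

def awards_by_funder(metadata):
    """Collect grant/award numbers keyed by funder name."""
    return {
        funder["name"]: funder.get("award", [])
        for funder in metadata.get("funder", [])
    }

print(awards_by_funder(article_metadata))
# {'National Science Foundation': ['CHE-1234567']}
```

The taxonomy of standard funder names the news release mentions corresponds to the registry the funder IDs come from; tagging records this way is what lets a search tool pull up every paper tied to a given agency or grant.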

I was hoping that clicking on the FundRef button would take me to a database that I could test or tour. At this point, I wouldn’t have described the project as being at the beta stage (from a user’s perspective) as they are still building it and gathering data. However, there is lots of information on the FundRef webpage including an Additional Resources section featuring a webinar,

Attend an Introduction to FundRef Webinar – Thursday, June 6, 2013 at 11:00 am EDT

You do need to sign up for the webinar. Happily, it is open to international participants as well as US participants.

Getting back to my question on whether or not this effort is also an archive of sorts, there is a project closer to home (nanotechnologywise, anyway) that touches on these issues from an unexpected perspective, from the Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) About webpage,

The Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) is an IMLS-funded [Institute of Museum and Library Services] project to investigate the development of a disciplinary repository for the Ethical, Legal and Social Implications (ELSI) of nanoscience and emerging technologies research. NETS partners will explore future integration of digital services for researchers studying ethical, legal, and social implications associated with the development of nanotechnology and other emerging technologies.

NETS will investigate digital resources to advance the collection, dissemination, and preservation of this body of research, addressing the challenge of marshaling resources, academic collaborators, appropriately skilled data managers, and digital repository services for large-scale, multi-institutional and disciplinary research projects. The central activity of this project involves a spring 2013 workshop that will gather key researchers in the field and digital librarians together to plan the development of a disciplinary repository of data, curricula, and methodological tools.

Societal dimensions research investigating the impacts of new and emerging technologies in nanoscience is among the largest research programs of its kind in the United States, with an explicit mission to communicate outcomes and insights to the public. By 2015, scholars across the country affiliated with this program will have spent ten years collecting qualitative and quantitative data and developing analytic and methodological tools for examining the human dimensions of nanotechnology. The sharing of data and research tools in this field will foster a new kind of social science inquiry and ensure that the outcomes of research reach public audiences through multiple pathways.

NETS will be holding a stakeholders workshop June 27 – 28, 2013 (invite only), from the workshop description webpage,

What is the value of creating a dedicated Nano ELSI repository?
The benefits of having these data in a shared infrastructure are: the centralization of research and ease of discovery; uniformity of access; standardization of metadata and the description of projects; and facilitation of compliance with funder requirements for data management going forward. Additional benefits of this project will be the expansion of data curation capabilities for data repositories into the nanotechnology domain, and research into the development of disciplinary repositories, for which very little literature exists.

What would a dedicated Nano ELSI repository contain?
Potential materials that need to be curated are both qualitative and quantitative in nature, including:

  • survey instruments, data, and analyses
  • interview transcriptions and analyses
  • images or multimedia
  • reports
  • research papers, books, and their supplemental data
  • curricular materials

What will the Stakeholder Workshop accomplish?
The Stakeholder Workshop aims to bring together the key researchers and digital librarians to draft a detailed project plan for the implementation of a dedicated Nano ELSI repository. The Workshop will be used as a venue to discuss questions such as:

  • How can a repository extend research in this area?
  • What is the best way to collect all the research in this area?
  • What tools would users envision using with this resource?
  • Who should maintain and staff a repository like this?
  • How much would a repository like this cost?
  • How long will it take to implement?

What is expected of Workshop participants?
The workshop will bring together key researchers and digital librarians to discuss the requirements for a dedicated Nano ELSI repository. To inform that discussion, some participants will be requested to present on their current or past research projects and collaborations. In addition, workshop participants will be enlisted to contribute to the draft of the final project report and make recommendations for the implementation plan.

While my proposal did not get accepted (full disclosure), I do look forward to hearing more about the repository although I notice there’s no mention made of archiving the materials.

The importance of repositories and archives was brought home to me when I came across a June 4, 2013 article by Glyn Moody for Techdirt about the Tiananmen Square incident and subtle and unsubtle ways of censoring access to information,

Today is June 4th, a day pretty much like any other day in most parts of the world. But in China, June 4th has a unique significance because of the events that took place in Tiananmen Square on that day in 1989.

Moody recounts some of the ways in which people have attempted to commemorate the day online while evading the authorities’ censorship efforts. Do check out the article for the inside scoop on why ‘Big Yellow Duck’ is a censored term. One of the more subtle censorship efforts provides some chills (from the Moody article),

… according to this article in the Wall Street Journal, it looks like the Chinese authorities are trying out a new tactic for handling this dangerous topic:

On Friday, a China Real Time search for “Tiananmen Incident” did not return the customary message from Sina informing the user that search results could not be displayed due to “relevant laws, regulations and policies.” Instead the search returned results about a separate Tiananmen incident that occurred on Tomb Sweeping Day in 1976, when Beijing residents flooded the area to protest after they were prevented from mourning the recently deceased Premiere [sic] Zhou Enlai.

This business of eliminating and substituting a traumatic and disturbing historical event with something less contentious reminded me both of the saying ‘history is written by the victors’ and of Luciana Duranti and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; that’s me). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.
Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

Question

How might memory institutions embrace the opportunities and challenges posed by the changing ways in which Canadians are communicating and working in the digital age?

Sub-questions

With the use of new communication technologies, what types of records are being created and how are decisions being documented?
How is information being safeguarded for usefulness in the immediate to mid-term across technologies considering the major changes that are occurring?
How are memory institutions addressing issues posed by new technologies regarding their traditional roles in assigning value, respecting rights, and assuring authenticity and reliability?
How can memory institutions remain relevant as a trusted source of continuing information by taking advantage of the collaborative opportunities presented by new social media?

From the Expert Panel webpage (go there for all the links), here’s a complete listing of the experts,

Expert Panel on Memory Institutions and the Digital Revolution

Dr. Doug Owram, FRSC, Chair
Professor and Former Deputy Vice-Chancellor and Principal, University of British Columbia Okanagan Campus (Kelowna, BC)

Sebastian Chan     Director of Digital and Emerging Media, Smithsonian Cooper-Hewitt National Design Museum (New York, NY)

C. Colleen Cook     Trenholme Dean of Libraries, McGill University (Montréal, QC)

Luciana Duranti   Chair and Professor of Archival Studies, the School of Library, Archival and Information Studies at the University of British Columbia (Vancouver, BC)

Lesley Ellen Harris     Copyright Lawyer; Consultant, Author, and Educator; Owner, Copyrightlaws.com (Washington, D.C.)

Kate Hennessy     Assistant Professor, Simon Fraser University, School of Interactive Arts and Technology (Surrey, BC)

Kevin Kee     Associate Vice-President Research (Social Sciences and Humanities) and Canada Research Chair in Digital Humanities, Brock University (St. Catharines, ON)

Slavko Manojlovich     Associate University Librarian (Information Technology), Memorial University of Newfoundland (St. John’s, NL)

David Nostbakken     President/CEO of Nostbakken and Nostbakken, Inc. (N + N); Instructor of Strategic Communication and Social Entrepreneurship at the School of Journalism and Communication, Carleton University (Ottawa, ON)

George Oates     Art Director, Stamen Design (San Francisco, CA)

Seamus Ross     Dean and Professor, iSchool, University of Toronto (Toronto, ON)

Bill Waiser, SOM, FRSC     Professor of History and A.S. Morton Distinguished Research Chair, University of Saskatchewan (Saskatoon, SK)

Barry Wellman, FRSC     S.D. Clark Professor, Department of Sociology, University of Toronto (Toronto, ON)

I notice they have a lawyer whose specialty is copyright, Lesley Ellen Harris. I did check out her website, copyrightlaws.com and could not find anything that hinted at any strong opinions on the topic. She seems to feel that copyright is a good thing but how far she’d like to take this is a mystery to me based on the blog postings I viewed.

I’ve also noticed that this panel has 13 people, four of whom are women, which equals a little more (June 5, 2013, 1:35 pm PDT, I substituted the word ‘more’ for the word ‘less’; my apologies for the arithmetic error) than 25% representation. That’s a surprising percentage given how heavily the fields of library and archival studies are weighted towards women.

I have meandered somewhat but my key points are these:

  • How are we going to keep information available? It’s all very well to have a repository but how long will the data be kept in the repository and where does it go afterwards?
  • There’s a bias, certainly with the NETS workshop and, likely, the CCA Expert Panel on Memory Institutions and the Digital Revolution, toward institutions as the source for information that’s worth keeping for however long or short a time that should be. What about individual efforts? e.g., Don’t Leave Canada Behind; FrogHeart; Techdirt; The Last Word on Nothing, and many other blogs?
  • The online redirection of Tiananmen Square incident queries is chilling but I’ve often wondered what would happen if someone wanted to remove ‘objectionable material’ from an e-book, e.g. To Kill a Mockingbird. A new reader wouldn’t notice the loss if the material had been excised in a subtle or professional fashion.

As for how this has an impact on science, it’s been claimed that Isaac Newton attempted to excise Robert Hooke from history (my Jan. 19, 2012 posting). Whether it’s true or not, there is remarkably little about Robert Hooke despite his accomplishments, and his relative obscurity is a reminder that we must always take care that we retain our memories.

ETA June 6, 2013: David Bruggeman added some more information and links about CHORUS in his June 5, 2013 post (On The Novelty Of Corporate-Government Partnership In STEM Education),

Before I dive into today’s post, a brief word about CHORUS. Thanks to commenter Joe Kraus for pointing me to this Inside Higher Ed post, which includes a link to the fact sheet CHORUS organizers distributed to reporters. While there are additional details, there are still not many details to sink one’s teeth in. And I remain surprised at the relative lack of attention the announcement has received. On a related note, nobody who’s been following open access should be surprised by Michael Eisen’s reaction to CHORUS.

I encourage you to check out David’s post as he provides some information about a new STEM (science, technology, engineering, mathematics) collaboration between the US National Science Foundation and companies such as GE and Intel.

Free the nano—stop patenting publicly funded research

Joshua Pearce, a professor at Michigan Technological University, has written a commentary on patents and nanotechnology for Nature magazine in which he claims the current patent regimes strangle rather than encourage innovation. From the freely available article, Physics: Make nanotechnology research open-source by Joshua Pearce in Nature 491, 519–521 (22 November 2012) doi:10.1038/491519a (Note: I have removed footnotes),

Any innovator wishing to work on or sell products based on single-walled carbon nanotubes in the United States must wade through more than 1,600 US patents that mention them. He or she must obtain a fistful of licences just to use this tubular form of naturally occurring graphite rolled from a one-atom-thick sheet. This is because many patents lay broad claims: one nanotube example covers “a composition of matter comprising at least about 99% by weight of single-wall carbon molecules”. Tens of others make overlapping claims.

Patent thickets occur in other high-tech fields, but the consequences for nanotechnology are dire because of the potential power and immaturity of the field. Advances are being stifled at birth because downstream innovation almost always infringes some early broad patents. By contrast, computing, lasers and software grew up without overzealous patenting at the outset.

Nanotechnology is big business. According to a 2011 report by technology consultants Cientifica, governments around the world have invested more than US$65 billion in nanotechnology in the past 11 years [my July 15, 2011 posting features an interview with Tim Harper, Cientifica CEO and founder, about the then newly released report]. The sector contributed more than $250 billion to the global economy in 2009 and is expected to reach $2.4 trillion a year by 2015, according to business analysts Lux Research. Since 2001, the United States has invested $18 billion in the National Nanotechnology Initiative; the 2013 US federal budget will add $1.8 billion more.

This investment is spurring intense patent filing by industry and academia. The number of nanotechnology patent applications to the US Patent and Trademark Office (USPTO) is rising each year and is projected to exceed 4,000 in 2012. Anyone who discovers a new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof, may obtain a patent that prevents others from using that development unless they have the patent owner’s permission.

Pearce makes some convincing points (Note: I have removed a footnote),

Examples of patents that cover basic components include one owned by the multinational chip manufacturer Intel, which covers a method for making almost any nanostructure with a diameter less than 50 nm; another, held by nanotechnology company NanoSys of Palo Alto, California, covers composites consisting of a matrix and any form of nanostructure. And Rice University in Houston, Texas, has a patent covering “composition of matter comprising at least about 99% by weight of fullerene nanotubes”.

The vast majority of publicly announced IP licence agreements are now exclusive, meaning that only a single person or entity may use the technology or any other technology dependent on it. This cripples competition and technological development, because all other would-be innovators are shut out of the market. Exclusive licence agreements for building-block patents can restrict entire swathes of future innovation.

Pearce’s argument for open source,

This IP rush assumes that a financial incentive is necessary to innovate, and that without the market exclusivity (monopoly) offered by a patent, development of commercially viable products will be hampered. But there is another way, as decades of innovation for free and open-source software show. Large Internet-based companies such as Google and Facebook use this type of software. Others, such as Red Hat, make more than $1 billion a year from selling services for products that they give away for free, like Red Hat’s version of the computer operating system Linux.

An open-source model would leave nanotechnology companies free to use the best tools, materials and devices available. Costs would be cut because most licence fees would no longer be necessary. Without the shelter of an IP monopoly, innovation would be a necessity for a company to survive. Openness reduces the barrier for small, nimble entities entering the market.

John Timmer in his Nov. 23, 2012 article for Wired.co.uk expresses both support and criticism,

Some of Pearce’s solutions are perfectly reasonable. He argues that the National Science Foundation adopt the NIH model of making all research it funds open access after a one-year time limit. But he also calls for an end of patents derived from any publicly funded research: “Congress should alter the Bayh-Dole Act to exclude private IP lockdown of publicly funded innovations.” There are certainly some indications that Bayh-Dole hasn’t fostered as much innovation as it might (Pearce notes that his own institution brings in 100 times more money as grants than it does from licensing patents derived from past grants), but what he’s calling for is not so much a reform of Bayh-Dole as its elimination.

Pearce wants changes in patenting to extend well beyond the academic world, too. He argues that the USPTO should put a moratorium on patents for “nanotechnology-related fundamental science, materials, and concepts.” As we described above, the difference between a process innovation and the fundamental properties resulting in nanomaterial is a very difficult thing to define. The USPTO has struggled to manage far simpler distinctions; it’s unrealistic to expect it to manage a moratorium effectively.

While Pearce points to the 3-D printing sector admiringly, there are some issues even there, as per Mike Masnick’s Nov. 21, 2012 posting on Techdirt.com (Note: I have removed links),

We’ve been pointing out for a while that one of the reasons why advancements in 3D printing have been relatively slow is because of patents holding back the market. However, a bunch of key patents have started expiring, leading to new opportunities. One, in particular, that has received a fair bit of attention was the Formlabs 3D printer, which raised nearly $3 million on Kickstarter earlier this year. It got a ton of well-deserved attention for being one of the first “low end” (sub ~$3,000) 3D printers with very impressive quality levels.

Part of the reason the company said it could offer such a high quality printer at a such a low price, relative to competitors, was because some of the key patents had expired, allowing it to build key components without having to pay astronomical licensing fees. A company called 3D Systems, however, claims that Formlabs missed one patent. It holds US Patent 5,597,520 on a “Simultaneous multiple layer curing in stereolithography.” While I find it ridiculous that 3D Systems is going legal, rather than competing in the marketplace, it’s entirely possible that the patent is valid. It just highlights how the system holds back competition that drives important innovation, though.

3D Systems claims that Formlabs “took deliberate acts to avoid learning” about 3D Systems’ live patents. The lawsuit claims that Formlabs looked only for expired patents — which seems like a very odd claim. Why would they only seek expired patents? …

I strongly suggest reading both Pearce’s and Timmer’s articles as they provide some very interesting perspectives on nanotechnology IP (intellectual property) and open access issues. I also recommend Mike Masnick’s piece for exposure to a rather odd but unfortunately not uncommon legal suit designed to limit competition in a relatively new technology (3-D printers).

Australians weigh in on Open Access publication proposal in UK

Misguided is the word used in the June 20, 2012 editorial for The Conversation by Jason Norrie to describe the UK proposal to adopt ‘open access’ publishing, from physorg.com,

The British government has enlisted the services of Wikipedia founder Jimmy Wales in a bid to support open access publishing for all scholarly work by UK researchers, regardless of whether it is also published in a subscription-only journal.

The cost of doing so would range from £50 to £60 million a year, according to an independent study commissioned by the government. Professor Dame Janet Finch, who led the study, said that “in the longer term, the future lies with open access publishing.” Her report says that “the principle that the results of research that has been publicly funded should be freely accessible in the public domain is a compelling one, and fundamentally unanswerable.”

Norrie’s June 20, 2012 editorial can also be found on The Conversation website where he includes responses from academics to the proposal,

Emeritus Professor Colin Steele, former librarian of the Australian National University, said that although the report was supportive of the principles of open access, it proposed a strategy that was unnecessarily costly and could not be duplicated in Australia.

“The way they’ve gone about it almost totally focuses, presumably due to publisher pressure, on the gold model of open access,” he said. “As a result of that, the amount of money needed to carry out the transition – the money needed for article processing charges – is very large. It’s not surprising that the publishers have come out in favour of the report, because it will guarantee they retain their profits.

“It certainly wouldn’t work in Australia because there simply isn’t that amount of research council funding available.

Stevan Harnad, a Professor in the Department of Psychology at Université du Québec à Montréal, said the report had scrubbed the green model from the UK policy agenda and replaced it with a “vague, slow evolution toward gold open access publishing, at the publishers’ pace and price. The result would be very little open access, very slowly, and at a high price … taken out of already scarce UK research funds, instead of the rapid and cost-free open access growth vouchsafed by green open access mandates from funders and universities.”

For anyone not familiar with the differences between the ‘green’ and ‘gold’ models, the Wikipedia essay on Open Access offers a definition (Note: I have removed links and footnotes),

OA can be provided in two ways

  • Green OA Self Archiving – authors publish in any journal and then self-archive a version of the article for free public use in their institutional repository, in a central repository (such as PubMed Central), or on some other OA website. What is deposited is the peer-reviewed postprint – either the author’s refereed, revised final draft or the publisher’s version of record. Green OA journal publishers endorse immediate OA self-archiving by their authors. OA self-archiving was first formally proposed in 1994 by Stevan Harnad [emphasis mine]. However, self-archiving was already being done by computer scientists in their local FTP archives in the ’80s, later harvested into Citeseer. High-energy physicists have been self-archiving centrally in arXiv since 1991.
  • Gold OA Publishing – authors publish in an open access journal that provides immediate OA to all of its articles on the publisher’s website. (Hybrid open access journals provide Gold OA only for those individual articles for which their authors (or their author’s institution or funder) pay an OA publishing fee.) Examples of OA publishers are BioMed Central and the Public Library of Science.

I guess that Wikipedia entry explains why Harnad is quoted in Norrie’s editorial.

While money is one of the most discussed issues in the ‘open access publication’ debate, I am beginning to wonder why there isn’t more mention of the individual career-building, institutional science reputation-building, and national science reputation-building that the current publication model helps make possible.

I have posted on this topic previously, the May 28, 2012 posting is my most comprehensive (huge) take on the subject.

As for The Conversation, it’s my first encounter with this very interesting Australian experiment in communicating research to the public, from the Who We Are page,

The Conversation is an independent source of analysis, commentary and news from the university and research sector — written by acknowledged experts and delivered directly to the public. Our team of professional editors work with more than 3,100 registered academics and researchers to make this wealth of knowledge and expertise accessible to all.

We aim to be a site you can trust. All published work will carry attribution of the authors’ expertise and, where appropriate, will disclose any potential conflicts of interest, and sources of funding. Where errors or misrepresentations occur, we will correct these promptly.

Sincere thanks go to our Founding Partners who gave initial funding support: CSIRO, Monash University, University of Melbourne, University of Technology Sydney and University of Western Australia.

Our initial content partners include those institutions, Strategic Partner RMIT University and a growing list of member institutions. More than 180 institutions contribute content, including Australia’s research-intensive, Group of Eight universities.

We are based in Melbourne, Australia, and wholly owned by The Conversation Media Trust, a not-for-profit company.

The copyright notice at the bottom of The Conversation’s web pages suggests it was founded in 2010. It certainly seems to have been embraced by Australian academics and other interested parties as per the Home page,

The Conversation is an independent source of analysis, commentary and news from the university and research sector viewed by 350,000 readers each month. Our team of professional editors work with more than 2,900 registered academics and researchers from 200 institutions.

I wonder if there’s any chance we’ll see something like this here in Canada?

Opening it all up (open software, Nature, and Naked Science)

I’m coming back to the ‘open access’ well this week as there’ve been a few new developments since my massive May 28, 2012 posting on the topic.

A June 5, 2012 posting by Glyn Moody at the Techdirt website brought yet another aspect of ‘open access’ to my attention,

Computers need software, and some of that software will be specially written or adapted from existing code to meet the particular needs of the scientists’ work. This makes computer software a vital component of the scientific process. It also means that being able to check that code for errors is as important as being able to check the rest of the experiment’s methodology. And yet very rarely can other scientists do that, because the code employed is not made available.

That’s right, there’s open access scientific software.

Meanwhile over at the Guardian newspaper website, Philip Campbell, Nature journal’s editor-in-chief, notes that open access to research is inevitable in a June 8, 2012 article by Alok Jha,

Open access to scientific research articles will “happen in the long run”, according to the editor-in-chief of Nature, one of the world’s premier scientific journals.

Philip Campbell said that the experience for readers and researchers of having research freely available is “very compelling”. But other academic publishers said that any large-scale transition to making research freely available had to take into account the value and investments they added to the scientific process.

“My personal belief is that that’s what’s going to happen in the long run,” said Campbell. However, he added that the case for open access was stronger for some disciplines, such as climate research, than others.

Campbell was speaking at a briefing hosted by the Science Media Centre. Interestingly, ScienceOnline Vancouver’s upcoming (June 12, 2012, 6:30 pm mingling starts, 7-9 pm PDT for the panel discussion) meeting about open access (titled, Naked Science; Excuse me: your science is showing) features a speaker from Canada’s Science Media Centre (from the event page),

  1. Heather Piwowar is a postdoc with Duke University and the Dept of Zoology at UBC. She’s a researcher on the NSF-funded DataONE and Dryad projects, studying data. Specifically, how, when, and why do scientists publicly archive the datasets they collect? When do they reuse the data of others? What related policies and tools would help facilitate more efficient and effective use of data resources? Heather is also a co-founder of total-impact, a web application that reveals traditional and non-traditional impact metrics of scholarly articles, datasets, software, slides, and blog posts.
  2. Heather Morrison is a Vancouver-based, well-known international open access advocate and practitioner of open scholarship, through her blogs The Imaginary Journal of Poetic Economics http://poeticeconomics.blogspot.com and her dissertation-blog http://pages.cmns.sfu.ca/heather-morrison/
  3. Lesley Evans Ogden is a freelance science journalist and the Vancouver media officer for the Science Media Centre of Canada. In the capacity of freelance journalist, she is a contributing science writer at Natural History magazine, and has written for a variety of publications including YES Mag, Scientific American (online), The Guardian, Canadian Running, and Bioscience. She has a PhD in wildlife ecology, and spent more than a decade slogging through mud and climbing mountains to study the breeding and winter ecology of migratory birds. She is also an alumni of the Science Communications program at the Banff Centre. (She will be speaking in the capacity of freelance journalist).
  4. Joy Kirchner is the Scholarly Communications Coordinator at University of British Columbia where she heads the University’s developing Copyright office in addition to the Scholarly Communications office based in the Library. Her role involves coordinating the University’s copyright education services, identifying recommended and sustainable service models to support scholarly communication activities on the campus and coordinating formalized discussion and education of these issues with faculty, students, research and publishing constituencies on the UBC campus. Joy has also been instrumental in working with faculty to host their open access journals through the Library’s open access journal hosting program; she was involved in the implementation and content recruitment of the Library’s open access institutional repository, and she was instrumental in establishing the Provost’s Scholarly Communications Steering Committee and associated working groups where she sits as a key member of the Committee looking into an open access position at UBC amongst other things. Joy is also chair of UBC’s Copyright Advisory Committee and working groups. She is also a faculty member with the Association of Research Libraries (ARL) / Association of College and Research Libraries (ACRL) Institute for Scholarly Communication, she assists with the coordination and program development of ACRL’s much lauded Scholarly Communications Road Show program, she is a Visiting Program Officer with ACRL in support of their scholarly communications programs, and she is a Fellow with ARL’s Research Library Leadership Fellows executive program (RLLF). Previous positions include Librarian for Collections, Licensing & Digital Scholarship (UBC), Electronic Resources Coordinator (Columbia Univ.), Medical & Allied Health Librarian and Science & Engineering Librarian. She holds a BA and an MLIS from the University of British Columbia.

I’m starting to get the impression that there is a concerted communications effort taking place. Between this listing and the one in my May 28, 2012 posting, there are just too many articles and events occurring for it to be purely chance.

Special issue on nanotechnology and regulations from EJLT

The European Journal of Law and Technology (EJLT) is featuring 15 articles on the theme of nanotechnology and regulations in a special issue. From the Dec. 12, 2011 news item on Nanowerk,

The issue contains 15 contributions that canvass some of the most pressing philosophical, ethical and regulatory questions currently being debated around the world in relation to nanotechnologies and more specifically nanomaterials.

The EJLT is an open access journal so you can view these articles or any others that may interest you. Here’s the Table of Contents for the special issue,

Table of Contents

Editorial

Editorial
Philip Leith, Abdul Paliwala

Introduction to the Special Issue

Why the elephant in the room appears to be more than a nano-sized challenge
Joel D’Silva, Diana Meagan Bowman

Nano Technology Special Edition

Decision Ethics and Emergent Technologies: The Case of Nanotechnology
David Berube
Justice or Beneficence: What Regulatory Virtue for Nano-Governance?
Hailemichael Teshome Demissie
Regulating Nanoparticles: the Problem of Uncertainty
Roger Strand, Kamilla Lein Kjølberg
Complexities of labelling of nanoproducts on the consumer markets
Harald Throne-Holst, Arie Rip
Soft regulation and responsible nanotechnological development in the European Union: Regulating occupational health and safety in the Netherlands
Bärbel Dorbeck-Jung
Nanomaterials and the European Water Framework Directive
Steffen Foss Hansen, Anders Baun, Catherine Ganzleben
The Proposed Ban on Certain Nanomaterials for Electrical and Electronic Equipment in Europe and Its Global Security Implications: A Search for an Alternative Regulatory Approach
Hitoshi Nasu, Thomas Faunce
The Regulation of Nano-particles under the European Biocidal Products Directive: Challenges for Effective Civil Society Participation
Michael T Reinsborough, Gavin Sullivan
Value chains as a linking-pin framework for exploring governance and innovation in nano-involved sectors: illustrated for nanotechnologies and the food packaging sector
Douglas Robinson
Food and nano-food within the Chinese regulatory system: no need to have overregulation. Less physicality can produce more power.
Margherita Poto
Regulation and Governance of Nanotechnology in China: Regulatory Challenges and Effectiveness
Darryl Stuart Jarvis, Noah Richmond
How Resilient is India to Nanotechnology Risks? Examining Current Developments, Capacities and an Approach for Effective Risk Governance and Regulation
Shilpanjali Deshpande Sarma
Toward Safe and Sustainable Nanomaterials: Chemical Information Call-in to Manufacturers of Nanomaterials by California as a Case Study
William Ryan, Sho Takatori, Thomas Booze, Hai-Yong Kang
De minimis curat lex: New Zealand law and the challenge of the very small
Colin Gavaghan, Jennifer Moore

I notice that the last article was authored by the same people who produced a review of New Zealand’s nanotechnology regulatory framework in Sept. 2011. The Science Media Centre of New Zealand noted this in a Sept. 6, 2011 article about the review,

The “Review of the Adequacy of New Zealand’s Regulatory Systems to Manage the Possible Impacts of Manufactured Nanomaterials” by Colin Gavaghan (in Dunedin) and Jennifer Moore (in Wellington) lists three possible levels of regulatory gaps, but points to a lack of consensus on just what constitutes a “gap”.

The authors note where such nanomaterials are not covered by existing regulation, and where these regulations are triggered by the presence of the nanomaterials. They focus on first and second generation products and say that as nanomaterials evolve, more work will need to be done on regulation.

“Some reviews of this topic have suggested that subsequent generations of nanotechnologies are likely to present a much more significant challenge to existing regulatory structures,” the authors say.

The EJLT special issue looks like it has a pretty interesting range of articles on nanotechnology regulation in various jurisdictions. I’m thrilled to see a couple of articles on China, one on India, and, of course, the piece on New Zealand, as I don’t often find material on those countries. Thank you EJLT!

Beethoven inspires Open Research

“Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose.” That was written in 1945, proving “plus ça change; plus c’est la même chose” (the more things change, the more they stay the same). It’s taken from an essay, As We May Think, by Vannevar Bush for the July 1945 issue of The Atlantic magazine. Here’s the editor’s introduction,

As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man’s physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson’s famous address of 1837 on “The American Scholar,” this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge. —THE EDITOR

These days with the open data and open access initiatives, there seems to be a new interest in making science more accessible and this time it’s coming from the grassroots. Over at Techdirt, Glyn Moody in his Nov. 18, 2011 posting highlights a new project for making science research accessible. It’s called ‘Beethoven’s open repository’ and here’s more about the project from the organizers (from the Transforming the way we publish research webpage),

We want to change the way research is communicated, both amongst researchers, as well as with health practitioners, patients and the wider public. Inspired by Beethoven, we want to build a research version of his repository and try to tackle the question: What if the public scientific record would be updated directly as research proceeds?

Every year, over 1 million scholarly articles are being published in around 25,000 journals. No researcher – let alone the public – can keep track of all the relevant information any more, not even in small fields. To make things worse, only about 20% of these articles are freely accessible in one way or another, but the majority is not. Our project aims at providing a technically feasible solution: open-access articles that evolve along with the topic they cover.

This would allow researchers, research funders and the public to stay up to date with research in their fields of interest. It would save researchers time because when they write their results up, they could make use of the context provided by the existing articles, and outreach would be built in from the beginning, rather than being perceived as an extra burden that comes after a traditional publication. It would also save funders time because monitoring research progress would amount to checking the change logs of the respective articles. It would also save patients time, especially when a disease makes their clocks tick faster. Last but not least, it would open the doors for science as a spectator sport, and allow for enhanced interaction between citizen science and more traditional approaches to research.

Daniel Mietchen is one of the moving forces (organizers) for this effort. From the About Me page,

A biophysicist by training, I have used a number of techniques from the physical sciences to investigate biological systems and their evolution. My focus so far was on the application of Magnetic Resonance Imaging techniques to fossils, embryonic development and cold tolerance but I did some excursions into music perception, measuring brain structure, or vocal production in elephants as well.

For the prototyping of Beethoven’s open repository of research, I have teamed up with brain scientist M. Fabiana Kubke (@kubke) of the University of Auckland, and we invite everyone to join us in shaping the project.

The organizers are raising funds for ‘Beethoven’s open repository’ at RocketHub. They have also posted a video there, which explains the reference to Beethoven as well as other details about their project.

I have featured the issue of access to research previously in my Nov. 3, 2011 posting, Disrupting scientific research. There is also a US federal government public consultation mentioned in my Nov. 7, 2011 posting. The consultation is open to comments until January 2012.

I wish Mietchen and Kubke the best of luck as they raise funds for ‘Beethoven’s open repository’.