
The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40 as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still find, that work is hard to come by after that age. You may be able to get past Hedy but I’m not sure I can.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but given how they snuck in a few items that, technically speaking, were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said where science is concerned, it all comes down to the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating and I would have liked some information on where (geographically and by area of specialization) most of the answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
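The GI arithmetic in the excerpt above can be sketched in a few lines. A caveat: the report doesn’t spell out the exact formula, so the normalization below (a country’s growth ratio divided by the world’s growth ratio over the same period) is my assumption, and the publication counts are made-up numbers chosen only so the ratios echo the report’s figures (China ~1.50, Canada ~0.88):

```python
# Hypothetical sketch of the Growth Index (GI) described in Table 3.2.
# Assumption: GI = (country output in 2014 / country output in 2003),
# normalized by the same ratio computed for the whole world.
def growth_index(country_2003, country_2014, world_2003, world_2014):
    country_growth = country_2014 / country_2003  # country's own growth ratio
    world_growth = world_2014 / world_2003        # world average growth ratio
    return country_growth / world_growth          # > 1.0 means above-average growth

# Illustrative (invented) publication counts:
print(round(growth_index(100, 300, 1000, 2000), 2))  # 1.5  (China-like)
print(round(growth_index(100, 176, 1000, 2000), 2))  # 0.88 (Canada-like)
```

The point of the normalization is that a GI below 1.0 (Canada’s 0.88, the US’s 0.80) doesn’t mean output shrank; it means output grew more slowly than the world average, which is exactly how the rise of China and India drags down the rank of countries whose output is still growing.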

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental during the Cuban Missile Crisis, for those familiar with that bit of history), GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the Navy developed the technology during the patent’s term without telling either her or her partner so, of course, it didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
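The ‘patent flow’ numbers in the excerpt above can be sketched as simple arithmetic. A caveat: the report doesn’t give the formula, so treating flow as the relative difference between patents owned by Canadian assignees and patents invented in Canada is my reading of its description, and the patent counts below are invented for illustration:

```python
# Hypothetical sketch of the 'patent flow' metric in Figure 4.7.
# Assumption: flow = (owned - invented) / invented, so a flow of -0.26
# means 26% fewer patents are owned by Canadian assignees than are
# invented by Canadian inventors.
def patent_flow(owned_by_canadian_assignees, invented_in_canada):
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Illustrative (invented) counts matching the report's 2014 and 2003 figures:
print(round(patent_flow(74, 100), 2))  # -0.26 (2014-style deficit)
print(round(patent_flow(96, 100), 2))  # -0.04 (2003-style deficit)
```

Read this way, the metric says nothing about how many patents Canada produces, only about who ends up owning them, which is why a single large transaction like the Nortel patent sale can move it so sharply.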

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention but they seem to forget and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to
undertaking science, which can be powerful catalysts of innovation. This includes
the development of open collaborative networks among research performers, such
as the private sector, and the wider distribution of research that usually results when
restrictions on use are removed. Such an approach triggers faster translation of ideas
among research partners and moves the boundaries of pre-competitive research to
later, applied stages of research. With research results freely accessible, companies
can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June
2017, Genome Canada, the Ontario government, and pharmaceutical companies
invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada,
2017). Formed in 2004, the SGC is at the forefront of the Canadian open science
movement and has contributed to many key research advancements towards new
treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and
Hospital has also embraced the principles of open science. Since 2016, it has been
sharing its research results with the scientific community without restriction, with
the objective of expanding “the impact of brain research and accelerat[ing] the
discovery of ground-breaking therapies to treat patients suffering from a wide range
of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress; open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.” Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm. Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as last time, perhaps because there’s less information; I believe the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provinces) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [the Maritime provinces plus Newfoundland and Labrador] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp. 132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game and that’s good for all of us. You don’t get better at anything unless you work with and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that their signature work set the foundation for later work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on its Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada,” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about the Hedy Lamarr and Corel Draw suit. John Lettice’s December 2, 1998 article in The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

Saving modern art with 3D-printed artwork

I first wrote about the NanoRestART project in an April 4, 2016 post highlighting work which focuses on a problem unique to modern and contemporary art, the rapid deterioration of the plastics and synthetic materials used to create the art and the lack of conservation techniques for preserving those materials. A Dec. 22, 2016 news item on phys.org provides an update on the project,

Many contemporary artworks are endangered due to their extremely fast degradation processes. NANORESTART—a project developing nanomaterials to protect and restore this cultural heritage—has created a 3-D printed artwork with a view to testing restoration methods.

The 3D printed sculpture was designed by engineer-artist Tom Lomax – a UK-based sculptor and painter specialised in 3D-printed colour sculpture. Drawing inspiration from the aesthetic of early 20th century artworks, the sculpture was made using state-of-the-art 3D printing processes and can be downloaded for free. [I believe the downloadable files are available at the end of the paper in Heritage Science in the section titled Additional files, just prior to the References {see below for citation and link to the paper}.]

Fig. 1
Images of the RP artwork “Out of the Cauldron” designed by Tom Lomax produced with the most common RP Technologies: (1) stereolithography (SLA®) (2) polyjet (3) 3D printing (3DP) (4) selective laser sintering (SLS). Before (above) and after (below) photodegradation
Courtesy: Heritage Science

A Dec. 21, 2016 Cordis press release, which originated the news item, provides more information about the artist and his 3D printed sculpture,

‘As an artist I previously had little idea of the conservation threat facing contemporary art – preferring to leave these issues for conservators and focus on the creative process. But while working on this project with UCL [University College of London] I began to realise that artists themselves have a crucial role to play,’ Lomax explains.

The structure has been printed using the most common rapid prototyping (RP) technologies, which are gaining popularity among designers and artists. It will be a key tool for the project team to test how these structures degrade and come up with solutions to better preserve them.

As Caroline Coon, researcher at the UCL Institute for Sustainable Heritage, notes, ‘Art is being transformed by fast-changing new technologies and it is therefore vital to preempt conservation issues, rather than react to them, if we are to preserve our best contemporary works for future generations. This research project will benefit both artists and academics alike – but ultimately it is in the best interests of the public that art and science combine to preserve works.’

The NANORESTART team subjected the artwork to accelerated testing, discovering that many 3D-printing technologies use materials that degrade particularly rapidly. It is particularly true for polymers, whose only-recently achieved cultural heritage status also means that conservation experience is almost inexistent.

Preserving or not: an intricate question for artists

The experiments were part of a UCL paper entitled ‘Preserving Rapid Prototypes: A Review’, published in late November in Heritage Science. In this review, Caroline Coon and her team have critically assessed the most commonly used technologies used to tackle the degradation of materials, noting that ‘to conserve RP artworks it is necessary to have an understanding of the process of creation, the different technologies involved, the materials used as well as their chemical and mechanical properties.’

Besides technical concerns, the paper also voices those of artists, in particular the importance of the original artefact and the debate around the appropriateness of preventing the degradation process of artworks. Whilst digital conservation of these artworks would prevent degradation and allow designs to be printed on-demand, some artists argue that the original artefact is actually the one with artistic value as it references a specific time and place. On the other hand, some artists actually embrace and accept the natural degradation of their art as part of its charm.

With two more years to go before its completion, NANORESTART will undoubtedly bring valuable results, resources and reflexions to both conservators and artists. The nanomaterials it aims to develop will bring the EU at the forefront of a conservation market estimated at some EUR 5 billion per year.

Here’s a link to and a citation for the paper,

Preserving rapid prototypes: a review by Carolien Coon, Boris Pretzel, Tom Lomax, and Matija Strlič. Heritage Science 2016 4:40 DOI: 10.1186/s40494-016-0097-y Published: 22 November 2016

©  The Author(s) 2016

This paper is open access.

Mimicking rain and sun to test plastic for nanoparticle release

One of Canada’s nanotechnology experts once informed a House of Commons Committee on Health that nanoparticles encased in plastic (he was talking about cell phones) weren’t likely to harm you except in two circumstances (when workers were using them in the manufacturing process and when the product was being disposed of). Apparently, under some circumstances, that isn’t true any more. From a Sept. 30, 2016 news item on Nanowerk,

If the 1967 film “The Graduate” were remade today, Mr. McGuire’s famous advice to young Benjamin Braddock would probably be updated to “Plastics … with nanoparticles.” These days, the mechanical, electrical and durability properties of polymers—the class of materials that includes plastics—are often enhanced by adding miniature particles (smaller than 100 nanometers or billionths of a meter) made of elements such as silicon or silver. But could those nanoparticles be released into the environment after the polymers are exposed to years of sun and water—and if so, what might be the health and ecological consequences?

A Sept. 30, 2016 US National Institute of Standards and Technology (NIST) news release, which originated the news item, describes how the research was conducted and its results (Note: Links have been removed),

In a recently published paper (link is external), researchers from the National Institute of Standards and Technology (NIST) describe how they subjected a commercial nanoparticle-infused coating to NIST-developed methods for accelerating the effects of weathering from ultraviolet (UV) radiation and simulated washings of rainwater. Their results indicate that humidity and exposure time are contributing factors for nanoparticle release, findings that may be useful in designing future studies to determine potential impacts.

In their recent experiment, the researchers exposed multiple samples of a commercially available polyurethane coating containing silicon dioxide nanoparticles to intense UV radiation for 100 days inside the NIST SPHERE (Simulated Photodegradation via High-Energy Radiant Exposure), a hollow, 2-meter (7-foot) diameter black aluminum chamber lined with highly UV reflective material that bears a casual resemblance to the Death Star in the film “Star Wars.” For this study, one day in the SPHERE was equivalent to 10 to 15 days outdoors. All samples were weathered at a constant temperature of 50 degrees Celsius (122 degrees Fahrenheit) with one group done in extremely dry conditions (approximately 0 percent humidity) and the other in humid conditions (75 percent humidity).
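To put the acceleration factor in perspective, here is a quick back-of-the-envelope calculation. This is only a sketch based on the 10-to-15-day equivalence quoted in the news release, not an official NIST conversion:

```python
# Back-of-the-envelope: outdoor-equivalent exposure for the SPHERE run.
# Assumes the 10x-15x acceleration factor quoted in the NIST news release.
def outdoor_equivalent_days(sphere_days, factor_low=10, factor_high=15):
    """Return the (low, high) range of outdoor days equivalent to SPHERE days."""
    return sphere_days * factor_low, sphere_days * factor_high

low, high = outdoor_equivalent_days(100)
print(f"100 SPHERE days ~ {low}-{high} outdoor days "
      f"({low / 365:.1f}-{high / 365:.1f} years)")
# -> 100 SPHERE days ~ 1000-1500 outdoor days (2.7-4.1 years)
```

In other words, the 100-day run stands in for roughly three to four years of outdoor weathering.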

To determine if any nanoparticles were released from the polymer coating during UV exposure, the researchers used a technique they created and dubbed “NIST simulated rain.” Filtered water was converted into tiny droplets, sprayed under pressure onto the individual samples, and then the runoff—with any loose nanoparticles—was collected in a bottle. This procedure was conducted at the beginning of the UV exposure, at every two weeks during the weathering run and at the end. All of the runoff fluids were then analyzed by NIST chemists for the presence of silicon and in what amounts. Additionally, the weathered coatings were examined with atomic force microscopy (AFM) and scanning electron microscopy (SEM) to reveal surface changes resulting from UV exposure.
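The sampling schedule described in the release (a runoff collection at the start, every two weeks during the run, and at the end) can be sketched as follows. The exact NIST schedule isn’t spelled out, so treat this as illustrative:

```python
# Illustrative sketch of the "NIST simulated rain" sampling schedule:
# a runoff sample at day 0, every two weeks during the 100-day run,
# and a final sample at the end. Not an official NIST protocol.
def sampling_days(run_length=100, interval=14):
    """Return the list of days on which runoff samples are collected."""
    days = list(range(0, run_length, interval))
    if days[-1] != run_length:
        days.append(run_length)  # final collection at end of run
    return days

print(sampling_days())
# -> [0, 14, 28, 42, 56, 70, 84, 98, 100]
```

That works out to nine runoff samples per coating over the 100-day exposure, each analyzed for silicon content.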

Both sets of coating samples—those weathered in very low humidity and the others in very humid conditions—degraded but released only small amounts of nanoparticles. The researchers found that more silicon was recovered from the samples weathered in humid conditions and that nanoparticle release increased as the UV exposure time increased. Microscopic examination showed that deformations in the coating surface became more numerous with longer exposure time, and that nanoparticles left behind after the coating degraded often bound together in clusters.

“These data, and the data from future experiments of this type, are valuable for developing computer models to predict the long-term release of nanoparticles from commercial coatings used outdoors, and in turn, help manufacturers, regulatory officials and others assess any health and environmental impacts from them,” said NIST research chemist Deborah Jacobs, lead author on the study published in the Journal of Coatings Technology and Research (link is external).

Here’s a link to and a citation for the paper,

Surface degradation and nanoparticle release of a commercial nanosilica/polyurethane coating under UV exposure by Deborah S. Jacobs, Sin-Ru Huang, Yu-Lun Cheng, Savelas A. Rabb, Justin M. Gorham, Peter J. Krommenhoek, Lee L. Yu, Tinh Nguyen, Lipiin Sung. J Coat Technol Res (2016) 13: 735. doi:10.1007/s11998-016-9796-2 First published online 13 July 2016

This paper is behind a paywall.

For anyone interested in the details about the House of Commons nano story I told at the start of this post, here’s the June 23, 2010 posting where I summarized the hearing on nanotechnology. If you scroll down about 50% of the way, you’ll find Dr. Nils Petersen’s (then director of Canada’s National Institute of Nanotechnology) comments about nanoparticles being encased. The topic had been nanosunscreens and he was describing the conditions under which he believed nanoparticles could be dangerous.

Interfaces are the device—organic semiconductors and their edges

Researchers at the University of British Columbia (UBC; Canada) have announced a startling revelation according to an Oct. 6, 2015 news item on ScienceDaily,

As the push for thinner and faster electronics continues, a new finding by University of British Columbia scientists could help inform the design of the next generation of cheaper, more efficient devices.

The work, published this week in Nature Communications, details how electronic properties at the edges of organic molecular systems differ from the rest of the material.

An Oct. 6, 2015 UBC news release on EurekAlert, which originated the news item, expands on the theme,

Organic [as in carbon-based] materials–plastics–are of great interest for use in solar panels, light emitting diodes and transistors. They’re low-cost, light, and take less energy to produce than silicon. Interfaces–where one type of material meets another–play a key role in the functionality of all these devices.

“We found that the polarization-induced energy level shifts from the edge of these materials to the interior are significant, and can’t be neglected when designing components,” says UBC PhD researcher Katherine Cochrane, lead author of the paper.

“While we were expecting some differences, we were surprised by the size of the effect and that it occurred on the scale of a single molecule,” adds UBC researcher Sarah Burke, an expert on nanoscale electronic and optoelectronic materials and author on the paper.

The researchers looked at ‘nano-islands’ of clustered organic molecules. The molecules were deposited on a silver crystal coated with an ultra-thin layer of salt only two atoms deep. The salt is an insulator and prevents electrons in the organic molecules from interacting with those in the silver–the researchers wanted to isolate the interactions of the molecules.

Not only did the molecules at the edge of the nano-islands have very different properties than in the middle, the variation in properties depended on the position and orientation of other molecules nearby.

The researchers, part of UBC’s Quantum Matter Institute, used a simple, analytical model to explain the differences which can be extended to predict interface properties in much more complex systems, like those encountered in a real device.

“Herbert Kroemer said in his Nobel Lecture that ‘The interface is the device’ and it’s equally true for organic materials,” says Burke. [emphasis mine] “The differences we’ve seen at the edges of molecular clusters highlight one effect that we’ll need to consider as we design new materials for these devices, but likely there are many more surprises waiting to be discovered.”

Cochrane and colleagues plan to keep looking at what happens at interfaces in these materials and to work with materials chemists to guide the design rules for the structure and electronic properties of future devices.

Methods

The experiment was performed at UBC’s state-of-the-art Laboratory for Atomic Imaging Research, which features three specially designed ultra-quiet rooms that allow the instruments to sit in complete silence, totally still, to perform their delicate measurements. This allowed the researchers to take dense data sets with a tool called a scanning tunnelling microscope (STM) that showed them the energy levels in real-space on the scale of single atoms.

Here’s a link to and a citation for the paper,

Pronounced polarization-induced energy level shifts at boundaries of organic semiconductor nanostructures by K. A. Cochrane, A. Schiffrin, T. S. Roussy, M. Capsoni, & S. A. Burke. Nature Communications 6, Article number: 8312 doi:10.1038/ncomms9312 Published 06 October 2015

This paper is open access. Yes, I borrowed from Nobel Laureate Herbert Kroemer for the headline. As Woody Guthrie (legendary American folksinger) once said, more or less, “Only steal from the best.”

Controlling crystal growth for plastic electronics

A July 4, 2013 news item on Nanowerk highlights research into plastic electronics taking place at Imperial College London (ICL) (Note: A link has been removed),

Scientists have discovered a way to better exploit a process that could revolutionise the way that electronic products are made.

The scientists from Imperial College London say improving the industrial process, which is called crystallisation, could revolutionise the way we produce electronic products, leading to advances across a whole range of fields; including reducing the cost and improving the design of plastic solar cells.

The process of making many well-known products from plastics involves controlling the way that microscopic crystals are formed within the material. By controlling the way that these crystals are grown engineers can determine the properties they want such as transparency and toughness. Controlling the growth of these crystals involves engineers adding small amounts of chemical additives to plastic formulations. This approach is used in making food boxes and other transparent plastic containers, but up until now it has not been used in the electronics industry.

The team from Imperial have now demonstrated that these additives can also be used to improve how an advanced type of flexible circuitry called plastic electronics is made.

The team found that when the additives were included in the formulation of plastic electronic circuitry they could be printed more reliably and over larger areas, which would reduce fabrication costs in the industry.

The team reported their findings this month in the journal Nature Materials (“Microstructure formation in molecular and polymer semiconductors assisted by nucleation agents”).

The June 7, 2013 Imperial College London news release by Joshua Howgego, which originated the news item, describes the researchers and the process in more detail,

Dr Natalie Stingelin, the leader of the study from the Department of Materials and Centre of Plastic Electronics at Imperial, says:

“Essentially, we have demonstrated a simple way to gain control over how crystals grow in electrically conducting ‘plastic’ semiconductors. Not only will this help industry fabricate plastic electronic devices like solar cells and sensors more efficiently. I believe it will also help scientists experimenting in other areas, such as protein crystallisation, an important part of the drug development process.”

Dr Stingelin and research associate Neil Treat looked at two additives, sold under the names Irgaclear® XT 386 and Millad® 3988, which are commonly used in industry. These chemicals are, for example, some of the ingredients used to improve the transparency of plastic drinking bottles. The researchers experimented with adding tiny amounts of these chemicals to the formulas of several different electrically conducting plastics, which are used in technologies such as security key cards, solar cells and displays.

The researchers found the additives gave them precise control over where crystals would form, meaning they could also control which parts of the printed material would conduct electricity. In addition, the crystallisations happened faster than normal. Usually plastic electronics are exposed to high temperatures to speed up the crystallisation process, but this can degrade the materials. This heat treatment is no longer necessary if the additives are used.

Another industrially important advantage of using small amounts of the additives was that the crystallisation process happened more uniformly throughout the plastics, giving a consistent distribution of crystals.  The team say this could enable circuits in plastic electronics to be produced quickly and easily with roll-to-roll printing procedures similar to those used in the newspaper industry. This has been very challenging to achieve previously.

Dr Treat says: “Our work clearly shows that these additives are really good at controlling how materials crystallise. We have shown that printed electronics can be fabricated more reliably using this strategy. But what’s particularly exciting about all this is that the additives showed fantastic performance in many different types of conducting plastics. So I’m excited about the possibilities that this strategy could have in a wide range of materials.”

Dr Stingelin and Dr Treat collaborated with scientists from the University of California Santa Barbara (UCSB), and the National Renewable Energy Laboratory in Golden, US, and the Swiss Federal Institute of Technology on this study. The team are planning to continue working together to see if subtle chemical changes to the additives improve their effects – and design new additives.

There are some big plans for this discovery, from the news release,

They [the multinational team from ICL, UCSB, National Renewable Energy Laboratory, and Swiss Federal Institute of Technology]  will be working with the new Engineering and Physical Sciences Research Council (EPSRC)-funded Centre for Innovative Manufacturing in Large Area Electronics in order to drive the industrial exploitation of their process. The £5.6 million of funding for this centre, to be led by researchers from Cambridge University, was announced earlier this year [2013]. They are also exploring collaborations with printing companies with a view to further developing their circuit printing technique.

For the curious, here’s a link to and a citation for the published paper,

Microstructure formation in molecular and polymer semiconductors assisted by nucleation agents by Neil D. Treat, Jennifer A. Nekuda Malik, Obadiah Reid, Liyang Yu, Christopher G. Shuttle, Garry Rumbles, Craig J. Hawker, Michael L. Chabinyc, Paul Smith, & Natalie Stingelin. Nature Materials 12, 628–633 (2013) doi:10.1038/nmat3655 Published online 02 June 2013

This article is open access (at least for now).