Tag Archives: physics

Putting science back into pop culture and selling books

Clifford V. Johnson is very good at promoting books. I tip my hat to him; that’s an excellent talent to have, especially when you’ve written one. In his case, it’s a graphic novel titled ‘The Dialogues: Conversations About the Nature of the Universe’.

I first stumbled across Johnson, a physicist and professor at the University of Southern California, and his work in this January 18, 2018 news item on phys.org,

How often do you, outside the requirements of an assignment, ponder things like the workings of a distant star, the innards of your phone camera, or the number and layout of petals on a flower? Maybe a little bit, maybe never. Too often, people regard science as sitting outside the general culture: A specialized, difficult topic carried out by somewhat strange people with arcane talents. It’s somehow not for them.

But really science is part of the wonderful tapestry of human culture, intertwined with things like art, music, theater, film and even religion. These elements of our culture help us understand and celebrate our place in the universe, navigate it and be in dialogue with it and each other. Everyone should be able to engage freely in whichever parts of the general culture they choose, from going to a show or humming a tune to talking about a new movie over dinner.

Science, though, gets portrayed as opposite to art, intuition and mystery, as though knowing in detail how that flower works somehow undermines its beauty. As a practicing physicist, I disagree. Science can enhance our appreciation of the world around us. It should be part of our general culture, accessible to all. Those “special talents” required in order to engage with and even contribute to science are present in all of us.

Here’s more from his January 18, 2018 essay on The Conversation (which was the origin for the news item), Note: Links have been removed,

… in addition to being a professor, I work as a science advisor for various forms of entertainment, from blockbuster movies like the recent “Thor: Ragnarok,” or last spring’s 10-hour TV dramatization of the life and work of Albert Einstein (“Genius,” on National Geographic), to the bestselling novel “Dark Matter,” by Blake Crouch. People spend a lot of time consuming entertainment simply because they love stories like these, so it makes sense to put some science in there.

Science can actually help make storytelling more entertaining, engaging and fun – as I explain to entertainment professionals every chance I get. From their perspective, they get potentially bigger audiences. But good stories, enhanced by science, also spark valuable conversations about the subject that continue beyond the movie theater.
Science can be one of the topics woven into the entertainment we consume – via stories, settings and characters. ABC Television

Nonprofit organizations have been working hard on this mission. The Alfred P. Sloan Foundation helps fund and develop films with science content – “The Man Who Knew Infinity” (2015) and “Robot & Frank” (2012) are two examples. (The Sloan Foundation is also a funding partner of The Conversation US.)

The National Academy of Sciences set up the Science & Entertainment Exchange to help connect people from the entertainment industry to scientists. The idea is that such experts can provide Hollywood with engaging details and help with more accurate portrayals of scientists that can enhance the narratives they tell. Many of the popular Marvel movies – including “Thor” (2011), “Ant-Man” (2015) and the upcoming “Avengers: Infinity War” – have had their content strengthened in this way.

Encouragingly, a recent Pew Research Center survey in the U.S. showed that entertainment with science or related content is watched by people across “all demographic, educational and political groups,” and that overall they report positive impressions of the science ideas and scenarios contained in them.

Many years ago I realized it is hard to find books on the nonfiction science shelf that let readers see themselves as part of the conversation about science. So I envisioned an entire book of conversations about science taking place between ordinary people. While “eavesdropping” on those conversations, readers learn some science ideas, and are implicitly invited to have conversations of their own. It’s a resurrection of the dialogue form, known to the ancient Greeks, and to Galileo, as a device for exchanging ideas, but with contemporary settings: cafes, restaurants, trains and so on.

Clifford Johnson at his drafting table. Clifford V. Johnson, CC BY-ND

So over six years I taught myself the requisite artistic and other production techniques, and studied the language and craft of graphic narratives. I wrote and drew “The Dialogues: Conversations About the Nature of the Universe” as proof of concept: A new kind of nonfiction science book that can inspire more people to engage in their own conversations about science, and celebrate a spirit of plurality in everyday science participation.

I so enjoyed Johnson’s writing and appreciated how he introduced his book into the piece that I searched for more and found a three-part interview with Henry Jenkins on his Confessions of an Aca-Fan (Academic-Fan) blog. Before moving on to the interview, here’s some information about the interviewer, Henry Jenkins (Note: Links have been removed),

Henry Jenkins is the Provost Professor of Communication, Journalism, Cinematic Arts and Education at the University of Southern California. He arrived at USC in Fall 2009 after spending more than a decade as the Director of the MIT Comparative Media Studies Program and the Peter de Florez Professor of Humanities. He is the author and/or editor of seventeen books on various aspects of media and popular culture, including Textual Poachers: Television Fans and Participatory Culture, Hop on Pop: The Politics and Pleasures of Popular Culture,  From Barbie to Mortal Kombat: Gender and Computer Games, Convergence Culture: Where Old and New Media Collide, Spreadable Media: Creating Meaning and Value in a Networked Culture, and By Any Media Necessary: The New Youth Activism. He is currently editing a handbook on the civic imagination and writing a book on “comics and stuff”. He has written for Technology Review, Computer Games, Salon, and The Huffington Post.

Jenkins is the principal investigator for The Civic Imagination Project, funded by the MacArthur Foundation, to explore ways to inspire creative collaborations within communities as they work together to identify shared values and visions for the future. This project grew out of the Media, Activism, and Participatory Politics research group, also funded by MacArthur, which did case studies of innovative organizations that have been effective at getting young people involved in the political process. He is also the Chief Advisor to the Annenberg Innovation Lab. Jenkins also serves on the jury that selects the Peabody Awards, which recognizes “stories that matter” from radio, television, and the web.

He has previously worked as the principal investigator for  Project New Media Literacies (NML), a group which originated as part of the MacArthur Digital Media and Learning Initiative. Jenkins wrote a white paper on learning in a participatory culture that has become the springboard for the group’s efforts to develop and test educational materials focused on preparing students for engagement with the new media landscape. He also was the founder for the Convergence Culture Consortium, a faculty network which seeks to build bridges between academic researchers and the media industry in order to help inform the rethinking of consumer relations in an age of participatory culture.  The Consortium lives on today via the Transforming Hollywood conference, run jointly between USC and UCLA, which recently hosted its 8th event.  

While at MIT, he was one of the principal investigators for The Education Arcade, a consortium of educators and business leaders working to promote the educational use of computer and video games. Jenkins also plays a significant role as a public advocate for fans, gamers and bloggers: testifying before the U.S. Senate Commerce Committee investigation into “Marketing Violence to Youth” following the Columbine shootings; advocating for media literacy education before the Federal Communications Commission; calling for a more consumer-oriented approach to intellectual property at a closed door meeting of the governing body of the World Economic Forum; signing amicus briefs in opposition to games censorship;  regularly speaking to the press and other media about aspects of media change and popular culture; and most recently, serving as an expert witness in the legal struggle over the fan-made film, Prelude to Axanar.  He also has served as a consultant on the Amazon children’s series Lost in Oz, where he provided insights on world-building and transmedia strategies as well as new media literacy issues.

Jenkins has a B.A. in Political Science and Journalism from Georgia State University, a M.A. in Communication Studies from the University of Iowa and a PhD in Communication Arts from the University of Wisconsin-Madison.

Well, that didn’t seem so simple after all. For a somewhat more personal account of who I am, read on.

About Me

The first thing you are going to discover about me, oh reader of this blog, is that I am prolific as hell. The second is that I am also long-winded as all get out. As someone famous once said, “I would have written it shorter, but I didn’t have enough time.”

My earliest work centered on television fans – particularly science fiction fans. Part of what drew me into graduate school in media studies was a fascination with popular culture. I grew up reading Mad magazine and Famous Monsters of Filmland – and, much as my parents feared, it warped me for life. Early on, I discovered the joys of comic books and science fiction, spent time playing around with monster makeup, started writing scripts for my own Super 8 movies (The big problem was that I didn’t have access to a camera until much later), and collecting television-themed toys. By the time I went to college, I was regularly attending science fiction conventions. Through the woman who would become my wife, I discovered fan fiction. And we spent a great deal of time debating our very different ways of reading our favorite television series.

When I got to graduate school, I was struck by how impoverished the academic framework for thinking about media spectatorship was – basically, though everyone framed it differently, consumers were assumed to be passive, brainless, inarticulate, and brainwashed. None of this jelled well with my own robust experience of being a fan of popular culture. I was lucky enough to get to study under John Fiske, first at Iowa and then at the University of Wisconsin-Madison, who introduced me to the cultural studies perspective. Fiske was a key advocate of ethnographic audience research, arguing that media consumers had more tricks up their sleeves than most academic theory acknowledged.

Out of this tension between academic theory and fan experience emerged first an essay, “Star Trek Reread, Rerun, Rewritten” and then a book, Textual Poachers: Television Fans and Participatory Culture. Textual Poachers emerged at a moment when fans were still largely marginal to the way mass media was produced and consumed, and still hidden from the view of most “average consumers.” As such, the book represented a radically different way of thinking about how one might live in relation to media texts. In the book, I describe fans as “rogue readers.” What most people took from that book was my concept of “poaching,” the idea that fans construct their own culture – fan fiction, artwork, costumes, music and videos – from content appropriated from mass media, reshaping it to serve their own needs and interests. There are two other key concepts in this early work which takes on greater significance in my work today – the idea of participatory culture (which runs throughout Convergence Culture) and the idea of a moral economy (that is, the presumed ethical norms which govern the relations between media producers and consumers).

As for the interview, here’s Jenkins’ introduction to the series and a portion of part one (from Comics and Popular Science: An Interview with Clifford V. Johnson (Part One) posted on November 15, 2017),


Clifford V. Johnson is the first theoretical physicist who I have ever interviewed for my blog. Given the sharp divide that our society constructs between the sciences and the humanities, he may well be the last, but he would be the first to see this gap as tragic, a consequence of the current configuration of disciplines. Johnson, as I have discovered, is deeply committed to helping us recognize the role that science plays in everyday life, a project he pursues actively through his involvement as one of the leaders of the Los Angeles Institute for the Humanities (of which I am also a member), as a consultant on various film and television projects, and now, as the author of a graphic novel, The Dialogues, which is being released this week. We were both on a panel about contemporary graphic storytelling Tara McPherson organized for the USC Sydney Harmon Institute for Polymathic Study and we’ve continued to bat around ideas about the pedagogical potential of comics ever since.

Here’s what I wrote when I was asked to provide a blurb for his new book:

“Two superheroes walk into a natural history museum — what happens after that will have you thinking and talking for a long time to come. Clifford V. Johnson’s The Dialogues joins a select few examples of recent texts, such as Scott McCloud’s Understanding Comics, Larry Gonick’s Cartoon History of the Universe, Nick Sousanis’s Unflattening, Bryan Talbot’s Alice in Sunderland, or Joe Sacco’s Palestine, which use the affordances of graphic storytelling as pedagogical tools for changing the ways we think about the world around us. Johnson displays a solid grasp of the craft of comics, demonstrating how this medium can be used to represent different understandings of the relationship between time and space, questions central to his native field of physics. He takes advantage of the observational qualities of contemporary graphic novels to explore the place of scientific thinking in our everyday lives.”

To my many readers who care about sequential art, this is a book which should be added to your collection — Johnson makes good comics, smart comics, beautiful comics, and comics which are doing important work, all at the same time. What more do you want!

In the interviews that follow, we explore more fully what motivated this particular comic and how approaching comics as a theoretical physicist helped him discover some interesting formal aspects of the medium.

What do you want your readers to learn about science over the course of these exchanges? I am struck by the ways you seek to demystify aspects of the scientific process, including the role of theory, equations, and experimentation.


That participatory aspect is core, for sure. Conversations about science by random people out there in the world really do happen – I hear them a lot on the subway, or in cafes, and so I wanted to highlight those and celebrate them. So the book becomes a bit of an invitation to everyone to join in. But then I can show so many other things that typically just get left out of books about science: The ordinariness of the settings in which such conversations can take place, the variety of types of people involved, and indeed the main tools, like equations and technical diagrams, that editors usually tell you to leave out for fear of scaring away the audience. …

I looked for book reviews and found two. The first is from Starburst Magazine, which, strangely, lists neither a date nor an author (from the review),

The Dialogues is a series of nine conversations about science told in graphic novel format; the conversationalists are men, women, children, and amateur science buffs who all have something to say about the nature of the universe. Their discussions range from multiverse and string theory to immortality, black holes, and how it’s possible to put just a cup of rice in the pan but end up with a ton more after Mom cooks it. Johnson (who also illustrated the book) believes the graphic form is especially suited for physics because “one drawing can show what it would take many words to explain” and it’s hard to argue with his noble intentions, but despite some undoubtedly thoughtful content The Dialogues doesn’t really work. Why not? Because, even with its plethora of brightly-coloured pictures, it’s still 200+ pages of talking heads. The individual conversations might give us plenty to think about, but the absence of any genuine action (or even a sense of humour) still makes The Dialogues read like very pretty homework.

Adelmar Bultheel’s December 8, 2017 review for the European Mathematical Society acknowledges issues with the book while noting its strong points,

So what is the point of producing such a graphic novel if the reader is not properly instructed about anything? In my opinion, the true message can be found in the one or two pages of notes that follow each of the eleven conversations. If you are not into the subject that you were eavesdropping, you probably have heard words, concepts, theories, etc. that you did not understand, or you might just be curious about what exactly the two were discussing. Then you should look that up on the web, or if you want to do it properly, you should consult some literature. This is what these notes are providing: they are pointing to the proper books to consult. …

This is a most unusual book for this subject and the way this is approached is most surprising. Not only the contents is heavy stuff, it is also physically heavy to read. Some 250 pages on thick glossy paper makes it a quite heavy book to hold. You probably do not want to read this in bed or take it on a train, unless you have a table in front of you to put it on. Many subjects are mentioned, but not all are explained in detail. The reader should definitely be prepared to do some extra reading to understand things better. Since most references concern other popularising books on the subject, it may require quite a lot of extra reading. But all this hard science is happening in conversations by young enthusiastic people in casual locations and it is all wrapped up in beautiful graphics showing marvellous realistic decors.

I am fascinated by this book, which I have yet to read, but I did find a trailer for it (from thedialoguesbook.com),

Enjoy!

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still do find, that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it all comes down to the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) they got most of their answers from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]
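The quoted percentages let a curious reader back out the approximate size of the survey; here’s a quick, illustrative Python calculation (the total is my inference from the quoted figures, not a number the report states):

```python
# 72% of respondents (4,005 people) rated Canadian research as "strong".
strong_raters = 4005
strong_share = 0.72

# Implied total number of survey respondents (approximate).
total_respondents = strong_raters / strong_share
print(round(total_respondents))  # roughly 5562
```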

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
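For illustration, the Growth Index as described above can be sketched in a few lines of Python. The exact normalization the CCA uses may differ, so treat the formula (country growth ratio divided by world growth ratio) as an assumption based on the report’s wording, and the publication counts as hypothetical:

```python
def growth_index(country_2003, country_2014, world_2003, world_2014):
    """Rate of change in a country's research output between two years,
    normalized by the world growth rate over the same period.

    Assumed formula: (country growth ratio) / (world growth ratio),
    one plausible reading of the report's description.
    """
    country_growth = country_2014 / country_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Hypothetical publication counts chosen so the ratios echo the report:
# a country growing 50% faster than the world average gets GI = 1.5.
print(round(growth_index(100, 300, 1000, 2000), 2))  # 1.5
```

A GI above 1.0 means faster-than-world-average growth, which is how the report reads China’s 1.50 against Canada’s 0.88.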

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental during the Cuban Missile Crisis, for those familiar with that bit of history), as well as to GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage innovation by ensuring that creators/inventors get paid for their efforts. This holds for a set time period, and when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, it didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
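Reading the report’s definition literally, patent flow compares patents owned by Canadian assignees with patents invented in Canada. Here’s a minimal Python sketch; the formula is my inference from the report’s wording (not the CCA’s published method), and the counts are hypothetical:

```python
def patent_flow(owned_by_canadian_assignees, invented_in_canada):
    """Relative surplus (positive) or deficit (negative) of Canadian-owned
    patents versus Canadian-invented patents.

    Assumed formula: (owned - invented) / invented, which reproduces the
    report's reading of -0.26 as a 26% ownership deficit.
    """
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Hypothetical counts: 740 Canadian-owned patents against 1,000 invented
# in Canada gives a flow of -0.26, i.e., a 26% deficit.
print(round(patent_flow(740, 1000), 2))  # -0.26
```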

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with robot being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention but they seem to forget and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress. Open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, perhaps because there’s less information; the 2012 report was, I believe, the first to examine the Canadian R&D scene with a subregional (in their case, provinces) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talk about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts, philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr an actress, and that their signature work set the foundation for electrical engineers (or people with that specialty) to develop WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel, for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (1 of 2)

Before launching into the assessment, a brief explanation of my theme: Hedy Lamarr was considered to be one of the great beauties of her day,

“Ziegfeld Girl” Hedy Lamarr 1941 MGM *M.V.
Titles: Ziegfeld Girl
People: Hedy Lamarr
Image courtesy mptvimages.com [downloaded from https://www.imdb.com/title/tt0034415/mediaviewer/rm1566611456]

Aside from starring in Hollywood movies and, before that, movies in Europe, she was also an inventor and not just any inventor (from a Dec. 4, 2017 article by Laura Barnett for The Guardian), Note: Links have been removed,

Let’s take a moment to reflect on the mercurial brilliance of Hedy Lamarr. Not only did the Vienna-born actor flee a loveless marriage to a Nazi arms dealer to secure a seven-year, $3,000-a-week contract with MGM, and become (probably) the first Hollywood star to simulate a female orgasm on screen – she also took time out to invent a device that would eventually revolutionise mobile communications.

As described in unprecedented detail by the American journalist and historian Richard Rhodes in his new book, Hedy’s Folly, Lamarr and her business partner, the composer George Antheil, were awarded a patent in 1942 for a “secret communication system”. It was meant for radio-guided torpedoes, and the pair gave it to the US Navy. It languished in their files for decades before eventually becoming a constituent part of GPS, Wi-Fi and Bluetooth technology.

(The article goes on to mention other celebrities [Marlon Brando, Barbara Cartland, Mark Twain, etc] and their inventions.)

Lamarr’s work as an inventor was largely overlooked until the 1990s, when the technology community turned her into a ‘cultish’ favourite. From there her reputation grew and acknowledgement increased, culminating in Rhodes’ book and Alexandra Dean’s documentary, ‘Bombshell: The Hedy Lamarr Story’ (to be broadcast as part of PBS’s American Masters series on May 18, 2018).

Canada as Hedy Lamarr

There are some parallels to be drawn between Canada’s S&T and R&D (science and technology; research and development) and Ms. Lamarr. Chief amongst them, we’re not always appreciated for our brains. Not even by people who are supposed to know better such as the experts on the panel for the ‘Third assessment of The State of Science and Technology and Industrial Research and Development in Canada’ (proper title: Competing in a Global Innovation Economy: The Current State of R&D in Canada) from the Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada.

A little history

Before exploring the comparison to Hedy Lamarr further, here’s a bit more about the history of this latest assessment from the Council of Canadian Academies (CCA), from the report released April 10, 2018,

This assessment of Canada’s performance indicators in science, technology, research, and innovation comes at an opportune time. The Government of Canada has expressed a renewed commitment in several tangible ways to this broad domain of activity including its Innovation and Skills Plan, the announcement of five superclusters, its appointment of a new Chief Science Advisor, and its request for the Fundamental Science Review. More specifically, the 2018 Federal Budget demonstrated the government’s strong commitment to research and innovation with historic investments in science.

The CCA has a decade-long history of conducting evidence-based assessments about Canada’s research and development activities, producing seven assessments of relevance:

•The State of Science and Technology in Canada (2006) [emphasis mine]
•Innovation and Business Strategy: Why Canada Falls Short (2009)
•Catalyzing Canada’s Digital Economy (2010)
•Informing Research Choices: Indicators and Judgment (2012)
•The State of Science and Technology in Canada (2012) [emphasis mine]
•The State of Industrial R&D in Canada (2013) [emphasis mine]
•Paradox Lost: Explaining Canada’s Research Strength and Innovation Weakness (2013)

Using similar methods and metrics to those in The State of Science and Technology in Canada (2012) and The State of Industrial R&D in Canada (2013), this assessment tells a similar and familiar story: Canada has much to be proud of, with world-class researchers in many domains of knowledge, but the rest of the world is not standing still. Our peers are also producing high quality results, and many countries are making significant commitments to supporting research and development that will position them to better leverage their strengths to compete globally. Canada will need to take notice as it determines how best to take action. This assessment provides valuable material for that conversation to occur, whether it takes place in the lab or the legislature, the bench or the boardroom. We also hope it will be used to inform public discussion. [p. ix Print, p. 11 PDF]

This latest assessment succeeds the general 2006 and 2012 reports, which were mostly focused on academic research, and combines that work with an assessment of industrial research, which was previously separate. Also, this third assessment’s title (Competing in a Global Innovation Economy: The Current State of R&D in Canada) makes explicit, from the cover onwards, what was previously quietly declared in the text. It’s all about competition, despite noises such as the 2017 Naylor report (Review of fundamental research) about the importance of fundamental research.

One other quick comment, I did wonder in my July 1, 2016 posting (featuring the announcement of the third assessment) how combining two assessments would impact the size of the expert panel and the size of the final report,

Given the size of the 2012 assessment of science and technology at 232 pp. (PDF) and the 2013 assessment of industrial research and development at 220 pp. (PDF) with two expert panels, the imagination boggles at the potential size of the 2016 expert panel and of the 2016 assessment combining the two areas.

I got my answer with regard to the panel as noted in my Oct. 20, 2016 update (which featured a list of the members),

A few observations, given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

The imbalance I noted then was addressed, somewhat, with the selection of the reviewers (from the report released April 10, 2018),

The CCA wishes to thank the following individuals for their review of this report:

Ronald Burnett, C.M., O.B.C., RCA, Chevalier de l’ordre des arts et des lettres, President and Vice-Chancellor, Emily Carr University of Art and Design (Vancouver, BC)

Michelle N. Chretien, Director, Centre for Advanced Manufacturing and Design Technologies, Sheridan College; Former Program and Business Development Manager, Electronic Materials, Xerox Research Centre of Canada (Brampton, ON)

Lisa Crossley, CEO, Reliq Health Technologies, Inc. (Ancaster, ON)

Natalie Dakers, Founding President and CEO, Accel-Rx Health Sciences Accelerator (Vancouver, BC)

Fred Gault, Professorial Fellow, United Nations University-MERIT (Maastricht, Netherlands)

Patrick D. Germain, Principal Engineering Specialist, Advanced Aerodynamics, Bombardier Aerospace (Montréal, QC)

Robert Brian Haynes, O.C., FRSC, FCAHS, Professor Emeritus, DeGroote School of Medicine, McMaster University (Hamilton, ON)

Susan Holt, Chief, Innovation and Business Relationships, Government of New Brunswick (Fredericton, NB)

Pierre A. Mohnen, Professor, United Nations University-MERIT and Maastricht University (Maastricht, Netherlands)

Peter J. M. Nicholson, C.M., Retired; Former and Founding President and CEO, Council of Canadian Academies (Annapolis Royal, NS)

Raymond G. Siemens, Distinguished Professor, English and Computer Science and Former Canada Research Chair in Humanities Computing, University of Victoria (Victoria, BC) [pp. xii-xiv Print; pp. 15-16 PDF]

The proportion of women among the reviewers jumped up to about 36% (4 of 11) and there are two reviewers from the Maritime provinces. As usual, the reviewers external to Canada were from Europe, although this time they came from Dutch institutions rather than UK or German ones. Interestingly and unusually, there was no one from a US institution. When will they start using reviewers from other parts of the world?

As for the report itself, it is 244 pp. (PDF). (For the really curious, I have a December 15, 2016 post featuring my comments on the preliminary data for the third assessment.)

To sum up, they had a lean expert panel tasked with bringing together two inquiries and two reports. I imagine that was daunting. Good on them for finding a way to make it manageable.

Bibliometrics, patents, and a survey

I wish more attention had been paid to some of the issues around open science, open access, and open data, which are changing how science is being conducted. (I have more about this from an April 5, 2018 article by James Somers for The Atlantic; more about that later.) If I understand rightly, addressing those issues may not have been possible due to the nature of the questions posed by the government when it requested the assessment.

As was done for the second assessment, there is an acknowledgement that the standard measures/metrics (bibliometrics [no. of papers published, which journals published them, number of times papers were cited] and technometrics [no. of patent applications, etc.]) of scientific accomplishment and progress are not the best, and that new approaches need to be developed and adopted (from the report released April 10, 2018),

It is also worth noting that the Panel itself recognized the limits that come from using traditional historic metrics. Additional approaches will be needed the next time this assessment is done. [p. ix Print; p. 11 PDF]

For the second assessment, and as a means of addressing some of the problems with metrics, the panel commissioned a survey; the panel for the third assessment has done the same (from the report released April 10, 2018),

The Panel relied on evidence from multiple sources to address its charge, including a literature review and data extracted from statistical agencies and organizations such as Statistics Canada and the OECD. For international comparisons, the Panel focused on OECD countries along with developing countries that are among the top 20 producers of peer-reviewed research publications (e.g., China, India, Brazil, Iran, Turkey). In addition to the literature review, two primary research approaches informed the Panel’s assessment:
•a comprehensive bibliometric and technometric analysis of Canadian research publications and patents; and,
•a survey of top-cited researchers around the world.

Despite best efforts to collect and analyze up-to-date information, one of the Panel’s findings is that data limitations continue to constrain the assessment of R&D activity and excellence in Canada. This is particularly the case with industrial R&D and in the social sciences, arts, and humanities. Data on industrial R&D activity continue to suffer from time lags for some measures, such as internationally comparable data on R&D intensity by sector and industry. These data also rely on industrial categories (i.e., NAICS and ISIC codes) that can obscure important trends, particularly in the services sector, though Statistics Canada’s recent revisions to how this data is reported have improved this situation. There is also a lack of internationally comparable metrics relating to R&D outcomes and impacts, aside from those based on patents.

For the social sciences, arts, and humanities, metrics based on journal articles and other indexed publications provide an incomplete and uneven picture of research contributions. The expansion of bibliometric databases and methodological improvements such as greater use of web-based metrics, including paper views/downloads and social media references, will support ongoing, incremental improvements in the availability and accuracy of data. However, future assessments of R&D in Canada may benefit from more substantive integration of expert review, capable of factoring in different types of research outputs (e.g., non-indexed books) and impacts (e.g., contributions to communities or impacts on public policy). The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity. It is vital that such contributions are better measured and assessed. [p. xvii Print; p. 19 PDF]

My reading: there’s a problem and we’re not going to try and fix it this time. Good luck to those who come after us. As for this line: “The Panel has no doubt that contributions from the humanities, arts, and social sciences are of equal importance to national prosperity.” Did no one explain that when you use ‘no doubt’, you are introducing doubt? It’s a cousin to ‘don’t take this the wrong way’ and ‘I don’t mean to be rude but …’ .
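Since bibliometrics keep coming up, here's a minimal sketch (my own illustration, not anything from the report) of one of the most common citation-based indicators, the h-index, which tries to balance how much a researcher publishes against how often that work is cited:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each (a common bibliometric)."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # this paper still clears the bar set by its rank
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 3
# (three papers with at least 3 citations each).
print(h_index([10, 8, 5, 2, 1]))  # 3
```

As the report's own caveats suggest, a number like this says nothing about non-indexed books, community contributions, or policy impact, which is exactly why the panel calls for better measures.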

Good news

This is somewhat encouraging (from the report released April 10, 2018),

Canada’s international reputation for its capacity to participate in cutting-edge R&D is strong, with 60% of top-cited researchers surveyed internationally indicating that Canada hosts world-leading infrastructure or programs in their fields. This share increased by four percentage points between 2012 and 2017. Canada continues to benefit from a highly educated population and deep pools of research skills and talent. Its population has the highest level of educational attainment in the OECD in the proportion of the population with a post-secondary education. However, among younger cohorts (aged 25 to 34), Canada has fallen behind Japan and South Korea. The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries. [pp. xvii-xviii; pp. 19-20]

Don’t let your head get too big

Most of the report observes that our international standing is slipping in various ways such as this (from the report released April 10, 2018),

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries.

Low and declining business R&D expenditures are the dominant driver of this trend; however, R&D spending in all sectors is implicated. Government R&D expenditures declined, in real terms, over the same period. Expenditures in the higher education sector (an indicator on which Canada has traditionally ranked highly) are also increasing more slowly than the OECD average. Significant erosion of Canada’s international competitiveness and capacity to participate in R&D and innovation is likely to occur if this decline and underinvestment continue.

Between 2009 and 2014, Canada produced 3.8% of the world’s research publications, ranking ninth in the world. This is down from seventh place for the 2003–2008 period. India and Italy have overtaken Canada although the difference between Italy and Canada is small. Publication output in Canada grew by 26% between 2003 and 2014, a growth rate greater than many developed countries (including United States, France, Germany, United Kingdom, and Japan), but below the world average, which reflects the rapid growth in China and other emerging economies. Research output from the federal government, particularly the National Research Council Canada, dropped significantly between 2009 and 2014. [emphasis mine] [p. xviii Print; p. 20 PDF]
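A quick aside on the arithmetic behind "R&D intensity": it's simply gross domestic expenditure on R&D (GERD) divided by GDP. Here's an illustrative calculation (all figures are hypothetical placeholders I made up, not numbers from the report) showing how a "more than double" conclusion can fall out:

```python
# R&D intensity is gross domestic expenditure on R&D (GERD) as a share of GDP.
# All figures below are hypothetical placeholders, not taken from the report.
gerd = 35.0        # national R&D expenditure, in billions
gdp = 2_100.0      # gross domestic product, in billions
intensity = gerd / gdp * 100          # intensity as a percentage of GDP

target_intensity = 3.5                # a hypothetical "leading country" intensity
multiplier = target_intensity / intensity

print(f"intensity: {intensity:.2f}%")           # intensity: 1.67%
print(f"needed multiplier: {multiplier:.1f}x")  # needed multiplier: 2.1x
```

With a flat expenditure line and a growing GDP, intensity erodes on its own, which is why "virtually no growth" between 2006 and 2015 translates into such a large catch-up factor.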

For anyone unfamiliar with Canadian politics, 2009–2014 were years during which Stephen Harper’s Conservatives formed the government. Justin Trudeau’s Liberals were elected to form the government in late 2015.

During Harper’s years in government, the Conservatives were very interested in changing how the National Research Council of Canada operated and, if memory serves, the focus was on innovation over research. Consequently, the drop in their research output is predictable.

Given my interest in nanotechnology and other emerging technologies, this popped out (from the report released April 10, 2018),

When it comes to research on most enabling and strategic technologies, however, Canada lags other countries. Bibliometric evidence suggests that, with the exception of selected subfields in Information and Communication Technologies (ICT) such as Medical Informatics and Personalized Medicine, Canada accounts for a relatively small share of the world’s research output for promising areas of technology development. This is particularly true for Biotechnology, Nanotechnology, and Materials science [emphasis mine]. Canada’s research impact, as reflected by citations, is also modest in these areas. Aside from Biotechnology, none of the other subfields in Enabling and Strategic Technologies has an ARC rank among the top five countries. Optoelectronics and photonics is the next highest ranked at 7th place, followed by Materials, and Nanoscience and Nanotechnology, both of which have a rank of 9th. Even in areas where Canadian researchers and institutions played a seminal role in early research (and retain a substantial research capacity), such as Artificial Intelligence and Regenerative Medicine, Canada has lost ground to other countries.

Arguably, our early efforts in artificial intelligence wouldn’t have garnered us much in the way of ranking, and yet we managed some cutting-edge work, such as machine learning. I’m not suggesting the expert panel should have or could have found some way to measure these kinds of efforts, but I wonder whether there could have been some acknowledgement in the text of the report. I’m thinking of a couple of sentences in a paragraph about the confounding nature of scientific research, where areas that are ignored for years and even decades then become important (e.g., machine learning) but are not measured as part of scientific progress until after they are universally recognized.

Still, point taken about our diminishing returns in ’emerging’ technologies and sciences (from the report released April 10, 2018),

The impression that emerges from these data is sobering. With the exception of selected ICT subfields, such as Medical Informatics, bibliometric evidence does not suggest that Canada excels internationally in most of these research areas. In areas such as Nanotechnology and Materials science, Canada lags behind other countries in levels of research output and impact, and other countries are outpacing Canada’s publication growth in these areas — leading to declining shares of world publications. Even in research areas such as AI, where Canadian researchers and institutions played a foundational role, Canadian R&D activity is not keeping pace with that of other countries and some researchers trained in Canada have relocated to other countries (Section 4.4.1). There are isolated exceptions to these trends, but the aggregate data reviewed by this Panel suggest that Canada is not currently a world leader in research on most emerging technologies.

The Hedy Lamarr treatment

We have ‘good looks’ (arts and humanities) but not the kind of brains (physical sciences and engineering) that people admire (from the report released April 10, 2018),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphases mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

Couldn’t they have used a more buoyant tone? After all, science was known as ‘natural philosophy’ up until the 19th century. As for visual and performing arts, let’s include poetry as a performing and literary art (both have been the case historically and cross-culturally) and let’s also note that one of the great physics texts, De rerum natura by Lucretius, was a multi-volume poem (from Lucretius’ Wikipedia entry; Note: Links have been removed),

His poem De rerum natura (usually translated as “On the Nature of Things” or “On the Nature of the Universe”) transmits the ideas of Epicureanism, which includes Atomism [the concept of atoms forming materials] and psychology. Lucretius was the first writer to introduce Roman readers to Epicurean philosophy.[15] The poem, written in some 7,400 dactylic hexameters, is divided into six untitled books, and explores Epicurean physics through richly poetic language and metaphors. Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena. The universe described in the poem operates according to these physical principles, guided by fortuna, “chance”, and not the divine intervention of the traditional Roman deities.[16]

Should you need more proof that the arts might have something to contribute to physical sciences, there’s this in my March 7, 2018 posting,

It’s not often you see research that combines biologically inspired engineering and a molecular biophysicist with a professional animator who worked at Peter Jackson’s (Lord of the Rings film trilogy, etc.) Park Road Post film studio. An Oct. 18, 2017 news item on ScienceDaily describes the project,

Like many other scientists, Don Ingber, M.D., Ph.D., the Founding Director of the Wyss Institute, [emphasis mine] is concerned that non-scientists have become skeptical and even fearful of his field at a time when technology can offer solutions to many of the world’s greatest problems. “I feel that there’s a huge disconnect between science and the public because it’s depicted as rote memorization in schools, when by definition, if you can memorize it, it’s not science,” says Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at the Harvard Paulson School of Engineering and Applied Sciences (SEAS). [emphasis mine] “Science is the pursuit of the unknown. We have a responsibility to reach out to the public and convey that excitement of exploration and discovery, and fortunately, the film industry is already great at doing that.”

“Not only is our physics-based simulation and animation system as good as other data-based modeling systems, it led to the new scientific insight [emphasis mine] that the limited motion of the dynein hinge focuses the energy released by ATP hydrolysis, which causes dynein’s shape change and drives microtubule sliding and axoneme motion,” says Ingber. “Additionally, while previous studies of dynein have revealed the molecule’s two different static conformations, our animation visually depicts one plausible way that the protein can transition between those shapes at atomic resolution, which is something that other simulations can’t do. The animation approach also allows us to visualize how rows of dyneins work in unison, like rowers pulling together in a boat, which is difficult using conventional scientific simulation approaches.”

It comes down to how we look at things. Yes, physical sciences and engineering are very important. If the report is to be believed, we have a very highly educated population, and according to PISA scores our students rank highly in mathematics, science, and reading skills. (For more information on Canada’s latest PISA scores from 2015, see this OECD page. As for PISA itself, it’s an OECD [Organization for Economic Cooperation and Development] programme in which 15-year-old students from around the world are tested on their reading, mathematics, and science skills; you can get some information from my Oct. 9, 2013 posting.)

Is it really so bad that we choose to apply those skills in fields other than the physical sciences and engineering? It’s a little bit like Hedy Lamarr’s problem except instead of being judged for our looks and having our inventions dismissed, we’re being judged for not applying ourselves to physical sciences and engineering and having our work in other closely aligned fields dismissed as less important.

Canada’s Industrial R&D: an oft-told, very sad story

Bemoaning the state of Canada’s industrial research and development efforts has been a national pastime as long as I can remember. Here’s this from the report released April 10, 2018,

There has been a sustained erosion in Canada’s industrial R&D capacity and competitiveness. Canada ranks 33rd among leading countries on an index assessing the magnitude, intensity, and growth of industrial R&D expenditures. Although Canada is the 11th largest spender, its industrial R&D intensity (0.9%) is only half the OECD average and total spending is declining (−0.7%). Compared with G7 countries, the Canadian portfolio of R&D investment is more concentrated in industries that are intrinsically not as R&D intensive. Canada invests more heavily than the G7 average in oil and gas, forestry, machinery and equipment, and finance where R&D has been less central to business strategy than in many other industries. …  About 50% of Canada’s industrial R&D spending is in high-tech sectors (including industries such as ICT, aerospace, pharmaceuticals, and automotive) compared with the G7 average of 80%. Canadian Business Enterprise Expenditures on R&D (BERD) intensity is also below the OECD average in these sectors. In contrast, Canadian investment in low and medium-low tech sectors is substantially higher than the G7 average. Canada’s spending reflects both its long-standing industrial structure and patterns of economic activity.

R&D investment patterns in Canada appear to be evolving in response to global and domestic shifts. While small and medium-sized enterprises continue to perform a greater share of industrial R&D in Canada than in the United States, between 2009 and 2013, there was a shift in R&D from smaller to larger firms. Canada is an increasingly attractive place to conduct R&D. Investment by foreign-controlled firms in Canada has increased to more than 35% of total R&D investment, with the United States accounting for more than half of that. [emphasis mine]  Multinational enterprises seem to be increasingly locating some of their R&D operations outside their country of ownership, possibly to gain proximity to superior talent. Increasing foreign-controlled R&D, however, also could signal a long-term strategic loss of control over intellectual property (IP) developed in this country, ultimately undermining the government’s efforts to support high-growth firms as they scale up. [pp. xxii-xxiii Print; pp. 24-25 PDF]

Canada has been known as a ‘branch plant’ economy for decades. For anyone unfamiliar with the term, it means that companies from other countries come here, open up a branch and that’s how we get our jobs as we don’t have all that many large companies here. Increasingly, multinationals are locating R&D shops here.

While our small and medium-sized companies fund industrial R&D, it’s large companies (multinationals) that can afford long-term and serious investment in R&D. Luckily for companies from other countries, we have a well-educated population of people looking for jobs.

In 2017, we opened the door more widely so we could scoop up talented researchers and scientists from other countries, from a June 14, 2017 article by Beckie Smith for The PIE News,

Universities have welcomed the inclusion of the work permit exemption for academic stays of up to 120 days in the strategy, which also introduces expedited visa processing for some highly skilled professions.

Foreign researchers working on projects at a publicly funded degree-granting institution or affiliated research institution will be eligible for one 120-day stay in Canada every 12 months.

And universities will also be able to access a dedicated service channel that will support employers and provide guidance on visa applications for foreign talent.

The Global Skills Strategy, which came into force on June 12 [2017], aims to boost the Canadian economy by filling skills gaps with international talent.

As well as the short term work permit exemption, the Global Skills Strategy aims to make it easier for employers to recruit highly skilled workers in certain fields such as computer engineering.

“Employers that are making plans for job-creating investments in Canada will often need an experienced leader, dynamic researcher or an innovator with unique skills not readily available in Canada to make that investment happen,” said Ahmed Hussen, Minister of Immigration, Refugees and Citizenship.

“The Global Skills Strategy aims to give those employers confidence that when they need to hire from abroad, they’ll have faster, more reliable access to top talent.”

Coincidentally, Microsoft, Facebook, Google, etc. announced new jobs and new offices in Canadian cities in 2017. There’s also Chinese multinational telecom company Huawei Canada, which has enjoyed success in Canada and continues to invest here (from a Jan. 19, 2018 article about security concerns by Matthew Braga for the Canadian Broadcasting Corporation [CBC] online news),

For the past decade, Chinese tech company Huawei has found no shortage of success in Canada. Its equipment is used in telecommunications infrastructure run by the country’s major carriers, and some have sold Huawei’s phones.

The company has struck up partnerships with Canadian universities, and say it is investing more than half a billion dollars in researching next generation cellular networks here. [emphasis mine]

While I’m not thrilled about using patents as an indicator of progress, this is interesting to note (from the report released April 10, 2018),

Canada produces about 1% of global patents, ranking 18th in the world. It lags further behind in trademark (34th) and design applications (34th). Despite relatively weak performance overall in patents, Canada excels in some technical fields such as Civil Engineering, Digital Communication, Other Special Machines, Computer Technology, and Telecommunications. [emphases mine] Canada is a net exporter of patents, which signals the R&D strength of some technology industries. It may also reflect increasing R&D investment by foreign-controlled firms. [emphasis mine] [p. xxiii Print; p. 25 PDF]

Getting back to my point, we don’t have large companies here. In fact, the dream for most of our high tech startups is to build up the company so it’s attractive to buyers, sell, and retire (hopefully before the age of 40). Strangely, the expert panel doesn’t seem to share my insight into this matter,

Canada’s combination of high performance in measures of research output and impact, and low performance on measures of industrial R&D investment and innovation (e.g., subpar productivity growth), continue to be viewed as a paradox, leading to the hypothesis that barriers are impeding the flow of Canada’s research achievements into commercial applications. The Panel’s analysis suggests the need for a more nuanced view. The process of transforming research into innovation and wealth creation is a complex multifaceted process, making it difficult to point to any definitive cause of Canada’s deficit in R&D investment and productivity growth. Based on the Panel’s interpretation of the evidence, Canada is a highly innovative nation, but significant barriers prevent the translation of innovation into wealth creation. The available evidence does point to a number of important contributing factors that are analyzed in this report. Figure 5 represents the relationships between R&D, innovation, and wealth creation.

The Panel concluded that many factors commonly identified as points of concern do not adequately explain the overall weakness in Canada’s innovation performance compared with other countries. [emphasis mine] Academia-business linkages appear relatively robust in quantitative terms given the extent of cross-sectoral R&D funding and increasing academia-industry partnerships, though the volume of academia-industry interactions does not indicate the nature or the quality of that interaction, nor the extent to which firms are capitalizing on the research conducted and the resulting IP. The educational system is high performing by international standards and there does not appear to be a widespread lack of researchers or STEM (science, technology, engineering, and mathematics) skills. IP policies differ across universities and are unlikely to explain a divergence in research commercialization activity between Canadian and U.S. institutions, though Canadian universities and governments could do more to help Canadian firms access university IP and compete in IP management and strategy. Venture capital availability in Canada has improved dramatically in recent years and is now competitive internationally, though still overshadowed by Silicon Valley. Technology start-ups and start-up ecosystems are also flourishing in many sectors and regions, demonstrating their ability to build on research advances to develop and deliver innovative products and services.

You’ll note there’s no mention of a cultural issue where start-ups are designed for sale as soon as possible and this isn’t new. Years ago, there was an accounting firm that published a series of historical maps (the last one I saw was in 2005) of technology companies in the Vancouver region. Technology companies were being developed and sold to large foreign companies from the 19th century to present day.

Part 2

Entanglement and biological systems

I think it was about five years ago that I wrote a paper on something I called ‘cognitive entanglement’ (mentioned in my July 20, 2012 posting), so the latest from Northwestern University (Chicago, Illinois, US) reignited my interest in entanglement. A December 5, 2017 news item on ScienceDaily describes the latest ‘entanglement’ research,

Nearly 75 years ago, Nobel Prize-winning physicist Erwin Schrödinger wondered if the mysterious world of quantum mechanics played a role in biology. A recent finding by Northwestern University’s Prem Kumar adds further evidence that the answer might be yes.

Kumar and his team have, for the first time, created quantum entanglement from a biological system. This finding could advance scientists’ fundamental understanding of biology and potentially open doors to exploit biological tools to enable new functions by harnessing quantum mechanics.

A December 5, 2017 Northwestern University news release (also on EurekAlert), which originated the news item, provides more detail,

“Can we apply quantum tools to learn about biology?” said Kumar, professor of electrical engineering and computer science in Northwestern’s McCormick School of Engineering and of physics and astronomy in the Weinberg College of Arts and Sciences. “People have asked this question for many, many years — dating back to the dawn of quantum mechanics. The reason we are interested in these new quantum states is because they allow applications that are otherwise impossible.”

Partially supported by the [US] Defense Advanced Research Projects Agency [DARPA], the research was published Dec. 5 [2017] in Nature Communications.

Quantum entanglement is one of quantum mechanics’ most mystifying phenomena. When two particles — such as atoms, photons, or electrons — are entangled, they experience an inexplicable link that is maintained even if the particles are on opposite sides of the universe. While entangled, the particles’ behavior is tied to one another. If one particle is found spinning in one direction, for example, then the other particle instantaneously changes its spin in a corresponding manner dictated by the entanglement. Researchers, including Kumar, have been interested in harnessing quantum entanglement for several applications, including quantum communications. Because the particles can communicate without wires or cables, they could be used to send secure messages or help build an extremely fast “quantum Internet.”

“Researchers have been trying to entangle a larger and larger set of atoms or photons to develop substrates on which to design and build a quantum machine,” Kumar said. “My laboratory is asking if we can build these machines on a biological substrate.”

In the study, Kumar’s team used green fluorescent proteins, which are responsible for bioluminescence and commonly used in biomedical research. The team attempted to entangle the photons generated from the fluorescing molecules within the algae’s barrel-shaped protein structure by exposing them to spontaneous four-wave mixing, a process in which multiple wavelengths interact with one another to produce new wavelengths.

Through a series of these experiments, Kumar and his team successfully demonstrated a type of entanglement, called polarization entanglement, between photon pairs. The same feature used to make glasses for viewing 3D movies, polarization is the orientation of oscillations in light waves. A wave can oscillate vertically, horizontally, or at different angles. In Kumar’s entangled pairs, the photons’ polarizations are entangled, meaning that the oscillation directions of light waves are linked. Kumar also noticed that the barrel-shaped structure surrounding the fluorescing molecules protected the entanglement from being disrupted.

“When I measured the vertical polarization of one particle, we knew it would be the same in the other,” he said. “If we measured the horizontal polarization of one particle, we could predict the horizontal polarization in the other particle. We created an entangled state that correlated in all possibilities simultaneously.”

Now that they have demonstrated that it’s possible to create quantum entanglement from biological particles, next Kumar and his team plan to make a biological substrate of entangled particles, which could be used to build a quantum machine. Then, they will seek to understand if a biological substrate works more efficiently than a synthetic one.
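Kumar's remark about an entangled state "correlated in all possibilities simultaneously" can be made concrete with a few lines of linear algebra. This toy calculation (my own sketch, not code from the paper) shows why measuring a polarization-entangled photon pair with both polarizers set to the same angle, whether horizontal/vertical or diagonal, always yields matching results:

```python
import numpy as np

# A polarization-entangled photon pair can be described by the Bell state
# |Phi+> = (|HH> + |VV>) / sqrt(2), written here in the H/V basis
# with |H> = [1, 0] and |V> = [0, 1].
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def joint_probability(state, a, b):
    """Probability that photon 1 passes a polarizer at angle a
    and photon 2 passes a polarizer at angle b (angles in radians)."""
    pol_a = np.array([np.cos(a), np.sin(a)])   # polarizer axis for photon 1
    pol_b = np.array([np.cos(b), np.sin(b)])   # polarizer axis for photon 2
    amplitude = np.kron(pol_a, pol_b) @ state  # project onto the joint outcome
    return amplitude ** 2

# Same basis, matching outcomes: probability 0.5 for both-horizontal...
p_both_horizontal = joint_probability(phi_plus, 0.0, 0.0)            # 0.5
# ...and probability ~0 for one horizontal, one vertical.
p_h_then_v = joint_probability(phi_plus, 0.0, np.pi / 2)             # ~0.0
# The correlation persists in a rotated (diagonal) basis too:
p_both_diagonal = joint_probability(phi_plus, np.pi / 4, np.pi / 4)  # 0.5
```

The same-angle probability stays at 0.5 for any polarizer angle, which is the signature of polarization entanglement: the correlation isn't tied to one particular basis.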

Here’s an image accompanying the news release,

Featured in the cuvette on the left: green fluorescent proteins responsible for bioluminescence in jellyfish. Courtesy: Northwestern University

Here’s a link to and a citation for the paper,

Generation of photonic entanglement in green fluorescent proteins by Siyuan Shi, Prem Kumar & Kim Fook Lee. Nature Communications 8, Article number: 1934 (2017) doi:10.1038/s41467-017-02027-9 Published online: 05 December 2017

This paper is open access.

Book commentaries: The Science of Orphan Black: The Official Companion and Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive

I got more than I expected from both books I’m going to discuss (“The Science of Orphan Black: The Official Companion” by Casey Griffin and Nina Nesseth, and “Star Trek Treknology: The Science of Star Trek from Tricorders to Warp Drive” by Ethan Siegel), once I changed my expectations.

The Science of Orphan Black: The Official Companion

I had expected a book about the making of the series with a few insider stories about the production along with some science. Instead, I was treated to a season-by-season breakdown of the major scientific and related ethical issues in the fields of cloning and genetics. I don’t follow those areas exhaustively but, from my inexpert perspective, the authors covered everything I could have hoped for (e.g., CRISPR/Cas9, Henrietta Lacks, etc.) in an accessible but demanding writing style. In other words, it’s a good read but it’s not a light read.

There are many, many pictures of Tatiana Maslany as one of her various clone identities in the book. Unfortunately, the images do not boast good reproduction values. This was disconcerting as it can lead a reader (yes, that was me) to false expectations (e.g., this is a picture book) concerning the contents. The boxed snippets from the scripts and explanatory notes inset into the text helped to break up some of the more heavy going material while providing additional historical/scripting/etc. perspectives. One small niggle, the script snippets weren’t always as relevant to the discussion at hand as the authors no doubt hoped.

I suggest reading both the Foreword by Cosima Herter, the series science consultant, and (although it could have done with a little editing) The Conversation between Cosima Herter and Graeme Manson (one of the producers). That’s where you’ll find that the series seems to have been incubated in Vancouver, Canada. It’s also where you’ll find out how much of Cosima Herter’s real life story is included in the Cosima clone’s life story.

The Introduction tells you how the authors met (as members of ‘the clone club’) and started working together as recappers for the series. (For anyone unfamiliar with the phenomenon or terminology, episodes of popular series are recapitulated [recapped] on one or more popular websites. These may or may not be commercial, i.e., some are fan sites.)

One of the authors, Casey Griffin, is a PhD candidate at the University of Southern California (USC) studying in the field of developmental and stem cell biology. I was not able to get much more information but did find her LinkedIn profile. The other author also has a science background. Nina Nesseth is described as a science communicator on the back cover of the book, but she is also a staff scientist for Science North, a science centre located in Sudbury, Ontario, Canada. Her LinkedIn profile lists an honours Bachelor of Science (Biological and Medical Sciences) from Laurentian University, also located in Sudbury, Ontario.

It’s no surprise, given the authors’ educational background, that a bibliography (selected) has been included. This is something I very much appreciated. Oddly, given that Nesseth lists a graduate certificate in publishing as one of her credentials (on LinkedIn), there is no index (!?!). Unusually, the copyright page is at the back of the book instead of the front and boasts a fairly harsh copyright notice (summary: don’t copy anything, ever … unless you get written permission from ECW Press and the other copyright owners; Note: Herter is the copyright owner of her Foreword while the authors own the rest).

There are logos on the copyright page—more than I’m accustomed to seeing. Interestingly, two of them are government logos; it seems that taxpayers contributed to the publication of this book. The copyright notice seems a little cheeky to me since taxpayers (at least partially) subsidized the book; as well, Canadian copyright law has a concept called fair dealing (in the US, there’s something similar: fair use). In other words, if I chose, I could copy portions of the text without asking for permission, provided there’s no intent to profit and I give attribution.

How, for example, could anyone profit from this?

In fact, in January 2017, Jun Wu and colleagues published their success in creating pig-human hybrids. (description of real research on chimeras on p. 98)

Or this snippet of dialogue,

[Charlotte] You’re my big sister.

[Sarah] How old are you? (p. 101)

All the quoted text is from “The Science of Orphan Black: The Official Companion” by Casey Griffin and Nina Nesseth (paperback published August 22, 2017).

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website, along with a video.

Getting back to the book: copyright silliness aside, it’s a good book for anyone interested in some of the science and the issues associated with biotechnology, synthetic biology, genomes, gene-editing technologies, chimeras, and more. I don’t think you need to have seen the series in order to appreciate the book.

Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive

This looks and feels like a coffee table book. The images in this book are of much higher quality than those in the ‘Orphan Black’ book. With thicker paper and extensive ink coverage lending it a glossy, attractive look, it’s also a physically heavy book. The unusually heavy use of black ink seems intended to convey the feeling of exploring the far reaches of outer space.

It’s clear that “Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive’s” author, Ethan Siegel, PhD, is a serious Star Trek and space travel fan. All of the series and movies are referenced at one time or another in the book in relation to technology (‘treknology’).

While I love science fiction and Star Trek, unlike Siegel I have never been personally interested in space travel. Regardless, he drew me in with his impressive ability to describe and explain physics-related ideas. Unfortunately, his final chapter on medical and biological ‘treknology’ is not as good. He covers a wide range of topics, but no one is an expert on everything.

Siegel has a Wikipedia entry, which notes this (Note: Links have been removed),

Ethan R. Siegel (August 3, 1978, Bronx)[1] is an American theoretical astrophysicist and science writer, who studies Big Bang theory. He is a professor at Lewis & Clark College and he blogs at Starts With a Bang, on ScienceBlogs and also on Forbes.com since 2016.

By contrast with the ‘Orphan Black’ book, the tone is upbeat. It’s one of the reasons Siegel appreciates Star Trek in its various iterations,

As we look at the real-life science and technology behind the greatest advances anticipated by Star Trek, it’s worth remembering that the greatest legacy of the show is its message of hope. The future can be brighter and better than our past or present has ever been. It’s our continuing mission to make it so. (p. 6)

All the quoted text is from “Star Trek Treknology; The Science of Star Trek from Tricorders to Warp Drive” by Ethan Siegel (hard cover published October 15, 2017).

This book, too, has one of those copyright notices that fail to note you don’t need permission when copying part of the text qualifies as fair dealing. While it does have an index, it’s on the anemic side and, damningly, there is neither a bibliography nor reference notes of any sort. If Siegel hadn’t done such a good writing job, I might not have been so distressed.

For example, it’s frustrating for someone like me who’s been trying to get information on cortical/neural implants and finds this heretofore unknown and intriguing tidbit in Siegel’s text,

In 2016, the very first successful cortical implant into a patient with ALS [amyotrophic lateral sclerosis] was completed, marking the very first fully implanted brain-computer interface in a human being. (p. 180)

Are we talking about the Australian team, which announced human clinical trials for their neural/cortical implant (my February 15, 2016 posting), or preliminary work by a team in Ohio (US), which later (?) announced a successful implant for a quadriplegic (also known as tetraplegic) patient who was then able to move hands and fingers (see my April 19, 2016 posting)? Or is it an entirely different team?

One other thing: I was a bit surprised to see no mention of quantum or neuromorphic computing in the chapter on computing. I don’t believe either was part of the Star Trek universe, but both are important developments, and Siegel makes a point, on at least a few occasions, of contrasting present-day research with what was and wasn’t ‘predicted’ by Star Trek.

As for the ‘predictions’, there’s a longstanding interplay between storytellers and science and sometimes it can be a little hard to figure out which came first. I think Siegel might have emphasized that give and take a bit more.

Regardless of my nitpicking, Siegel is a good writer and managed to put an astonishing amount of ‘educational’ material into a lively and engaging book. That is not easy.

Final thoughts

I enjoyed both books and am very excited to see grounded science being presented along with the fictional stories of both universes (Star Trek and Orphan Black).

Yes, both books have their shortcomings (harsh copyright notices, no index, no bibliography, no reference notes, etc.) but in the main they offer adults who are sufficiently motivated a wealth of current scientific and technical information along with some elucidation of ethical issues.

A new wave of physics: electrons flow like liquid in graphene

Unfortunately, I couldn’t find an artist credit for the graphic (I really like it) which accompanies the news about new physics and graphene,

Courtesy: University of Manchester

From an Aug. 22, 2017 news item on phys.org (Note: A link has been removed),

A new understanding of the physics of conductive materials has been uncovered by scientists observing the unusual movement of electrons in graphene.

Graphene is many times more conductive than copper thanks, in part, to its two-dimensional structure. In most metals, conductivity is limited by crystal imperfections which cause electrons to frequently scatter like billiard balls when they move through the material.

Now, observations in experiments at the National Graphene Institute have provided essential understanding as to the peculiar behaviour of electron flows in graphene, which need to be considered in the design of future Nano-electronic circuits.

An Aug. 22, 2017 University of Manchester press release, which originated the news item, delves further into the research (Note: Links have been removed),

Appearing today in Nature Physics, researchers at The University of Manchester, in collaboration with theoretical physicists led by Professor Marco Polini and Professor Leonid Levitov, show that Landauer’s fundamental limit can be breached in graphene. Even more fascinating is the mechanism responsible for this.

Last year, a new field in solid-state physics termed ‘electron hydrodynamics’ generated huge scientific interest. Three different experiments, including one performed by The University of Manchester, demonstrated that at certain temperatures, electrons collide with each other so frequently they start to flow collectively like a viscous fluid.

The new research demonstrates that this viscous fluid is even more conductive than ballistic electrons. The result is rather counter-intuitive, since typically scattering events act to lower the conductivity of a material, because they inhibit movement within the crystal. However, when electrons collide with each other, they start working together and ease current flow.

This happens because some electrons remain near the crystal edges, where momentum dissipation is highest, and move rather slowly. At the same time, they protect neighbouring electrons from colliding with those regions. Consequently, some electrons become super-ballistic as they are guided through the channel by their friends.

Sir Andre Geim said: “We know from school that additional disorder always creates extra electrical resistance. In our case, disorder induced by electron scattering actually reduces rather than increase resistance. This is unique and quite counterintuitive: Electrons when make up a liquid start propagating faster than if they were free, like in vacuum”.

The researchers measured the resistance of graphene constrictions, and found it decreases upon increasing temperature, in contrast to the usual metallic behaviour expected for doped graphene.

By studying how the resistance across the constrictions changes with temperature, the scientists revealed a new physical quantity which they called the viscous conductance. The measurements allowed them to determine electron viscosity to such a high precision that the extracted values showed remarkable quantitative agreement with theory.
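For anyone who wants to play with these scalings, here’s a minimal sketch in Python. It’s my own toy model, not anything from the paper: the ballistic term follows the Landauer picture (conductance proportional to constriction width), the viscous term is assumed to scale as width squared divided by a viscosity falling as 1/T², and the prefactors are arbitrary. Still, it reproduces the headline behaviour: total conductance exceeding the ballistic limit, and resistance falling as temperature rises.

```python
# Toy model of "superballistic" conductance through a constriction.
# Scalings only; the prefactors are arbitrary illustrative assumptions,
# not numbers from the Nature Physics paper.

def conductance(width, temperature, g_ball=1.0, a=0.01):
    ballistic = g_ball * width            # Landauer picture: ~ number of modes
    viscosity = 1.0 / temperature**2      # assumed electron-electron scaling
    viscous = a * width**2 / viscosity    # hydrodynamic channel: ~ w^2 / nu
    return ballistic + viscous            # total exceeds the ballistic limit

for T in (10, 100, 300):
    print(f"T = {T:3d}: conductance = {conductance(1.0, T):.1f}"
          f" (ballistic limit = 1.0)")
```

In this caricature the resistance (the inverse of the conductance) keeps dropping as the sample warms, which is the counter-intuitive trend the Manchester group measured.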

Here’s a link to and a citation for the paper,

Superballistic flow of viscous electron fluid through graphene constrictions by R. Krishna Kumar, D. A. Bandurin, F. M. D. Pellegrino, Y. Cao, A. Principi, H. Guo, G. H. Auton, M. Ben Shalom, L. A. Ponomarenko, G. Falkovich, K. Watanabe, T. Taniguchi, I. V. Grigorieva, L. S. Levitov, M. Polini, & A. K. Geim. Nature Physics (2017) doi:10.1038/nphys4240 Published online 21 August 2017

This paper is behind a paywall.

Bubble physics could explain language patterns

According to University of Portsmouth physicist James Burridge, determining how linguistic dialects form is a question for physics and mathematics. Here’s more about Burridge and his latest work on the topic from a July 24, 2017 University of Portsmouth press release (also on EurekAlert),

Language patterns could be predicted by simple laws of physics, a new study has found.

Dr James Burridge from the University of Portsmouth has published a theory using ideas from physics to predict where and how dialects occur.

He said: “If you want to know where you’ll find dialects and why, a lot can be predicted from the physics of bubbles and our tendency to copy others around us.

“Copying causes large dialect regions where one way of speaking dominates. Where dialect regions meet, you get surface tension. Surface tension causes oil and water to separate out into layers, and also causes small bubbles in a bubble bath to merge into bigger ones.

“The bubbles in the bath are like groups of people – they merge into the bigger bubbles because they want to fit in with their neighbours.

“When people speak and listen to each other, they have a tendency to conform to the patterns of speech they hear others using, and therefore align their dialects. Since people typically remain geographically local in their everyday lives, they tend to align with those nearby.”

Dr Burridge from the University’s department of mathematics departs from the existing approaches in studying dialects to formulate a theory of how country shape and population distribution play an important role in how dialect regions evolve.

Traditional dialectologists use the term ‘isogloss’ to describe a line on a map marking an area which has a distinct linguistic feature.

Dr Burridge said: “These isoglosses are like the edges of bubbles – the maths used to describe bubbles can also describe dialects.

“My model shows that dialects tend to move outwards from population centres, which explains why cities have their own dialects. Big cities like London and Birmingham are pushing on the walls of their own bubbles.

“This is why many dialects have a big city at their heart – the bigger the city, the greater this effect. It’s also why new ways of speaking often spread outwards from a large urban centre.

“If people live near a town or city, we assume they experience more frequent interactions with people from the city than with those living outside it, simply because there are more city dwellers to interact with.

His model also shows that language boundaries get smoother and straighter over time, which stabilises dialects.

Dr Burridge’s research is driven by a long-held interest in spatial patterns and the idea that humans and animal behaviour can evolve predictably. His research has been funded by the Leverhulme Trust.

Here’s an image illustrating language distribution in the UK,

Caption: These maps show a simulation of three language variants that are initially distributed throughout Great Britain in a random pattern. As time passes (left to right), the boundaries between language variants tend to shorten in length. One can also see evidence of boundary lines fixing to river inlets and other coastal indentations. Credit: James Burridge, University of Portsmouth

Burridge has written an Aug. 2, 2017 essay for The Conversation which delves into the history of using physics and mathematics to understand social systems and further explains his own theory (Note: Links have been removed),

What do the physics of bubbles have in common with the way you and I speak? Not a lot, you might think. But my recently published research uses the physics of surface tension (the effect that determines the shape of bubbles) to explore language patterns – where and how dialects occur.

This connection between physical and social systems may seem surprising, but connections of this kind have a long history. The 19th century physicist Ludwig Boltzmann spent much of his life trying to explain how the physical world behaves based on some simple assumptions about the atoms from which it is made. His theories, which link atomic behaviour to the large scale properties of matter, are called “statistical mechanics”. At the time, there was considerable doubt that atoms even existed, so Boltzmann’s success is remarkable because the detailed properties of the systems he was studying were unknown.

The idea that details don’t matter when you are considering a very large number of interacting agents is tantalising for those interested in the collective behaviour of large groups of people. In fact, this idea can be traced back to another 19th century great, Leo Tolstoy, who argued in War and Peace:

“To elicit the laws of history we must leave aside kings, ministers, and generals, and select for study the homogeneous, infinitesimal elements which influence the masses.”

Mathematical history

Tolstoy was, in modern terms, advocating a statistical mechanics of history. But in what contexts will this approach work? If we are guided by what worked for Boltzmann, then the answer is quite simple. We need to look at phenomena which arise from large numbers of interactions between individuals rather than phenomena imposed from above by some mighty ruler or political movement.

To test a physical theory, one just needs a lab. But a mathematical historian must look for data that have already been collected, or can be extracted from existing sources. An ideal example is language dialects. For centuries, humans have been drawing maps of the spatial domains in which they live, creating records of their languages, and sometimes combining the two to create linguistic atlases. The geometrical picture which emerges is fascinating. As we travel around a country, the way that people use language, from their choices of words to their pronunciation of vowels, changes. Researchers quantify differences using “linguistic variables”.

For example, in 1950s England, the ulex shrub went by the name “gorse”, “furze”, “whim” or “broom” depending on where you were in the country. If we plot where these names are used on a map, we find large regions where one name is in common use, and comparatively narrow transition regions where the most common word changes. Linguists draw lines, called “isoglosses”, around the edges of regions where one word (or other linguistic variable) is common. As you approach an isogloss, you find people start to use a different word for the same thing.

A similar effect can be seen in sheets of magnetic metal where individual atoms behave like miniature magnets which want to line up with their neighbours. As a result, large regions appear in which the magnetic directions of all atoms are aligned. If we think of magnetic direction as an analogy for choice of linguistic variant – say up is “gorse” and down is “broom” – then aligning direction is like beginning to use the local word for ulex.

Linguistic maths

I made just one assumption about language evolution: that people tend to pick up ways of speaking which they hear in the geographical region where they spend most of their time. Typically, this region will be a few miles or tens of miles wide and centred on their home, but its shape may be skewed by the presence of a nearby city which they visit more often than the surrounding countryside.

My equations predict that isoglosses tend to get pushed away from cities, and drawn towards parts of the coast which are indented, like bays or river mouths. The city effect can be explained by imagining you live near an isogloss at the edge of a city. Because there are a lot more people on the city side of the isogloss, you will tend to have more conversations with them than with rural people living on the other side. For this reason, you will probably start using the linguistic variable used in the city. If lots of people do this, then the isogloss will move further out into the countryside.

My one simple assumption – that people pick up local ways of speaking – leading to equations which describe the physics of bubbles, allowed me to gain new insight into the formation of language patterns. Who knows what other linguistic patterns mathematics could explain?

Burridge’s paper can be found here,

Spatial Evolution of Human Dialects by James Burridge. Phys. Rev. X 7, 031008 Vol. 7, Iss. 3 — July – September 2017 Published 17 July 2017

This paper is open access and it is quite readable as these things go. In other words, you may not understand all of the mathematics, physics, or linguistics but it is written so that a relatively well informed person should be able to understand the basics if not all the nuances.

Congratulate China on the world’s first quantum communication network

China has some exciting news about the world’s first quantum network; it’s due to open in late August 2017 so you may want to have your congratulations in order for later this month.

An Aug. 4, 2017 news item on phys.org makes the announcement,

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

An Aug. 3, 2017 CORDIS (Community Research and Development Research Information Service [for the European Commission]) press release, which originated the news item, provides more detail about the technology,

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2 000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’ which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million into quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…
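The tamper-evidence that makes QKD attractive can be illustrated with a toy version of the textbook BB84 protocol. To be clear, this is an idealized caricature of intercept-and-resend eavesdropping, not the Jinan network’s actual implementation: an eavesdropper guesses the wrong measurement basis about half the time, and that disturbance shows up as roughly a 25 per cent error rate when sender and receiver compare a sample of their sifted key.

```python
import random

# Toy BB84 sketch of the tamper-evidence described above. An intercept-
# and-resend eavesdropper must guess each photon's measurement basis;
# wrong guesses disturb the state, and the disturbance surfaces as
# errors when sender and receiver compare part of their sifted key.
# An idealised textbook illustration, not the Jinan protocol stack.

def bb84_error_rate(n_rounds, eavesdrop):
    random.seed(42)
    errors = kept = 0
    for _ in range(n_rounds):
        bit = random.randint(0, 1)              # Alice's raw bit
        a_basis = random.randint(0, 1)          # Alice's encoding basis
        state_bit, state_basis = bit, a_basis
        if eavesdrop:                           # intercept-and-resend attack
            e_basis = random.randint(0, 1)
            if e_basis != state_basis:          # wrong basis: outcome random
                state_bit = random.randint(0, 1)
            state_basis = e_basis               # Eve re-sends in her basis
        b_basis = random.randint(0, 1)          # Bob's measurement basis
        if b_basis == state_basis:
            measured = state_bit
        else:
            measured = random.randint(0, 1)     # basis mismatch: random click
        if b_basis == a_basis:                  # sifting: keep matching rounds
            kept += 1
            errors += (measured != bit)
    return errors / kept

print(f"error rate, no eavesdropper:   {bb84_error_rate(20000, False):.3f}")
# exactly zero: undisturbed states never produce errors after sifting
print(f"error rate, with eavesdropper: {bb84_error_rate(20000, True):.3f}")
# close to 0.25, which is what alerts the legitimate users
```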

Prior to this latest announcement, Chinese scientists had published work about quantum satellite communications, a development that makes their imminent terrestrial quantum network possible. Gabriel Popkin wrote about the quantum satellite in a June 15, 2017 article for Science magazine,

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing, accompanied by a video and a graphic.

Russell Brandom writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, readymade satellites called Cubesats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

Hollywood and physics: which movie gets it right?

Colin Hunter has written a May 18, 2017 posting for the Perimeter Institute’s (Waterloo, Ontario, Canada) Slice of Pi blog about Hollywood and physics,

Sometimes filmmakers base plotlines and special effects on well-established science. Sometimes they’re even prescient, anticipating or inspiring later scientific and technological advances (remember when a videophone was the stuff of Jetsons-like fantasy?). [For anyone unfamiliar with The Jetsons cartoon]

Other times, filmmakers take a bit (or a lot) of creative license with science, resulting in scenes or entire films that are considerably more “fi” than “sci.” We focused a scientific lens on some of our favourite films (and some duds) and graded them for accuracy.

What did we miss? Comment below, or tweet your favourite movie-science wins and fails to @Perimeter.

Watch a great MinutePhysics video explaining time dilation and the so-called twins paradox.
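For readers who want numbers to go with the video, the twin calculation itself is one line of arithmetic; the ten-year, 0.8c round trip below is my own illustrative choice of numbers.

```python
import math

# Back-of-envelope twin "paradox" arithmetic: the travelling twin's
# elapsed time is the Earth-frame time divided by the Lorentz factor.

def lorentz_factor(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

earth_years = 10.0     # trip duration as measured on Earth
v = 0.8                # cruise speed as a fraction of the speed of light
traveller_years = earth_years / lorentz_factor(v)

print(f"gamma = {lorentz_factor(v):.3f}")              # 1.667
print(f"traveller ages {traveller_years:.1f} years")   # 6.0 (vs. 10 on Earth)
```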

I encourage you to read the whole piece. It’s an easy read.

3D picture language for mathematics

There’s a new, 3D picture language for mathematics called ‘quon’ according to a March 3, 2017 news item on phys.org,

Galileo called mathematics the “language with which God wrote the universe.” He described a picture-language, and now that language has a new dimension.

The Harvard trio of Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, postdoctoral fellow Zhengwei Liu, and researcher Alex Wozniakowski has developed a 3-D picture-language for mathematics with potential as a tool across a range of topics, from pure math to physics.

Though not the first pictorial language of mathematics, the new one, called quon, holds promise for being able to transmit not only complex concepts, but also vast amounts of detail in relatively simple images. …

A March 2, 2017 Harvard University news release by Peter Reuell, which originated the news item, provides more context for the research,

“It’s a big deal,” said Jacob Biamonte of the Quantum Complexity Science Initiative after reading the research. “The paper will set a new foundation for a vast topic.”

“This paper is the result of work we’ve been doing for the past year and a half, and we regard this as the start of something new and exciting,” Jaffe said. “It seems to be the tip of an iceberg. We invented our language to solve a problem in quantum information, but we have already found that this language led us to the discovery of new mathematical results in other areas of mathematics. We expect that it will also have interesting applications in physics.”

When it comes to the “language” of mathematics, humans start with the basics — by learning their numbers. As we get older, however, things become more complex.

“We learn to use algebra, and we use letters to represent variables or other values that might be altered,” Liu said. “Now, when we look at research work, we see fewer numbers and more letters and formulas. One of our aims is to replace ‘symbol proof’ by ‘picture proof.’”

The new language relies on images to convey the same information that is found in traditional algebraic equations — and in some cases, even more.

“An image can contain information that is very hard to describe algebraically,” Liu said. “It is very easy to transmit meaning through an image, and easy for people to understand what they see in an image, so we visualize these concepts and instead of words or letters can communicate via pictures.”

“So this pictorial language for mathematics can give you insights and a way of thinking that you don’t see in the usual, algebraic way of approaching mathematics,” Jaffe said. “For centuries there has been a great deal of interaction between mathematics and physics because people were thinking about the same things, but from different points of view. When we put the two subjects together, we found many new insights, and this new language can take that into another dimension.”

In their most recent work, the researchers moved their language into a more literal realm, creating 3-D images that, when manipulated, can trigger mathematical insights.

“Where before we had been working in two dimensions, we now see that it’s valuable to have a language that’s Lego-like, and in three dimensions,” Jaffe said. “By pushing these pictures around, or working with them like an object you can deform, the images can have different mathematical meanings, and in that way we can create equations.”

Among their pictorial feats, Jaffe said, are the complex equations used to describe quantum teleportation. The researchers have pictures for the Pauli matrices, which are fundamental components of quantum information protocols. This shows that the standard protocols are topological, and also leads to discovery of new protocols.

“It turns out one picture is worth 1,000 symbols,” Jaffe said.

“We could describe this algebraically, and it might require an entire page of equations,” Liu added. “But we can do that in one picture, so it can capture a lot of information.”

Having found a fit with quantum information, the researchers are now exploring how their language might also be useful in a number of other subjects in mathematics and physics.

“We don’t want to make claims at this point,” Jaffe said, “but we believe and are thinking about quite a few other areas where this picture-language could be important.”
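For readers curious what the "entire page of equations" version looks like, here is a minimal numpy sketch of the standard textbook teleportation protocol — not code from the paper, just an illustration of the Pauli-matrix algebra that the quon pictures compress into a single diagram. Alice's two measurement bits pick out one of four Pauli corrections (identity, X, Z, or ZX), and each branch recovers the original state exactly.

```python
import numpy as np

# Single-qubit gates: Pauli matrices and the Hadamard.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Tensor product of several operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

P0 = np.array([[1, 0], [0, 0]], dtype=complex)  # projector |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # projector |1><1|

# An arbitrary normalized qubit state to teleport.
psi = np.array([0.6, 0.8j], dtype=complex)

# Alice and Bob share a Bell pair (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)  # qubit order: (psi, Alice's half, Bob's half)

# Alice entangles psi with her half of the pair: CNOT, then Hadamard.
cnot01 = kron(P0, I, I) + kron(P1, X, I)
state = kron(H, I, I) @ (cnot01 @ state)

# For each of Alice's four measurement outcomes (m0, m1), Bob's
# correction is the Pauli operator Z^m0 X^m1 -- and it always
# recovers psi exactly.
amps = state.reshape(2, 2, 2)
for m0 in (0, 1):
    for m1 in (0, 1):
        bob = amps[m0, m1, :]  # Bob's (unnormalized) post-measurement qubit
        correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        recovered = correction @ bob
        recovered = recovered / np.linalg.norm(recovered)
        assert np.allclose(recovered, psi)

print("all four measurement branches teleport psi")
```

Even this bare-bones version takes a few dozen lines of linear algebra; the researchers' claim is that a single manipulable 3-D picture carries the same content, with the topology of the picture doing the bookkeeping.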

Sadly, there are no artistic images illustrating quon, but this excerpt is from the paper,

An n-quon is represented by n hemispheres. We call the flat disc on the boundary of each hemisphere a boundary disc. Each hemisphere contains a neutral diagram with four boundary points on its boundary disk. The dotted box designates the internal structure that specifies the quon vector. For example, the 3-quon is represented as

Courtesy: PNAS and Harvard University

I gather the term ‘quon’ is meant to suggest quantum particles.

Here’s a link and a citation for the paper,

Quon 3D language for quantum information by Zhengwei Liu, Alex Wozniakowski, and Arthur M. Jaffe. Proceedings of the National Academy of Sciences (PNAS) vol. 114 no. 10, March 7, 2017 (published online before print February 6, 2017). DOI: 10.1073/pnas.1621345114

This paper appears to be open access.