A year later, I still don’t know what came over me, but during the last few days of 2019 I got the idea that I could write a 10-year (2010–2019) review of science culture in Canada. Somehow, two and a half months later, I managed to publish my 25,000+ word multi-part series.
COVID-19 was mentioned and featured here a number of times throughout the year. I’m highlighting two of those postings. The first is a June 24, 2020 posting titled, Tiny sponges lure coronavirus away from lung cells. It’s a therapeutic approach that is not a vaccine but, rather, a way of neutralizing the virus. The idea is that the nanosponge is coated with the material that the virus seeks in a human cell. Once the virus locks onto the sponge, it is unable to seek out cells. If I remember rightly, the sponges, along with the virus, are disposed of by the body’s usual processes.
The second COVID-19 posting I’m highlighting is my first editorial opinion ever accepted by the Canadian Science Policy Centre (CSPC). I republished the piece here in a May 15, 2020 posting, which included all of my references. However, the magazine version is more attractively displayed in the CSPC Featured Editorial Series Volume 1, Issue 2, May 2020 PDF on pp. 31–32.
Artist Joseph Nechvatal reached out to me earlier this year regarding his viral symphOny (2006-2008), a 1-hour-40-minute collaborative electronic noise music symphony. It was featured in an April 7, 2020 posting, where it seemed strangely à propos during a pandemic even though the work was focused on viral artificial life. You can access it for free at https://archive.org/details/ViralSymphony, though the Internet Archive, where it is stored, is requesting donations.
Also on a vaguely related COVID-19 note, there’s my December 7, 2020 posting titled, Digital aromas? And a potpourri of ‘scents and sensibility’. As regular readers may know, I have a longstanding interest in scent and fragrances. The COVID-19 part of the posting (it’s not about losing your sense of smell) is in the subsection titled, Smelling like an old book. Apparently some folks are missing the smell of bookstores and Powell’s Books has responded to that need with a new fragrance.
For anyone who may have missed it, I wrote an update on the CRISPR twin affair in my July 28, 2020 posting titled, July 2020 update on Dr. He Jiankui (the CRISPR twins) situation.
Finishing off 2020, I wrote a commentary (mostly focused on the Canada chapter) about a book titled, Communicating Science: A Global Perspective in my December 10, 2020 posting. The book offers science communication perspectives from 39 different countries.
Things future
I have no doubt there will be delights ahead but, as they are in the realm of discovery, they are at this point unknown.
My future plans include a posting about trust and governance. This has come about since writing my Dec. 29, 2020 posting titled, “Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions” and stumbling across a reference to a December 15, 2020 article by Dr. Andrew Maynard titled, Why Trustworthiness Matters in Building Global Futures. Maynard’s focus was on a newly published report titled, Trust & Tech Governance.
I will also be considering the problematic aspects of science communication and my own shortcomings. On the heels of reading more than usually forthright discussions of racism in Canada across multiple media platforms, I was horrified to discover I had featured, without any caveats, work by a man who was deeply problematic with regard to his beliefs about race. He was a eugenicist, as well as a zoologist, naturalist, philosopher, physician, professor, marine biologist, and artist who coined many terms in biology, including ecology, phylum, phylogeny, and Protista; see his Wikipedia entry.
A Dec. 23, 2020 news release on EurekAlert (Scientists at Tel Aviv University develop new gene therapy for deafness) and a December 2020 article by Sarah Zhang for The Atlantic about prenatal testing and who gets born have me wanting to further explore how genetic testing and therapies will affect our concepts of ‘normality’. Fingers crossed I’ll be able to get Dr. Gregor Wolbring to answer a few questions for publication here. (Gregor is a tenured associate professor [in Alberta, Canada] at the University of Calgary’s Cumming School of Medicine and a scholar in the field of ‘ableism’. He is deeply knowledgeable about notions of ability vs disability.)
As 2021 looms, I’m hopeful that I’ll be featuring more art/sci (or sciart) postings, which is my segue to a more hopeful note about what 2021 will bring us,
It’s an apple! This is one of the many images embedded in Annie Ewbank’s January 6, 2020 article about rare and beautiful apples for Atlas Obscura (featured on getpocket.com),
In early 2020, inside a bright Brooklyn gallery that is plastered in photographs of apples, William Mullan is being besieged with questions.
A writer is researching apples for his novel set in post-World War II New York. An employee of a fruit-delivery company, who covetously eyes the round table on which Mullan has artfully arranged apples, asks where to buy his artwork.
But these aren’t your Granny Smith’s apples. A handful of Knobbed Russets slumping on the table resemble rotting masses. Despite their brown, wrinkly folds, they’re ripe, with clean white interiors. Another, the small Roberts Crab, when sliced by Mullan through the middle to show its vermillion flesh, looks less like an apple than a Bing cherry. The entire lineup consists of apples assembled by Mullan, who, by publishing his fruit photographs in a book and on Instagram, is putting the glorious diversity of apples in the limelight.
From apples to money: COVID-19 minted new fortunes in 2020, as this excerpt from a Forbes article on the year’s new healthcare billionaires describes,

Nearly a year after the first case of Covid-19 was reported in the Chinese city of Wuhan in December 2019, the world could be nearing the beginning of the end of a pandemic that has killed more than 1.7 million people. Vaccination for Covid-19 is underway in the United States and the United Kingdom, and promising antibody treatments could help doctors fight back against the disease more effectively. Tied to those breakthroughs: a host of new billionaires who have emerged in 2020, their fortunes propelled by a stock market surge as investors flocked to companies involved in the development of vaccines, treatments, medical devices and everything in between.
Altogether, Forbes found 50 new billionaires in the healthcare sector in 2020. …
Carl Hansen
Net worth: $2.9 billion
Citizenship: Canada
Source of wealth: AbCellera
Hansen is the CEO and cofounder of Vancouver-based AbCellera, a biotech firm that uses artificial intelligence and machine learning to identify the most promising antibody treatments for diseases. He founded the company in 2012. Until 2019 he also worked as a professor at the University of British Columbia, but shifted to focus full-time on AbCellera. That decision seems to have paid off, and Hansen’s 23% stake earned him a spot in the billionaire club after AbCellera’s successful listing on the Nasdaq on December 11. The U.S. government has ordered 300,000 doses of bamlanivimab, an antibody AbCellera discovered in partnership with Eli Lilly that received FDA approval as a Covid-19 treatment in November [2020].
Here’s a little more about the company from a description I found,

AbCellera, a local biotechnology company founded at UBC, has developed a method that can search immune responses more deeply than any other technology. Using a microfluidic technology developed at the Michael Smith Laboratories, advanced immunology, protein chemistry, performance computing, and machine learning, AbCellera is changing the game for antibody therapeutics.
…
I believe a great deal of research that is commercialized was initially funded by taxpayers and I cannot recall any entrepreneurs here in Canada or elsewhere acknowledging that help in a big way. Should you be able to remember any comments of that type, please do let me know in the Comments.
Just prior to this financial bonanza, AbCellera was touting two new board members, John Montalbano on Nov. 18, 2020 and Peter Thiel on Nov. 19, 2020.
November 18, 2020 – AbCellera, a technology company that searches, decodes, and analyzes natural immune systems to find antibodies that can be developed to prevent and treat disease, today announced the appointment of John Montalbano to its Board of Directors. Mr. Montalbano will serve as the Chair of the Audit Committee of the Board of Directors.
…
Mr. Montalbano is Principal of Tower Beach Capital Ltd. and serves on the boards of the Canada Pension Plan Investment Board, Aritzia Inc., and the Asia Pacific Foundation of Canada. His previous appointments include the former Vice Chair of RBC Wealth Management and CEO of RBC Global Asset Management (RBC GAM). When Mr. Montalbano retired as CEO of RBC GAM in 2015, it was among the largest 50 asset managers worldwide with $370 billion under management and offices in Canada, the United States, the United Kingdom, and Hong Kong.
…
Montalbano has been on this blog before in a Nov. 4, 2015 posting. If you scroll down to the subsection “Justin Trudeau and his British Columbia connection,” you’ll see mention of Montalbano’s unexpected exit as member and chair of UBC’s board of governors.
Here’s more from the Nov. 19, 2020 news release announcing Thiel’s appointment,

AbCellera, a technology company that searches, decodes, and analyzes natural immune systems to find antibodies that can be developed to prevent and treat disease, today announced the appointment of Peter Thiel to its Board of Directors.
“Peter has been a valued AbCellera investor and brings deep experience in scaling global technology companies,” said Carl Hansen, Ph.D., CEO of AbCellera. “We share his optimistic vision for the future, faith in technological progress, and long-term view on company building. We’re excited to have him join our board and look forward to working with him over the coming years.”
Mr. Thiel is a technology entrepreneur, investor, and author. He was a co-founder and CEO of PayPal, a company that he took public before it was acquired by eBay for $1.5 billion in 2002. Mr. Thiel subsequently co-founded Palantir Technologies in 2004, where he continues to serve as Chairman. As a technology investor, Mr. Thiel made the first outside investment in Facebook, where he has served as a director since 2005, and provided early funding for LinkedIn, Yelp, and dozens of technology companies. He is a partner at Founders Fund, a Silicon Valley venture capital firm that has funded companies including SpaceX and Airbnb.
“AbCellera is executing a long-term plan to make biotech move faster. I am proud to help them as they raise our expectations of what’s possible,” said Mr. Thiel.
…
Some Canadian business journalists got very excited over Thiel’s involvement in particular. Perhaps they were anticipating this December 10, 2020 AbCellera news release announcing an initial public offering. Much money seems to have been made, not least by Mr. Montalbano, Mr. Thiel, and Mr. Hansen.
As for Mr. Thiel and taxes, I don’t know for certain but can infer that he’s not a big fan from this portion of his Wikipedia entry,
…
Thiel is an ideological libertarian,[108] though more recently he has espoused support for national conservatism[109] and criticized libertarian attitudes towards free trade[110] and big tech.[109]
…
My understanding is that libertarians object to taxes and prefer as little government structure as possible.
In any event, it seems that COVID-19 has been quite the bonanza for some people. If you’re curious, you can find out more about AbCellera here.
On to Avo Media and how it has contributed to the AbCellera story.
Avo Media, The Tyee, and Science Telephone
Vancouver (Canada)-based Avo Media describes itself this way on its homepage,
We make documentary, educational, and branded content.
We specialize in communicating science and other complex concepts in a clear, engaging way.
I think that description boils down to videos and podcasts. There’s no mention of AbCellera as one of their clients but they do list The Tyee, which hosts a video about AbCellera in a July 1, 2020 posting (The Vancouver Company Turning Blood into a COVID Treatment: A Tyee Video) by Mashal Butt,
The world anxiously awaits a vaccine to end the pandemic. But having a treatment could save countless lives in the meantime.
This Tyee video explains how Vancouver biotech company AbCellera, with funding from the federal government, is racing to develop an antibody-based therapy treatment as quickly as possible.
Experts — immunologist Ralph Pantophlet at Simon Fraser University, and co-founder and COO of AbCellera Véronique Lecault — explain what an antibody treatment is and how it can protect us from COVID-19.
…
It is not a cure, but it can help save lives as we wait for the cure.
This video was made in partnership with Vancouver’s Avo Media team of Jesse Lupini, Koby Michaels and Lucas Kavanagh.
It’s a video with a good explanation of AbCellera’s research. Interestingly, the script notes that the Canadian federal government gave the company over $175M for its COVID-19 work.
Why The Tyee?
While Avo Media is a local company, I notice that Jessica Yingling is listed in the final credits for the video. Yingling founded Little Dog Communications, which is based in both California and Utah. If you read the AbCellera news releases, you’ll see that she’s the media contact.
Is there a more unlikely media outlet to feature a stock market star, one which will probably make billions of dollars from this pandemic, than The Tyee? Politically, its outlook could be described as the polar opposite of libertarian ideology.
I wonder what the thought process was for the media placement and how someone based in San Diego (check out her self-description on her Twitter feed @jyingling) came up with the idea.
Science Telephone
Avo Media’s latest project seems to be a podcast series, Science Telephone (this link is to the Spotify platform). Here’s more about the series and the various platforms where episodes can be found (from the Avo Media, Our Work, Science Telephone webpage),
Science Telephone is a new podcast that tests how well the science holds up when comedians get their hands onto it
Laugh while you learn, as the classic game of telephone is repurposed for scientific research. Each episode, one scientist explains their research to a comedian, who then has to explain it to the next comedian, and so on until it’s almost unrecognizable. See what sticks and what changes, with a rotating cast of brilliant scientists and hysterical comedians.
See a preview of the show below, or visit www.sciencetelephone.com to subscribe or listen to past episodes.
Moving on to AI and governance: here’s more about a special issue of Information Polity focused on algorithm transparency in government, from a news release,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.
Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.
Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.
“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”
The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.
“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”
The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.
For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”
At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”
“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.
This image illustrates the interplay among the micro-, meso-, and macro-level dynamics,
Here’s a link to, and a citation for, the special issue,
The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.
Two articles from the special issue were featured in the press release,
“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)
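As an aside: the excerpt above doesn’t specify which algorithm Dr. Ingrams applied, but here’s a minimal sketch of what one common unsupervised approach to clustering public comments might look like, assuming Python with scikit-learn; the sample comments are hypothetical stand-ins for the thousands submitted on the 2013 body-scanner regulation,

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical public comments; a real analysis would load thousands
# of submissions from the regulatory docket.
comments = [
    "Full body scanners are an invasion of privacy.",
    "Privacy concerns outweigh any security benefit.",
    "The scanners will make air travel safer for everyone.",
    "Security screening needs to be faster and more effective.",
    "What are the health effects of the imaging technology?",
    "Radiation exposure from these scanners worries me.",
]

# Represent each comment as a TF-IDF vector, ignoring common stop words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# Group the comments into a small number of topic clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Surface the most salient terms in each cluster so a reader can see,
# at a glance, which themes dominate the comments.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in centroid.argsort()[::-1][:3]]
    print(f"Cluster {i}: {', '.join(top_terms)}")
```

The value of such an exercise lies in that final loop: reducing an unreadable mass of submissions to a handful of dominant terms per cluster is what makes the comment process legible, which is the transparency gain Dr. Ingrams describes.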
An AI governance publication from the US’s Wilson Center
Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg
Abstract
In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:
AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.
Canadian government and AI
The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.
There is information out there but it’s scattered across various government initiatives and ministries and, above all, it is not easy to find. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the problem too. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)
Responsible use? Maybe not after 2019
First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?
For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and even consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?
What about the government’s digital service?
You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,
In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.
…
At the time, Simon was Director of Outreach at Code for Canada.
Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.
Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,
Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.
At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.
How it works
We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.
Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.
Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.
Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.
…
As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)
Does the Treasury Board of Canada have charge of responsible AI use?
I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.
The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.
…
It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022,
I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.
But isn’t there a Chief Information Officer for Canada?
Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect) stepped down in September 2019 to join a startup company according to an August 6, 2019 article by Mia Hunt for Global Government Forum,
Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.
“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.
He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.
He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]
Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),
Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.
The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.
Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.
Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.
Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”
…
Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System; now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.
Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?
I’m happy to hear that the situation where government employees had no certainty about their paycheques is getting better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found their paycheques might show the correct amount, significantly less than they were entitled to, or huge increases.
The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.
The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,
Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.
And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.
Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.
These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.
…
While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.
Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.
…
Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?
Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.
…
When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.
…
Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.
Instead, the Phoenix Pay system currently employs about 2,300. This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.
… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].
…
Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) into its operations.
The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.
Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).
After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:
Insights and predictive modelling
Machine interactions
Cognitive automation
…
PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.
I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,
Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.
…
Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.
To sum up, I could not find any information dated after March 2019 about Canada, its government, and plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I did find guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on the responsible use of AI, please let me know in the Comments.)
Canadian Institute for Advanced Research (CIFAR)
The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,
In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.
CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.
The objectives of the strategy are to:
Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.
Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.
Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.
Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.
Responsible AI at CIFAR
You can find Responsible AI in a webspace devoted to what they have called AI & Society. Here’s more from the homepage,
CIFAR is leading global conversations about AI’s impact on society.
The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.
Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),
BUILDING AN AI WORLD
Explore the landscape of global AI strategies.
Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.
…
I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of the responsible use of AI.
Final comments about Responsible AI in Canada and the new reports
I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.
I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.
The great unwashed
What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these early-stage conversations.
I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.
Let’s take this as an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016 the government hired consultants to fix the problems. On November 29, 2016 the government minister, Judy Foote, admitted a mistake had been made. In February 2017 the government hired consultants to establish what lessons they might learn. By February 15, 2018 the backlog of pay problems amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline’ for the Ottawa Citizen.
Do take a look at the timeline; there’s more to it than what I’ve written here and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.
Both Conservative and Liberal governments contributed to the Phoenix debacle but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,
The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.
Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.
In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix – for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.
Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.
…
Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.
…
Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”
…
Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”
…
The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.
Do we really need senior government bureaucrats?
I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,
When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19
As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.
With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.
“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”
Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”
…
It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.
Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.
By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.
“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”
China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”
It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.
But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.
The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.
However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.
…
The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.
Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.
Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.
Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
…
If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.
The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.
If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials). Pay special attention to Trudeau’s answer to the first question,
Responsible AI, eh?
Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.
Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.
Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to make true the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.
A lot of mistakes have been made, but a lot of good decisions have been made too.
Christmas Eve 2020: There haven’t been enough frog stories here this year and this December 21, 2020 essay by James B. Barnett, postdoctoral fellow in behavioural ecology at McMaster University (Ontario, Canada), for The Conversation helps fill that void,
…
Transparency may seem like the simplest form of camouflage, but in the last year, research has revealed new complexities behind what some animals do to vanish into their surroundings.
In my research, I have experienced first-hand how effective and sophisticated transparency really can be. That story begins on a dark night in the pouring rain in the thick of French Guiana’s tropical rainforest.
… we had been searching for: a tiny glass frog (Teratohyla midas), only a couple of centimetres long. These bizarre frogs have transparent skin that lets us look directly at their intestines and bones — even at their beating hearts.
Here’s an image of a glass frog, from an October 11, 2011 posting by Lindsay on the amphibianrescue.org blog,
Barnett’s December 21, 2020 essay explains why animals might find transparency a useful survival mechanism,
Glass frogs are an amazing sight but seeing one raises an interesting question: what’s the point of having transparent skin if your predators can still see you, or your internal organs?
…
… as light moves between transparent materials it can be bent and scattered. Think about how a straw in a glass of water appears to bend. This is refraction, and results from the different ways that light moves through air and water.
An animal’s body is made up of many organs and tissues, each with a different thickness, structure and chemical makeup. For the animal to be transparent, light must not be reflected, absorbed, scattered or refracted as it travels through each of these different layers.
…
On land, transparency is much rarer, but we are now learning that it can be very effective, sometimes in unexpected ways.
… [glass frogs’] transparency provides yet another form of camouflage. Unlike butterflies and moths with their transparent wings, these frogs have all of their internal organs on show, preventing true transparency. Instead, by combining their transparent underside with a green topside, the frogs become translucent: allowing some light through but not showing a clear image.
Their green colouring is a close match to that of a generic leaf, and the trick of translucency allows the frog to brighten or darken in keeping with the leaves around it. What’s more, the frogs’ legs are more translucent than their bodies, providing an extra advantage from a process called “edge diffusion,” further blending frog and leaf together in the eyes of its predators.
…
This is an interesting essay illustrated with images of ‘transparent’ species, including a ghost shrimp.
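As an aside, the straw-in-water example from the excerpt is textbook optics (Snell’s law) rather than anything particular to Barnett’s work,

$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$

Going from air (n ≈ 1.00) into water (n ≈ 1.33), a ray hitting the surface at 45° bends to about 32°, which is why the submerged part of the straw looks displaced. A transparent animal has to avoid exactly this kind of bending and scattering at every tissue boundary.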
I have one hesitation: the banner image for Barnett’s essay features what’s identified as a glasswing butterfly but Barnett references clearwing butterflies in his text. I did a brief search to see if the names are interchangeable and the answer is: often, but not on Wikipedia.
The search ‘clearwing butterflies’ will lead you to this Wikipedia entry on Cressida cressida, the clearwing swallowtail butterfly or big greasy, which is found in northern Australia, New Guinea, Maluku, and Timor,
Here’s the glasswing butterfly (Greta oto),
You can find the Greta oto Wikipedia entry here. The butterfly is found mainly in Central America and the northern region of South America.
To sum up, I think the term ‘clearwing’ is used to describe a lot of butterflies that have transparent or translucent wing qualities, as well as one specific butterfly, while ‘glasswing’ is used only for the Greta oto. (If anyone knows more, please feel free to let me know with a blog comment.)
Just after I stumbled across Barnett’s essay, I came across some research involving transparent worms. I’ve included a link to the research but do note that this is not about their ‘transparent’ quality but about how scientists have used different types of light to observe various metabolic processes. Here’s the December 22, 2020 Rice University news release on EurekAlert.
Earlier this week, RaftsTheGame (@TheRaftsGame) popped up on my Twitter feed, which was excellent timing since it’s getting close to Christmas in a year (2020) when I imagine a lot of people may be home and inclined to play games.
The people (rafts4biotech) who produced Rafts The Game (also called Rafts!) are involved in a research project funded by the European Union’s Horizon 2020 programme,
RAFTS! Create the bacterium of your dreams
Have you ever wondered what it would be like to be a genetic engineer? Now’s your chance to find out! Rafts! is a card game in which your aim is to design a bacterium while trying to overcome the challenges of research work.
If you are a researcher, look no further – Rafts! enables you to finally share your academic struggles with those friends who don’t have a clue of what you do!
THE GAME
In Rafts! you race to become the first scientist to create a bacterium that can do incredible things: cleaning an oil spill, detecting toxic compounds, producing blood for donations… Sounds like science fiction? More like a regular day at the lab!
But don’t get carried away – nobody said conducting research was easy! Hard work alone isn’t enough if you don’t have the right genetic instructions as well as a combination of money, time as well as food for your bacterium. You’ll have to collect all of these resources to finish the masterpiece that is your bacterium.
In this laboratory people play dirty, so don’t forget to keep an eye on your colleagues – they are all trying to achieve their objectives, and sometimes you will compete for the same resources. Don’t hesitate to strike back!
THE CARDS
There are three types of cards in Rafts!: action cards help you gather the resource cards that you will need to achieve the goal in your objective card. Bring your mouse on top of a card to know what it can do!
…
GET YOURS
Ready to become the biotech wizard you’ve always wanted to be? You’re just a click away from building the bacterium of a lifetime!
Download Rafts! for free and print it yourself – or let your local print shop do it for you:
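For the programmers in the audience, here’s a minimal sketch of how I read the card economy; the card names, costs, and amounts are my inventions for illustration, not the game’s actual values,

```python
# A toy model of the Rafts! loop as I understand it (values invented):
# action cards gather resources; the first player to cover their
# objective card's full resource cost builds their bacterium and wins.
from dataclasses import dataclass, field

@dataclass
class ObjectiveCard:
    name: str
    cost: dict  # resources needed, e.g. {"money": 2, "time": 1, "food": 1}

@dataclass
class Player:
    name: str
    resources: dict = field(default_factory=dict)

    def gather(self, resource, amount=1):
        """Play an action card to collect one resource."""
        self.resources[resource] = self.resources.get(resource, 0) + amount

    def has_built(self, objective):
        """True once every resource in the objective's cost is covered."""
        return all(self.resources.get(r, 0) >= n for r, n in objective.cost.items())

oil_eater = ObjectiveCard("Oil-spill cleaner", {"money": 2, "time": 1, "food": 1})
player = Player("you")
for r in ("money", "money", "time", "food"):
    player.gather(r)
print(player.has_built(oil_eater))  # True - bacterium built
```

The sabotage (‘strike back’) cards would presumably subtract from an opponent’s resources, but the core loop seems to be a resource race.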
For anyone curious about the source for the game, here’s a bit about rafts4biotech, from the homepage,
Engineering bacterial lipid rafts to optimise industrial processes
Context
Bacteria are used in the biotechnology industry to produce a wide range of valuable compounds. However, the performance of these microorganisms in the demanding industrial conditions is limited by the toxicity of some compounds and the complex metabolic interactions that occur within the bacterial cells.
Challenge
Generating new synthetic microorganisms that will solve productivity hurdles and yield a great variety of economy-value compounds. These modified strains will be used as standardised microbial chassis platforms to fit industry needs.
Solution
The R4B solution relies on confining the production of compounds to specific areas of the microbe’s membrane called lipid rafts. These recently discovered regions present an ideal setting that avoids interference with bacterial metabolism and viability.
…
Given that at least one of the COVID-19 vaccines (Pfizer-BioNTech?) is wrapped in lipid nanoparticles and, now, with this mention of lipids, it seemed like a good idea (for me) to learn about lipids. Here’s what I found in the definition for lipid in The Free Dictionary,
a group of substances comprising fatty, greasy, oily, and waxy compounds that are insoluble in water and soluble in nonpolar solvents, such as hexane, ether, and chloroform.
This image is lovely and I’m fairly sure it’s an illustration rather than a real photodetection system. Regardless, an Oct. 21, 2020 news item on Nanowerk describes the research into producing a real 3D hemispheric photodetector for biomedical imaging (Note: A link has been removed),
Purdue University innovators are taking cues from nature to develop 3D photodetectors for biomedical imaging.
The researchers used some architectural features from spider webs to develop the technology. Spider webs typically provide excellent mechanical adaptability and damage-tolerance against various mechanical loads such as storms.
“We employed the unique fractal design of a spider web for the development of deformable and reliable electronics that can seamlessly interface with any 3D curvilinear surface,” said Chi Hwan Lee, a Purdue assistant professor of biomedical engineering and mechanical engineering. “For example, we demonstrated a hemispherical, or dome-shaped, photodetector array that can detect both direction and intensity of incident light at the same time, like the vision system of arthropods such as insects and crustaceans.”
The Purdue technology uses the structural architecture of a spider web that exhibits a repeating pattern. This work is published in Advanced Materials (“Fractal Web Design of a Hemispherical Photodetector Array with Organic-Dye-Sensitized Graphene Hybrid Composites”).
Lee said this provides unique capabilities to distribute externally induced stress throughout the threads according to the effective ratio of spiral and radial dimensions and provides greater extensibility to better dissipate force under stretching. Lee said it also can tolerate minor cuts of the threads while maintaining overall strength and function of the entire web architecture.
“The resulting 3D optoelectronic architectures are particularly attractive for photodetection systems that require a large field of view and wide-angle antireflection, which will be useful for many biomedical and military imaging purposes,” said Muhammad Ashraful Alam, the Jai N. Gupta Professor of Electrical and Computer Engineering.
Alam said the work establishes a platform technology that can integrate a fractal web design with system-level hemispherical electronics and sensors, thereby offering excellent mechanical adaptability and damage tolerance against various mechanical loads.
“The assembly technique presented in this work enables deploying 2D deformable electronics in 3D architectures, which may foreshadow new opportunities to better advance the field of 3D electronic and optoelectronic devices,” Lee said.
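Out of curiosity, here’s a toy parameterization (mine, not the paper’s actual layout) of the web architecture the release describes: radial spokes crossed by concentric spiral turns, where the spoke count versus ring spacing stands in for Lee’s ‘effective ratio of spiral and radial dimensions’,

```python
# Toy spider-web geometry (illustrative only): the crossing points of
# radial "spoke" threads and concentric spiral turns. Changing n_spokes
# relative to ring_spacing changes how an applied stress is shared out.
import math

def web_nodes(n_spokes=12, n_rings=6, ring_spacing=1.0):
    """Return (x, y) coordinates where spiral turns cross radial spokes."""
    nodes = []
    for ring in range(1, n_rings + 1):
        radius = ring * ring_spacing
        for spoke in range(n_spokes):
            theta = 2 * math.pi * spoke / n_spokes
            nodes.append((radius * math.cos(theta), radius * math.sin(theta)))
    return nodes

print(len(web_nodes()))  # 72 crossing points for a 12-spoke, 6-ring web
```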
It seems that the second international Nanocar Race (nano Grand Prix) has been bumped from 2021 to 2022. I always find this car race a little challenging to cover. The first race was scheduled for 2016 and then bumped to 2017 and, for some reason, I have two posts about the winners of that 2017 race. (sigh) Let’s hope I can manage a little more tidiness this time.
Nanomechanics at Rice University and the University of Houston are getting ready to rev their engines for the second international Nanocar Race.
While they’ll have to pump the brakes for a bit longer than expected, as the race has been bumped a year to 2022, the Rice-based team is pushing forward with new designs introduced in the American Chemical Society’s Journal of Organic Chemistry.
The work led by chemists James Tour of Rice and Anton Dubrovskiy of the University of Houston-Clear Lake upgrades the cars with wheels of tert-butyl that should help them navigate the course laid out on a surface of gold, with pylons consisting of a few well-placed atoms.
Like their winning entry in 2017’s first international Nanocar Race, these nanocars have permanent dipole moments to increase their speed and drivability on the surface.
“The permanent dipoles make the cars more susceptible to being influenced by electric field gradients, which are used to propel and maneuver them,” Tour said. “It is a feature we introduced for the first competition, and I’m sure many of the entries will now have this advanced design element built into their nanocars.”
This year’s models are also lighter, a little more than the minimum 100 atoms required by new regulations. “The car we used in the first race had only 50 atoms,” Tour said. “So this is a substantial increase in the molecular weight, as required by the updated standards. It’s likely the race organizers wanted to slow us down since the last time, when we finished the 30-hour race in only 1.5 hours,” he said.
To bring the cars past 100 atoms while streamlining their syntheses, the researchers used a modular process to make five new cars with either all tert-butyl, all adamantyl wheels (as in previous nanocars) or combinations of the two.
At 90 atoms, cars with only butyl wheels, which minimize interactions with the track, and shorter chassis were too small. By using wheel combinations, the Rice lab made nanocars with 114 atoms. “This keeps the weight at a minimum while meeting the race requirements,” Tour said.
The nanocars will again be driven by a team from University of Graz in Austria led by Professor Leonhard Grill. The team brought the Rice vehicle across the finish line in 2017 and has enormous expertise in scanning tunneling microscope-directed manipulations, Tour said. The Grill and Tour groups will meet again in France for the race.
The overarching goal of the competition is to advance the development of nanomachines capable of real work, like carrying molecular-scale cargo and facilitating nano-fabrication.
“This race pushes the limits of molecular nanocar design and methods to control them,” Tour said. “So through this competitive process, worldwide expertise is elevated and the entire field of nanomanipulation is encouraged to progress all the faster.”
The race, originally scheduled for next summer, has been delayed by the pandemic. The racers will still need to gather in France to be overseen by the judges, but all of the teams will control their cars via the internet on tracks under scanning tunneling microscopes in their home labs.
“So the drivers will be together, and the cars and tracks will be dispersed around the world,” Tour said. “But the distance of each track will be identical, to within a few nanometers.”
The Rice-Graz entry won the 2017 race with an asterisk, as its car moved so quickly on the gold surface that it was impossible to capture images for judging. The team was then allowed to race on a silver surface that offered sufficient resistance and finished the 150-nanometer course in 90 minutes.
“The course was supposed to have been only 100 nanometers, but the team was penalized to add an extra 50 nanometers,” Tour said. “Eventually, it was no barrier anyway.” First prize on the gold track went to a Swiss team that finished a 100-nanometer course in six-and-a-half hours.
Tour’s lab built the world’s first single-molecule car in 2005 and it has gone through many iterations since, with the related development of molecular motors that drill through cells to deliver drugs.
Rice graduate student Alexis van Venrooy is lead author of the paper. Co-authors are Rice alumnus Victor García-López and undergraduate John Tianci Li. Dubrovskiy is an assistant professor of chemistry at the University of Houston-Clear Lake and an academic visitor at Rice. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.
For the curious, there’s my July 9, 2020 posting, which announced the second race then scheduled for 2021, and my May 9, 2017 posting, which announced the US-Austrian team winners of the first race.
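One last nanocar note: the atom arithmetic in the Rice news release checks out on the back of an envelope, assuming (my assumption; the release doesn’t spell it out) that the counts include hydrogens and that each car runs four wheels. A tert-butyl wheel (C4H9) has 13 atoms and an adamantyl wheel (C10H15) has 25, so swapping two butyl wheels for adamantyl wheels on the 90-atom all-butyl car gives

$$90 + 2 \times (25 - 13) = 114$$

which matches the 114-atom figure quoted above.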
An Oct. 16, 2020 news item on phys.org announces research that contradicts a common belief about how diamonds are formed,
Natural diamonds can form through low pressure and temperature geological processes on Earth, as stated in an article published in the journal Geochemical Perspectives Letters. The newfound mechanism, far from the classic view on the formation of diamonds under ultra-high pressure, is confirmed in the study, which draws on the participation of experts from the Mineral Resources Research Group of the Faculty of Earth Sciences of the University of Barcelona (UB).
Other participants in the study are the experts from the Institute of Nanoscience and Nanotechnology of the UB (IN2UB), the University of Granada (UGR), the Andalusian Institute of Earth Sciences (IACT), the Institute of Ceramics and Glass (CSIC), and the National Autonomous University of Mexico (UNAM). The study has been carried out within the framework of the doctoral thesis of researcher Núria Pujol-Solà (UB), first author of the article, under the supervision of researchers Joaquín A. Proenza (UB) and Antonio García-Casco (UGR).
A symbol of luxury and richness, the diamond (from the Greek ἀδάμας, “invincible”) is the most valuable gem and the hardest mineral (a value of 10 on the Mohs scale). It is formed of chemically pure carbon and, according to the traditional hypothesis, it crystallizes in the cubic system under ultra-high-pressure conditions at great depths in the Earth’s mantle.
The study confirms for the first time the formation of natural diamond under low pressures in oceanic rocks in the Moa-Baracoa Ophiolitic Massif, in Cuba. This great geological structure lies on the north-eastern side of the island and is formed by ophiolites, rocks representative of the oceanic lithosphere.
These oceanic rocks were emplaced on the continental edge of North America during the collision of the Caribbean oceanic island arc, between 70 and 40 million years ago. “During their formation on the abyssal seafloor, in the Cretaceous period (about 120 million years ago), these oceanic rocks underwent mineral alterations due to seawater infiltration, a process that left small fluid inclusions inside the olivine, the most common mineral in this kind of rock”, note Joaquín A. Proenza, member of the Department of Mineralogy, Petrology and Applied Geology at the UB and principal researcher of the project in which the article appears, and Antonio García-Casco, from the Department of Mineralogy and Petrology of the UGR.
“These fluid inclusions contain nanodiamonds (of about 200 to 300 nanometres), as well as serpentine, magnetite, metallic silicon and pure methane. All these materials formed under low pressure (<200 MPa) and temperature (<350 ºC), during the alteration of the olivine containing the fluid inclusions”, add the researchers.
“Therefore, this is the first description of ophiolitic diamond formed under low pressure and temperature, whose formation by natural processes is beyond doubt”, they highlight.
Diamonds formed under low pressure and temperature
It is worth bearing in mind that the team published, in 2019, a first description of the formation of ophiolitic diamonds under low pressure conditions (Geology), a study carried out as part of the doctoral thesis of the UB researcher Júlia Farré de Pablo, supervised by Joaquín A. Proenza and the UGR professor José María González Jiménez. That study was much debated among the members of the international scientific community.
In the article published in Geochemical Perspectives Letters, a journal of the European Association of Geochemistry, the experts detected the nanodiamonds in small fluid inclusions beneath the surface of the samples. The finding was made using confocal Raman mapping and focused ion beam preparation combined with transmission electron microscopy (FIB-TEM). This is how they could confirm the presence of diamond deep within the sample and, therefore, the formation of natural diamond under low pressure in exhumed oceanic rocks. The Scientific and Technological Centres of the UB (CCiTUB), among other research infrastructures in the country, took part in this study.
In this case, the study centres its debate on the validity of some geodynamic models that, based on the presence of ophiolitic diamonds, imply circulation in the mantle and large-scale lithosphere recycling. For instance, ophiolitic diamond was thought to reflect the passage of ophiolitic rocks over the deep Earth’s mantle up to the transition area (210-660 km deep) before settling into a normal ophiolite formed under low pressure (~10 km deep).
According to the experts, the low state of oxidation in this geological system would explain the formation of nanodiamonds instead of graphite, which would otherwise be expected under the physical and chemical conditions in which the fluid inclusions formed.
The study was supported by the former Ministry for Economy and Competitiveness (MINECO), the Ramón y Cajal Program and the EU European Regional Development Fund (ERDF).
The researchers have an image showing inclusions which contain nanodiamonds,
Here’s a link to and a citation for the paper,
Diamond forms during low pressure serpentinisation of oceanic lithosphere by N. Pujol-Solà, A. Garcia-Casco, J.A. Proenza, J.M. González-Jiménez, A. del Campo, V. Colás, À. Canals, A. Sánchez-Navas, J. Roqué-Rosell. Geochemical Perspectives Letters v15. DOI: 10.7185/geochemlet.2029. Published 10 September 2020.
An October 26, 2020 news item on Nanowerk describes some new research that may lead the way to treatments for people with asbestos-related cancers, e.g., mesothelioma (Note: A link has been removed),
Gold nanotubes – tiny hollow cylinders one thousandth the width of a human hair – could be used to treat mesothelioma, a type of cancer caused by exposure to asbestos, according to a team of researchers at the Universities of Cambridge and Leeds.
In a study published in journal Small (“Exploring High Aspect Ratio Gold Nanotubes as Cytosolic Agents: Structural Engineering and Uptake into Mesothelioma Cells”), the researchers demonstrate that once inside the cancer cells, the nanotubes absorb light, causing them to heat up, thereby killing the cells.
More than 2,600 people are diagnosed in the UK each year with mesothelioma, a malignant form of cancer caused by exposure to asbestos. Although the use of asbestos is outlawed in the UK now, the country has the world’s highest levels of mesothelioma because it imported vast amounts of asbestos in the post-war years. The global usage of asbestos remains high, particularly in low- and middle-income countries, which means mesothelioma will become a global problem.
“Mesothelioma is one of the ‘hard-to-treat’ cancers, and the best we can offer people with existing treatments is a few months of extra survival,” said Dr Arsalan Azad from the Cambridge Institute for Medical Research at the University of Cambridge. “There’s an important unmet need for new, effective treatments.”
In 2018, the University of Cambridge was awarded £10 million from the Engineering and Physical Sciences Research Council to help develop engineering solutions, including nanotech, to find ways to address hard-to-treat cancers.
In a collaboration between the University of Cambridge and University of Leeds, researchers have developed a form of gold nanotubes whose physical properties are ‘tunable’ – in other words, the team can tailor the wall thickness, microstructure, composition, and ability to absorb particular wavelengths of light.
The researchers added the nanotubes to mesothelioma cells cultured in the lab and found that they were absorbed by the cells, residing close to the nucleus, where the cell’s DNA lies. When the team targeted the cells with a laser, the nanotubes absorbed the light and heated up, killing the mesothelioma cell.
Professor Stefan Marciniak, also from the Cambridge Institute for Medical Research, added: “The mesothelioma cells ‘eat’ the nanotubes, leaving them susceptible when we shine light on them. Laser light is able to penetrate deep into tissue without causing damage to surrounding tissue. It then gets absorbed by the nanotubes, which heat up and, we hope in the future, could be used to cause localised cancer-cell killing.”
The team will be developing the work further to ensure the nanotubes are targeted to cancer cells with less effect on normal tissue.
The nanotubes are made in a two-step process. First, solid silver nanorods are created of the desired diameter. Gold is then deposited from solution onto the surface of the silver. As the gold builds up at the surface, the silver dissolves from the inside to leave a hollow nanotube.
The approach advanced by the Leeds team allows these nanotubes to be developed at room temperature, which should make their manufacture at scale more feasible.
Professor Stephen Evans from the School of Physics and Astronomy at the University of Leeds said: “Having control over the size and shape of the nanotubes allows us to tune them to absorb light where the tissue is transparent and will allow them to be used for both the imaging and treatment of cancers. The next stage will be to load these nanotubes with medicines for enhanced therapies.”
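For anyone wondering how laser light can cook the nanotubes without cooking the tissue, the textbook estimate for the steady-state temperature rise at the surface of a small light-absorbing particle (standard photothermal theory, not a formula from this paper) is

$$\Delta T = \frac{\sigma_{\mathrm{abs}}\, I}{4 \pi \kappa R}$$

where σ_abs is the particle’s absorption cross-section, I the laser intensity, κ the thermal conductivity of the surrounding medium, and R an effective particle radius. Heating is significant only where the absorption cross-section is large, i.e., at the nanotubes themselves, which is why tuning them to absorb at wavelengths where tissue is transparent matters.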
Bravo to the team behind “Communicating Science: A Global Perspective” published in September 2020 by the Australian National University Press!
Two of the editors, Toss Gascoigne (Visiting fellow, Centre for the Public Awareness of Science, Australian National University) and Joan Leach (Professor, Australian National University), have written a November 8, 2020 essay featuring their book for The Conversation,
It’s a challenging time to be a science communicator. The current pandemic, climate crisis, and concerns over new technologies from artificial intelligence to genetic modification by CRISPR demand public accountability, clear discussion and the ability to disagree in public.
…
Since the Second World War, there have been many efforts to negotiate a social contract between science and civil society. In the West, part of that negotiation has emphasised the distribution of scientific knowledge. But how is the relationship between science and society formulated around the globe?
We collected stories from 39 countries together into a book. …
…
The term “science communication” is not universal. For 50 years, what is called “science communication” in Australia has had different names in other countries: “science popularisation”, “public understanding”, “vulgarisation”, “public understanding of science”, and the cultivation of a “scientific temper”.
Colombia uses the term “the social appropriation of science and technology”. This definition underscores that scientific knowledge is transformed through social interaction.
Each definition delivers insights into how science and society are positioned. Is science imagined as part of society? Is science held in high esteem? Does association with social issues lessen or strengthen the perception of science?
Governments play a variety of roles in the stories we collected. The 1970s German government stood back, perhaps recalling the unsavoury relationship between Nazi propaganda and science. Private foundations filled the gap by funding ambitious programs to train science journalists. In the United States, the absence of a strong central agency encouraged diversity in a field described variously as “vibrant”, “jostling” or “cacophonous”.
…
Russia saw a state-driven focus on science through the communist years, to modernise and industrialise. In 1990 the Knowledge Society’s weekly science newspaper Argumenty i Fakty had the highest weekly circulation of any newspaper in the world: 33.5 million copies. But the collapse of the Soviet Union showed how fragile these scientific views were, as people turned to mysticism.
…
Eighteen countries contributing to the book have a recent colonial history, and many are from the Global South. They saw the end of colonial rule as an opportunity to embrace science. …
…
Science in these countries focused mainly on health, the environment and agriculture. Nigeria’s polio vaccine campaign was almost derailed in 2003 when two influential groups, the Supreme Council for Shari’ah in Nigeria and the Kaduna State Council of Imams and Ulamas, declared the vaccine contained anti-fertility substances and was part of a Western conspiracy to sterilise children. Only after five Muslim leaders witnessed a successful vaccine program in Egypt was it recognised as being compatible with the Qur’an.
This collection charts the emergence of modern science communication across the world. This is the first volume to map investment around the globe in science centres, university courses and research, publications and conferences as well as tell the national stories of science communication.
…
Communicating Science describes the pathways followed by 39 different countries. All continents and many cultures are represented. For some countries, this is the first time that their science communication story has been told. [emphasis mine]
Here’s a link to and a citation for the book,
Communicating Science: A Global Perspective. Edited by Toss Gascoigne, Bernard Schiele, Joan Leach, Michelle Riedlinger, Bruce V. Lewenstein, Luisa Massarani, Peter Broks. DOI: http://doi.org/10.22459/CS.2020. ISBN (print): 9781760463656. ISBN (online): 9781760463663. Imprint [Publisher]: ANU Press. Publication date: Sep 2020.
The paper copy is $150 and I assume those are Australian dollars. There are free online and e-versions but they do ask you to: Please read Conditions of use before downloading the formats.
A commentary on the Canadian chapter, mostly
Before launching into the commentary, here’s a bit about words.
Terminology
Terminology, whether it’s within one language or across two or more languages, is almost always an issue, and science communication is no exception, as is noted in the Introduction (Subsection 4, page 11),
In the course of compiling the chapters, we found that the term ‘science communication’ has many definitions and not all researchers or practitioners agree on its goals and boundaries. It has been variously described as an objective, goals, a process, a result and an outcome. This confusion over a definition is reflected in the terminology used internationally for the field. From the second half of the 20th century, what we have chosen to call ‘science communication’ for this book has flown under different headings: ‘science popularisation’, ‘public understanding’, ‘vulgarisation’, ‘social appropriation of science and technology’, ‘public understanding of science’ and ‘scientific temper’ for example. In all, the chapters mention 24 separate terms for the expression ‘science communication’ that we chose. We have taken note of that variety.
Very few of the chapters, which are organized by country name, attempt to establish a definition. The chapter on Canada, written by Michelle Riedlinger, Alexandre Schiele and Germana Barata, is one of the many not offering any definition of ‘science communication’. It does, however, offer a few other terms used as synonyms or closely allied concepts (also without definitions). They include ‘science or scientific culture’, which (according to a Nov. 13, 2020 email from Toss Gascoigne in response to my question about science culture being a term unique to Canada) has French roots and is used in France and Canada.
Scope
The scope for both the book and the chapter on Canada is substantive and everyone involved is to be lauded for their efforts. Here’s how the book is described on the publisher’s ‘Communicating Science: A Global Perspective’ webpage (Note: more about the emphases in the ‘I love you; we need to talk’ subsection below),
…
This collection charts the emergence of modern science communication across the world. This is the first volume to map investment around the globe in science centres, university courses and research, publications and conferences as well as tell the national stories of science communication. [emphases mine]
…
The authors of the Canada chapter managed to squeeze a lot of Canadian science communication history into 21 pp. of text.
Quite an accomplishment. I am particularly admiring as earlier this year I decided to produce a 10-year overview (2010 – 19) of science culture in Canada, got carried away, and proceeded to write a 25,000-word, multi-part series.
Given the November 8, 2020 essay and its storytelling style, I wasn’t expecting the largely historical review I found in both the Canada and France chapters. I advise reading the Introduction to the book first as that will set expectations more accurately.
I love you; we need to talk
I learned a lot about the history of science communication in Canada. It’s the first time I’ve seen a document that pulls together so much material ranging from 19th century efforts to relatively contemporaneous efforts, i.e., 2018 or thereabouts.
There’s something quite exciting about recognizing the deep roots that science communication has in Canada.
I just wish the authors hadn’t taken ‘the two cultures’ (French and English) route. By doing so, they managed to write a history that ignores a lot of other influences including that of Canada’s Indigenous peoples and their impact on Canadian science, science culture, and, increasingly, science communication. (Confession, I too missed the impact from Indigenous peoples in my series.)
Plus, ‘two cultures’ seems a dated (1970s?) view of Canadian society and, by extension, its science culture and communication.
This was not the only element that seemed out of date. The authors mentioned Canada’s National Science and Technology Week without noting that the effort was rebranded in 2016 as ‘Science Odyssey’ (plus, its dates moved from Oct. to May of each year).
No surprise, the professional and institutional nature of science communication was heavily emphasized. So, it was delightful to find a section (2.10 on page 11) titled, “Citizen involvement in science communication.” Perhaps they were constrained for space, as they didn’t include the astronomy community, which I believe is amongst our oldest citizen science groups, with roots that can be traced back to the 19th century (1868).
There are some other omissions (unless noted otherwise, I managed to include something on the topic in my series):
the Canadian Arctic and/or The North (I tried but did not succeed)
art/science (also known as sciart) communities
the maker and do-it-yourself (DIY) communities
open science, specifically, the open science initiative at McGill University’s Neuro (Montreal Neurological Institute-Hospital) (can’t remember but I probably missed this too)
the immigrant communities and their impact (especially obvious in light of the January 2020 downing of Flight PS752 en route from Iran to Ukraine; many of the passengers were Canadians and/or students coming to study, and a stunning percentage of those people were in science and/or technology) (I didn’t do as good a job as I should have)
women or gender issues (I missed it too)
BIPOC representation (yes, I missed it)
LGBTQ+ representation (yes, me too)
social sciences (yes, me too)
etc.
The bits I emphasized in the publisher’s description of the book “science centres, university courses and research, publications and conferences as well as tell the national stories of science communication” set up a tension between a ‘national story of science communication’ and a ‘national story of institutionalized and/or academic science communication’.
Clearly, the authors had an almost impossible task and by including citizen science and social media and some independent actors they made an attempt to recognize the totality. Still, I wish they had managed even a sentence or two mentioning some of these other communities of interest and/or noting the omissions.
Here’s more about the difficulties I think the authors encountered.
It’s all about central Canada
As noted with other problems, this one happened to me too (in my 2010 – 19 Canadian science culture overview). It’s as if the provinces of Ontario and Québec exert a centripetal force on every aspect of our nationhood, including our science and science communication. Almost everything tracks back to those provinces.
The authors have mentioned most of the provinces, although none of the three northern territories, in their chapter; evidence they made an attempt. What confounds me is the 7 pp. (of 21 pp. of text) dedicated to Québec alone, in addition to the Québec mentions in the other 14 pp. If there was a problem with word count, couldn’t they have shaved off a paragraph or two to include some or all of the omissions I noted earlier? Or added a paragraph or two to the chapter?
Framing and authors
By framing the discussion about Canada within the ‘two culture’ paradigm, the authors made things difficult for themselves. Take a look at the title and first sentence for the chapter,
CANADA One country, two cultures: Two routes to science communication
…
This chapter provides an account of modern science communication in Canada, including historical factors influencing its development, and the development of the distinct Province of Quebec. …
The title and discussion frame the article so tightly that anything outside the frame is an outlier, i.e., they ‘baked’ in the bias. It’s very similar to the problem in scientific research where you have to be careful about your research question because asking the wrong question or framing it poorly will result in problematic research.
Authors
It’s not unusual for family members to work in the same field and even work together (Marie and Pierre Curie spring to mind). I believe the failure to acknowledge (I checked the introduction, the acknowledgements, and the Canada chapter) the relationship between one of the authors of the Canada chapter (Alexandre Schiele, son) and one of the book’s editors (Bernard Schiele, father) was an oversight. (Both also have some sort of affiliation with the Université du Québec à Montréal [UQAM].)
Anyway, I hope subsequent editions of the book will include an acknowledgement. These days, transparency is important, eh?
Having gotten that out of the way, I was curious about the ‘Canada’ authors and found this on p. 204,
Contributors
Dr Michelle Riedlinger is an associate professor at the University of the Fraser Valley, British Columbia, Canada, and secretary of the PCST Network [Public Communication of Science and Technology Network] and her career spans the practical and theoretical sides of science communication.
Dr Alexandre Schiele holds a PhD in communication science (Sorbonne) and another in political science (University of Quebec). He is working on a project ‘Mapping the New Science Communication Landscape in Canada’.
Dr Germana Barata is a science communication researcher at the Laboratory of Advanced Studies in Journalism (Labjor) at the State University of Campinas, Brazil, and a member of the Scientific Committee of the PCST Network.
Outsiders often provide perceptive and thoughtful commentary. I did not find any discernible trace of that perspective in the chapter despite all three authors having extensive experience in other countries.
Riedlinger is more strongly associated with Australia than Canada (source: Riedlinger’s biography on the Public Communication of Science and Technology Network). As of July 2020, she is a senior lecturer at Australia’s Queensland University of Technology (QUT).
Interestingly, she is also a Board member of the Science Writers and Communicators of Canada (SWCC) (source: her QUT biography). I’ll get back to this membership later.
Barata is (or was?) a research associate at Simon Fraser University’s Canada Scholar Communications Lab (ScholCommLab) (source: Barata’s SFU biography) in addition to her work in Brazil.
Those two would seem to cover the southern hemisphere. The third gives us the northern hemisphere.
A. Schiele (source: his CV on ResearchGate) is (or was?) a researcher at the UQAM (Université du Québec à Montréal) East Asia Observatory and is (or was?) at (source: profile on Academia.edu) The Hebrew University of Jerusalem’s Louis Frieberg Center for East Asian Studies.
After looking at their biographies and CVs, I find the Canada book chapter even more disappointing. Yes, the authors were constrained by the book’s raison d’être and the way they framed their chapter but, perhaps, there’s something more to the story?
The future of science communication and the ‘elephant in the room’
At the conclusion of the Canada chapter (pp. 194-6), there’s this,
4. The future for modern science communication in Canada
Recent surveys of Canadian science communicators identified through Twitter and Instagram show that, compared to traditional science communication professionals, social media communicators are younger, paid less (or not at all) for their science communication activities, and have been communicating for fewer years than other kinds of science communicators (Riedlinger, Barata and Schiele [A], 2019). They are more likely to have a science background (rather than communication, journalism or education background) and are less likely to be members of professional associations. These communicators tend to be based in Ontario, Quebec and British Columbia, and communicate with each other through their own informal networks. Canadian social media science communicators are primarily located in the provinces identified by Schiele [B] and Landry (2012) as the most prolific regions for science communication in Canada, where Canada’s most prestigious and traditional universities are located, and where the bulk of Canada’s population is concentrated. While some science journalists and communicators in Canada mourn the perceived loss of control over science communication as a loss of quality and accuracy, others welcome digital technology for the public engagement potential it offers. For example, Canadian science Instagram communicator Samantha Yammine [emphasis mine] was recently criticised in a Science magazine op-ed piece for trivialising scientific endeavours on social media (Wright, 2018). However, supporters of Yammine argued that she was successfully responding to the Instagram medium in her communication (see, for example, Lougheed, 2018 [emphasis mine]; Marks, 2018). Science has subsequently published an article by Yammine and other social media communicators on the benefits of social media for science communication (Yammine, Liu, Jarreau and Coe, 2018). Social media platforms are allowing space for sociopolitically motivated communicators in Canada to work productively. The impact of these social media science communication efforts is difficult to assess; yet open science for consensus building and support for science in society efforts are needed in Canada now more than ever.
Canada has seen increased investments in science as described by the Naylor Report and the Global Young Academy, but science communication and outreach efforts are still needed to support science culture nationally (Boon, 2017a) [emphasis mine]. Funding for activities happens at the federal level through agency funding; however, Canadian scientists, science communicators and science policymakers have criticised some recent initiatives for being primarily aimed at youth rather than adults, supporting mainly traditional and established organisations rather than innovative science communication initiatives, and having limited connection with the current and broader community of science communicators in Canada. While some science communicators are actively advocating for greater institutional support for a wider range of science communication initiatives (see Boon, 2017b) [emphasis mine], governments and scientific communities have been slow to respond.
Austerity continues to dominate public policy in Quebec, and science culture has ceased to be a priority. The Society for the Promotion of Science and Technology dissolved in 2010 and State-sponsored PCST in Quebec has come to an end. PCST actors and networks in Quebec persevere although they face difficulties in achieving an online presence in a global, yet overwhelmingly Anglophone, social media environment. However, the European Union program Horizon 2020 may very well encourage a new period of renewed government interest in science communication.
As a preface to the next subsection, I want to note that the relationships and networks I’m describing are not problematic or evil or sinister in and of themselves. We all work with friends and acquaintances and, even, family when we can. If not, we find other ways to establish affiliations such as professional and informal networks.
The advantages include confidence in the work quality, knowing deadlines will be met and that you’ll be treated fairly and acknowledged, getting a fast start, etc. There are many advantages and one of the biggest disadvantages (in my opinion) is ‘group think’, i.e., the tendency for a group to unconsciously reinforce each other’s biases.
Weirdly, outsiders such as myself have a similar problem. While people within networks tend to get reinforcing feedback, ‘group think’, outsiders don’t get much, if any. Without feedback, you’re at the mercy of your search techniques and you tend to reinforce your own biases and shortsightedness (you’re inside your own echo chamber). In the end, research needs to take those shortcomings, biases, and beliefs into account.
Networks and research can be a trap
All three authors are in one fashion or another closely associated with the PCST Network. Two (Riedlinger and Barata) are board or executive members of the PCST Network and one (A. Schiele) has a familial relationship with a book editor (B. Schiele), who is himself an executive member of the PCST Network. (Stay tuned, there’s one more network of relationships coming up.)
Barata, Riedlinger, and A. Schiele were the research team for the ‘Mapping the New Science Communication Landscape in Canada’ project as you can see here. (Note: Oops! There’s a typo in the project title on the webpage, which, unexpectedly, is hosted by Brazil’s Laboratory of Advanced Studies in Journalism [Labjor] where Barata is a researcher.)
My points about ‘Mapping …’ and the Canada book chapter,
The Canada book chapter’s ‘The impact of new and emerging technologies …’ subsection has roots that can be traced back to the ‘Mapping’ project, which focused on social media (specifically, Instagram and Twitter).
The ‘Mapping’ project is heavily dependent on one network (not PCST).
The Canada chapter is listed as one of the ‘Mapping’ project’s publications. (Source: Project’s Publications page).
The ‘Impact’ subsection sets the tone for a big chunk of the final subsection, ‘The future …’, both heavily dependent on the ‘Mapping’ project.
The ‘Mapping’ project has a few problems, which I describe in the following.
In the end, two sections of the Canada chapter are heavily dependent on one research project that the authors themselves conducted.
Rather than using an authoritative style, perhaps the authors could have included a sentence indicating that more research is needed before making definitive statements about Canadian science communication and its use of new and emerging technologies and about its future.
The second network and other issues
Counterintuitively, I’m starting with the acknowledgements in the materials produced by the three authors for their ‘Mapping’ project and then examining the Canada chapter’s ‘The impact of new and emerging technologies …’ subsection before getting back to the Canada chapter’s final subsection, ‘The future …’.
The authors’ 2019 paper is interesting. You can access it, titled “The landscape of science communication in contemporary Canada: A focus on anglophone actors and networks,” here on Academia.edu, and you can access the authors’ 2018 paper, “Using social media metrics to identify science communicators in Canada,” prepared for the 2018 Science & You conference in Beijing, China, here on ResearchGate. Both appear to be open access. That is wonderful and much appreciated.
The 2019 and 2018 papers’ Acknowledgements have something interesting (excerpt from 2019 paper),
This study was supported by the Social Sciences and Humanities Research Council of Canada through Grant (892-2017-2019) to Juan Pablo Alperin [there’s a bit more info. about the grant on Alperin’s CV in the Grants subsection] and Michelle Riedlinger. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We would like to thank the Science Writers and Communicators of Canada (SWCC) for their partnership in this project. [emphasis mine] In particular, we are grateful for the continued support and assistance of Shelley McIvor, Janice Benthin and Tim Lougheed [emphasis mine] from SWCC, and Stéphanie Thibault from l’Association des communicateurs scientifiques du Québec (ACS).
2.12. The impact of new and emerging technologies on science communication
Coupled with government ambivalence towards science communication over the last decade, Canada has experienced the impact of new and emerging technologies and changing economic conditions. These changes have reshaped the mainstream media landscape in many parts of the world, including Canada, and the effects have been exacerbated by neoliberal agendas. The changes and their impacts on Canadian journalism were captured in the Canadian survey report The Shattered Mirror (2017). The survey found that Canadians prefer to be informed through the media but on their own timelines and with little or no cost to themselves.
Canada’s science media have responded to new media in many ways. For example, in 2005, CBC’s Quirks and Quarks became the first major CBC radio show to be made available as a free podcast. Canada’s very active blogging community has been developing from the early 2000s, and recent digital initiatives are helping redefine what independent science communication looks like. These initiatives include Science Borealis, launched in 2013 [emphasis mine] (Science Borealis, 2018), Hakai Magazine [emphasis mine] launched in 2015 (Hakai Magazine, n.d.), and The Conversation Canada launched in 2017 (The Conversation Canada, 2018). Twitter, Instagram and YouTube are also supporting a growing number of science communicators engaging a diverse range of publics in digital spaces. …
[assume my emphasis for this paragraph; I didn’t have the heart to make any readers struggle through that much bolding] In 2016, the Canadian Science Writers Association changed its name to the Science Writers and Communicators of Canada Association (SWCC) to reflect the new diversity of its membership as well as the declining number of full-time journalists in mass media organisations. SWCC now describes itself as a national alliance of professional science communicators in all media, to reflect the blurring boundaries between journalism, science communication and public relations activities (SWCC, 2017). In 2017, SWCC launched the People’s Choice Awards for Canada’s favourite science site and Canada’s favourite blog to reflect the inclusion of new media.
Given that so much of the relatively brief text in this three-paragraph subsection is devoted to SWCC, and that the examples of new media science practitioners (Science Borealis, Hakai Magazine, and Samantha Yammine) are either associated with or members of SWCC, it might have been a good idea to make the relationship between the organization and the three authors a little more transparent.
We’re all in this together: PCST, SWCC, Science Borealis, Hakai Magazine, etc.
Here’s a brief recapitulation of the relationships so far: Riedlinger and Barata, both co-authors of the Canada chapter, are executive/board/committee members of the Public Communication of Science and Technology (PCST) network. As well, Bernard Schiele one of the co-editors of the book is also a committee member of PCST (source: PCST webpage) and, as noted earlier, he’s related to the third co-author of the Canada chapter, Alexandre Schiele.
Plus, Riedlinger is one of the book’s editors.
Interestingly, four of the seven editors for the book are members of the PCST network.
More connections:
Remember Riedlinger is also a board member of the Science Writers and Communicators of Canada (SWCC)?
One of the founding members* of Science Borealis (a Canadian science blog aggregator), Sarah Boon is the managing editor for Science Borealis (source: Boon’s LinkedIn profile) and also a member of the SWCC (source: About me webpage on Watershed Notes). *Full disclosure: I too am a co-founding member of Science Borealis.*
Boon’s works and works from other SWCC members (e.g., Tim Lougheed) are cited in the conclusion for the Canada chapter.
Hakai Magazine and Science Borealis are both cited as “… recent digital initiatives … helping redefine what independent science communication looks like.”
Hakai’s founding and current editor-in-chief is Jude Isabella, a past board member of the *SWCC’s predecessor organization Canadian Science Writers Association (source: Dec. 11, 2020 communication from Ms. Isabella)*
In short, there are many interlaced relationships.
The looking glass and a lack of self-criticism
Reviewing this work put some shortcomings of and biases in my own work into high relief. It’s one of the eternal problems, blindness, whether it’s a consequence of ‘group think’ or a failure to get out of your own personal bubble. Canadian science communication/culture is a big topic and it’s easy to get trapped in your own bubble or your group’s bubble.
As far as I can tell from reading the conference paper (2018) and the paper published in Cultures of Science (2019), there is no indication in the text that the researchers critiqued their own methodology.
Specifically, most of the respondents to their survey were from one of two professional science communication organizations (SWCC and ACS [Association des communicateurs scientifiques du Québec]). As for the folks the authors found on Twitter and Instagram, those people had to self-identify as science communicators or use scicomm, commsci, vulgarisation and sciart as hashtags. If you didn’t use one of those hashtags, you weren’t seen. Also, ‘sciart’ can be called ‘artsci’, so why wasn’t that hashtag also used?
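To make that selection effect concrete, here’s a sketch of the kind of inclusion rule I’m describing (hypothetical code; I haven’t seen the authors’ scripts),

```python
# A hypothetical version of the study's sampling rule: an account is
# "seen" only if its posts carry one of the four chosen hashtags, so
# communicators using other tags (e.g. #artsci) never enter the dataset.
SURVEY_HASHTAGS = {"scicomm", "commsci", "vulgarisation", "sciart"}

def in_sample(post_hashtags):
    """True if any of an account's hashtags match the study's list."""
    return bool(SURVEY_HASHTAGS & set(post_hashtags))

print(in_sample({"sciart", "frogs"}))        # True  - counted
print(in_sample({"artsci", "scicomm_ca"}))   # False - invisible to the study
```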
In short, the research seems to have a rather narrow dataset, which is not a problem in and of itself, as long as it’s noted in your paper. Unfortunately, the authors didn’t note it, and that weakness followed the researchers into the book.
Remember the subsection ‘2.12. The impact of new and emerging technologies on science communication’? As noted, it was heavily influenced by the co-authors’ own research, and in this book those words attain great authority, as they are writing about Canada’s science communication and ‘The future for modern science communication in Canada’.
Getting back briefly to connections or, in this case, a lack of them. There seems to have been one ‘outside’ editor/reviewer (source: Acknowledgements) for the book, Ranjan Chaudhuri, Associate Professor at the National Institute of Industrial Engineering Mumbai (source: Chaudhuri’s LinkedIn profile). He’s the only person amongst the authors and the editors for whom I could find no connection to PCST.
(Book editors who weren’t previously mentioned: Joan Leach and Bruce V. Lewenstein were both invited speakers at the 2016 PCST Talk in Istanbul, Turkey and Peter Broks presented in 2004 at the PCST conference in Barcelona, Spain and his work was presented at a 2018 PCST conference in Dunedin, New Zealand.)
Chaudhuri doesn’t seem to have any connection and the other three seem to have, at best, a weak connection to PCST. That leaves four ‘outsiders’ to critically review and edit chapters from 39 countries. It’s an impossible job.
So, what is the future of science communication in Canada?
In the end, I have love for the Canada chapter and two big problems with it.
What were they thinking?
Maybe someone could help me understand why the final paragraph of the Canada chapter is about Québec, the PCST, and the European Union’s Horizon 2020 science funding initiative.
Ending the chapter with the focus, largely, on one province, **an international organization (PCST) incorporated in Australia**, and a European science funding initiative that sunsets in 2020 to be replaced by Horizon Europe 2021-27 confounds me.
Please, someone out there, please help me. How do these impact or set the future for science communication in Canada?
Aside: the authors never mention Québec’s Agence Science-Presse. It’s an independent media outlet founded in 1978 and devoted, as you can see from the name, entirely to science. It seems like an odd omission.
Now, I have another question.
What about other realities, artificial intelligence, and more?
Why didn’t the authors mention virtual reality (VR)/augmented reality (AR)/mixed reality (MR)/cross reality (XR) and others? What about artificial intelligence (AI) and automated writing, i.e., will we need writers and communicators? (For anyone not familiar with the move to automate more of the writing process, see my July 16, 2014 posting “Writing and AI or is a robot writing this blog?” when Associated Press (AP) had made a deal with Automated Insights and my Sept. 16, 2019 posting “Automated science writing?” about some work at the Massachusetts Institute of Technology [MIT].)
It’s not exactly new, but what impact are games, of both the virtual and real-life types, having?
All of these technologies and others on the horizon are certain to have an effect on the future of science communication in Canada.
Confession: I too missed these new and emerging technologies when pointing to the future in my own series. (sigh) Blindness affects all of us.
The future
I wish the authors had applied a little more imagination to the ‘future’ because I think it has major possibilities grounded both in new and emerging technologies and in hopes for greater inclusiveness (Indigenous communities, citizen scientists, elders, artists, and more) in the Canadian science communication effort. As for the possible impact these groups and technologies will have on institutionalized and noninstitutionalized science communication, I would dearly like to have seen some mention of the possibilities, if not outright speculation.
The end
There is a lot to admire in the Canada chapter. Given the amount of history they were covering, the authors were admirably succinct and disciplined. There’s a lot to be learned in this chapter.
As for the flaws, as noted many times, I am subject to many of the same ones. I have often longed for a critical reader who can see what I can’t. In some ways, it’s the same problem academics face.
Thank you to the authors and the editors for an unexpected treat. Examining their work made it possible for me to cast a jaundiced eye on some of my own, becoming my own critical reader. Again, thank you to the authors and editors of this book. I just hope this critique proves useful to someone else too.
Links
For anyone who is curious, here’s a link to the authors’ interactive map of the new landscape (Twitter and Instagram) of science communication in Canada. BTW, I was charmed by it, and it looks like they’re still adding to the map.
There you have it: science communication in Canada, more or less, as a book chapter and as a multipart series, warts and all.
*Original: “a past board member of the SWCC” (source: homepage of Isabella’s eponymous website), changed on Dec. 11, 2020 to “past board member of SWCC’s predecessor organization Canadian Science Writers Association” (source: Dec. 11, 2020 communication from Ms. Isabella)
**Original: “an Australian organization (PCST)”, changed on Dec. 11, 2020 to “an international organization (PCST) incorporated in Australia”