The Woodrow Wilson International Center for Scholars has planned a US-centric event but I think it could be of interest to anyone who cares about low-cost and open source tools.
For anyone who’s unfamiliar with the term ‘open source’ as applied to a tool and/or software, it’s pretty much the opposite of proprietary software, whose source code is kept secret. The Webopedia entry for Open Source Tools defines the term this way (Note: Links have been removed),
Open source tools is a phrase used to mean a program — or tool — that performs a very specific task, in which the source code is openly published for use and/or modification from its original design, free of charge. …
Getting back to the Wilson Center, I received this Jan. 22, 2021 announcement (via email) about their upcoming ‘Low-Cost and Open Source Tools: Next Steps for Science and Policy’ event,
Low-Cost and Open Source Tools: Next Steps for Science and Policy
Monday Feb. 1, 2021 3:30pm – 5:00pm ET
Foldable and 3D printed microscopes are broadening access to the life sciences, low-cost and open microprocessors are supporting research from cognitive neuroscience to oceanography, and low-cost and open sensors are measuring air quality in communities around the world. In these examples and beyond, the things of science – the physical tools that generate data or contribute to scientific processes – are becoming less expensive, and more open.
Recent developments, including the extraordinary response to COVID-19 by maker and DIY communities, have demonstrated the value of low cost and open source hardware for addressing global challenges. These developments strengthen the capacity of individual innovators and community-based organizations and highlight concrete opportunities for their contribution to science and society. From a policy perspective, work on low-cost and open source hardware – as well as broader open science and open innovation initiatives – has spanned at least two presidential administrations.
When considering past policy and practical developments, where are we today?
With the momentum of a new presidential administration, what are the possible next steps for elevating the value and prominence of low-cost and open source tools? By bringing together perspectives from the general public and public policy communities, this event will articulate the proven potential and acknowledge the present obstacles of making low-cost and open source hardware for science more accessible and impactful.
Alison Parker, Senior Program Associate, Science & Technology Innovation Program, The Wilson Center
3:40 Keynote Speech: Perspectives from the UNESCO Open Science Recommendation
Ana Persic, United Nations Educational, Scientific and Cultural Organization (UNESCO)
3:55 Panel: The progress and promise of low-cost and open tools for accelerating science and addressing challenges
Meghan McCarthy, Program Lead, 3D Printing and Biovisualization, NIH/NIAID at Medical Science & Computing (MSC)
Gerald “Stinger” Guala, Earth Sciences Division, Science Mission Directorate, National Aeronautics and Space Administration (NASA)
Zac Manchester, The Robotics Institute, Carnegie Mellon University (CMU)
Moderator: Anne Bowser, Deputy Director, Science & Technology Innovation Program, The Wilson Center
4:45 Closing Remarks: What’s Next?
Shannon Dosemagen, Open Environmental Data Project
This project is an initiative of the Wilson Center’s THING Tank. From DIY microscopes made from paper and household items, to low cost and open microprocessors supporting research from cognitive neuroscience to oceanography, to low cost sensors measuring air quality in communities around the world, the things of science — that is, the physical tools that generate data or contribute to scientific processes — are changing the way that science happens.
Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.
Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.
Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.
“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”
The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.
“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”
The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.
For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”
At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
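The press release doesn’t spell out Dr. Ingrams’ method, but the general idea — surfacing topic clusters in a large pile of public comments — can be sketched with a toy example. This is my own illustrative sketch, not the study’s actual approach (which used unsupervised machine learning at a much larger scale); the sample comments, function names, and similarity threshold here are all invented for demonstration:

```python
import math
from collections import Counter

def tokenize(text):
    # lowercase, strip simple punctuation, drop very short words
    return [w.strip(".,!?").lower() for w in text.split() if len(w) > 2]

def tfidf_vectors(docs):
    # weight each word by how often it appears in a comment (tf)
    # and how rare it is across all comments (idf)
    tokenized = [tokenize(d) for d in docs]
    doc_freq = Counter()
    for toks in tokenized:
        doc_freq.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        counts = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log(n / doc_freq[t])
                        for t, c in counts.items()})
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_comments(docs, threshold=0.2):
    # greedy single-pass clustering: join the first cluster whose
    # seed comment is similar enough, otherwise start a new cluster
    vectors = tfidf_vectors(docs)
    clusters = []
    for i, vec in enumerate(vectors):
        for c in clusters:
            if cosine(vec, vectors[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

comments = [
    "body scanners violate traveler privacy rights",
    "privacy rights concerns with body scanners",
    "radiation exposure poses health risks",
    "worried about health risks from radiation exposure",
]
print(cluster_comments(comments))  # → [[0, 1], [2, 3]]
```

A real analysis would use a proper topic model (e.g., latent Dirichlet allocation) and far more careful text preprocessing; the point is only that clustering can condense thousands of comments into a handful of salient themes for policymakers.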
“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”
“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.
This image illustrates the interplay between the various level dynamics,
Here’s a link to, and a citation for, the special issue,
An AI governance publication from the US’s Wilson Center
Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg
In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:
AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
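Zysman and Nitzberg’s point that ML is context-dependent statistical inference can be illustrated with a toy example of my own (not from the paper): a model fitted to data from one narrow domain can look excellent there and still fail badly outside it. Here, an ordinary least-squares line is fitted to y = x² sampled only on [0, 1], where the curve happens to look nearly linear, and is then queried far outside that range:

```python
def fit_line(xs, ys):
    # ordinary least squares for y ≈ a*x + b
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# "training data": y = x^2, sampled only on the narrow domain [0, 1]
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

# inside the training domain, the learned correlation looks excellent
in_domain_error = max(abs((a * x + b) - x * x) for x in xs)

# far outside it, the same correlation breaks down completely
out_of_domain_error = abs((a * 5 + b) - 5 * 5)

print(in_domain_error, out_of_domain_error)
```

Inside the training interval the worst error is about 0.15; at x = 5 the prediction is off by roughly 20, and nothing about the fitted model signals that it has left its domain — which is the paper’s argument for governing narrow AI applications within well-defined problem domains.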
However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.
Canadian government and AI
The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.
There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find open communication. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they, too, have this problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)
Responsible use? Maybe not after 2019
First, there’s a Government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page (“Date modified: 2020-07-28”), all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?
For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head.
What about the government’s digital service?
You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,
In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.
At the time, Simon was Director of Outreach at Code for Canada.
Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.
Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,
Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.
At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.
How it works
We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.
Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.
Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.
Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.
As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)
Does the Treasury Board of Canada have charge of responsible AI use?
I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.
The Treasury Board of Canada represents a key entity within the federal government. As an important cabinet committee and central agency, it plays an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.
I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.
But isn’t there a Chief Information Officer for Canada?
Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,
Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.
“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.
He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.
He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]
Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),
Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.
The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.
Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.
Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.
Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”
Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System; now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.
Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?
I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might receive the correct amount on their paycheques, significantly less than they were entitled to, or huge increases.
The instability alone would be distressing but the inability to get the problems fixed must have been devastating. Almost five years later, the problems are being resolved and, more often, people are getting paid appropriately.
The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,
Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.
And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.
Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.
These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.
While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.
Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.
Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?
Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.
When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.
Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.
Instead, the Phoenix Pay system currently employs about 2,300. This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.
… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].
Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.
The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.
Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).
After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:
Insights and predictive modelling
PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.
I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,
Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.
Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.
To sum up, I could find no information dated after March 2019 about the Canadian government’s plans for AI, especially its responsible management/governance, on any Canadian government website, although I did find guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on the responsible use of AI, please let me know in the Comments.)
In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.
CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.
The objectives of the strategy are to:
Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.
Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.
Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.
Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.
Responsible AI at CIFAR
You can find Responsible AI in a webspace devoted to what they have called AI & Society. Here’s more from the homepage,
CIFAR is leading global conversations about AI’s impact on society.
The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.
Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.
I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.
Final comments about Responsible AI in Canada and the new reports
I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.
I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know, and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.
The great unwashed
What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier-stage conversations.
I’m sure we’ll be consulted at some point, but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.
Let’s take an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016 and, as I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016 the government hired consultants to fix the problems. On November 29, 2016 the responsible minister, Judy Foote, admitted a mistake had been made. In February 2017 the government hired consultants to establish what lessons might be learned. By February 15, 2018 the backlog of pay problems amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen
Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.
Successive Canadian governments, both Conservative and Liberal, contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things would have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,
The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.
Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.
In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.
Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.
Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.
Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”
Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”
The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.
Do we really need senior government bureaucrats?
I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,
When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19
As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.
With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.
“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”
Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”
It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.
Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.
By late February, Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.
“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”
China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”
It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.
But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.
The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.
However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.
The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July, are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.
Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.
Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.
Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.
The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.
If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials) and pay special attention to Trudeau’s answer to the first question,
Responsible AI, eh?
Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.
Unfortunately, it doesn’t matter, as implementation is most likely already taking place here in Canada.
Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to make true the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.
A lot of mistakes have been made but we also do make a lot of good decisions.
The Wilson Center (also known as the Woodrow Wilson International Center for Scholars) in Washington, DC is hosting a live webcast tomorrow, Dec. 3, 2020, and has issued a call for applications for an internship (deadline: Dec. 18, 2020); all of it concerns artificial intelligence (AI).
Assessing the AI Agenda: a Dec. 3, 2020 event
This looks like it could include some very interesting discussion about policy and AI, applicable to other countries as well as the US. From a Dec. 2, 2020 Wilson Center announcement (received via email),
Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress
Thursday Dec. 3, 2020 11:00am – 12:30pm ET
Artificial intelligence (AI) technologies occupy a growing share of the legislative agenda and pose a number of policy opportunities and challenges. Please join The Wilson Center’s Science and Technology Innovation Program (STIP) for a conversation with Senate and House staff from the AI Caucuses, as they discuss current policy proposals on artificial intelligence and what to expect – including oversight measures – in the next Congress. The public event will take place on Thursday, December 3 from 11am to 12:30pm ET, and will be hosted virtually on the Wilson Center’s website. RSVP today.
Sam Mulopulos, Legislative Assistant, Sen. Rob Portman (R-OH)
Sean Duggan, Military Legislative Assistant, Sen. Martin Heinrich (D-NM)
Dahlia Sokolov, Staff Director, Subcommittee on Research and Technology, House Committee on Science, Space, and Technology
Mike Richards, Deputy Chief of Staff, Rep. Pete Olson (R-TX)
Meg King, Director, Science and Technology Innovation Program, The Wilson Center
We hope you will join us for this critical conversation. To watch, please RSVP and bookmark the webpage. Tune in at the start of the event (you may need to refresh once the event begins) on December 3. Questions about this event can be directed to the Science and Technology Program through email at email@example.com or Twitter @WilsonSTIP using the hashtag #AICaucus.
Wilson Center’s AI Lab
This initiative brings to mind some of the science programmes that the UK government hosts for the members of Parliament. From the Wilson Center’s Artificial Intelligence Lab webpage,
Artificial Intelligence issues occupy a growing share of the Legislative and Executive Branch agendas; every day, Congressional aides advise their Members and Executive Branch staff encounter policy challenges pertaining to the transformative set of technologies collectively known as artificial intelligence. It is critically important that both lawmakers and government officials be well-versed in the complex subjects at hand.
What the Congressional and Executive Branch Labs Offer
Similar to the Wilson Center’s other technology training programs (e.g. the Congressional Cybersecurity Lab and the Foreign Policy Fellowship Program), the core of the Lab is a six-week seminar series that introduces participants to foundational topics in AI: what is machine learning; how do neural networks work; what are the current and future applications of autonomous intelligent systems; who are currently the main players in AI; and what will AI mean for the nation’s national security. Each seminar is led by top technologists and scholars drawn from the private, public, and non-profit sectors and a critical component of the Lab is an interactive exercise, in which participants are given an opportunity to take a hands-on role on computers to work through some of the major questions surrounding artificial intelligence. Due to COVID-19, these sessions are offered virtually. When health guidance permits, these sessions will return in-person at the Wilson Center.
Who Should Apply
The Wilson Center invites mid- to senior-level Congressional and Executive Branch staff to participate in the Lab; the program is also open to exceptional rising leaders with a keen interest in AI. Applicants should possess a strong understanding of the legislative or Executive Branch governing process and aspire to a career shaping national security policy.
Side trip: Science Meets (Canadian) Parliament
Briefly, here’s a bit about a Canadian programme, ‘Science Meets Parliament’, run by the Canadian Science Policy Centre (CSPC), a not-for-profit, and the Office of the Chief Science Advisor (OCSA), a position within the Canadian federal government. Here’s a description of the programme from the Science Meets Parliament application webpage,
The objective of this initiative is to strengthen the connections between Canada’s scientific and political communities, enable a two-way dialogue, and promote mutual understanding. This initiative aims to help scientists become familiar with policy making at the political level, and for parliamentarians to explore using scientific evidence in policy making. [emphases mine] This initiative is not meant to be an advocacy exercise, and will not include any discussion of science funding or other forms of advocacy.
The Science Meets Parliament model is adapted from the successful Australian program held annually since 1999. Similar initiatives exist in the EU, the UK and Spain.
CSPC’s program aims to benefit the parliamentarians, the scientific community and, indirectly, the Canadian public.
This seems to be a training programme designed to teach scientists how to influence policy and to teach politicians to base their decisions on scientific evidence or, perhaps, to lean on the scientific experts they met through ‘Science Meets Parliament’.
I hope they add some critical thinking to this programme so that politicians can make assessments of the advice they’re being given. Scientists have their blind spots too.
CSPC and OCSA are pleased to offer this program in 2021 to help strengthen the connection between the science and policy communities. The program provides an excellent opportunity for researchers to learn about the inclusion of scientific evidence in policy making in Parliament.
You can find out more about benefits, eligibility, etc. on the application page.
Paid Graduate Research Internship: AI & Facial Recognition
Getting back to the Wilson Center, there’s this opportunity (from a Dec. 1, 2020 notice received by email),
New policy is on the horizon for facial recognition technologies (FRT). Many current proposals, including The Facial Recognition and Biometric Technology Moratorium Act of 2020 and The Ethical Use of Artificial Intelligence Act, either target the use of FRT in areas such as criminal justice or propose general moratoria until guidelines can be put in place. But these approaches are limited by their focus on negative impacts. Effective planning requires a proactive approach that considers broader opportunities as well as limitations and includes consumers, along with federal, state and local government uses.
More research is required to get us there. The Wilson Center seeks to better understand a wide range of opportunities and limitations, with a focus on one critically underrepresented group: consumers. The Science and Technology Innovation Program (STIP) is seeking an intern for Spring 2021 to support a new research project on understanding FRT from the consumer perspective.
A successful candidate will:
Have a demonstrated track record of work on policy and ethical issues related to Artificial Intelligence (AI) generally, Facial Recognition specifically, or other emerging technologies.
Be able to work remotely.
Be enrolled in a degree program, recently graduated (within the last year) and/or have been accepted to enter an advanced degree program within the next year.
Interested applicants should submit:
Cover letter explaining your general interest in STIP and specific interest in this topic, including dates and availability.
CV / Resume
Two brief writing samples (formal and/or informal), ideally demonstrating your work in science and technology research.
Applications are due Friday, December 18th. Please email all application materials as a single PDF to Erin Rohn, firstname.lastname@example.org. Questions on this role can be directed to Anne Bowser, email@example.com.
Intelligence Squared (IQ2US) was featured here in a January 18, 2019 posting when the organization hosted a ‘de-extinction’ (or ‘resurrection’) biology debate. I was quite impressed with the quality of the arguments, pro and con (for and against) and the civility with which the participants conducted themselves. Fingers crossed their upcoming Nov. 6, 2020 debate proves as satisfying.
It should be noted that Bloomberg TV is co-hosting this event with Intelligence Squared (IQ2US) and IBM is sponsoring it.
Here’s more about the debate on the motion: A U.S.-China Space Race Is Good for Humanity, from an Oct. 26, 2020 Shore Fire announcement (received via email),
Next Friday evening [Nov. 6, 2020] at 7:00 pm ET, the nonprofit debate series Intelligence Squared U.S. will hold a live debate on the motion “A U.S.-China Space Race Is Good for Humanity.”
Two of their debaters have released statements commenting on today’s news [emphasis mine; I have included information about the Oct. 26, 2020 news after this event information] out of NASA. One, Bidushi Bhattacharya, is a twenty-year veteran of NASA. The other, Avi Loeb, is one of the most prominent scientists working on space today.
… they will be debating for the motion “A U.S.-China Space Race Is Good for Humanity” with Intelligence Squared U.S. … . The debate will be viewable on Bloomberg TV’s new show ‘That’s Debatable’. Their opponents are Michio Kaku and Rajeswari Pillai Rajagopalan.
AVI LOEB STATEMENT:
“It was already known from previous studies that there is water ice on the lunar surface. But the new study identified that it is more abundant and exists all over the Moon. Interestingly, a month ago we published a paper with my former postdoc, Manasvi Lingam, arguing that liquid water may exist deep under the surface of the Moon and support sub-surface life.
“The existence of significant amounts of water on the lunar surface can be helpful for establishing a sustainable base there in the context of NASA’s Artemis program with its international partners. This will be the first step in advancing humanity to more distant destinations, such as Mars and beyond. There is no doubt that our future lies in space, not only for national security and commercial benefits but mainly for scientific exploration aimed at opening new horizons to our civilization. Earlier in October, eight countries signed the Artemis Accords, a set of international agreements drawn up by the US concerning future exploration of the Moon and the use of its resources. The Accords recognize that exploration of the Moon should be for peaceful purposes.
“In analogy with the scientific exploration conducted in the South Pole, it would be particularly interesting to search for life under the surface of the Moon once we establish a scientific base there.”
BIDUSHI BHATTACHARYA STATEMENT:
“Today’s [Oct. 26, 2020] announcement has huge implications for the commercial development space sector. Private companies and startups now have a new product development opportunity. I can see a path for leveraging today’s off-planet capabilities to develop AI-based robotics to provide water extraction services for NASA, so that the agency can continue to focus on R&D.”
Avi Loeb: Theoretical Physicist & Professor
Abraham (Avi) Loeb is a theoretical physicist, author, and Harvard professor. He was the longest-serving chair of Harvard’s astronomy department (for nine years) and is an elected member of the American Academy of Arts and Sciences, the American Physical Society, and the International Academy of Astronautics. Loeb is a member of the President’s Council of Advisors on Science and Technology at the White House and, in 2012, TIME magazine selected Loeb as one of the 25 most influential people in space.
Bidushi Bhattacharya: Rocket Scientist & Space Entrepreneur
Bidushi Bhattacharya is a rocket scientist and entrepreneur. After two decades with NASA working on projects including the Hubble Space Telescope and Galileo probe to Jupiter, Bhattacharya founded Astropreneurs HUB, Southeast Asia’s first space technology incubator. She currently serves on the Global Entrepreneurship Network Space Advisory Board and is the CEO of Bhattacharya Space Enterprises, a Singaporean startup dedicated to space-related education and training.
NASA announced on Oct. 26, 2020 that it had found water (rather than the ice found previously) on the moon. To be more specific, the water was found in a crater named after Christopher Clavius, a Jesuit priest who was also an astronomer and a mathematician. Given that piece of information, it’s perhaps not surprising that my cursory search yielded (near the top of the list) an Oct. 26, 2020 article about the discovery, Clavius, and the Jesuits’ interest in the stars by Molly Cahill for America Magazine, The Jesuit Review (Note: Links have been removed),
On Oct. 26, NASA’s Stratospheric Observatory for Infrared Astronomy, or SOFIA, announced the discovery of water on the moon. The water was discovered on the moon’s sunlit surface, which “indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places,” according to a press release.
His [Christopher Clavius] observance in 1560 of a total solar eclipse as a student inspired his life’s work: astronomy. Clavius is known for his work on refining and modifying the modern Gregorian calendar, and as Billy Critchley-Menor, S.J., wrote in America, Clavius was even called the “Euclid of the 16th century” before his death in 1612. He was one of the first mathematicians in the West to popularize the use of the decimal point, and his contributions to astronomy influenced Galileo, even though Clavius himself assented to a geocentric solar system, believing the heavens rotated around the Earth.
On Friday, November 6 at 7:00 PM ET Bloomberg Television will present the second episode of the new limited series “That’s Debatable,” presented in partnership with Intelligence Squared U.S. and sponsored exclusively by IBM, with an episode debating the motion “A U.S.-China Space Race Is Good for Humanity.” China is ramping up its national space industry with huge investments in next-generation technologies that promise to transform military, economic, and political realities. Could the U.S.-China space race drive innovation, rally public support for science and discovery, and launch humans into the next generation? Or would this competition catalyze an expensive global arms race, militarize space for decades to come, and destroy any hope of international peace and cohesion in the future?
Arguing in favor of the motion “A U.S.-China Space Race Is Good for Humanity” are Harvard physicist and member of the President’s Council of Advisors on Science and Technology at the White House Avi Loeb and rocket scientist Bidushi Bhattacharya, who spent two decades with NASA working on the Hubble Space Telescope and Galileo probe. Arguing against the motion are theoretical physicist Michio Kaku, a co-founder of String Field Theory, and nuclear weapons and space policy expert Rajeswari Pillai Rajagopalan.
Filmed in front of a live virtual audience, “That’s Debatable” will be conducted in the traditional Oxford-style format with two teams of two subject matter experts debating over four rounds, moderated by veteran Intelligence Squared U.S. moderator John Donvan. The live virtual audience will vote via mobile for or against the motion to determine the winner, to be announced at the conclusion of the program.
“That’s Debatable” also presents some of the first AI-aided debates, designed to demonstrate how AI can be used to bring a larger, more diverse range of voices and opinions to the public square. …
During the debate, IBM Watson plans to use Key Point Analysis, a new capability in Natural Language Processing (NLP) developed by the same IBM Research team that created Project Debater, which is designed to analyze viewer submitted arguments [deadline was Oct. 18, 2020] and provide insight into the global public opinion on each episode’s debate topic.
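IBM hasn’t published the internals of Key Point Analysis here, but the general idea it describes, grouping many short, similar viewer-submitted arguments and ranking the groups by how much support each attracts, can be illustrated with a toy sketch. The code below is my own invented illustration, not IBM’s method: it clusters arguments greedily by word overlap (the stopword list, the `jaccard` similarity measure, and the 0.2 threshold are all arbitrary choices for the example).

```python
from collections import Counter
import re

# A tiny, invented stopword list -- real NLP systems use far richer preprocessing.
STOPWORDS = {"a", "an", "the", "is", "are", "to", "of", "and", "in", "for", "will"}

def tokens(text):
    """Lowercase word tokens with common stopwords removed."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def jaccard(a, b):
    """Overlap between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def key_points(arguments, threshold=0.2):
    """Greedily cluster similar arguments; return (representative, count) pairs,
    most-supported first. A crude stand-in for key-point analysis."""
    clusters = []  # each entry: [representative_tokens, representative_text, count]
    for arg in arguments:
        t = tokens(arg)
        for cluster in clusters:
            if jaccard(t, cluster[0]) >= threshold:
                cluster[2] += 1
                break
        else:
            clusters.append([t, arg, 1])
    clusters.sort(key=lambda c: -c[2])
    return [(c[1], c[2]) for c in clusters]

# Hypothetical viewer submissions on the debate motion.
args = [
    "Competition in space will drive innovation",
    "A space race drives innovation and discovery",
    "A space race risks militarizing space",
    "Militarizing space risks an arms race",
]
for point, votes in key_points(args):
    print(votes, point)
```

Running this groups the four submissions into two "key points" with two supporters each, one pro-innovation, one anti-militarization, which is the flavour of insight the announcement describes, albeit at a vastly smaller scale and sophistication.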
… [Note: The BIOS for those ‘arguing for the motion’ is in the Oct. 26, 2020 announcement excerpted near the beginning of this post]
Michio Kaku: Theoretical Physicist
Michio Kaku is one of the most widely recognized figures in science. He is a theoretical physicist, international bestselling author, and co-founder of String Field Theory. His most recent book, “Future of Humanity,” projects the future of the space program centuries into the future. Kaku is a professor at the City University of New York.
Rajeswari Pillai Rajagopalan: Nuclear Weapons & Space Policy Expert
Rajeswari Pillai Rajagopalan is a distinguished fellow and head of the Nuclear and Space Policy Initiative at the Observer Research Foundation, one of India’s leading think tanks. Rajagopalan also recently served as a technical advisor to the United Nations Group of Governmental Experts on Prevention of Arms Race in Outer Space. She is the author of “The Dragon’s Fire: Chinese Military Strategy and Its Implications for Asia.”
About Bloomberg Media:
Bloomberg Media is a leading, global, multi-platform brand that provides decision-makers with timely news, analysis and intelligence on business, finance, technology, climate change, politics and more. Powered by a newsroom of over 2,700 journalists and analysts, it reaches influential audiences worldwide across every platform including digital, social, TV, radio, print and live events. Bloomberg Media is a division of Bloomberg LP. Visit BloombergMedia.com for more information.
About Intelligence Squared U.S.:
A non-partisan, non-profit organization, Intelligence Squared U.S. was founded to address a fundamental problem in America: the extreme polarization of our nation and our politics. Their mission is to restore critical thinking, facts, reason, and civility to American public discourse. The award-winning debate series reaches millions of viewers and listeners through multi-platform distribution, including public radio, podcasts, live streaming, newsletters, interactive digital content, and on-demand apps including Roku and Apple TV. With over 180 debates and counting, Intelligence Squared U.S. has encouraged the public to “think twice” on a wide range of provocative topics. Author and ABC News correspondent John Donvan has moderated IQ2US since 2008.
About IBM Watson:
Watson is IBM’s AI technology for business, helping organizations to better predict and shape future outcomes, automate complex processes, and optimize employees’ time. Watson has evolved from an IBM Research project, to experimentation, to a scaled set of products that run anywhere. With more than 30,000 client engagements, Watson is being applied by leading global brands across a variety of industries to transform how people work. To learn more, visit: https://www.ibm.com/watson.
To learn more about Natural Language Processing and how new capabilities like Key Point Analysis are designed to analyze and generate insights from thousands of arguments on any topic, visit: https://www.ibm.com/watson/natural-language-processing.
It’s C. P. Snow who comes to mind on seeing the words ‘science and two cultures’ (for anyone unfamiliar with the lecture and/or book see The Two Cultures Wikipedia entry).
This Sept. 14, 2020 news item on phys.org puts forward an entirely different concept concerning two cultures and science (Note: Links have been removed),
In the world of scientific research today, there’s a revolution going on—over the last decade or so, scientists across many disciplines have been seeking to improve the workings of science and its methods.
To do this, scientists are largely following one of two paths: the movement for reproducibility and the movement for open science. Both movements aim to create centralized archives for data, computer code and other resources, but from there, the paths diverge. The movement for reproducibility calls on scientists to reproduce the results of past experiments to verify earlier results, while open science calls on scientists to share resources so that future research can build on what has been done, ask new questions and advance science.
Now, an international research team led by IU’s Mary Murphy, Amanda Mejia, Jorge Mejia, Xiaoran Yan, Patty Mabry, Susanne Ressl, Amanda Diekman, and Franco Pestilli, finds the two movements do more than diverge. They have very distinct cultures, with two distinct literatures produced by two groups of researchers with little crossover. Their investigation also suggests that one of the movements — open science — promotes greater equity, diversity, and inclusivity. Their findings were recently reported in the Proceedings of the National Academy of Sciences [PNAS].
The researchers on the study, whose fields range widely – across social psychology, network science, neuroscience, structural biology, biochemistry, statistics, business, and education, among others – were taken by surprise by the results.
“The two movements have very few crossovers, shared authors or collaborations,” said Murphy. “They operate relatively independently. And this distinction between the two approaches is replicated across all scientific fields we examined.”
In other words, whether in biology, psychology or physics, scientists working in open science participate in a different scientific culture than those working in reproducibility, even if they work in the same disciplinary field. And which culture a scientist works in determines a lot about access and participation, particularly for women.
IU cognitive scientist Richard Shiffrin, who has previously been involved in efforts to improve science but did not participate in the current study, says the new study by Murphy and her colleagues provides a remarkable look into the way that current science operates. “There are two quite distinct cultures, one more inclusive, that promotes transparency of reporting and open science, and another, less inclusive, that promotes reproducibility as a remedy to the current practice of science,” he said.
A Tale of Two Sciences
To investigate the fault lines between the two movements, the team, led by network scientists Xiaoran Yan and Patricia Mabry, first conducted a network analysis of papers published from 2010-2017 identified with one of the two movements. The analysis showed that even though both movements span widely across STEM fields, the authors within them occupy two largely distinct networks. Authors who publish open science research, in other words, rarely produce research within reproducibility, and very few reproducibility researchers conduct open science research.
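The overlap measurement at the heart of such a co-authorship network analysis can be illustrated with a toy example; the author names and paper groupings below are invented placeholders, not data from the study:

```python
# Illustrative sketch only: given author sets for papers tagged as
# "open science" vs. "reproducibility", measure how many authors
# publish in both literatures. All names here are hypothetical.

open_science_papers = [{"Alice", "Bea"}, {"Bea", "Chen"}, {"Dana"}]
reproducibility_papers = [{"Evan", "Farid"}, {"Farid", "Gia"}, {"Bea"}]

# Collect the full author set for each movement.
open_authors = set().union(*open_science_papers)
repro_authors = set().union(*reproducibility_papers)

# Crossover authors appear in both literatures; the Jaccard index
# expresses that overlap as a fraction of all authors.
crossover = open_authors & repro_authors
jaccard = len(crossover) / len(open_authors | repro_authors)

print(crossover)           # authors publishing in both movements
print(round(jaccard, 3))   # small value = largely distinct networks
```

In this toy case only one of seven authors bridges the two communities; the paper's finding is that real co-authorship networks across STEM fields show similarly little crossover.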
Next, information systems analyst Jorge Mejia and statistician Amanda Mejia applied a semantic text analysis to the abstracts of the papers to determine the values implicit in the language used to define the research. Specifically they looked at the degree to which the research was prosocial, that is, oriented toward helping others by seeking to solve large social problems.
“This is significant,” Murphy explained, “insofar as previous studies have shown that women often gravitate toward science that has more socially oriented goals and aims to improve the health and well-being of people and society. We found that open science has more prosocial language in its abstracts than reproducibility does.”
With respect to gender, the team found that “women publish more often in high-status authorship positions in open science, and that participation in high-status authorship positions has been increasing over time in open science, while in reproducibility women’s participation in high-status authorship positions is decreasing over time,” Murphy said.
The researchers are careful to point out that the link they found between women and open science is so far a correlation, not a causal connection.
“It could be that as more women join these movements, the science becomes more prosocial. But women could also be drawn to this prosocial model because that’s what they value in science, which in turn strengthens the prosocial quality of open science,” Murphy noted. “It’s likely to be an iterative cultural cycle, which starts one way, attracts people who are attracted to that culture, and consequently further builds and supports that culture.”
Diekman, a social psychologist and senior author on the paper, noted these patterns might help open more doors to science. “What we know from previous research is that when science conveys a more prosocial culture, it tends to attract not only more women, but also people of color and prosocially oriented men,” she said.
The distinctions traced in the study are also reflected in the scientific processes employed by the research team itself. As one of the most diverse teams to publish in the pages of PNAS, the research team used open science practices.
“The initial intuition, before the project started, was that investigators have come to this debate from very different perspectives and with different intellectual interests. These interests might attract different categories of researchers,” says Pestilli, an IU neuroscientist. “Some of us are working on improving science by providing new technology and opportunities to reduce human mistakes and promote teamwork. Yet we also like to focus on the greater good science does for society, every day. We are perhaps seeing more of this now in the time of the COVID-19 pandemic.”
With a core of eight lead scientists at IU, the team also included 20 more co-authors, mostly women and people of color who are experts on how to increase the participation of underrepresented groups in science; diversity and inclusion; and the movements to improve science.
Research team leader Mary Murphy noted that in this cultural moment of examining inequality throughout our institutions, looking at who gets to participate in science can yield great benefit.
“Trying to understand inequality in science has the potential to benefit society now more than ever. Understanding how the culture of science can compound problems of inequality or mitigate them could be a real advance in this moment when long-standing inequalities are being recognized–and when there is momentum to act and create a more equitable science.”
I think someone had a little fun writing the news release. First, there’s a possible reference to C. P. Snow’s The Two Cultures and, then, a reference to Charles Dickens’ A Tale of Two Cities (Wikipedia entry here) along with, possibly, an allusion to the French Revolution (liberté, égalité, et fraternité). Going even further afield, is there also an allusion to a science revolution? Certainly the values of liberty and equality would seem to fit in with the findings.
Here’s a link to and a citation for the paper,
Open science, communal culture, and women’s participation in the movement to improve science by Mary C. Murphy, Amanda F. Mejia, Jorge Mejia, Xiaoran Yan, Sapna Cheryan, Nilanjana Dasgupta, Mesmin Destin, Stephanie A. Fryberg, Julie A. Garcia, Elizabeth L. Haines, Judith M. Harackiewicz, Alison Ledgerwood, Corinne A. Moss-Racusin, Lora E. Park, Sylvia P. Perry, Kate A. Ratliff, Aneeta Rattan, Diana T. Sanchez, Krishna Savani, Denise Sekaquaptewa, Jessi L. Smith, Valerie Jones Taylor, Dustin B. Thoman, Daryl A. Wout, Patricia L. Mabry, Susanne Ressl, Amanda B. Diekman, and Franco Pestilli PNAS DOI: https://doi.org/10.1073/pnas.1921320117 First published September 14, 2020
This paper appears to be open access.
Here’s an image representing the researchers’ findings,
I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).
The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.
Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …
Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July.
What exactly is meant by the “embedded ethics approach”?
Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.
Is there an example of this concept in practice?
Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.
The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?
Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.
Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,
An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020
This paper is behind a paywall.
Religion, ethics, and AI
For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.
The Roman Catholic Church and AI
There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),
Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.
Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.
“But I think the world needs people from different places to come together,” he said.
The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.
The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.
It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.
Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.
UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.
Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,
The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.
According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”
“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.
The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 on the theme of artificial intelligence.
One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.
On the morning of Feb. 28, a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.
The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.
The president of the European Parliament, David Sassoli, was also present Feb. 28.
Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.
You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.
Buddhism and AI
The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,
The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.
“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”
If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.
I also found a talk on the topic by The Venerable Tenzin Priyadarshi, first here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,
… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.
He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).
Judaism, Islam, and other Abrahamic faiths examine AI and ethics
New technologies are transforming our world every day, and the pace of change is only accelerating. In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness. This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves. This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):
What is it? What can it do and be used for? And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future?
Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines. The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.
UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship. JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”
As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,
As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.
This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.
In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).
Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.
At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.
Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.
Television (TV) episodes stored on DNA?
According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a TV series, ‘Biohackers’, has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,
The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.
They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.
Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?
First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
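The mapping Prof. Heckel describes can be sketched in a few lines of Python. This is an illustrative toy encoder for the two-bits-per-nucleotide scheme only, not the researchers' actual pipeline, which layers channel coding and other machinery on top:

```python
# Two bits of data per nucleotide: 00->A, 01->C, 10->G, 11->T,
# as in the example from the interview.
BIT_PAIR_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT_PAIR = {base: pair for pair, base in BIT_PAIR_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Map an even-length bit string to a DNA sequence, 2 bits per base."""
    return "".join(BIT_PAIR_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    """Invert the mapping: recover the bit string from a DNA sequence."""
    return "".join(BASE_TO_BIT_PAIR[base] for base in seq)

# The interview's example: 01 01 11 00 encodes as CCTA.
assert bits_to_dna("01011100") == "CCTA"
assert dna_to_bits("CCTA") == "01011100"
```

At this rate the roughly 600 million zeros and ones of the first episode become a sequence of about 300 million nucleotides, before any redundancy is added.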
And to view the series – is it just a matter of “reverse translation” of the letters?
In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
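The redundancy idea can be illustrated with the simplest possible channel code, a three-fold repetition code. Heckel's actual algorithm is far more sophisticated, but the principle, adding redundancy so that corrupted data can still be recovered, is the same:

```python
# Toy channel code: write every bit three times; a majority vote on
# each block of three recovers the bit even if one copy is corrupted.
from collections import Counter

def encode_repetition(bits: str, n: int = 3) -> str:
    """Add redundancy by repeating each bit n times."""
    return "".join(b * n for b in bits)

def decode_repetition(coded: str, n: int = 3) -> str:
    """Recover each bit by majority vote over its n copies."""
    decoded = []
    for i in range(0, len(coded), n):
        block = coded[i:i + n]
        decoded.append(Counter(block).most_common(1)[0][0])
    return "".join(decoded)

coded = encode_repetition("0110")   # "000111111000"
corrupted = "010111101000"          # one flipped bit in two of the blocks
assert decode_repetition(corrupted) == "0110"
```

The cost is obvious: the encoded message is three times longer. Practical codes achieve the same error tolerance with far less added redundancy, which is exactly the efficiency concern Prof. Heckel raises next.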
Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?
The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.
DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?
First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – a trillionth of a gram – of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
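A quick back-of-the-envelope check of those figures, using the standard definition of a picogram as 10⁻¹² grams:

```python
# Sanity check: 100 megabytes per picogram, scaled up to one gram.
MB = 10**6          # bytes in a megabyte (decimal)
EB = 10**18         # bytes in an exabyte
PICOGRAM = 1e-12    # grams in a picogram

bytes_per_gram = (100 * MB) / PICOGRAM  # density achieved in the demo
print(bytes_per_gram / EB)              # ~100 exabytes per gram
```

So the density achieved for the TV series already works out to roughly 100 exabytes per gram, the same order of magnitude as the theoretical figure of up to 200 exabytes quoted in the interview.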
And the method you have developed also makes the DNA strands durable – practically indestructible.
My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.
What are your next steps? Does data storage on DNA have a future?
We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.
Here’s a link to and a citation for the paper,
Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019
This world-class symposium, the sixth event of its kind, will bring together a record number (1000+) of renowned Canadian and international experts from across the nanomedicines field to:
highlight the discoveries and innovations in nanomedicines that are contributing to global progress in acute, chronic and orphan disease treatment and management;
present up-to-date diagnostic and therapeutic nanomedicine approaches to addressing the challenges of COVID-19; and
facilitate discussion among nanomedicine researchers and innovators and UBC and NMIN clinician-scientists, basic researchers, trainees, and research partners.
Since 2014, Vancouver Nanomedicine Day has advanced nanomedicine research, knowledge mobilization and commercialization in Canada by sharing high-impact findings and facilitating interaction—among researchers, postdoctoral fellows, graduate students, and life science and startup biotechnology companies—to catalyze research collaboration.
I have a few observations. First, Robert Langer is a big deal. Here are a few highlights from his Wikipedia entry (Note: Links have been removed),
Robert Samuel Langer, Jr. FREng (born August 29, 1948) is an American chemical engineer, scientist, entrepreneur, inventor and one of the twelve Institute Professors at the Massachusetts Institute of Technology.
Langer holds over 1,350 granted or pending patents. He is one of the world’s most highly cited researchers, having authored nearly 1,500 scientific papers, and has participated in the founding of multiple technology companies.
Langer is the youngest person in history (at 43) to be elected to all three American science academies: the National Academy of Sciences, the National Academy of Engineering and the Institute of Medicine. He was also elected as a charter member of National Academy of Inventors. He was elected as an International Fellow of the Royal Academy of Engineering in 2010.
It’s all about commercializing the research—or is it?
(This second observation is a little more complicated and requires a little context.) The NMIN is one of Canada’s Networks of Centres of Excellence (who thought that name up? …sigh), from the NMIN About page,
The NCEs seem to be firmly fixed on finding pathways to commercialization (from the NCE About page; Note: All is not as it seems),
Canada’s global economic competitiveness [emphasis mine] depends on making new discoveries and transforming them into products, services [emphasis mine] and processes that improve the lives of Canadians. To meet this challenge, the Networks of Centres of Excellence (NCE) offers a suite of programs that mobilize Canada’s best research, development and entrepreneurial [emphasis mine] expertise and focus it on specific issues and strategic areas.
NCE programs meet Canada’s needs to focus a critical mass of research resources on social and economic challenges, commercialize [emphasis mine] and apply more of its homegrown research breakthroughs, increase private-sector R&D, [emphasis mine] and train highly qualified people. As economic [emphasis mine] and social needs change, programs have evolved to address new challenges.
The fund will invest $275 million over the next 5 years beginning in fiscal 2018-19, and $65 million ongoing, to fund international, interdisciplinary, fast-breaking and high-risk research.
NFRF is composed of three streams to support groundbreaking research.
Exploration generates opportunities for Canada to build strength in high-risk, high-reward and interdisciplinary research;
Transformation provides large-scale support for Canada to build strength and leadership in interdisciplinary and transformative research; and
International enhances opportunities for Canadian researchers to participate in research with international partners.
As you can see there’s no reference to commercialization or economic challenges.
Here at last is the second observation: I find it hard to believe that the government of Canada has given up on the idea of commercializing research and increasing the country’s economic competitiveness through research. Certainly, Langer’s virtual appearance at Vancouver Nanomedicine Day 2020 suggests that at least some corners of the Canadian research establishment remain staunchly entrepreneurial.
Canada remains strong in research output and impact, capacity for R&D and innovation at risk: New expert panel report
While Canada is a highly innovative country, with a robust research base and thriving communities of technology start-ups, significant barriers—such as a lack of managerial skills, the experience needed to scale-up companies, and foreign acquisition of high-tech firms—often prevent the translation of innovation into wealth creation. [emphasis mine] The result is a deficit of technology companies growing to scale in Canada, and a loss of associated economic and social benefits. This risks establishing a vicious cycle, where successful companies seek growth opportunities elsewhere due to a lack of critical skills and experience in Canada to guide companies through periods of rapid expansion.
According to the CCA’s [2018 report] Summary webpage, it was Innovation, Science and Economic Development Canada which requested the report. (I wrote up a two-part commentary under one of my favourite titles: “The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada.” Part 1 and Part 2)
I will be fascinated to watch the NFRF and science commercialization situations as they develop.
I have some news about conserving art; early bird registration deadlines for two events, and, finally, an announcement about contest winners.
Canadian Light Source (CLS) and modern art
This is one of three pieces by Rita Letendre that underwent chemical mapping according to an August 5, 2020 CLS news release by Victoria Martinez (also received via email),
Research undertaken at the Canadian Light Source (CLS) at the University of Saskatchewan was key to understanding how to conserve experimental oil paintings by Rita Letendre, one of Canada’s most respected living abstract artists.
The work done at the CLS was part of a collaborative research project between the Art Gallery of Ontario (AGO) and the Canadian Conservation Institute (CCI) that came out of a recent retrospective Rita Letendre: Fire & Light at the AGO. During close examination, Meaghan Monaghan, paintings conservator from the Michael and Sonja Koerner Centre for Conservation, observed that several of Letendre’s oil paintings from the fifties and sixties had suffered significant degradation, most prominently, uneven gloss and patchiness, snowy crystalline structures coating the surface known as efflorescence, and cracking and lifting of the paint in several areas.
Kate Helwig, Senior Conservation Scientist at the Canadian Conservation Institute, says these problems are typical of mid-20th century oil paintings. “We focused on three of Rita Letendre’s paintings in the AGO collection, which made for a really nice case study of her work and also fits into the larger question of why oil paintings from that period tend to have degradation issues.”
Growing evidence indicates that paintings from this period have experienced these problems due to the combination of the experimental techniques many artists employed and the additives paint manufacturers had begun to use.
In order to determine more precisely how these factors affected Letendre’s paintings, the research team members applied a variety of analytical techniques, using microscopic samples taken from key points in the works.
“The work done at the CLS was particularly important because it allowed us to map the distribution of materials throughout a paint layer such as an impasto stroke,” Helwig said. The team used Mid-IR chemical mapping at the facility, which provides a map of different molecules in a small sample.
For example, chemical mapping at the CLS allowed the team to understand the distribution of the paint additive aluminum stearate throughout the paint layers of the painting Méduse. This painting showed areas of soft, incompletely dried paint, likely due to the high concentration and incomplete mixing of this additive.
The painting Victoire had a crumbling base paint layer in some areas and cracking and efflorescence at the surface in others. Infrared mapping at the CLS allowed the team to determine that excess free fatty acids in the paint were linked to both problems; where the fatty acids were found at the base, they formed zinc “soaps,” which led to crumbling and cracking, and where they had moved to the surface, they had crystallized, causing the snowflake-like efflorescence.
AGO curators and conservators interviewed Letendre to determine what was important to her in preserving and conserving her works, and she highlighted how important an even gloss across the surface was to her artworks, and the philosophical importance of the colour black in her paintings. These priorities guided conservation efforts, while the insights gained through scientific research will help maintain the works in the long term.
In order to restore the black paint to its intended even finish for display, conservator Meaghan Monaghan removed the white crystallization from the surface of Victoire, but it is possible that it could begin to recur. Understanding the processes that lead to this degradation will be an important tool to keep Letendre’s works in good condition.
“The world of modern paint research is complicated; each painting is unique, which is why it’s important to combine theoretical work on model paint systems with this kind of case study on actual works of art” said Helwig. The team hopes to collaborate on studying a larger cross section of Letendre’s paintings in oil and acrylic in the future to add to the body of knowledge.
The latest news from the CSPC 2020 (November 16 – 20 with preconference events from Nov. 1 -14) organizers is that registration is open and early birds have a deadline of September 27, 2020 (from an August 6, 2020 CSPC 2020 announcement received via email),
It’s time! Registration for the 12th Canadian Science Policy Conference (CSPC 2020) is open now. Early Bird registration is valid until Sept. 27th.
CSPC 2020 is coming to your offices and homes:
Register for full access to 3 weeks of programming of the biggest science and innovation policy forum of 2020 under the overarching theme: New Decade, New Realities: Hindsight, Insight, Foresight.
300+ Speakers from five continents
65+ Panel sessions, 15 pre-conference sessions and symposiums
50+ On demand videos and interviews with the most prominent figures of science and innovation policy
20+ Partner-hosted functions
15+ Networking sessions
15 Open mic sessions to discuss specific topics
The virtual conference features an exclusive array of offerings:
3D Lounge and Exhibit area
Advance access to the Science Policy Magazine, featuring insightful reflections from the frontier of science and policy innovation
Don’t miss this unique opportunity to engage in the most important discussions of science and innovation policy with insights from around the globe, just from your office, home desk, or your mobile phone.
Benefit from significantly reduced registration fees for an online conference, with a discount option for multiple ticket purchases
The preliminary programme can be found here. This year there will be some discussion of a Canadian synthetic biology roadmap, presentations on various Indigenous concerns (mostly health), a climate challenge presentation focusing on Mexico and social vulnerability, and another on parallels between climate challenges and COVID-19. There are many presentations focused on COVID-19 and/or health.
Margaux Davoine has written up a teaser for the 2020 edition of ISEA in the form of an August 6, 2020 interview with Yan Breuleux. I’ve excerpted one bit,
Finally, thinking about this year’s theme [Why Sentience?], there might be something a bit ironic about exploring the notion of sentience (historically reserved for biological life, and quite a small subsection of it) through digital media and electronic arts. There’s been much work done in the past 25 years to loosen the boundaries between such distinctions: how do you imagine ISEA2020 helping in that?
The similarities shared between humans, animals, and machines are fundamental in cybernetic sciences. According to the founder of cybernetics Norbert Wiener, the main tenets of the information paradigm – the notion of feedback – can be applied to humans, animals as well as the material world. Famously, the AA predictor (as analysed by Peter Galison in 1994) can be read as a first attempt at human-machine fusion (otherwise known as a cyborg).
The infamous Turing test also tends to blur the lines between humans and machines, between language and informational systems. Second-order cybernetics are often associated with biologists Francisco Varela and Humberto Maturana. The very notion of autopoiesis (a system capable of maintaining a certain level of stability in an unstable environment) relates back to the concept of homeostasis formulated by William Ross Ashby in 1952. Moreover, the concept of “ecosystems” emanates directly from the field of second-order cybernetics, providing researchers with a clearer picture of the interdependencies between living and non-living organisms. In light of these theories, the absence of boundaries between animals, humans, and machines constitutes the foundation of the technosciences paradigm. New media, technological arts, virtual arts, etc., partake in the dialogue between humans and machines, and thus contribute to the prolongation of this paradigm. Frank Popper nearly called his book “Techno Art” instead of “Virtual Art”, in reference to technosciences (his editor suggested the name change). For artists in the technological arts community, Jakob von Uexküll’s notion of “human-animal milieu” is an essential reference. Also present in Simondon’s reflections on human environments (both natural and artificial), the notion of “milieu” is quite important in the discourses about art and the environment. Concordia University’s artistic community chose the concept of “milieu” as the rallying point of its research laboratories.
ISEA2020’s theme resonates particularly well with the recent eruption of processing and artificial intelligence technologies. For me, Sentience is a purely human and animal idea: machines can only simulate our ways of thinking and feeling. Partly in an effort to explore the illusion of sentience in computers, Louis-Philippe Rondeau, Benoît Melançon and I have established the Mimesis laboratory at NAD University. Processing and AI technologies are especially useful in the creation of “digital doubles”, “Vactors”, real-time avatar generation, Deep Fakes and new forms of personalised interactions.
I adhere to the epistemological position that the living world is immeasurable. Through their ability to simulate, machines can merely reduce complex logics to a point of understandability. The utopian notion of empathetic computers is an idea mostly explored by popular science-fiction movies. Nonetheless, research into computer sentience allows us to devise possible applications, explore notions of embodiment and agency, and thereby develop new forms of interaction. Beyond my own point of view, the idea that machines can somehow feel emotions gives artists and researchers the opportunity to experiment with certain findings from the fields of the cognitive sciences, computer sciences and interactive design. For example, in 2002 I was particularly struck by an immersive installation at the Universal Exhibition in Neuchâtel, Switzerland, titled Ada: Intelligence Space. The installation comprised an artificial environment controlled by a computer, which interacted with the audience on the basis of artificial emotion. The system encouraged visitors to participate by intelligently analysing their movements and sounds. Another example, Louis-Philippe Demers’ Blind Robot (2012), demonstrates how artists can be both critical of, and amazed by, these new forms of knowledge. Additionally, the 2016 BIAN (Biennale internationale d’art numérique), organized by ELEKTRA (Alain Thibault), explored the various ways these concepts were appropriated in installation and interactive art. The way I see it, current works of digital art operate as boundary objects. The varied usages and interpretations of a particular work of art allow it to be analyzed from nearly every angle or field of study. Thus, philosophers can ask themselves: how does a computer come to understand what being human really is?
I have yet to attend conferences or exchange with researchers on that subject. Given the sheer number of presentation proposals sent to ISEA2020, however, I have no doubt that the symposium will be the ideal context to reflect on the concept of Sentience and the many issues it raises.
And now, the last bit of news.
HotPopRobot, one of six global winners of 2020 NASA SpaceApps COVID-19 challenge
We are excited to become the global winners of the 2020 NASA SpaceApps COVID-19 Challenge from among 2,000 teams from 150 countries. The six Global Winners will be invited to visit a NASA Rocket Launch site to view a spacecraft launch along with the SpaceApps Organizing team once travel is deemed safe. They will also receive an invitation to present their projects to NASA, ESA [European Space Agency], JAXA [Japan Aerospace Exploration Agency], CNES [Centre National D’Etudes Spatiales; France], and CSA [Canadian Space Agency] personnel. https://covid19.spaceappschallenge.org/awards
15,000 participants joined together to submit over 1,400 projects for the COVID-19 Global Challenge that was held on 30-31 May 2020. 40 teams made it to the Global Finalists. Amongst them, 6 teams became the global winners!
The 2020 SpaceApps was an international collaboration between NASA, Canadian Space Agency, ESA, JAXA, CSA,[sic] and CNES focused on solving global challenges. During a period of 48 hours, participants from around the world were required to create virtual teams and solve any of the 12 challenges related to the COVID-19 pandemic posted on the SpaceApps website. More details about the 2020 SpaceApps COVID-19 Challenge: https://sa-2019.s3.amazonaws.com/media/documents/Space_Apps_FAQ_COVID_.pdf
We have been participating in the NASA Space Apps Challenge every year since 2014. We were only 8 and 5 years old, respectively, when we participated in our very first SpaceApps in 2014.
We have grown up learning more about space, tackling global challenges, making hardware and software projects, participating in meetings, networking with mentors and teams across the globe, and giving presentations through the annual NASA Space Apps Challenges. This is one challenge we look forward to every year.
It has been a fun and exciting journey meeting so many people and astronauts and visiting several fascinating places on the way! We hope more kids, youths, and families are inspired by our space journey. Space is for all and is yours to discover!
A June 1, 2020 essay by Maywa Montenegro (Postdoctoral Fellow, University of California at Davis) for The Conversation posits that new regulations (which in fact result in deregulation) are likely to create problems,
In May, federal regulators finalized a new biotechnology policy that will bring sweeping changes to the U.S. food system. Dubbed “SECURE,” the rule revises U.S. Department of Agriculture regulations over genetically engineered plants, automatically exempting many gene-edited crops from government oversight. Companies and labs will be allowed to “self-determine” whether or not a crop should undergo regulatory review or environmental risk assessment.
Initial responses to this new policy have followed familiar fault lines in the food community. Seed industry trade groups and biotech firms hailed the rule as “important to support continuing innovation.” Environmental and small farmer NGOs called the USDA’s decision “shameful” and less attentive to public well-being than to agribusiness’s bottom line.
But the gene-editing tool CRISPR was supposed to break the impasse in the old GM wars by making biotechnology more widely affordable, accessible and thus democratic.
In my research, I study how biotechnology affects transitions to sustainable food systems. It’s clear that since 2012 the swelling R&D pipeline of gene-edited grains, fruits and vegetables, fish and livestock has forced U.S. agencies to respond to the so-called CRISPR revolution.
Yet this rule change has a number of people in the food and scientific communities concerned. To me, it reflects the lack of accountability and trust between the public and government agencies setting policies.
Is there a better way?
… I have developed a set of principles and practices for governing CRISPR based on dialogue with front-line communities who are most affected by the technologies others usher in. Communities don’t just have to adopt or refuse technology – they can co-create [emphasis mine] it.
One way to move forward in the U.S. is to take advantage of common ground between sustainable agriculture movements and CRISPR scientists. The struggle over USDA rules suggests that few outside of industry believe self-regulation is fair, wise or scientific.
If you have the time and the inclination, do read the essay in its entirety.
Anyone who has read my COVID-19 op-ed for the Canadian Science Policy Centre may see some similarity between Montenegro’s “co-create” and this from my May 15, 2020 posting (which included my reference materials) or this version on the Canadian Science Policy Centre website (where you can find many other COVID-19 op-eds),
In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.
To be clear, I think times of crisis are when a lot of people call for more co-creation and input. Here’s more about Montenegro’s work on her profile page (which includes her academic credentials, research interests and publications) on the University of California at Berkeley’s Department of Environmental Science, Policy, and Management webspace. She seems to have been making the call for years.
I am a US-Dutch-Peruvian citizen who grew up in Appalachia, studied molecular biology in the Northeast, worked as a journalist in New York City, and then migrated to the left coast to pursue a PhD. My indigenous ancestry, smallholder family history, and the colonizing/decolonizing experiences of both the Netherlands and Peru informs my personal and professional interests in seeds and agrobiodiversity. My background engenders a strong desire to explore synergies between western science and the indigenous/traditional knowledge systems that have historically been devalued and marginalized.
Trained in molecular biology, science writing, and now, a range of critical social and ecological theory, I incorporate these perspectives into research on seeds.
I am particularly interested in the relationship between formal seed systems – characterized by professional breeding, certification, intellectual property – and commercial sale and informal seed systems through which farmers traditionally save, exchange, and sell seeds. …
You can find more on her Twitter feed, which is where I discovered a call for papers for a “Special Feature: Gene Editing the Food System” in the journal, Elementa: Science of the Anthropocene. They have a rolling deadline, which started in February 2020. At this time, there is one paper in the series,
What I’ve posted here is the piece followed by attribution for the artwork used to illustrate my op-ed in the PDF version of essays and by links to all of my reference materials.
It can become overwhelming as one looks at the images of coffins laid out in various venues, listens to exhausted health care professionals, and sees body bags being loaded onto vans while reading stories about the people who have been hospitalized and/or have died.
In this sea of information, it’s easy to forget that COVID-19 is one in a long history of pandemics. For the sake of brevity, here’s a mostly complete roundup of the last 100 years. The H1N1 pandemic of 1918/19 resulted in either 17 million, 50 million, or 100 million deaths, depending on the source of information. The H2N2 pandemic of 1958/59 resulted in approximately 1.1 million deaths; the H3N2 pandemic of 1968/69 resulted in somewhere from 1 to 4 million deaths; and the H1N1pdm09 pandemic of 2009 resulted in roughly 150,000 to 575,000 deaths. The HIV/AIDS global pandemic or, depending on the agency, epidemic is ongoing. The estimate for HIV/AIDS-related deaths in 2018 alone was between 500,000 and 1.1 million.
It’s now clear that the 2019/20 pandemic will take upwards of 350,000 lives and, quite possibly, many more lives before it has run its course.
On the face of it, the numbers for COVID-19 would not seem to occasion the current massive attempt at physical isolation which ranges across the globe and within entire countries. There is no record of any such previous, more or less global effort. In the past, physical isolation seems to have been practiced on a more localized level.
We are told the current policy of ‘flattening the curve’ is an attempt to constrain the numbers so as to lighten the burden on the health care system, i.e., the primary focus is to lessen the number of people needing care at any one time while also lessening the number of deaths and hospitalizations.
It’s an idea that can be traced back in more recent times to the 1918/19 pandemic (and stretches back to at least the 17th century when as a student Isaac Newton was sent home from Cambridge to self-isolate from the Great Plague of London).
During the 1918/19 pandemic, Philadelphia and St. Louis, in the US had vastly different experiences. Ignoring advice from infectious disease experts, Philadelphia held a large public parade. Within two or three days, people sickened and, ultimately, 16,000 died in six months. By contrast, St. Louis adopted social and physical isolation measures suffering 2,000 deaths and flattening the curve. (That city too suffered greatly but more slowly.)
In 2019/20, many governments were slow to respond and many have been harshly criticized for their tardiness. Government leaders seem to have been following an older script, something more laissez-faire, something similar to the one we have followed with past pandemics.
We are breaking new ground by following a policy that is untested at this scale.
Viewed positively, the policy hints at a shift in how we view disease and death and hopes are that this heralds a more cohesive and integrated approach to all life on this planet. Viewed more negatively, it suggests an agenda of social control being enacted and promoted to varying degrees across the planet.
Regardless of your perspective, ‘flattening the curve’ seems to have been employed without any substantive consideration of collateral damage and unintended consequences.
We are beginning to understand some of the consequences. On April 5, 2020, UN Secretary-General Antonio Guterres expressed grave concern about a global surge in domestic violence. King’s College London and the Australian National University released a report on April 9, 2020 estimating that half a billion people around the world may be pushed into poverty because of these measures.
As well, access to water, which many of us take for granted, can be highly problematic. Homeless people, incarcerated people, indigenous peoples and others note that washing with water and soap, the recommended practice for killing the virus should it land on you, is not a simple matter for them.
More crises such as pandemics, climate change as seen in extreme weather events and water shortages along with rising sea levels around the world, and economic downturns either singly or connected together in ways we have difficulty fully appreciating can be anticipated.
In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.
We have the tools; what remains is the will and the wit to use them. Brute force analysis has its uses, but it’s also important to pay attention to the outliers. “We cannot solve our problems with the same thinking we used when we created them.” (Albert Einstein)
PDF of essays (Response to COVID-19 Pandemic and its Impacts, volume 1, issue 2, May 2020)
This image of an art piece derived from a Fibonacci word fractal was used to illustrate my essay (pp. 31-2) as reproduced in the PDF only.
For anyone unfamiliar with Fibonacci words (from its Wikipedia entry), Note: Links have been removed,
A Fibonacci word is a specific sequence of binary digits (or symbols from any two-letter alphabet). The Fibonacci word is formed by repeated concatenation in the same way that the Fibonacci numbers are formed by repeated addition.
It is a paradigmatic example of a Sturmian word and specifically, a morphic word.
The name “Fibonacci word” has also been used to refer to the members of a formal language L consisting of strings of zeros and ones with no two repeated ones. Any prefix of the specific Fibonacci word belongs to L, but so do many other strings. L has a Fibonacci number of members of each possible length.
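The repeated-concatenation rule in that definition is easy to see in code. Here’s a minimal sketch in Python (using the common indexing convention S0 = “0”, S1 = “01”; other sources number the sequence differently):

```python
def fibonacci_word(n):
    """Return the n-th Fibonacci word: S0 = "0", S1 = "01",
    and S(n) = S(n-1) + S(n-2), mirroring Fibonacci addition."""
    a, b = "0", "01"
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, b + a  # concatenate, just as F(n) = F(n-1) + F(n-2)
    return b

# The word lengths are Fibonacci numbers, each word is a prefix of
# the next, and no word ever contains two consecutive ones -- which
# is why every prefix of the Fibonacci word belongs to the language L.
print([len(fibonacci_word(i)) for i in range(7)])  # [1, 2, 3, 5, 8, 13, 21]
print(fibonacci_word(4))  # 01001010
```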
References used for op-ed
That opinion piece was roughly 787 words and as such fit into the 600-800 words submission guideline. It’s been a long time since I’ve written something without links and supporting information. What follows are the supporting sources I used for my statements. (Note: I have also included a few pieces that were published after my op-ed was submitted on April 20, 2020, as they lend further support to some of my contentions.)
https://www.covid-19canada.com/ This is a Canadian site relying on information from the Canadian federal government, Johns Hopkins University (US) and the World Health Organization (WHO) as well as others.
https://news.itu.int/sharing-best-practices-on-digital-cooperation-during-covid19-and-beyond/ “I think what COVID has done, is actually to put the will to get the world connected right in front of us – and we rallied around that will,” said Doreen Bogdan-Martin, Director of ITU’s Telecommunication Development Bureau. “We have come together in these very difficult circumstances and we have come up with innovative practices to actually better connect people who actually weren’t connected before.”
Brute force analysis and tools for broader consultation
I came up with the term ‘brute force analysis’ after an experience in local participatory budgeting. (For those who don’t know, there’s a movement afoot for a government body [in this case, it was the City of Vancouver] to dedicate a portion of their budget to a community [in this case, it was the West End neighbourhood] for citizens to decide on how the allocation should be spent.)
In our case, volunteers had gone out and solicited ideas for how neighbourhood residents would like to see the money spent. The ideas were categorized and a call for volunteers to work on committees went out. I ended up on the ‘arts and culture’ committee, and we were tasked with taking some 300 – 400 suggestions and establishing a list of 10 – 12 possibilities for more discussion and research, after which we were to present three or four to city staff, who would select a maximum of two suggestions for a community vote.
Our deadlines, many of which seemed artificially imposed, were tight and we had to be quite ruthless as we winnowed away the suggestions. It became an exercise in determining which were the most frequently made suggestions, hence, ‘brute force analysis’. (This a condensed description of the process.)
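In code terms, that winnowing was little more than frequency counting. A toy sketch in Python (the suggestions below are invented for illustration; they are not the actual West End submissions):

```python
from collections import Counter

# Invented stand-ins for the 300-400 collected suggestions.
suggestions = [
    "outdoor mural", "community garden", "outdoor mural",
    "public piano", "community garden", "outdoor mural",
    "poetry benches", "public piano",
]

# 'Brute force analysis': tally how often each idea was proposed
# and keep only the most frequently made suggestions.
shortlist = [idea for idea, count in Counter(suggestions).most_common(3)]
print(shortlist)  # ['outdoor mural', 'community garden', 'public piano']
```

Of course, the real process also involved merging near-duplicate ideas and committee judgment, which no tally captures; hence my point elsewhere about paying attention to the outliers.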
As for tools to encourage wider participation, I was thinking of something like ‘Foldit’ (scroll down to ‘Folding …’). Both a research project (University of Washington) and a video puzzle game for participants who want to try protein-folding, it’s a remarkable effort first described in my August 6, 2010 posting when the researchers had their work published in Nature with an astonishing 50,000 co-authors.
The quote, “We cannot solve our problems with the same thinking we used when we created them,” is attributed to Albert Einstein in many places but I have not been able to find any supporting references or documentation.