Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that this isn’t new. First come the ‘non-human authors’, and then the panic(s). What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s precursor (then known as GPT-2) first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
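For readers who want a concrete sense of what “predict what comes next … based purely on statistics” means, here’s a toy sketch of my own (nowhere near the scale or sophistication of a real LLM) of the same autoregressive loop, using simple bigram word counts:

```python
from collections import Counter, defaultdict

# Toy autoregressive "language model": tally which word follows which
# in a tiny corpus, then repeatedly predict the next word from the
# statistics of the last one.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most common continuation of `word`.
    return bigrams[word].most_common(1)[0][0]

def generate(start, length):
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))  # feed each prediction back in
    return " ".join(out)

print(generate("the", 4))  # "the cat sat on the"
```

Real models replace the bigram table with billions of learned parameters, but the loop — emit a token, feed it back in, predict again — has the same shape, which is what makes the emergent abilities so surprising.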

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
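For context, the sort of “simple mathematical code” in that Linux-terminal anecdote would be only a few lines — something like the sketch below (an assumed reconstruction for illustration, not the engineer’s actual code). And the model, of course, predicts what the output of such code would look like rather than truly executing it, which is why it can appear “faster” than a real machine:

```python
def first_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # A candidate is prime if no smaller prime divides it.
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```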

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
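That description compresses a lot. For the curious, the core idea — a network adjusting numeric weights as it analyzes examples — can be shown with a single artificial neuron learning the logical OR function (a didactic sketch of my own, not anything resembling Hinton’s actual models):

```python
# A single artificial neuron (perceptron) learning logical OR from
# examples -- "learning by analyzing data" in miniature.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted as the neuron sees examples
b = 0.0         # bias
lr = 0.1        # learning rate: how big each adjustment is

for _ in range(20):  # repeated passes over the training data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out      # how wrong was the prediction?
        w[0] += lr * err * x1   # nudge weights toward the right answer
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1] -- the neuron has learned OR
```

Deep learning stacks many layers of such units and uses subtler update rules, but the principle — errors on data nudging weights — is the one Hinton’s breakthroughs scaled up.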

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan writing for Fast Company has two articles, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts” on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton who was born in December 1947 will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence,] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI: while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to, “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought–provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm. Ticket prices range from $299 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the organizers of Vancouver’s Multiplatform AI conference. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. It was far briefer than the March 2023 letter. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Congratulations! Noēma magazine’s first year anniversary

Apparently, I am an idiot—if the folks at Expunct and other organizations passionately devoted to their own viewpoints are to be believed.

To be specific, the Berggruen Institute (which publishes Noēma magazine) has attracted remarkably sharp criticism that, by implication, seems to extend to anyone examining, listening to, or reading the institute’s various communication efforts.

Perhaps you’d like to judge the quality of the ideas for yourself?

About the Institute and about the magazine

The institute is a think tank founded in 2010 by Nicolas Berggruen, a US-based billionaire investor and philanthropist, and Nathan Gardels, journalist and editor-in-chief of Noēma magazine. Before moving on to the magazine’s first anniversary, here’s more about the Institute from its About webpage,

Ideas for a Changing World

We live in a time of great transformations. From capitalism, to democracy, to the global order, our institutions are faltering. The very meaning of the human is fragmenting.

The Berggruen Institute was established in 2010 to develop foundational ideas about how to reshape political and social institutions in the face of these great transformations. We work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century.

As for the magazine, here’s more from the About Us webpage (Note: I have rearranged the paragraph order),

In ancient Greek, noēma means “thinking” or the “object of thought.” And that is our intention: to delve deeply into the critical issues transforming the world today, at length and with historical context, in order to illuminate new pathways of thought in a way not possible through the immediacy of daily media. In this era of accelerated social change, there is a dire need for new ideas and paradigms to frame the world we are moving into.

Noema is a magazine exploring the transformations sweeping our world. We publish essays, interviews, reportage, videos and art on the overlapping realms of philosophy, governance, geopolitics, economics, technology and culture. In doing so, our unique approach is to get out of the usual lanes and cross disciplines, social silos and cultural boundaries. From artificial intelligence and the climate crisis to the future of democracy and capitalism, Noema Magazine seeks a deeper understanding of the most pressing challenges of the 21st century.

Published online and in print by the Berggruen Institute, Noema grew out of a previous publication called The WorldPost, which was first a partnership with HuffPost and later with The Washington Post. Noema publishes thoughtful, rigorous, adventurous pieces by voices from both inside and outside the institute. While committed to using journalism to help build a more sustainable and equitable world, we do not promote any particular set of national, economic or partisan interests.

First anniversary

Noēma’s anniversary is being marked by its second paper publication (the first was produced for the magazine’s launch). From a July 1, 2021 announcement received via email,

June 2021 marked one year since the launch of Noema Magazine, a crucial milestone for the new publication focused on exploring and amplifying transformative ideas. Noema is working to attract audiences through longform perspectives and contemporary artwork that weave together threads in philosophy, governance, geopolitics, economics, technology, and culture.

“What began more than seven years ago as a news-driven global voices platform for The Huffington Post known as The WorldPost, and later in partnership with The Washington Post, has been reimagined,” said Nathan Gardels, editor-in-chief of Noema. “It has evolved into a platform for expansive ideas through a visual lens, and a timely and provocative portal to plumb the deeper issues behind present events.”

The magazine’s editorial board, involved in the genesis and as content drivers of the magazine, includes Orhan Pamuk, Arianna Huffington, Fareed Zakaria, Reid Hoffman, Dambisa Moyo, Walter Isaacson, Pico Iyer, and Elif Shafak. Pieces by thinkers cracking the calcifications of intellectual domains include, among many others:

·      Francis Fukuyama on the future of the nation-state

·      A collage of commentary on COVID with Yuval Harari and Jared Diamond 

·      An interview with economist Mariana Mazzucato on “mission-oriented government”

·      Taiwan’s Digital Minister Audrey Tang on digital democracy

·      Hedge-fund giant Ray Dalio in conversation with Nobel laureate Joe Stiglitz

·      Shannon Vallor on how AI is making us less intelligent and more artificial

·      Former Governor Jerry Brown in conversation with Stewart Brand 

·      Ecologist Suzanne Simard on the intelligence of forest ecosystems

·      A discussion on protecting the biosphere with Bill Gates’s guru Vaclav Smil 

·      An original story by Chinese science-fiction writer Hao Jingfang

Noema seeks to highlight how the great transformations of the 21st century are reflected in the work of today’s artistic innovators. Most articles are accompanied by an original illustration, melding together an aesthetic experience with ideas in social science and public policy. Among others, in the past year, the magazine has featured work from multimedia artist Pierre Huyghe, illustrator Daniel Martin Diaz, painter Scott Listfield, graphic designer and NFT artist Jonathan Zawada, 3D motion graphics artist Kyle Szostek, illustrator Moonassi, collage artist Lauren Lakin, and aerial photographer Brooke Holm. Additional contributions from artists include Berggruen Fellows Agnieszka Kurant and Anicka Yi discussing how their work explores the myth of the self.

Noema is available online and annually in print; the magazine’s second print issue will be released on July 13, 2021. The theme of this issue is “planetary realism,” which proposes to go beyond the exhausted notions of globalization and geopolitical competition among nation-states to a new “Gaiapolitik.” It addresses the existential challenge of climate change across all borders and recognizes that human civilization is but one part of the ecology of being that encompasses multiple intelligences from microbes to forests to the emergent global exoskeleton of AI and internet connectivity (more on this in the letter from the editors below).

Published by the Berggruen Institute, Noema is an incubator for the Institute’s core ideas, such as “participation without populism,” “pre-distribution” and universal basic capital (vs. income), and the need for dialogue between the U.S. and China to avoid an AI arms race or inadvertent war.

“The world needs divergent thinking on big questions if we’re going to meet the challenges of the 21st century; Noema publishes bold and experimental ideas,” said Kathleen Miles, executive editor of Noema. “The magazine cross-fertilizes ideas across boundaries and explores correspondences among them in order to map out the terrain of the great transformations underway.”  

I notice Suzanne Simard (from the University of British Columbia and author of “Finding the Mother Tree: Discovering the Wisdom of the Forest”) on the list of essayists along with a story by Chinese science fiction writer, Hao Jingfang.

Simard was mentioned here in a May 12, 2021 posting (scroll down to the “UBC forestry professor, Suzanne Simard’s memoir going to the movies?” subhead) when it was announced that her then not yet published memoir will be a film starring Amy Adams (or so they hope).

Hao Jingfang was mentioned here in a November 16, 2020 posting titled: “Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event” (co-hosted by the Berggruen Institute and University of Cambridge’s Leverhulme Centre for the Future of Intelligence [CFI]).

A month after Noēma’s second paper issue appeared on July 13, 2021, its theme and topics seem especially timely in light of the extensive news coverage, in Canada and many other parts of the world, of the Monday, August 9, 2021 release of the sixth UN climate report, which raises alarms over irreversible impacts. (Emily Chung’s August 12, 2021 analysis for the Canadian Broadcasting Corporation [CBC] offers a little good news for those severely alarmed by the report.) Note: The Intergovernmental Panel on Climate Change (IPCC) is the UN body tasked with assessing the science related to climate change.

Congratulations to Molly Shoichet (her hydrogels are used in regenerative medicine and more) for winning the $1 million Gerhard Herzberg Canada Gold Medal

I imagine that most anyone who’s been in contact with Ms. Shoichet is experiencing a thrill on hearing this morning’s (November 10, 2020) news about winning Canada’s highest honour for science and engineering research. (Confession: she, very kindly, once gave me a brief interview for a posting on this blog; more about that later.)

Why Molly Shoichet won the Gerhard Herzberg Canada Gold Medal

Emily Chung’s Nov. 10, 2020 news item on the Canadian Broadcasting Corporation (CBC) website announces the exciting news (Note: Links have been removed),

A Toronto chemical engineering professor has won the $1 million Gerhard Herzberg Canada Gold Medal, the country’s top science prize, for her work designing gels that mimic human tissues.

The Natural Sciences and Engineering Research Council of Canada (NSERC) announced Tuesday [Nov. 10, 2020] that Molly Shoichet, professor of chemical engineering and applied chemistry and Canada Research Chair in Tissue Engineering at the University of Toronto is this year’s recipient of the award, which recognizes “sustained excellence” and “overall influence” of research conducted in Canada in the natural sciences or engineering.

Shoichet’s hydrogels are used for drug development and delivery and regenerative medicine to heal injuries and treat diseases.

NSERC said Shoichet’s work has led to the development of several “game-changing” applications of such materials. They “delivered a crucial breakthrough” by allowing cells to be grown in three dimensions as they do in the body, rather than the two dimensions they typically do in a petri dish.

Hydrogels are polymer materials — materials such as plastics, made of repeating units — that become swollen with water.

“If you’ve ever eaten Jell-o, that’s a hydrogel,” Shoichet said. Slime and the absorbent material inside disposable diapers are also hydrogels.

Shoichet was born in Toronto, and studied science and engineering at the Massachusetts Institute of Technology and the University of Massachusetts Amherst. After graduating, she worked in the biotech industry alongside “brilliant biologists,” she said. She noticed that the biologists’ research was limited by what types of materials were available.

As an engineer, she realized she could help by custom designing materials for biologists. She could make materials specifically suit their needs, to answer their specific questions by designing hydrogels to mimic particular tissues.

Her collaborations with biologists have also generated three spinoff companies, including AmacaThera, which was recently approved to run human trials of a long-acting anesthetic delivered with an injectable hydrogel to deal with post-surgical pain.

Shoichet noted that drugs given to deal with that kind of pain lead to a quarter of opioid addictions, which have been a deadly problem in Canada and around the world.

“What we’re really excited about is not only meeting that critical need of providing people with greater pain relief for a sustained period of time, but also possibly putting a dent in the operation,” she said. 

Liz Do’s Nov. 10, 2020 University of Toronto news release provides more details (Note: Links have been removed),

The Herzberg Gold Medal is awarded by the Natural Sciences and Engineering Research Council (NSERC) in recognition of research contributions characterized by both excellence and influence.

“I was completely overwhelmed when I was told the good news,” says Shoichet. “There are so many exceptional people who’ve won this award and I admire them. To think of my peers putting me in that same category is really incredible.”

A pioneer in regenerative medicine, tissue engineering and drug delivery, Shoichet and her team are internationally known for their discovery and innovative use of 3D hydrogels.

“One of the challenges facing drug screening is that many of the drugs discovered work well in the lab, but not in people, and a possible explanation for this discrepancy is that these drugs are discovered in environments that do not reflect that of the body,” explains Shoichet.

Shoichet’s team has invented a series of biomaterials that provide a soft, three-dimensional environment in which to grow cells. These hydrogels — water-swollen materials — better mimic human tissue than hard two-dimensional plastic dishes that are typically used. “Now we can do more predictive drug screening,” says Shoichet.

Her lab is using these biomaterials to discover drugs for breast and brain cancer and a rare lung disease. Shoichet’s lab has been equally innovative in regenerative medicine strategies to promote repair of the brain after stroke and overcome blindness.

“Everything that we do is motivated by answering a question in biology, using our engineering and chemistry tools to answer those questions,” says Shoichet.

“The hope is that our contributions will ultimately make a positive impact in the cancer community and in treating diseases for which we can only slow the progression rather than stop and reverse, such as with blindness.”

Shoichet is also an advocate for and advisor on the fields of science and engineering. She has advised both federal and provincial governments through her service on Canada’s Science, Technology and Innovation Council and the Ontario Research Innovation Council. From 2014 to 2018, she was the Senior Advisor to the President on Science & Engineering Engagement at the University of Toronto. She is the co-founder of Research2Reality [emphasis mine], which uses social media to promote innovative research across the country. She also served as Ontario’s first Chief Scientist [emphasis mine], with a mandate to advance science and innovation in the province.

Shoichet is the only person to be elected a fellow of all three of Canada’s National Academies and is a foreign member of the U.S. National Academy of Engineering, and fellow of the Royal Society (UK) — the oldest and most prestigious academic society.

Doug Ford (premier of Ontario) and Molly Shoichet

She did serve as Ontario’s first Chief Scientist—for about six months (Nov. 2017 – July 2018). Molly Shoichet was fired when a new provincial government was elected in the summer of 2018. Here’s more about the incident from a July 4, 2018 article by Ryan Maloney for huffingtonpost.ca (Note: Links have been removed),

New Ontario Premier Doug Ford has fired the province’s first chief scientist.

Dr. Molly Shoichet, a renowned biomedical engineer who teaches at the University of Toronto, was appointed in November [2017] to advise the government and ensure science and research were at the forefront of decision-making.

Shoichet told HuffPost Canada in an email that she does not believe the decision was about her, and “I don’t even know whether it was about this role.” She said she is disappointed but honoured to have served Ontarians, even for a short time.

Ford’s spokesman, Simon Jefferies, told The Canadian Press Wednesday that the government is starting the process of “finding a suitable and qualified replacement.” [emphasis mine]

The move comes just days after Ford’s Progressive Conservatives officially took power in Canada’s largest province with a majority government.

Almost a year later, there was no replacement in sight, according to a June 24, 2019 opinion piece by Kimberly Girling (then the research and policy director of the not-for-profit Evidence for Democracy) for thestar.com,

Premier Doug Ford, I’m concerned for your government.

I know you feel it too. Last week, one year into your mandate and faced with sharply declining polls after your first provincial budget, you conducted a major cabinet shuffle. This shuffle is clearly an attempt to “put the right people in the right place at the right time” and improve the outcomes of your cabinet. But I’m still concerned.

Since your election, your caucus has made many bold decisions. Unfortunately, it seems many Ontarians are unhappy with most of these decisions, and I’m not sure the current shuffle is enough to fix this.

[…] I think you’re missing someone.

What about a Chief Scientist?

It isn’t a radical idea. Actually, you used to have one. Ontario’s first Chief Scientist, Dr. Molly Shoichet, was appointed to advise the government on science policy and champion science and innovation for Ontario. However, when your government was elected, you fired Dr. Shoichet within the first week.

It’s been a year, and so far we haven’t seen any attempts to fill this vacant position. [emphasis mine]

I wonder if Doug Ford and his crew regret the decision to fire Shoichet, especially now that the province is suffering a new peak in COVID-19 case numbers. These days any government could do with a little good news.

The only way we might ever know is if Doug Ford writes a memoir (in about 20 or 30 years from now).

Molly Shoichet, Research2Reality, and FrogHeart

A May 11, 2015 posting announced the launch of Research2Reality, and it’s in that posting that I have a few comments from Molly Shoichet about co-founding a national science communication project. Given how busy she was at the time, I was amazed she took a few minutes to speak to me, and she took more time to make it possible for me to interview Raymond Laflamme (then director of the Institute for Quantum Computing at the University of Waterloo [Ontario] and a prominent physicist).

Here are the comments Molly Shoichet offered (from the May 11, 2015 posting),

“I’m very excited about this and really hope that other people will be too,” says Shoichet. The audience for the Research2Reality endeavour is people who, on seeing news items about science discoveries, want to know more and have questions that can’t be answered by mainstream media programmes or by trying to read complex research papers.

This is a big undertaking. “Mike [Mike MacMillan, co-founder] and I thought about this for about two years.” Building on the support they received from the University of Toronto, “We reached out to the vice-presidents of research at the top fifteen universities in the country.” In the end, six universities accepted the invitation to invest in this project.

Five years later, it’s still going.

Finally: Congratulations Molly Shoichet!

The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (an addendum)

I missed a few science journalists (part 1 of this series, under the Science Communication subhead; Mainstream Media, sub subhead) as the folks at the Science Media Centre of Canada (SMCC) noted on Twitter,

Science Media Centre @SMCCanada Apr 16 Replying to @frogheart

Thanks for the mention. But I think poor @katecallen at the Toronto Star would be dismayed to read that @IvanSemeniuk is the only science reporter on a Canadian newspaper. And @row1960 Bob Weber at Canadian Press is carried in every newspaper in the country.

Science Media Centre @SMCCanada Apr 16 Replying to @frogheart

In addition, @mle_chung at CBC News Online (#1 news source in Canada) is read more than any other science writer in the country, as is her colleague @NebulousNikki

Thank you.

***ETA April 29, 2020 at 0910 PT: Yesterday, April 28, 2020, Postmedia announced that it was closing 15 community newspapers and a number of jobs elsewhere in the organization. Earlier in the month on April 7, 2020 Postmedia announced that 85 positions were being eliminated, including 11 in the editorial department of TorStar (Toronto Star). I hope they keep a position for a science writer at the Toronto Star.***

Alice Major, a poet mentioned in Part 3 under The word subhead; Poetry sub subhead, wrote with news of two other poets who focus on science in their work.

  • Christian Bök
  • Adam Dickinson

From Bök’s Wikipedia entry (Note: Links have been removed),

Christian Bök (born August 10, 1966 in Toronto, Canada) is an experimental Canadian poet. He is the author of Eunoia, which won the Canadian Griffin Poetry Prize.

On April 4, 2011 Bök announced a significant break-through in his 9-year project to engineer “a life-form so that it becomes not only a durable archive for storing a poem, but also an operant machine for writing a poem”.[7][8] On the previous day (April 3) Bök said he received confirmation from the laboratory at the University of Calgary that “my poetic cipher, gene X-P13, has in fact caused E. coli to fluoresce red in our test-runs—meaning that, when implanted in the genome of this bacterium, my poem (which begins “any style of life/ is prim…”) does in fact cause the bacterium to write, in response, its own poem (which begins “the faery is rosy/ of glow…”).”[9]

The project has continued for over fifteen years at a cost exceeding $110,000 and he hopes to finish the project in 2014.[10] He published “Book I” of the resulting Xenotext in 2015.

Xenotext: Book 1 published by Coach House Books is described this way,

Internationally best-selling poet Christian Bök has spent more than ten years writing what promises to be the first example of ‘living poetry.’ After successfully demonstrating his concept in a colony of E. coli, Bök is on the verge of enciphering a beautiful, anomalous poem into the genome of an unkillable bacterium (Deinococcus radiodurans), which can, in turn, “read” his text, responding to it by manufacturing a viable, benign protein, whose sequence of amino acids enciphers yet another poem. The engineered organism might conceivably serve as a post-apocalyptic archive, capable of outlasting our civilization.

Book I of The Xenotext constitutes a kind of ‘demonic grimoire,’ providing a scientific framework for the project with a series of poems, texts, and illustrations. A Virgilian welcome to the Inferno, Book I is the “orphic” volume in a diptych, addressing the pastoral heritage of poets, who have sought to supplant nature in both beauty and terror. The book sets the conceptual groundwork for the second volume, which will document the experiment itself. The Xenotext is experimental poetry in the truest sense of the term.

Adam Dickinson is a poet and an associate professor at Brock University (Ontario). He describes himself and his work this way (from the Brock University bio page),

Adam Dickinson is a poet and a professor of poetry. His creative and academic writing has primarily focused on intersections between poetry and science as a way of exploring new ecocritical perspectives and alternative modes of poetic composition. His latest book, Anatomic (Coach House Books), involves the results of chemical and microbial testing on his body, and was shortlisted for The Raymond Souster Award. Sections of it were also shortlisted for the Canadian Broadcasting Corporation (CBC) Poetry Prize. His book, The Polymers (House of Anansi [2013]), which is an imaginary science project that combines the discourses, theories, and experimental methods of the science of plastic materials with the language and culture of plastic behaviour, was a finalist for both the Governor General’s Award for Poetry and the Trillium Book Award for Poetry. He has published two previous books, Kingdom, Phylum (also nominated for the Trillium Book Award for Poetry) and Cartography and Walking (nominated for an Alberta Book Award). His scholarly work (supported by SSHRC [Social Sciences and Humanities Research Council of Canada]) brings together research in innovative poetics, biosemiotics, pataphysics, and Anthropocene studies.

His current research-creation project, “Metabolic Poetics,” (also supported by SSHRC) is concerned with the potential of expanded modes of reading and writing to shift the frames and scales of conventional forms of signification in order to bring into focus the often inscrutable biological and cultural writing intrinsic to the Anthropocene, especially as this is reflected in the inextricable link between the metabolic processes of human and nonhuman bodies and the global metabolism of energy and capital.

He has been featured at prominent international literary festivals, such as Poetry International in Rotterdam, The Harbourfront International Festival of Authors in Toronto, and the Oslo International Poetry Festival in Norway. Adam has also been a finalist for the K.M. Hunter Artist Award in Literature, administered by the Ontario Arts Council. Adam welcomes potential student supervisions on topics in poetry and poetics, environmental writing, science and literature, and creative writing.

Thank you.

This last addition may seem a little offbeat but ARPICO (Society of Italian Researchers & Professionals in Western Canada) has hosted a surprisingly large number of science events in Vancouver. Two recent examples include: The Eyes are the Windows to The Mind; Implications for Artificial Intelligence (AI)-driven Personalized Interaction on March 4, 2020 and the relatively recent Whispers in the Dark: Underground Science on June 12, 2019.

Hopefully, I’ll be able to resist the impulse to make any more additions.

***ETA April 30, 2020: Research2Reality (R2R) was launched in 2015 as a social media initiative featuring a series of short video interviews with Canadian scientists (see more in my May 11, 2015 posting). Almost five years later, the website continues to feature interviews and it also hosts news about Canadian science and research. R2R was founded by Molly Shoichet (pronounced shoyquette) and Mike MacMillan.***

For anyone who stumbled across this addendum first, it fits on to the end of a 5-part series:

Part 1 covers science communication, science media (mainstream and others such as blogging) and arts as exemplified by music and dance: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (1 of 5).

Part 2 covers art/science (or art/sci or sciart) efforts, science festivals both national and local, international art and technology conferences held in Canada, and various bar/pub/café events: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (2 of 5).

Part 3 covers comedy, do-it-yourself (DIY) biology, chief science advisor, science policy, mathematicians, and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (3 of 5).

Part 4 covers citizen science, birds, climate change, indigenous knowledge (science), and the IISD Experimental Lakes Area: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (4 of 5).

Part 5: includes science podcasting, eco art, a Saskatchewan lab with an artist-in-residence, the Order of Canada and children’s science literature, animation and mathematics, publishing science, *French language science media,* and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (5 of 5).

*French language science media added December 9, 2020.

Happy International Women’s Day on March 8, 2019—with a shout-out to women in science

I did a very quick search for today’s (March 8, 2019) women in science stories and found three to highlight here. First, a somewhat downbeat Canadian story.

Can Canadians name a woman scientist or engineer?

According to Emily Chung’s March 8, 2019 article on the Canadian Broadcasting Corporation’s (CBC) online news site, the answer is: no,

You’ve probably heard of Stephen Hawking, Albert Einstein and Mark Zuckerberg.

But can you name a woman scientist or engineer? Half of Canadians can’t, suggests a new poll.

The online survey of 1,511 Canadians was commissioned by the non-profit group Girls Who Code and conducted by the market research firm Maru/Blue from March 1-3 and released for International Women’s Day today [March 8, 2019].

It was intended to collect data about how people felt about science, technology, engineering and math (STEM) careers and education in Canada, said Reshma Saujani, founder and CEO of the group, which aims to close the gender gap in technology by teaching girls coding skills.


The poll found:

When asked how many women scientists/engineers they could name, 52 per cent of respondents said “none.”

When asked to picture a computer scientist, 82 per cent of respondents immediately imagined a man rather than a woman.

77 per cent of respondents think increased media representation of women in STEM careers or leadership roles would help close the gender gap in STEM.


Sandra Corbeil, who’s involved a Women in STEM initiative at Ingenium, the organization that oversees Canada’s national museums of science and innovation, agrees that women scientists are under-recognized.

… Ingenium organized an event where volunteers from the public collaborated to add more women scientists to the online encyclopedia Wikipedia for the International Day of Women and Girls in Science this past February [2019].

The 21 participants added four articles, including Dr. Anna Marion Hilliard, who developed a simple pap test for early detection of cervical cancer and Marla Sokolowski, who discovered an important gene that affects both metabolism and behaviour in fruit flies. The volunteer editors also updated and translated several other entries.

Similar events have been held around the world to boost the representation of women on Wikipedia, where as of March 4, 2019, only 17.7 per cent of biographies were of women — even 2018’s winner of the Nobel Prize in Physics, Donna Strickland, didn’t have a Wikipedia entry until the prize was announced.

Corbeil acknowledged that in science, the individual contributions of scientists, whether they are men or women, tend to not be well known by the public.[emphasis mine]

“We don’t treat them like superstars … to me, it’s something that we probably should change because their contributions matter.”

Chung points to a criticism of the Girls Who Code poll, they didn’t ask Canadians whether they could name male scientists or engineers. While Reshma Saujani acknowledged the criticism, she also brushed it off (from Chung’s article),

Saujani acknowledges that the poll didn’t ask how many male scientists or engineers respondents could name, but thinks the answer would “probably” be different. [emphasis mine]

Chung seems to be hinting at it (with the double quotes around the word probably) but I’m going to be blunt: that isn’t good science. Then again, Saujani is not a scientist (from the reshmasujani.com About page),

Reshma began her career as an attorney and activist. In 2010, she surged onto the political scene as the first Indian American woman to run for U.S. Congress. During the race, Reshma visited local schools and saw the gender gap in computing classes firsthand, which led her to start Girls Who Code. She has also served as Deputy Public Advocate for New York City and ran a spirited campaign for Public Advocate in 2013.

I’m inclined to believe that Saujani is right but I’d want to test the hypothesis. I have looked at what I believe to be the entire report here. I’m happy to see the questions but I do have a few questions about the methodology (happily, also included in the report),

… online survey was commissioned by Girls Who Code of 1,511 randomly selected Canadian adults who are Maru Voice panelists.

If it’s an online survey, how can the pollsters be sure the respondents are Canadian, or be sure of any other demographic details? What is a Maru Voice panelist? Is there some form of self-selection inherent in being a Maru Voice panelist? (If I remember my social science research guidelines properly, self-selected groups are not representative of the general population.)

All I’m saying is that this report is interesting but seems problematic, so treat it with a little caution.
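For anyone wondering what that self-selection worry looks like in practice, here’s a toy simulation. Every number in it is invented (none of this comes from the Girls Who Code report); it simply illustrates how a panel that over-represents science-keen people could return a different answer than the general population would.

```python
import random

random.seed(42)

# Invented figure: 20% of the general adult population can name a woman scientist.
population_rate = 0.20

# Invented assumption: science-keen people can name one 60% of the time,
# and they make up half of a self-selected online panel.
def draw_panelist():
    science_keen = random.random() < 0.5
    rate = 0.60 if science_keen else population_rate
    return random.random() < rate

# Same sample size as the Maru/Blue survey.
panel = [draw_panelist() for _ in range(1511)]
panel_rate = sum(panel) / len(panel)

print(f"true population rate: {population_rate:.0%}")
print(f"self-selected panel estimate: {panel_rate:.0%}")
```

Under these made-up assumptions the panel estimate lands near 40 per cent, double the “true” rate, which is exactly why pollsters weight and screen their panels.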

Celebrating women in science in UK (United Kingdom)

This story comes from the UK’s N8 Research Partnership (I’m pretty sure that N8 is meant to be pronounced as ‘innate’). On March 7, 2019 they put up a webpage celebrating women in science,

All #N8women deliver our vision of making the N8 Research Partnership an exceptionally effective cluster of research innovation and training excellence; we celebrate all of your contributions and thank you for everything that you do. Read more about the women below or find out about them on our social channels by searching #N8Women.

Professor Dame Sue Black

Professor Dame Sue Black from Lancaster University pioneered research techniques to identify an individual by their hand alone, a technique that has been used successfully in Court to identify perpetrators in relation to child abuse cases. Images have been taken from more than 5000 participants to form an open-source dataset which has allowed a breakthrough in the study of anatomical variation.

Professor Diana Williams

Professor Diana Williams from The University of Liverpool has led research with Farming Online into a digital application that predicts when and where disease is likely to occur. This is hoped to help combat the £300m that UK agriculture loses per year to the liver fluke parasite, which affects livestock across the globe.

Professor Louise Heathwaite

Professor Louise Heathwaite from Lancaster University has gained not only international recognition for her research into environmental pollution and water quality, but she also received the royal seal of approval after being awarded a CBE in the Queen’s Birthday Honours 2018.

Professor Sue Black

Professor Sue Black from Durham University has helped 100 women retrain into tech roles thanks to the development of the online programme TechUP. Supported by the Institute of Coding, the programme lasts six months and concludes with a job interview, internship or apprenticeship.

Dr Anna Olsson-Brown

Dr Anna Olsson-Brown from the University of Liverpool has been instrumental in research into next-generation drugs that can treat patients with more advanced, malignant cancers and help them deal with the toxicity that can accompany novel therapies.

Professor Katherine Denby

Professor Katherine Denby, Director of N8 Agrifood, based at the University of York, has been at the forefront of developing novel ways to enhance and enable the breeding of crops resistant to environmental stress and disease.

Most recently, she was involved in the development of a genetic control system that enables plants to strengthen their defence response against deadly pathogens.

Dr Louise Ellis

Dr Louise Ellis, Director of Sustainability at the University of Leeds, has been leading their campaign, Single Out: 2023PlasticFree, which commits the University and Union to phasing out single-use plastic across the board, not just in catering and office spaces.

Professor Philippa Browning

Professor Philippa Browning from the University of Manchester wanted to be an astronaut when she was a child but found that there was a lack of female role models in her field. She is leading work on the interactions between plasmas and magnetic fields and is a mentor for young solar physicists.

Dr Anh Phan

Dr Anh Phan is a Lecturer in Chemical Engineering in the School of Engineering at Newcastle University. She has been leading research into cold plasma pyrolysis, a process that could be used to turn plastic waste into green energy. This is a novel process that could revolutionise our problem with plastic and realise the true value of plastic waste.

So, Canadians take note of these women and the ones featured in the next item.

Canada Science and Technology Museum’s (an Ingenium museum) International Women’s Day video

It was posted on YouTube in 2017 but, given the somewhat downbeat Canadian story I started with, I thought this appropriate,

It’s never too late to learn about women in science and engineering. The women featured in the video are: Ursula Franklin, Maude Abbott, Janice Zinck, and Indira Samarasekera.

Canadian researchers develop test for exposure to nanoparticles*

The Canadian Broadcasting Corporation’s online news features a May 21, 2014 article by Emily Chung regarding research from the University of Toronto that may enable a simple skin test for determining nanoparticle exposure,

Canadian researchers have developed the first test for exposure to nanoparticles — new chemical technology found in a huge range of consumer products — that could potentially be used on humans.

Warren Chan, a University of Toronto [U of T] chemistry professor, and his team developed the skin test after noticing that some mice changed colour and others became fluorescent (that is, they glowed when light of certain colours were shone on them) after being exposed to increasing levels of different kinds of nanoparticles. The mice were being used in research to develop cancer treatments involving nanoparticles.

There is some evidence that certain types and levels of exposure may be harmful to human health. But until now, it has been hard to link exposure to health effects, partly due to the challenge of measuring exposure.

“There’s no way to determine how much [sic] nanoparticles you’ve been exposed to,” said Chan in an interview with CBCNews.ca.

There was one way to measure nanoparticle exposure in mice —  but it required the animals to be dead. At that point, they would be cut open and tests could be run on organs such as the liver and spleen where nanoparticles accumulate.

A May 14, 2014 article by Nancy Owano on phys.org provides more details (Note: Links have been removed),

They [researchers] found that different nanoparticles are visible through the skin under ambient or UV light. They found that after intravenous injection of fluorescent nanoparticles, they accumulate and can be observed through the skin. They also found that the concentration of these nanoparticles can be directly correlated to the injected dose and their accumulations in other organs.

In their discussion over selecting nanoparticles used in mouse skin, they said, “Gold nanoparticles are commonly used in molecular diagnostics and drug delivery applications. These nanomaterials were selected for our initial studies as they are easily synthesized, have a distinct ruby color and can be quantified by inductively coupled plasma atomic emission spectroscopy (ICP-AES).”

Work involved in the study included designing and performing experiments, pathological analysis, and data analysis. Their discovery could be used to better predict how nanoparticles behave in the body.
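To illustrate what “directly correlated to the injected dose” means in practice, here is a minimal least-squares calibration fit. The dose and signal values below are invented for illustration and are not data from the study; the point is only that a roughly linear dose-signal relationship is what lets a skin measurement stand in for an injected dose.

```python
# Hypothetical data: injected dose vs. measured skin signal (arbitrary units).
doses = [1.0, 2.0, 4.0, 8.0]
skin_signal = [0.9, 2.1, 3.9, 8.2]

# Ordinary least-squares fit of signal = slope * dose + intercept.
n = len(doses)
mean_x = sum(doses) / n
mean_y = sum(skin_signal) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, skin_signal)) \
        / sum((x - mean_x) ** 2 for x in doses)
intercept = mean_y - slope * mean_x

print(f"signal ≈ {slope:.2f} * dose + ({intercept:.2f})")
```

With a fitted line like this, an unknown exposure could be estimated by inverting it: measure the skin signal, subtract the intercept, divide by the slope.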

Here’s a link to and a citation for the paper,

Nanoparticle exposure in animals can be visualized in the skin and analysed via skin biopsy by Edward A. Sykes, Qin Dai, Kim M. Tsoi, David M. Hwang & Warren C. W. Chan. Nature Communications 5, Article number: 3796 doi:10.1038/ncomms4796 Published 13 May 2014

This paper is behind a paywall.

* Posting’s head changed from ‘Canadians and exposure to nanoparticles’ to the more descriptive ‘Canadian researchers develop test for exposure to nanoparticles’, May 27, 2014.

Stained glass cathedral window solar panels being hooked up to Saskatoon’s (Canada) power grid

The Cathedral of the Holy Family in Saskatoon, Saskatchewan (Canada) is about to have its art glass windows (“Lux Gloria”) complete with solar panels hooked up to the Saskatoon Light & Power’s distribution network. It’s not often one sees beauty and utility combined. You can see the stained glass windows as they appear, from outside the cathedral, on this book cover for “A Beacon of Welcome” A Glimpse Inside the Cathedral of the Holy Family,

“A Beacon of Welcome” A Glimpse Inside the Cathedral of the Holy Family [book cover downloaded from http://holyfamilycathedral.ca/holyfamily-parish-life/59-gala-week-books]

Emily Chung’s July 29, 2013 news item for CBC (Canadian Broadcasting Corporation) online describes the project at more length,

“Lux Gloria” by Sarah Hall, at the Cathedral of the Holy Family in Saskatoon, is currently being connected to Saskatoon Light & Power’s electrical distribution network, confirmed Jim Nakoneshny, facilities manager at the cathedral.

The artwork, which consists of solar panels embedded in brightly coloured, hand-painted art glass, had just been reinstalled and upgraded after breaking and falling into the church last year.

According to Kevin Hudson, manager of metering and sustainable electricity for Saskatoon Light & Power, the solar panels are expected to produce about 2,500 kilowatt hours annually or about a third to a quarter of the 8,000 to 10,000 kilowatt hours consumed by a typical home in Saskatoon each year.

In fact, the installation will become Saskatchewan’s first building-integrated photovoltaic system (BIPV), where solar panels are embedded directly into walls, windows or other parts of a building’s main structure. It’s a trend that is expected to grow in the future as the traditional practice of mounting solar panels on rooftops isn’t practical for many city buildings, including some churches.
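A quick arithmetic check on the figures quoted above (2,500 kWh per year from the window panels against 8,000 to 10,000 kWh per year for a typical Saskatoon home) confirms the “third to a quarter” claim:

```python
# Figures quoted in Chung's article.
panel_output_kwh = 2500
home_use_low, home_use_high = 8000, 10000

# Share of a typical home's annual consumption the panels could cover.
share_low = panel_output_kwh / home_use_high   # heaviest-consuming home
share_high = panel_output_kwh / home_use_low   # lightest-consuming home

print(f"panels cover {share_low:.0%} to {share_high:.0%} of a home's use")
# -> panels cover 25% to 31% of a home's use
```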

Chung’s article features some specific technical information about the solar art windows supplied by artist Sarah Hall,

In the case of the Cathedral of the Holy Family, each solar panel was a different size and was trapezoidal in shape, Hall said. As a result, “all the solar work had to be hand soldered.”

Because the solar cells aren’t transparent, Hall adds a high-tech “dichroic” glass to the back of the cells in some cases to make them colourful and reflective.

You can find more images of Hall’s work on her website. Unfortunately, Hall does not provide much detail about the technical aspects of her work.

The Cathedral of the Holy Family features a book about their stained glass windows,

“Transfiguring Prairie Skies” Stained Glass at Cathedral of the Holy Family [book cover downloaded from http://holyfamilycathedral.ca/holyfamily-parish-life/59-gala-week-books]

Here’s more information about the book,

“Transfiguring Prairie Skies” Stained Glass at Cathedral of the Holy Family, written by Bishop Donald Bolen and Sarah Hall, photography by Grant Kernan and Sarah Hall. A 116-page hard-cover book which includes incredibly detailed close-up shots of our stained glass windows, complete with poetic and theological reflections for each window.

Cost is $25.00

You can visit the Cathedral of the Holy Family website here.

Opening up Open Access: European Union, UK, Argentina, US, and Vancouver (Canada)

There is a furor growing internationally and it’s all about open access. It ranges from a petition in the US, to a comprehensive ‘open access’ project from the European Union, to a decision in the Argentinian Legislature, to a speech from David Willetts, UK Minister of State for Universities and Science, to an upcoming meeting being held in Vancouver (Canada) in June 2012.

As this goes forward, I’ll try to be clear as to which kind of open access I’m discussing: open access publication (access to published research papers), open access data (access to research data), or both.

The European Commission has adopted a comprehensive approach to giving easy, open access to research funded through the European Union under the auspices of the current 7th Framework Programme and the upcoming Horizon 2020 (or what would have been called the 8th Framework Programme under the old system), according to the May 9, 2012 news item on Nanowerk,

To make it easier for EU-funded projects to make their findings public and more readily accessible, the Commission is funding, through FP7, the project ‘Open access infrastructure for research in Europe’ ( OpenAIRE). This ambitious project will provide a single access point to all the open access publications produced by FP7 projects during the course of the Seventh Framework Programme.

OpenAIRE is a repository network and is based on a technology developed in an earlier project called Driver. The Driver engine trawled through existing open access repositories of universities, research institutions and a growing number of open access publishers. It would index all these publications and provide a single point of entry for individuals, businesses or other scientists to search a comprehensive collection of open access resources. Today Driver boasts an impressive catalogue of almost six million publications taken from 327 open access repositories from across Europe and beyond.

OpenAIRE uses the same underlying technology to index FP7 publications and results. FP7 project participants are encouraged to publish their papers, reports and conference presentations to their institutional open access repositories. The OpenAIRE engine constantly trawls these repositories to identify and index any publications related to FP7-funded projects. Working closely with the European Commission’s own databases, OpenAIRE matches publications to their respective FP7 grants and projects providing a seamless link between these previously separate data sets.
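As a rough sketch of the kind of linking described above, here’s a minimal illustration of matching harvested publications to grant records by grant number. The grant numbers, titles, and field names are all invented for illustration; this is not OpenAIRE’s actual data model or code.

```python
# Hypothetical FP7 grant registry: grant number -> project acronym.
fp7_grants = {
    "246686": "OpenAIRE",
    "212147": "DRIVER-II",
}

# Hypothetical publications harvested from institutional repositories,
# each carrying the grant number found in its metadata (or None).
harvested_publications = [
    {"title": "Open access infrastructure report", "grant_id": "246686"},
    {"title": "Repository interoperability study", "grant_id": "212147"},
    {"title": "Unrelated paper", "grant_id": None},
]

# Link each publication to its project; anything unmatched is an 'orphan'.
linked, orphans = [], []
for pub in harvested_publications:
    project = fp7_grants.get(pub["grant_id"])
    (linked if project else orphans).append((pub["title"], project))

print(linked)
print(orphans)
```

The real system also has to handle messy metadata, so the matching is far more involved than a dictionary lookup, but the principle of joining two previously separate data sets on a shared identifier is the same.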

OpenAIRE is also linked to CERN’s open access repository for ‘orphan’ publications. Any FP7 participants that do not have access to their own institutional repository can still submit open access publications by placing them in the CERN repository.

Here’s why I described this project as comprehensive, from the May 9, 2012 news item,

‘OpenAIRE is not just about developing new technologies,’ notes Ms Manola [Natalia Manola, the project’s manager], ‘because a significant part of the project focuses on promoting open access in the FP7 community. We are committed to promotional and policy-related activities, advocating open access publishing so projects can fully contribute to Europe’s knowledge infrastructure.’

The project is collecting usage statistics of the portal and the volume of open access publications. It will provide this information to the Commission and use this data to inform European policy in this domain.

OpenAIRE is working closely to integrate its information with the CORDA database, the master database of all EU-funded research projects. Soon it should be possible to click on a project in CORDIS (the EU’s portal for research funding), for example, and access all the open access papers published by that project. Project websites will also be able to provide links to the project’s peer reviewed publications and make dissemination of papers virtually effortless.

The project participants are also working with EU Members to develop a European-wide ‘open access helpdesk’ which will answer researchers’ questions about open access publishing and coordinate the open access initiatives currently taking place in different countries. The helpdesk will build up relationships and identify additional open access repositories to add to the OpenAIRE network.

Meanwhile, there’s been a discussion on the UK’s Guardian newspaper website about an ‘open access’ issue, money,  in a May 9, 2012 posting by John Bynner,

The present academic publishing system obstructs the free communication of research findings. By erecting paywalls, commercial publishers prevent scientists from downloading research papers unless they pay substantial fees. Libraries similarly pay huge amounts (up to £1m or more per annum) to give their readers access to online journals.

There is general agreement that free and open access to scientific knowledge is desirable. The way this might be achieved has come to the fore in recent debates about the future of scientific and scholarly journals.

Our concern lies with the major proposed alternative to the current system. Under this arrangement, authors are expected to pay when they submit papers for publication in online journals: the so called “article processing cost” (APC). The fee can amount to anything between £1,000 and £2,000 per article, depending on the reputation of the journal. Although the fees may sometimes be waived, eligibility for exemption is decided by the publisher and such concessions have no permanent status and can always be withdrawn or modified.

A major problem with the APC model is that it effectively shifts the costs of academic publishing from the reader to the author and therefore discriminates against those without access to the funds needed to meet these costs. [emphasis mine] Among those excluded are academics in, for example, the humanities and the social sciences whose research funding typically does not include publication charges, and independent researchers whose only means of paying the APC is from their own pockets. Academics in developing countries in particular face discrimination under APC because of their often very limited access to research funds.

There is another approach that could be implemented for a fraction of the cost of commercial publishers’ current journal subscriptions. “Access for all” (AFA) journals, which charge neither author nor reader, are committed to meeting publishing costs in other ways.

Bynner offers a practical solution, get the libraries to pay their subscription fees to an AFA journal, thereby funding ‘access for all’.

The open access discussion in the UK hasn’t stopped with a few posts in the Guardian, there’s also support from the government. David Willetts, in a May 2, 2012 speech to the UK Publishers Association Annual General Meeting had this to say, from the UK’s Dept. for Business Innovation and Skills website,

I realise this move to open access presents a challenge and opportunity for your industry, as you have historically received funding by charging for access to a publication. Nevertheless that funding model is surely going to have to change even beyond the positive transition to open access and hybrid journals that’s already underway. To try to preserve the old model is the wrong battle to fight. Look at how the music industry lost out by trying to criminalise a generation of young people for file sharing. [emphasis mine] It was companies outside the music business such as Spotify and Apple, with iTunes, that worked out a viable business model for access to music over the web. None of us want to see that fate overtake the publishing industry.

Wider access is the way forward. I understand the publishing industry is currently considering offering free public access to scholarly journals at all UK public libraries. This is a very useful way of extending access: it would be good for our libraries too, and I welcome it.

It would be deeply irresponsible to get rid of one business model and not put anything in its place. That is why I hosted a roundtable at BIS in March last year when all the key players discussed these issues. There was a genuine willingness to work together. As a result I commissioned Dame Janet Finch to chair an independent group of experts to investigate the issues and report back. We are grateful to the Publishers Association for playing a constructive role in her exercise, and we look forward to receiving her report in the next few weeks. No decisions will be taken until we have had the opportunity to consider it. But perhaps today I can share with you some provisional thoughts about where we are heading.

The crucial options are, as you know, called green and gold. Green means publishers are required to make research openly accessible within an agreed embargo period. This prompts a simple question: if an author’s manuscript is publicly available immediately, why should any library pay for a subscription to the version of record of any publisher’s journal? If you do not believe there is any added value in academic publishing you may view this with equanimity. But I believe that academic publishing does add value. So, in determining the embargo period, it’s necessary to strike a suitable balance between enabling revenue generation for publishers via subscriptions and providing public access to publicly funded information. In contrast, gold means that research funding includes the costs of immediate open publication, thereby allowing for full and immediate open access while still providing revenue to publishers.

In a May 22, 2012 posting at the Guardian website, Mike Taylor offers some astonishing figures (I had no idea academic publishing has been quite so lucrative) and notes that the funders have been a driving force in this ‘open access’ movement (Note: I have removed links from the excerpt),

The situation again, in short: governments and charities fund research; academics do the work, write and illustrate the papers, peer-review and edit each others’ manuscripts; then they sign copyright over to profiteering corporations who put it behind paywalls and sell research back to the public who funded it and the researchers who created it. In doing so, these corporations make grotesque profits of 32%-42% of revenue – far more than, say, Apple’s 24% or Penguin Books’ 10%. [emphasis mine]

… But what makes this story different from hundreds of other cases of commercial exploitation is that it seems to be headed for a happy ending. That’s taken some of us by surprise, because we thought the publishers held all the cards. Academics tend to be conservative, and often favour publishing their work in established paywalled journals rather than newer open access venues.

The missing factor in this equation is the funders. Governments and charitable trusts that pay academics to carry out research naturally want the results to have the greatest possible effect. That means publishing those results openly, free for anyone to use.

Taylor also goes on to mention the ongoing ‘open access’ petition in the US,

There is a feeling that the [US] administration fully understands the value of open access, and that a strong demonstration of public concern could be all it takes now to goad it into action before the November election. To that end a Whitehouse.gov petition has been set up urging Obama to “act now to implement open access policies for all federal agencies that fund scientific research”. Such policies would bring the US in line with the UK and Europe.

The people behind the US campaign have produced a video,

Anyone wondering about the reference to Elsevier may want to check out Thomas Lin’s Feb. 13, 2012 article for the New York Times,

More than 5,700 researchers have joined a boycott of Elsevier, a leading publisher of science journals, in a growing furor over open access to the fruits of scientific research.

You can find out more about the boycott and the White House petition at the Cost of Knowledge website.

Meanwhile, Canadians are being encouraged to sign the petition (by June 19, 2012), according to the folks over at ScienceOnline Vancouver in a description of their June 12, 2012 event, Naked Science: Excuse me, your science is showing (a cheap, cheesy, and attention-getting title—why didn’t I think of it first?),

Exposed. Transparent. Nude. All adjectives that should describe access to scientific journal articles, but currently, that’s not the case. The research paid by our Canadian taxpayer dollars is locked behind doors. The only way to access these articles is money, and lots of it!

Right now research articles costs more than a book! About $30. Only people with university affiliations have access and only journals their libraries subscribe to. Moms, dads, sisters, brothers, journalists, students, scientists, all pay for research, yet they can’t read the articles about their research without paying for it again. Now that doesn’t make sense.

….

There is also petition going around that states that research paid for by US taxpayer dollars should be available for free to US taxpayers (and others!) on the internet. Don’t worry if you are Canadian citizen, by signing this petition, Canadians would get access to the US research too and it would help convince the Canadian government to adopt similar rules. [emphasis mine]

Here’s where you can go to sign the petition. As for the notion that this will encourage the Canadian government to adopt an open access philosophy, I do not know. On the one hand, the government has opened up access to data, notably Statistics Canada data, mentioned by Frances Woolley in her March 22, 2012 posting about that and other open access data initiatives by the Canadian government on the Globe and Mail blog,

The federal government is taking steps to build the country’s data infrastructure. Last year saw the launch of the open data pilot project, data.gc.ca. Earlier this year the paywall in front of Statistics Canada’s enormous CANSIM database was taken down. The National Research Council, together with University of Guelph and Carleton University, has a new data registration service, DataCite, which allows Canadian researches to give their data permanent names in the form of digital object identifiers. In the long run, these projects should, as the press releases claim, “support innovation”, “add value-for-money for Canadians,” and promote “the reuse of existing data in commercial applications.”

That seems promising but there is a countervailing force. The Canadian government has also begun to charge subscription fees for journals that were formerly free. From the March 8, 2011 posting by Emily Chung on the CBC’s (Canadian Broadcasting Corporation) Quirks and Quarks blog,

The public has lost free online access to more than a dozen Canadian science journals as a result of the privatization of the National Research Council’s government-owned publishing arm.

Scientists, businesses, consultants, political aides and other people who want to read about new scientific discoveries in the 17 journals published by National Research Council Research Press now either have to pay $10 per article or get access through an institution that has an annual subscription.

It caused no great concern at the time,

Victoria Arbour, a University of Alberta graduate student, published her research in the Canadian Journal of Earth Sciences, one of the Canadian Science Publishing journals, both before and after it was privatized. She said it “definitely is too bad” that her new articles won’t be available to Canadians free online.

“It would have been really nice,” she said. But she said most journals aren’t open access, and the quality of the journal is a bigger concern than open access when choosing where to publish.

Then, there’s this from the new publisher, Canadian Science Publishing,

Cameron Macdonald, executive director of Canadian Science Publishing, said the impact of the change in access is “very little” on the average scientist across Canada because subscriptions have been purchased by many universities, federal science departments and scientific societies.

“I think the vast majority of researchers weren’t all that concerned,” he said. “So long as the journals continued with the same mission and mandate, they were fine with that.”

Macdonald said the journals were never strictly open access, as online access was free only inside Canadian borders and only since 2002.

So, journals that offered open access (to Canadians only) to research funded by Canadian taxpayers are now behind paywalls. Chung’s posting notes the problem already mentioned in the UK Guardian postings: money,

“It’s pretty prohibitively expensive to make things open access, I find,” she [Victoria Arbour] said.

Weir [Leslie Weir, chief librarian at the University of Ottawa] said more and more open-access journals need to impose author fees to stay afloat nowadays.

Meanwhile, the cost of electronic subscriptions to research journals has been ballooning as library budgets remain frozen, she said.

So far, no one has come up with a solution to the problem. [emphasis mine]

It seems they have designed a solution in the UK, as noted in John Bynner’s posting; perhaps we could try it out here.

Before I finish up, I should get to the situation in Argentina, from the May 27, 2012 posting on the Pasco Phronesis (David Bruggeman) blog (Note: I have removed a link in the following),

The lower house of the Argentinian legislature has approved a bill (en Español) that would require research results funded by the government be placed in institutional repositories once published. There would be exceptions for studies involving confidential information and the law is not intended to undercut intellectual property or patent rights connected to research. Additionally, primary research data must be published within 5 years of their collection. This last point would, as far as I can tell, be new ground for national open access policies, depending on how quickly the U.S. and U.K. may act on this issue.

Argentina steals a march on everyone by offering open access publication and open access data, within certain, reasonable constraints.

Getting back to David’s May 27, 2012 posting, he also offers some information on the European Union situation and some thoughts on science policy in Egypt.

I have long been interested in open access publication as I feel it’s infuriating to be denied access to research that one has paid for in tax dollars. I have written on the topic before in my Beethoven inspires Open Research (Nov. 18, 2011 posting) and Princeton goes Open Access; arXiv is 10 years old (Sept. 30, 2011 posting) and elsewhere.

ETA May 28, 2012: I found this NRC Research Press website for the NRC journals and it states,

We are pleased to announce that Canadians can enjoy free access to over 100 000 back files of NRC Research Press journals, dating back to 1951. Access to material in these journals published after December 31, 2010, is available to Canadians through subscribing universities across Canada as well as the major federal science departments.

Concerned readers and authors whose institutes have not subscribed for the 2012 volume year can speak to their university librarians or can contact us to subscribe directly.

It’s good to see Canadians still have some access, although personally, I do prefer to read recent research.

ETA May 29, 2012: Yikes, I think this is one of the longest posts ever, and I’m going to add this information about libre redistribution and data mining as they relate to open access, in an attempt to cover the topic as fully as possible in one posting.

First, here’s an excerpt from Ross Mounce’s May 28, 2012 posting on the Palaeophylophenomics blog about ‘libre redistribution’ (Note: I have removed a link),

I predict that the rights to electronically redistribute, and machine-read research will be vital for 21st century research – yet currently we academics often wittingly or otherwise relinquish these rights to publishers. This has got to stop. The world is networked, thus scholarly literature should move with the times and be openly networked too.

To better understand the notion of ‘libre redistribution’ you’ll want to read more of Mounce’s comments, but you might also want to check out Cameron Neylon’s comments in his March 6, 2012 posting on the Science in the Open blog,

Centralised control, failure to appreciate scale, and failure to understand the necessity of distribution and distributed systems. I have with me a device capable of holding the text of perhaps 100,000 papers. It also has the processor power to mine that text. It is my phone. In 2-3 years our phones, hell our watches, will have the capacity to not only hold the world’s literature but also to mine it, in context for what I want right now. Is Bob Campbell ready for every researcher, indeed every interested person in the world, to come into his office and discuss an agreement for text mining? Because the mining I want to do and the mining that Peter Murray-Rust wants to do will be different, and what I will want to do tomorrow is different to what I want to do today. This kind of personalised mining is going to be the accepted norm of handling information online very soon and will be at the very centre of how we discover the information we need.

This moves the discussion past access (taxpayers not seeing the research they’ve funded, researchers who don’t have subscriptions, libraries not having subscriptions, etc.) to what happens when you can get access freely. It opens up new ways of doing research by means of text mining and data mining, and the redistribution of both.
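As a toy illustration of the kind of on-device, personalised mining Neylon describes (the corpus, paper ids, and function name here are all hypothetical, a sketch rather than anything from his post), the core idea is simply a query run locally over a store of full texts:

```python
# Hypothetical local store of paper texts, keyed by an invented id.
papers = {
    "paper-1": "quantum entanglement demonstrated in macroscopic diamonds",
    "paper-2": "open access publishing and article processing charges",
    "paper-3": "room temperature entanglement of diamond phonons",
}

def mine(corpus, query):
    """Return the ids of papers whose text mentions the query term."""
    return [pid for pid, text in corpus.items() if query in text]

print(mine(papers, "entanglement"))  # prints ['paper-1', 'paper-3']
```

The point of the sketch is that nothing here needs a publisher’s permission or server: once the texts are openly redistributable, this loop runs on a phone.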

Entangling diamonds

Usually when you hear about entanglement, they’re talking about quantum particles or kittens. On Dec. 2, 2011, Science magazine published a paper by scientists who had entangled diamonds (that can be touched and held in human hands). From the Dec. 1, 2011 CBC (Canadian Broadcasting Corporation) news article by Emily Chung,

Quantum physics is known for bizarre phenomena that are very different from the behaviour we are familiar with through our interaction with objects on the human scale, which follow the laws of classical physics. For example, quantum “entanglement” connects two objects so that no matter how far away they are from one another, each object is affected by what happens to the other.

Now, scientists from the U.K., Canada and Singapore have managed to demonstrate entanglement in ordinary diamonds under conditions found in any ordinary room or laboratory.

Philip Ball in his Dec. 1, 2011 article for Nature magazine describes precisely what entanglement means when applied to the diamond crystals,

A pair of diamond crystals has been linked by quantum entanglement. This means that a vibration in the crystals could not be meaningfully assigned to one or other of them: both crystals were simultaneously vibrating and not vibrating.

Quantum entanglement — interdependence of quantum states between particles not in physical contact — has been well established between quantum particles such as atoms at ultra-cold temperatures. But like most quantum effects, it doesn’t tend to survive either at room temperature or in objects large enough to see with the naked eye.

Until now, entanglement has been demonstrated only at very small scales and under extreme conditions, because of the problem of coherence. Entangled objects are coherent with each other, but other objects, such as surrounding atoms, can cause the entangled objects to lose their coherence and, with it, their entangled state. In order to entangle the diamonds, the scientists had to find a way of dealing with the loss of coherence as the objects are scaled up, and they were able to achieve this at room temperature. From the Emily Chung article,

Walmsley [Ian Walmsley, professor of experimental physics at the University of Oxford] said it’s easier to maintain coherence in smaller objects because they can be isolated practically from disturbances. Things are trickier in larger systems that contain lots of interacting, moving parts.

Two things helped the researchers get around this in their experiment, Sussman [Ben Sussman, a quantum physicist at the National Research Council of Canada and adjunct professor at the University of Ottawa] said:

  • The hardness of the diamonds meant they were more resistant to disturbances that could destroy the coherence.
  • The extreme speed of the experiment — the researchers used laser pulses just 60 femtoseconds long, about 6/100,000ths of a nanosecond (a nanosecond is a billionth of a second) — meant there was no time for disturbances to destroy the quantum effects.
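The unit conversion in the second bullet checks out; a couple of lines of Python confirm it:

```python
# Sanity-check the article's arithmetic: 60 femtoseconds
# expressed as a fraction of a nanosecond (1 ns = 1,000,000 fs).
FEMTOSECONDS_PER_NANOSECOND = 1_000_000

pulse_fs = 60
pulse_ns = pulse_fs / FEMTOSECONDS_PER_NANOSECOND
print(pulse_ns)  # prints 6e-05, i.e. 6/100,000ths of a nanosecond
```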

Laser pulses were used to put the two diamonds into a state where they were entangled with one another through a shared vibration known as a phonon. By measuring particles of light called photons subsequently scattered from the diamonds, the researchers confirmed that the states of the two diamonds were linked with each other — evidence that they were entangled.
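In standard quantum-optics notation (a textbook form of such a state, not drawn from the paper itself), a single phonon shared between the left (L) and right (R) diamonds would be written:

```latex
|\psi\rangle = \frac{1}{\sqrt{2}} \left( |1\rangle_L |0\rangle_R + |0\rangle_L |1\rangle_R \right)
```

That is, the state in which the phonon sits in one crystal is superposed with the state in which it sits in the other, which is why neither crystal can individually be said to be vibrating or not vibrating.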

If you are interested in the team’s research and can get past Science magazine’s paywall, here’s the citation,

“Entangling Macroscopic Diamonds at Room Temperature,” by K.C. Lee; M.R. Sprague; J. Nunn; N.K. Langford; X.-M. Jin; T. Champion; P. Michelberger; K.F. Reim; D. England; D. Jaksch; I.A. Walmsley at University of Oxford in Oxford, UK; B.J. Sussman at National Research Council of Canada in Ottawa, ON, Canada; X.-M. Jin; D. Jaksch at National University of Singapore in Singapore. Science 2 December 2011: Vol. 334 no. 6060 pp. 1253-1256 DOI: 10.1126/science.1211914

All of the media reports I’ve seen to date focus on the UK and Canadian researchers and I cannot find anything about the contribution of the researcher based in Singapore.

I do wish I could read more languages as I’d be more likely to find information about work which is not necessarily going to be covered in English language media.

Canada Election 2011, science writers, and an update on Peer Review Radio Candidate Interviews

Emily Chung (CBC News online) wrote an April 26, 2011 article highlighting an open letter that the Canadian Science Writers Association (CSWA) sent during this election 2011 campaign season to Conservative leader, Stephen Harper; Green party leader, Elizabeth May; Liberal leader, Michael Ignatieff; and NDP leader, Jack Layton about the ‘muzzle’ placed on federal scientists (from the article),

A group representing 500 science journalists and communicators across Canada sent an open letter Tuesday to Conservative Leader Stephen Harper, Liberal Leader Michael Ignatieff, NDP Leader Jack Layton and Green Party Leader Elizabeth May documenting recent instances where they say federal scientists have been barred from talking about research funded by taxpayers.

“We urge you to free the scientists to speak,” the letter said. “Take off the muzzles and eliminate the script writers and allow scientists — they do have PhDs after all — to speak for themselves.”

Kathryn O’Hara, president of the association, said openness and transparency are issues that haven’t come up much in the election campaign, and her group felt it was important to ask about them.

The federal government spends billions each year on scientific research, and taxpayers must be able to examine the results, she said, otherwise, “how can you get a real sense of … value in money going toward science?”

The public also needs to be able to see whether government policy is based on evidence uncovered using taxpayer money, she added.

It’s good to see science writers getting the topic of science into the election coverage. I’m a little puzzled that the folks at the Canadian Science Policy Centre don’t seem to have organized an ‘ask your candidates about science’ campaign, composed questions and sent their own open letter to the federal parties, or devised some other tactic to highlight science and science policy in this election campaign.

One more bit about science and the Canada 2011 federal election: Peer Review Radio has now posted two interviews with candidates answering questions about science policy and their respective parties. The interviews with Scott Bradley, running for the Liberal Party in Ottawa-Centre, and Emma Hogbin, running for the Green Party in Bruce-Grey-Owen Sound, are each about 22 minutes long. The show producer and host, Adrian J. Ebsary, promises to post the interviews with me, Marie-Claire Shanahan, and other interested science policy observers soon. Unfortunately, he was not able to broadcast the interviews as he hoped.