
Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that none of this is new. First come the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard) in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
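It’s worth dwelling on the mechanism Dyer describes, because it is the crux of the surprise: the training objective really is just “predict the next token from statistics.” Here’s a deliberately tiny sketch of that loop in Python (my own toy bigram frequency model, nothing like a real LLM in scale or architecture) to make the “over and over” part concrete,

```python
import random
from collections import Counter, defaultdict

# A toy corpus; a real LLM trains on hundreds of billions of tokens.
corpus = "the movie is a movie about a man who is a man".split()

# Record which word follows which (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Repeatedly sample the next word from the observed frequencies."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no statistics for this word, so stop
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the movie is a man who is a movie"
```

A real model swaps the frequency table for a transformer with billions of parameters, but the outer loop is the same: predict, append, repeat. That such a loop can also guess Finding Nemo from a string of emojis is exactly the puzzle the researchers quoted above are chasing.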

If you have time, I recommend reading Ornes’s March 6, 2023 article.
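One aside on the Linux-terminal anecdote: Ornes’s article doesn’t reproduce the code the DeepMind engineer had ChatGPT ‘run’, but computing the first 10 prime numbers takes only a few lines in any language. A hypothetical version (my sketch, not the engineer’s actual prompt or code) might look like this,

```python
def first_primes(count):
    """Return the first `count` prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        # A candidate is prime if no smaller prime divides it.
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The striking part, of course, is that ChatGPT wasn’t executing anything; it was predicting, token by token, what a terminal’s output would look like, and it got the output right.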

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts” from May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting “Racist and sexist robots have flawed AI” and in a little more detail in an August 30, 2022 posting “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about GPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to, “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. It was far briefer than the March 2023 letter. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.” was added to the introductory paragraph at the beginning of this post.

Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more

I received (via email) a July 21, 2022 news release about the launch of a quantum science initiative in Vancouver (BTW, I have more about the Canadian quantum scene later in this post),

World’s top physicists unite to tackle one of Science’s greatest mysteries

Vancouver-based Quantum Gravity Society leads international quest to discover Theory of Quantum Gravity

Vancouver, B.C. (July 21, 2022): More than two dozen of the world’s top physicists, including three Nobel Prize winners, will gather in Vancouver this August for a Quantum Gravity Conference that will host the launch of a Vancouver-based Quantum Gravity Institute (QGI) and a new global research collaboration that could significantly advance our understanding of physics and gravity and profoundly change the world as we know it.

For roughly 100 years, the world’s understanding of physics has been based on Albert Einstein’s General Theory of Relativity (GR), which explored the theory of space, time and gravity, and quantum mechanics (QM), which focuses on the behaviour of matter and light on the atomic and subatomic scale. GR has given us a deep understanding of the cosmos, leading to space travel and technology like atomic clocks, which govern global GPS systems. QM is responsible for most of the equipment that runs our world today, including the electronics, lasers, computers, cell phones, plastics, and other technologies that support modern transportation, communications, medicine, agriculture, energy systems and more.

While each theory has led to countless scientific breakthroughs, in many cases, they are incompatible and seemingly contradictory. Discovering a unifying connection between these two fundamental theories, the elusive Theory of Quantum Gravity, could provide the world with a deeper understanding of time, gravity and matter and how to potentially control them. It could also lead to new technologies that would affect most aspects of daily life, including how we communicate, grow food, deliver health care, transport people and goods, and produce energy.

“Discovering the Theory of Quantum Gravity could lead to the possibility of time travel, new quantum devices, or even massive new energy resources that produce clean energy and help us address climate change,” said Philip Stamp, Professor, Department of Physics and Astronomy, University of British Columbia, and Visiting Associate in Theoretical Astrophysics at Caltech [California Institute of Technology]. “The potential long-term ramifications of this discovery are so incredible that life on earth 100 years from now could look as miraculous to us now as today’s technology would have seemed to people living 100 years ago.”

The new Quantum Gravity Institute and the conference were founded by the Quantum Gravity Society, which was created in 2022 by a group of Canadian technology, business and community leaders, and leading physicists. Among its goals are to advance the science of physics and facilitate research on the Theory of Quantum Gravity through initiatives such as the conference and assembling the world’s leading archive of scientific papers and lectures associated with the attempts to reconcile these two theories over the past century.

Attending the Quantum Gravity Conference in Vancouver (August 15-19 [2022]) will be two dozen of the world’s top physicists, including Nobel Laureates Kip Thorne, Jim Peebles and Sir Roger Penrose, as well as physicists Baron Martin Rees, Markus Aspelmeyer, Viatcheslav Mukhanov and Paul Steinhardt. On Wednesday, August 17, the conference will be open to the public, providing them with a once-in-a-lifetime opportunity to attend keynote addresses from the world’s pre-eminent physicists. … A noon-hour discussion on the importance of the research will be delivered by Kip Thorne, the former Feynman Professor of physics at Caltech. Thorne is well known for his popular books, and for developing the original idea for the 2014 film “Interstellar.” He was also crucial to the development of the book “Contact” by Carl Sagan, which was also made into a motion picture.

“We look forward to welcoming many of the world’s brightest minds to Vancouver for our first Quantum Gravity Conference,” said Frank Giustra, CEO Fiore Group and Co-Founder, Quantum Gravity Society. “One of the goals of our Society will be to establish Vancouver as a supportive home base for research and facilitate the scientific collaboration that will be required to unlock this mystery that has eluded some of the world’s most brilliant physicists for so long.”

“The format is key,” explains Terry Hui, UC Berkeley Physics alumnus and Co-Founder, Quantum Gravity Society [and CEO of Concord Pacific]. “Like the Solvay Conference nearly 100 years ago, the Quantum Gravity Conference will bring top scientists together in salon-style gatherings. The relaxed evening format following the conference will reduce barriers and allow these great minds to freely exchange ideas. I hope this will help accelerate the solution of this hundred-year bottleneck between theories relatively soon.”

“As amazing as our journey of scientific discovery has been over the past century, we still have so much to learn about how the universe works on a macro, atomic and subatomic level,” added Paul Lee, Managing Partner, Vanedge Capital, and Co-Founder, Quantum Gravity Society. “New experiments and observations capable of advancing work on this scientific challenge are becoming increasingly possible in today’s physics labs and using new astronomical tools. The Quantum Gravity Society looks forward to leveraging that growing technical capacity with joint theory and experimental work that harnesses the collective expertise of the world’s great physicists.”

About Quantum Gravity Society

Quantum Gravity Society was founded in Vancouver, Canada in 2020 by a group of Canadian business, technology and community leaders, and leading international physicists. The Society’s founding members include Frank Giustra (Fiore Group), Terry Hui (Concord Pacific), Paul Lee and Moe Kermani (Vanedge Capital) and Markus Frind (Frind Estate Winery), along with renowned physicists Abhay Ashtekar, Sir Roger Penrose, Philip Stamp, Bill Unruh and Birgitta Whaley. For more information, visit Quantum Gravity Society.

About the Quantum Gravity Conference (Vancouver 2022)


The inaugural Quantum Gravity Conference (August 15-19 [2022]) is presented by Quantum Gravity Society, Fiore Group, Vanedge Capital, Concord Pacific, The Westin Bayshore, Vancouver and Frind Estate Winery. For conference information, visit conference.quantumgravityinstitute.ca. To register to attend the conference, visit Eventbrite.com.

The front page on the Quantum Gravity Society website is identical to the front page for the Quantum Mechanics & Gravity: Marrying Theory & Experiment conference website. It’s probable that will change with time.

This seems to be an in-person event only.

The site for the conference is in an exceptionally pretty location in Coal Harbour and it’s close to Stanley Park (a major tourist attraction),

The Westin Bayshore, Vancouver
1601 Bayshore Drive
Vancouver, BC V6G 2V4

Assuming that most of my readers will be interested in the ‘public’ day, here’s more from the Wednesday, August 17, 2022 registration page on Eventbrite,

Tickets:

  • Corporate Table of 8 all day access – includes VIP Luncheon: $1,100
  • Ticket per person all day access – includes VIP Luncheon: $129
  • Ticket per person all day access (no VIP luncheon): $59
  • Student / Academia Ticket – all day access (no VIP luncheon): $30

Date:

Wednesday, August 17, 2022 @ 9:00 a.m. – 5:15 p.m. (PT)

Schedule:

  • Registration Opens: 8:00 a.m.
  • Morning Program: 9:00 a.m. – 12:30 p.m.
  • VIP Lunch: 12:30 p.m. – 2:30 p.m.
  • Afternoon Program: 2:30 p.m. – 4:20 p.m.
  • Public Discussion / Debate: 4:20 p.m. – 5:15 p.m.

Program:

9:00 a.m. Session 1: Beginning of the Universe

  • Viatcheslav Mukhanov – Theoretical Physicist and Cosmologist, University of Munich
  • Paul Steinhardt – Theoretical Physicist, Princeton University

Session 2: History of the Universe

  • Jim Peebles, 2019 Nobel Laureate, Princeton University
  • Baron Martin Rees – Cosmologist and Astrophysicist, University of Cambridge
  • Sir Roger Penrose, 2020 Nobel Laureate, University of Oxford (via zoom)

12:30 p.m. VIP Lunch Session: Quantum Gravity — Why Should We Care?

  • Kip Thorne – 2017 Nobel Laureate, Executive Producer of blockbuster film “Interstellar”

2:30 p.m. Session 3: What do Experiments Say?

  • Markus Aspelmeyer – Experimental Physicist, Quantum Optics and Optomechanics Leader, University of Vienna
  • Sir Roger Penrose – 2020 Nobel Laureate (via zoom)

Session 4: Time Travel

  • Kip Thorne – 2017 Nobel Laureate, Executive Producer of blockbuster film “Interstellar”

Event Partners

  • Quantum Gravity Society
  • Westin Bayshore
  • Fiore Group
  • Concord Pacific
  • VanEdge Capital
  • Frind Estate Winery

Marketing Partners

  • BC Business Council
  • Greater Vancouver Board of Trade

Please note that Sir Roger Penrose will be present via Zoom but all the others will be there in the room with you.

Given that Kip Thorne won his 2017 Nobel Prize in Physics (with Rainer Weiss and Barry Barish) for work on gravitational waves, it’s surprising there’s no mention of this in the publicity for a conference on quantum gravity. Finding gravitational waves in 2016 was a very big deal (see Josh Fischman’s and Steve Mirsky’s February 11, 2016 interview with Kip Thorne for Scientific American).

Some thoughts on this conference and the Canadian quantum scene

This conference has a fascinating collection of players. Even I recognized some of the names, e.g., Penrose, Rees, Thorne.

The academics were to be expected and every presenter is an academic, often with their own Wikipedia page. Weirdly, there’s no one from the Perimeter Institute for Theoretical Physics or TRIUMF (a national physics laboratory and centre for particle acceleration) or from anywhere else in Canada, which may be due to their academic specialty rather than an attempt to freeze out Canadian physicists. In any event, the conference academics are largely from the US (a lot of them from Caltech and Stanford) and from the UK.

The business people are a bit of a surprise. The BC Business Council and the Greater Vancouver Board of Trade? Frank Giustra, who first made his money with gold mines, then with Lionsgate Entertainment, and who continues to make a great deal of money with his equity investment company, Fiore Group? Terry Hui, Chief Executive Officer of Concord Pacific, a real estate development company? VanEdge Capital, an early stage venture capital fund? A winery? Missing from this list is D-Wave Systems, Canada’s quantum calling card and local company. While their area of expertise is quantum computing, I’d still expect to see them present as sponsors. *ETA December 6, 2022: I just looked at the conference page again and D-Wave is now listed as a sponsor.*

The academics? These people are not cheap dates (flights, speakers’ fees, rooms at the Bayshore, meals). This is a very expensive conference, and $129 for lunch and a day pass is likely a heavily subsidized ticket.

Another surprise? No government money/sponsorship. I don’t recall seeing another academic conference held in Canada without any government participation.

Canadian quantum scene

A National Quantum Strategy was first announced in the 2021 Canadian federal budget and reannounced in the 2022 federal budget (see my April 19, 2022 posting for a few more budget details). Or, you may find this National Quantum Strategy Consultations: What We Heard Report more informative. There’s also a webpage for general information about the National Quantum Strategy.

As evidence of action, the Natural Sciences and Engineering Research Council of Canada (NSERC) announced new grant programmes made possible by the National Quantum Strategy in a March 15, 2022 news release,

Quantum science and innovation are giving rise to promising advances in communications, computing, materials, sensing, health care, navigation and other key areas. The Government of Canada is committed to helping shape the future of quantum technology by supporting Canada’s quantum sector and establishing leadership in this emerging and transformative domain.

Today [March 15, 2022], the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, is announcing an investment of $137.9 million through the Natural Sciences and Engineering Research Council of Canada’s (NSERC) Collaborative Research and Training Experience (CREATE) grants and Alliance grants. These grants are an important next step in advancing the National Quantum Strategy and will reinforce Canada’s research strengths in quantum science while also helping to develop a talent pipeline to support the growth of a strong quantum community.

Quick facts

Budget 2021 committed $360 million to build the foundation for a National Quantum Strategy, enabling the Government of Canada to build on previous investments in the sector to advance the emerging field of quantum technologies. The quantum sector is key to fuelling Canada’s economy, long-term resilience and growth, especially as technologies mature and more sectors harness quantum capabilities.

Development of quantum technologies offers job opportunities in research and science, software and hardware engineering and development, manufacturing, technical support, sales and marketing, business operations and other fields.

The Government of Canada also invested more than $1 billion in quantum research and science from 2009 to 2020—mainly through competitive granting agency programs, including Natural Sciences and Engineering Research Council of Canada programs and the Canada First Research Excellence Fund—to help establish Canada as a global leader in quantum science.

In addition, the government has invested in bringing new quantum technologies to market, including investments through Canada’s regional development agencies, the Strategic Innovation Fund and the National Research Council of Canada’s Industrial Research Assistance Program.

Bank of Canada, cryptocurrency, and quantum computing

My July 25, 2022 posting features a special project. Note: All emphases are mine,

… (from an April 14, 2022 HKA Marketing Communications news release on EurekAlert),

Multiverse Computing, a global leader in quantum computing solutions for the financial industry and beyond with offices in Toronto and Spain, today announced it has completed a proof-of-concept project with the Bank of Canada through which the parties used quantum computing to simulate the adoption of cryptocurrency as a method of payment by non-financial firms.

“We are proud to be a trusted partner of the first G7 central bank to explore modelling of complex networks and cryptocurrencies through the use of quantum computing,” said Sam Mugel, CTO [Chief Technical Officer] at Multiverse Computing. “The results of the simulation are very intriguing and insightful as stakeholders consider further research in the domain. Thanks to the algorithm we developed together with our partners at the Bank of Canada, we have been able to model a complex system reliably and accurately given the current state of quantum computing capabilities.”

Multiverse Computing conducted its innovative work related to applying quantum computing for modelling complex economic interactions in a research project with the Bank of Canada. The project explored quantum computing technology as a way to simulate complex economic behaviour that is otherwise very difficult to simulate using traditional computational techniques.

By implementing this solution using D-Wave’s annealing quantum computer, the simulation was able to tackle financial networks as large as 8-10 players, with up to 2^90 possible network configurations. Note that classical computing approaches cannot solve large networks of practical relevance as a 15-player network requires as many resources as there are atoms in the universe.
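A quick aside on that 2^90 figure: it matches the count you get if every ordered pair of distinct players either has a directed link or doesn’t (that interpretation is my assumption; the news release doesn’t define ‘configuration’). Ten players yield 10 x 9 = 90 possible links and so 2^90 configurations. Here’s a minimal back-of-the-envelope sketch in Python, under that assumption,

```python
# Back-of-the-envelope check of the configuration counts quoted above.
# Assumption (mine, not the news release's): a "configuration" is any
# subset of the possible directed links between distinct players.

def network_configurations(players: int) -> int:
    """Count the directed-link configurations of a network."""
    directed_links = players * (players - 1)  # every ordered pair may link
    return 2 ** directed_links                # each link is present or absent

for n in (10, 15):
    print(f"{n} players: 2^{n * (n - 1)} = {float(network_configurations(n)):.2e}")
```

Under the same assumption, a 15-player network has 2^210 (roughly 10^63) configurations, which gives a sense of why the release treats exhaustive classical enumeration as hopeless, even if the ‘atoms in the universe’ comparison is best read as rhetorical flourish rather than a precise count.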

Quantum Technologies and the Council of Canadian Academies (CCA)

In a May 26, 2022 blog posting, the CCA announced its Expert Panel on Quantum Technologies (they will be issuing a Quantum Technologies report),

The emergence of quantum technologies will impact all sectors of the Canadian economy, presenting significant opportunities but also risks. At the request of the National Research Council of Canada (NRC) and Innovation, Science and Economic Development Canada (ISED), the Council of Canadian Academies (CCA) has formed an Expert Panel to examine the impacts, opportunities, and challenges quantum technologies present for Canadian industry, governments, and Canadians. Raymond Laflamme, O.C., FRSC, Canada Research Chair in Quantum Information and Professor in the Department of Physics and Astronomy at the University of Waterloo, will serve as Chair of the Expert Panel.

“Quantum technologies have the potential to transform computing, sensing, communications, healthcare, navigation and many other areas,” said Dr. Laflamme. “But a close examination of the risks and vulnerabilities of these technologies is critical, and I look forward to undertaking this crucial work with the panel.”

As Chair, Dr. Laflamme will lead a multidisciplinary group with expertise in quantum technologies, economics, innovation, ethics, and legal and regulatory frameworks. The Panel will answer the following question:

In light of current trends affecting the evolution of quantum technologies, what impacts, opportunities and challenges do these present for Canadian industry, governments and Canadians more broadly?

The Expert Panel on Quantum Technologies:

Raymond Laflamme, O.C., FRSC (Chair), Canada Research Chair in Quantum Information; the Mike and Ophelia Lazaridis John von Neumann Chair in Quantum Information; Professor, Department of Physics and Astronomy, University of Waterloo

Sally Daub, Founder and Managing Partner, Pool Global Partners

Shohini Ghose, Professor, Physics and Computer Science, Wilfrid Laurier University; NSERC Chair for Women in Science and Engineering

Paul Gulyas, Senior Innovation Executive, IBM Canada

Mark W. Johnson, Senior Vice-President, Quantum Technologies and Systems Products, D-Wave Systems

Elham Kashefi, Professor of Quantum Computing, School of Informatics, University of Edinburgh; Directeur de recherche au CNRS, LIP6 Sorbonne Université

Mauritz Kop, Fellow and Visiting Scholar, Stanford Law School, Stanford University

Dominic Martin, Professor, Département d’organisation et de ressources humaines, École des sciences de la gestion, Université du Québec à Montréal

Darius Ornston, Associate Professor, Munk School of Global Affairs and Public Policy, University of Toronto

Barry Sanders, FRSC, Director, Institute for Quantum Science and Technology, University of Calgary

Eric Santor, Advisor to the Governor, Bank of Canada

Christian Sarra-Bournet, Quantum Strategy Director and Executive Director, Institut quantique, Université de Sherbrooke

Stephanie Simmons, Associate Professor, Canada Research Chair in Quantum Nanoelectronics, and CIFAR Quantum Information Science Fellow, Department of Physics, Simon Fraser University

Jacqueline Walsh, Instructor; Director, initio Technology & Innovation Law Clinic, Dalhousie University

You’ll note that both the Bank of Canada and D-Wave Systems are represented on this expert panel.

The CCA Quantum Technologies report (in progress) page can be found here.

Does it mean anything?

Since I only skim the top layer of information (disparagingly described as ‘high level’ by the technology types I used to work with), all I can say is there’s a remarkable level of interest from various groups who are self-organizing. (The interest is international as well. I found the International Society for Quantum Gravity [ISQG], which had its first meeting in 2021.)

I don’t know what the purpose is, other than that the Canadian focus seems to be on money. The board of trade and business council have no interest in primary research and the federal government’s national quantum strategy is part of Innovation, Science and Economic Development (ISED) Canada’s mandate. You’ll notice ‘science’ is sandwiched between ‘innovation’, which is often code for business, and ‘economic development’.

The Bank of Canada’s monetary interests are quite obvious.

The Perimeter Institute mentioned earlier was founded by Mike Lazaridis, from his Wikipedia entry (Note: Links have been removed),

… a Canadian businessman [emphasis mine], investor in quantum computing technologies, and founder of BlackBerry, which created and manufactured the BlackBerry wireless handheld device. With an estimated net worth of US$800 million (as of June 2011), Lazaridis was ranked by Forbes as the 17th wealthiest Canadian and 651st in the world.[4]

In 2000, Lazaridis founded and donated more than $170 million to the Perimeter Institute for Theoretical Physics.[11][12] He and his wife Ophelia founded and donated more than $100 million to the Institute for Quantum Computing at the University of Waterloo in 2002.[8]

That Institute for Quantum Computing? There’s an interesting connection. Raymond Laflamme, the chair for the CCA expert panel, was its director for a number of years and he’s closely affiliated with the Perimeter Institute. (I’m not suggesting anything nefarious or dodgy. It’s a small community in Canada and relationships tend to be tightly interlaced.) I’m surprised he’s not part of the quantum mechanics and gravity conference but that could have something to do with scheduling.

One last interesting bit about Laflamme, from his Wikipedia entry (Note: Links have been removed),

As Stephen Hawking’s PhD student, he first became famous for convincing Hawking that time does not reverse in a contracting universe, along with Don Page. Hawking told the story of how this happened in his famous book A Brief History of Time in the chapter The Arrow of Time.[3] Later on Laflamme made a name for himself in quantum computing and quantum information theory, which is what he is famous for today.

Getting back to the Quantum Mechanics & Gravity: Marrying Theory & Experiment conference, the public day looks pretty interesting and when is the next time you’ll have a chance to hobnob with all those Nobel Laureates?

Awe, science, and God

Having been brought up in a somewhat dogmatic religion, I was a bit resistant when I saw ‘religion’ mentioned in the news release, but it seems I am being dogmatic. Here’s a definition from the Religion Wikipedia entry (Note: Links have been removed),

Religion is a social-cultural system of designated behaviors and practices, morals, worldviews, texts, sanctified places, prophecies, ethics, or organizations, that relates humanity to supernatural, transcendental, or spiritual elements. However, there is no scholarly consensus over what precisely constitutes a religion.[1][2]

This research into science and God suggests that the two ‘belief’ systems are not antithetical. From a July 18, 2019 Arizona State University (ASU) news release (also on EurekAlert but published on July 17, 2019) by Kimberlee D’Ardenne,

Most Americans believe science and religion are incompatible, but a recent study suggests that scientific engagement can actually promote belief in God.

Researchers from the Arizona State University Department of Psychology found that scientific information can create a feeling of awe, which leads to belief in more abstract views of God. The work will be published in the September 2019 issue of the Journal of Experimental Social Psychology and is now available online.

“There are many ways of thinking about God. Some see God in DNA, some think of God as the universe, and others think of God in Biblical, personified terms,” said Kathryn Johnson, associate research professor at ASU and lead author on the study. “We wanted to know if scientific engagement influenced beliefs about the existence or nature of God.”

Though science is often thought of in terms of data and experiments, ASU psychology graduate student Jordan Moon, who was a coauthor on the paper, said science might be more to some people. To test how people connect with science and the impact it had on their beliefs about God, the researchers looked at two types of scientific engagement: logical thinking or experiencing the feeling of awe.

The team first surveyed participants about how interested they were in science, how committed they were to logical thinking and how often they felt awe. Reporting a commitment to logic was associated with unbelief. The participants who reported both a strong commitment to logic and having experienced awe, or a feeling of overwhelming wonder that often leads to open-mindedness, were more likely to report believing in God. The most common description of God given by those participants was not what is commonly found in houses of worship: They reported believing in an abstract God described as mystical or limitless.

“When people are awed by the complexity of life or the vastness of the universe, they were more inclined to think in more spiritual ways,” Johnson said. “The feeling of awe might make people more open to other ways of conceptualizing God.”

In another experiment, the research team had the participants engage with science by watching videos. While a lecture about quantum physics led to unbelief or agnosticism, watching a music video about how atoms are both particles and waves led people to report feeling awe. Those who felt awe also were more likely to believe in an abstract God.

“A lot of people think science and religion do not go together, but they are thinking about science in too simplistic a way and religion in too simplistic a way,” said Adam Cohen, professor of psychology and senior author on the paper. “Science is big enough to accommodate religion, and religion is big enough to accommodate science.”

Cohen added that the work could lead to broader views of both science and religion.

Morris Okun, Matthew Scott and Holly O’Rourke from ASU and Joshua Hook from the University of North Texas also contributed to the work. The study was funded by the John Templeton Foundation.

Here’s a link to and a citation for the paper,

Science, God, and the cosmos: Science both erodes (via logic) and promotes (via awe) belief in God by Kathryn A. Johnson, Jordan W. Moon, Morris A. Okun, Matthew J. Scott, Holly P. O’Rourke, Joshua N. Hook, and Adam B. Cohen. Journal of Experimental Social Psychology, Volume 84, September 2019, 103826. DOI: https://doi.org/10.1016/j.jesp.2019.103826

This paper is behind a paywall.

I noted the funding from the John Templeton Foundation and recalled they have a prize that relates to this topic.

2019 Templeton Prize winner

A March 20, 2019 article by Lee Billings for Scientific American offers a profile of the 2019 Templeton Prize winner,

Marcelo Gleiser, a 60-year-old Brazil-born theoretical physicist at Dartmouth College and prolific science popularizer, has won this year’s Templeton Prize. Valued at just under $1.5 million, the award from the John Templeton Foundation annually recognizes an individual “who has made an exceptional contribution to affirming life’s spiritual dimension.” [emphasis mine] Its past recipients include scientific luminaries such as Sir Martin Rees and Freeman Dyson, as well as religious or political leaders such as Mother Teresa, Desmond Tutu and the Dalai Lama.

Across his 35-year scientific career, Gleiser’s research has covered a wide breadth of topics, ranging from the properties of the early universe to the behavior of fundamental particles and the origins of life. But in awarding him its most prestigious honor, the Templeton Foundation chiefly cited his status as a leading public intellectual revealing “the historical, philosophical and cultural links between science, the humanities and spirituality.” He is also the first Latin American to receive the prize.

Scientific American spoke with Gleiser about the award, how he plans to advance his message of consilience, the need for humility in science, why humans are special, and the fundamental source of his curiosity as a physicist.

You’ve written and spoken eloquently about the nature of reality and consciousness, the genesis of life, the possibility of life beyond Earth, the origin and fate of the universe, and more. How do all those disparate topics synergize into one, cohesive message for you?

To me, science is one way of connecting with the mystery of existence. And if you think of it that way, the mystery of existence is something that we have wondered about ever since people began asking questions about who we are and where we come from. So while those questions are now part of scientific research, they are much, much older than science. I’m not talking about the science of materials, or high-temperature superconductivity, which is awesome and super important, but that’s not the kind of science I’m doing. I’m talking about science as part of a much grander and older sort of questioning about who we are in the big picture of the universe. To me, as a theoretical physicist and also someone who spends time out in the mountains, this sort of questioning offers a deeply spiritual connection with the world, through my mind and through my body. Einstein would have said the same thing, I think, with his cosmic religious feeling.

If you’re interested, this is a wide-ranging profile touching on one of the big questions in physics: Is there a theory of everything?

For anyone curious about the Templeton Foundation, you can find out more here.

October 2019 science and art/science events in Vancouver and other parts of Canada

This is a scattering of events, which I’m sure will be augmented as we properly start the month of October 2019.

October 2, 2019 in Waterloo, Canada (Perimeter Institute)

If you want to be close enough to press the sacred flesh (Sir Martin Rees), you’re out of luck. However, there are still options ranging from watching a live webcast from the comfort of your home to watching the lecture via closed circuit television with other devoted fans at a licensed bistro located on site at the Perimeter Institute (PI) to catching the lecture at a later date via YouTube.

That said, here’s why you might be interested, from a September 11, 2019 Perimeter Institute (PI) announcement received via email,

Surviving the Century
MOVING TOWARD A POST-HUMAN FUTURE
Martin Rees, UK Astronomer Royal
Wednesday, Oct. 2 at 7:00 PM ET

Advances in technology and space exploration could, if applied wisely, allow a bright future for the 10 billion people living on earth by the end of the century.

But there are dystopian risks we ignore at our peril: our collective “footprint” on our home planet, as well as the creation and use of technologies so powerful that even small groups could cause a global catastrophe.

Martin Rees, the UK Astronomer Royal, will explore this unprecedented moment in human history during his lecture on October 2, 2019. A former president of the Royal Society and master of Trinity College, Cambridge, Rees is a cosmologist whose work also explores the interfaces between science, ethics, and politics. Read More.

Mark your calendar! Tickets will be available on Monday, Sept. 16 at 9 AM ET

Didn’t get tickets for the lecture? We’ve got more ways to watch.
Join us at Perimeter on lecture night to watch live in the Black Hole Bistro.
Catch the live stream on Inside the Perimeter or watch it on Youtube the next day
Become a member of our donor thank you program! Learn more.

It took me a while to locate an address for the PI venue since I expected that information to be part of the announcement. (insert cranky emoticon here) Here’s the address: Perimeter Institute, Mike Lazaridis Theatre of Ideas, 31 Caroline St. N., Waterloo, ON

Before moving onto the next event, I’m including a paragraph from the event description that was not included in the announcement (from the PI Outreach Surviving the Century webpage),

In his October 2 [2019] talk – which kicks off the 2019/20 season of the Perimeter Institute Public Lecture Series – Rees will discuss the outlook for humans (or their robotic envoys) venturing to other planets. Humans, Rees argues, will be ill-adapted to new habitats beyond Earth, and will use genetic and cyborg technology to transform into a “post-human” species.

I first covered Sir Martin Rees and his concerns about technology (robots and cyborgs run amok) in this November 26, 2012 posting about existential risk. He and his colleagues at Cambridge University, UK, proposed a Centre for the Study of Existential Risk, which opened in 2015.

Straddling Sept. and Oct. at the movies in Vancouver

The Vancouver International Film Festival (VIFF) opened today, September 26, 2019. During its run to October 11, 2019, there’ll be a number of documentaries that touch on science. Here are the three documentaries that most closely adhere to the topics I’m most likely to address on this blog. There is a fourth documentary included here as it touches on ecology in a more hopeful fashion than is the current trend.

Human Nature

From the VIFF 2019 film description and ticket page,

One of the most significant scientific breakthroughs in history, the discovery of CRISPR has made it possible to manipulate human DNA, paving the path to a future of great possibilities.

The implications of this could mean the eradication of disease or, more controversially, the possibility of genetically pre-programmed children.

Breaking away from scientific jargon, Human Nature pieces together a complex account of bio-research for the layperson as compelling as a work of science-fiction. But whether the gene-editing powers of CRISPR (described as “a word processor for DNA”) are used for good or evil, they’re reshaping the world as we know it. As we push past the boundaries of what it means to be human, Adam Bolt’s stunning work of science journalism reaches out to scientists, engineers, and people whose lives could benefit from CRISPR technology, and offers a wide-ranging look at the pros and cons of designing our futures.

Tickets
Friday, September 27, 2019 at 11:45 AM
Vancity Theatre

Saturday, September 28, 2019 at 11:15 AM
International Village 10

Thursday, October 10, 2019 at 6:45 PM
SFU Goldcorp

According to VIFF, the tickets for the Sept. 27, 2019 show are going fast.

Resistance Fighters

From the VIFF 2019 film description and ticket page,

Since mass-production in the 1940s, antibiotics have been nothing less than miraculous, saving countless lives and revolutionizing modern medicine. It’s virtually impossible to imagine hospitals or healthcare without them. But after years of abuse and mismanagement by the medical and agricultural communities, superbugs resistant to antibiotics are reaching apocalyptic proportions. The ongoing rise in multi-resistant bacteria – unvanquishable microbes, currently responsible for 700,000 deaths per year and projected to kill 10 million yearly by 2050 if nothing changes – and the people who fight them are the subjects of Michael Wech’s stunning “science-thriller.”

Peeling back the carefully constructed veneer of the medical corporate establishment’s greed and complacency to reveal the world on the cusp of a potential crisis, Resistance Fighters sounds a clarion call of urgency. It’s an all-out war, one which most of us never knew we were fighting, to avoid “Pharmageddon.” Doctors, researchers, patients, and diplomats testify about shortsighted medical and economic practices, while Wech offers refreshingly original perspectives on environment, ecology, and (animal) life in general. As alarming as it is informative, this is a wake-up call the world needs to hear.

Sunday, October 6, 2019 at 5:45 PM
International Village 8

Thursday, October 10, 2019 at 2:15 PM
SFU Goldcorp

According to VIFF, the tickets for the Oct. 6, 2019 show are going fast.

Trust Machine: The Story of Blockchain

Strictly speaking, this is more of a technology story than a science story but I have written about blockchain and cryptocurrencies before so I’m including this. From the VIFF 2019 film description and ticket page,

For anyone who has questions about cryptocurrencies like Bitcoin (and who doesn’t?), Alex Winter’s thorough documentary is an excellent introduction to the blockchain phenomenon. Trust Machine offers a wide range of expert testimony and a variety of perspectives that explicate the promises and the risks inherent in this new manifestation of high-tech wizardry. And it’s not just money that blockchains threaten to disrupt: innovators as diverse as UNICEF and Imogen Heap make spirited arguments that the industries of energy, music, humanitarianism, and more are headed for revolutionary change.

A propulsive and subversive overview of this little-understood phenomenon, Trust Machine crafts a powerful and accessible case that a technologically decentralized economy is more than just a fad. As the aforementioned experts – tech wizards, underground activists, and even some establishment figures – argue persuasively for an embrace of the possibilities offered by blockchains, others criticize its bubble-like markets and inefficiencies. Either way, Winter’s film suggests a whole new epoch may be just around the corner, whether the powers that be like it or not.

Tuesday, October 1, 2019 at 11:00 AM
Vancity Theatre

Thursday, October 3, 2019 at 9:00 PM
Vancity Theatre

Monday, October 7, 2019 at 1:15 PM
International Village 8

According to VIFF, tickets for all three shows are going fast.

The Great Green Wall

For a little bit of hope, from the VIFF 2019 film description and ticket page,

“We must dare to invent the future.” In 2007, the African Union officially began a massively ambitious environmental project planned since the 1970s. Stretching through 11 countries and 8,000 km across the desertified Sahel region, on the southern edges of the Sahara, The Great Green Wall – once completed, a mosaic of restored, fertile land – would be the largest living structure on Earth.

Malian musician-activist Inna Modja embarks on an expedition through Senegal, Mali, Nigeria, Niger, and Ethiopia, gathering an ensemble of musicians and artists to celebrate the pan-African dream of realizing The Great Green Wall. Her journey is accompanied by a dazzling array of musical diversity, celebrating local cultures and traditions as they come together into a community to stand against the challenges of desertification, drought, migration, and violent conflict.

An unforgettable, beautiful exploration of a modern marvel of ecological restoration, and so much more than a passive source of information, The Great Green Wall is a powerful call to take action and help reshape the world.

Sunday, September 29, 2019 at 11:15 AM
International Village 10

Wednesday, October 2, 2019 at 6:00 PM
International Village 8
Standby – advance tickets are sold out but a limited number are likely to be released at the door

Wednesday, October 9, 2019 at 11:00 AM
International Village 9

As you can see, one show is already offering standby tickets only and the other two are selling quickly.

For venue locations, information about what ‘standby’ means, and much more, go here and click on the Festival tab. As for more information about the individual films, you’ll find links to trailers, running times, and more on the pages for which I’ve supplied links.

Brain Talks on October 16, 2019 in Vancouver

From time to time I get notices about a series titled Brain Talks from the Dept. of Psychiatry at the University of British Columbia. A September 11, 2019 announcement (received via email) focuses attention on the ‘guts of the matter’,

YOU ARE INVITED TO ATTEND:

BRAINTALKS: THE BRAIN AND THE GUT

WEDNESDAY, OCTOBER 16TH, 2019 FROM 6:00 PM – 8:00 PM

Join us on Wednesday October 16th [2019] for a series of talks exploring the relationship between the brain, microbes, mental health, diet and the gut. We are honored to host three phenomenal presenters for the evening: Dr. Brett Finlay, Dr. Leslie Wicholas, and Thara Vayali, ND.

DR. BRETT FINLAY [2] is a Professor in the Michael Smith Laboratories at the University of British Columbia. Dr. Finlay’s research interests are focused on host-microbe interactions at the molecular level, specializing in Cellular Microbiology. He has published over 500 papers and has been inducted into the Canadian Medical Hall of Fame. He is the co-author of the books Let Them Eat Dirt and The Whole Body Microbiome.

DR. LESLIE WICHOLAS [3] is a psychiatrist with an expertise in the clinical understanding of the gut-brain axis. She has become increasingly involved in the emerging field of Nutritional Psychiatry, exploring connections between diet, nutrition, and mental health. Currently, Dr. Wicholas is the director of the Food as Medicine program at the Mood Disorder Association of BC.

THARA VAYALI, ND [4] holds a BSc in Nutritional Sciences and an MA in Education and Communications. She has trained in naturopathic medicine and advocates for awareness about women’s physiology and body literacy. Ms. Vayali is a frequent speaker and columnist who prioritizes engagement, understanding, and community as pivotal pillars for change.

Our event on Wednesday, October 16th [2019] will start with presentations from each of the three speakers, and end with a panel discussion inspired by audience questions. After the talks, at 7:30 pm, we host a social gathering with a rich spread of catered healthy food and non-alcoholic drinks. We look forward to seeing you there!

Paetzhold Theater

Vancouver General Hospital; Jim Pattison Pavilion, Vancouver, BC

Attend Event

That’s it for now.

2014 Maddox Prize winners and more (a letter writing campaign)* from Sense about Science*

The UK’s ‘Sense about Science’ organization announced the two winners of its 2014 John Maddox (aka, the ‘standing up for science’) Prize in late October 2014 (from the Oct. 28, 2014 announcement),

I am delighted to share that last night [Oct. 27, 2014] Dr Emily Willingham and Dr David Robert Grimes were announced as the winners of the 2014 John Maddox Prize, at our annual reception held with the Royal Pharmaceutical Society.

After lengthy deliberation, this year’s judges (Tracey Brown, Philip Campbell, Colin Blakemore and Martin Rees) awarded the prize to these two people who embody the spirit of the prize, showing courage in promoting science and evidence on a matter of public interest, despite facing difficulty and hostility in doing so.

The call for 2014 nominations was mentioned in an Aug. 18, 2014 post. Here’s more about each of the winners (from the 2014 John Maddox Prize webpage on the Sense about Science website),

The judges awarded the prize to freelance journalist Dr Emily Willingham and early career scientist Dr David Robert Grimes for courage in promoting science and evidence on a matter of public interest, despite facing difficulty and hostility in doing so. …

David Grimes writes bravely on challenging and controversial issues, including nuclear power and climate change. He has persevered despite hostility and threats, such as on his writing about the evidence in the debate on abortion in Ireland. He does so while sustaining his career as a scientist at the University of Oxford.

Emily Willingham, a US writer, has brought discussion about evidence, from school shootings to home birth, to large audiences through her writing. She has continued to reach across conflict and disputes about evidence to the people trying to make sense of them. She is facing a lawsuit for an article about the purported link between vaccines and autism.

A Nov. 1, 2014 post by Nick Cohen for the Guardian newspaper discusses one of the 2014 winners in the context of standing up to science ignorance and Ebola in the US (scroll down about 15% of the way),

The joint winners confronted beliefs that are as prevalent in Britain as America: that vaccination causes autism, that homeopathic medicines work, that manmade climate change does not exist and that adding fluoride to the water supply is a threat to health. (I didn’t know it until the prize jury told me but Sinn Féin is leading a vigorous anti-fluoride campaign in Dublin – well, I suppose it’s progress for the IRA to go from blowing off peoples’ heads to merely rotting their teeth.)

David Robert Grimes, one of the winners, said that, contrary to the myth of the scientific bully, most of his colleagues wanted to keep out of public debate, presumably because they did not wish to receive the threats of violence fanatics and quacks have directed at him. If we are to improve public policy in areas as diverse as the fight against Ebola to the treatment of drug addicts, they need to be braver, and more willing to tell the public, which so often funds their research, what they have learned.

Grimes makes a useful distinction. Most people just want more information and scientists should be prepared to make their case clearly and concisely. Then there are the rest – Ukip, the Tea Party, governors of Maine, Sinn Féin, David Cameron, climate change deniers – who will block out any evidence that contradicts their beliefs. They confirm the truth of Paul Simon’s line: “All lies and jest, still the man hears what he wants to hear and disregards the rest.”

Lydia Lepage (a post-doctoral researcher at the University of Edinburgh and a member of the Voice of Young Science, which is run by Sense about Science) writes about both winners over on The Conversation in an Oct. 28, 2014 post (Note: Links have been removed),

Willingham is a freelance science journalist whose evidence-based article: “Blame Wakefield for missed autism-gut connection” drew intense criticism and a lawsuit from Andrew Wakefield, the discredited scientist known for his now-retracted 1998 Lancet paper on the alleged link between vaccines and autism. She criticised the “red herring and the subsequent noxious cloud that his fraud left over any research examining autism and the gut”.

Willingham’s self-declared passion is “presenting accurate, evidence-based information”. She says:

Standing up for science and public health in the face of not only unyielding but also sometimes threatening opposition can be tiring and demoralising.

Grimes is a post-doctoral researcher at the University of Oxford in the UK, working on modelling oxygen distribution in tumours. He has been awarded the Maddox Prize for reaching out to the public through his writing on a range of challenging and controversial issues, including nuclear power and climate change.

Grimes continues to present the evidence, despite receiving threats, particularly surrounding discussion on abortion in Ireland. Following his article on six myths about cancer, in which he addressed the “dubious and outlandish” information that can be found on the internet, he received physical and digital hate-mail.

Sense about Science next announced an ‘Ask for Evidence’ website, from a Nov. 2, 2014 announcement,

We are excited to announce that Ask for Evidence online is now live! And people are already using it to ask for the evidence behind claims they’ve come across. Check out www.askforevidence.org

It’s our new interactive website that makes asking for evidence and getting help understanding that evidence as easy as possible. You can use it to ask politicians, companies, NGOs and anyone else for evidence behind their claims, while you’re on the train, walking down the street or sitting in front of the TV. And if you need help understanding the evidence you’ve been sent, that’s there too. With the help of partners and friends we’ve built a help centre that has captured what we’ve learnt over the past 12 years answering thousands of requests for help in understanding evidence.

Finally, there’s the latest announcement about an effort to influence the World Health Organization’s (WHO) new policy on reporting the results of clinical trials, from the Nov. 11, 2014 announcement,

Following our pressure, the World Health Organization is drafting a policy on reporting the results of clinical trials.

We have to grab this fantastic opportunity with both hands and make sure that the most influential health body in the world comes out with a statement that strongly supports clinical trials transparency.

But you only have until Saturday 15th November 2014 to add your voice.

The draft WHO policy does not call for the disclosure of the results of past trials, only future ones. The vast majority of medicines we use every day were approved by regulators a decade or more ago and so were tested in clinical trials over the past decades.

So email the WHO to tell them their policy should:

  1. Call for the results of all past clinical trials to be reported, as well as all future clinical trials.
  2. Require results to be reported within 12 months, rather than permitting delays of 18-30 months. The USA’s FDA Amendment Act, the newly adopted EU Clinical Trials Regulation and pharmaceutical companies including GSK and LEO Pharma all agree that 12 months is enough time to report results.
  3. Encourage researchers to put results on publicly accessible registers, in useful, standardised formats.

Email ictrpinfo@who.int today.

Be sure to include your name and contact details as the WHO will not consider anonymous comments.

You can also use the full AllTrials response to write your email if you wish.

Read the full AllTrials response.

I am encouraged to see a move towards more transparency in reporting the results of clinical trials whether or not this bid to include past clinical trials is successful, although that would certainly be excellent news.

* (a letter writing campaign) was added to the head and ‘sense about science’ was changed to ‘Sense about Science’ on Nov. 14, 2014 1015 hundred hours PDT.

Should we love our robots or are robots going to be smarter than we are? TED’s 2014 All Stars Session 5: The Future is Ours (maybe)

Rodney Brooks seems to be a man who loves robots, from his TED biography,

Rodney Brooks builds robots based on biological principles of movement and reasoning. The goal: a robot who can figure things out.

MIT professor Rodney Brooks studies and engineers robot intelligence, looking for the holy grail of robotics: the AGI, or artificial general intelligence. For decades, we’ve been building robots to do highly specific tasks — welding, riveting, delivering interoffice mail — but what we all want, really, is a robot that can figure things out on its own, the way we humans do.

Brooks makes a plea for robots that are easy to use (and to program) and mentions his Baxter robot as an example that should be improved; Brooks issues a challenge to make robots better. (Baxter was used as the base for EDI, introduced earlier in TED’s 2014 Session 8 this morning [March 20, 2014].)

By contrast, Sir Martin Rees, astrophysicist, has some concerns about robots and artificial intelligence, as per my Nov. 26, 2012 posting about his (and others’) proposal to create the Cambridge Project for Existential Risk. From his TED biography,

Martin Rees, one of the world’s most eminent astronomers, is a professor of cosmology and astrophysics at the University of Cambridge and the UK’s Astronomer Royal. He is one of our key thinkers on the future of humanity in the cosmos.

Sir Martin Rees has issued a clarion call for humanity. His 2004 book, ominously titled Our Final Hour, catalogues the threats facing the human race in a 21st century dominated by unprecedented and accelerating scientific change. He calls on scientists and nonscientists alike to take steps that will ensure our survival as a species.

Rees states that the worst threats to planetary survival now come from humans, not, as they did in the past, from nature. While science offers great possibilities, it has an equally dark side. Rees suggests robots going rogue, activists hijacking synthetic biology to winnow out the population, and more. He suggests that there is a 50% chance that we could suffer a devastating setback. Rees then mentions the proposed Cambridge Centre for Existential Risk and the importance of studying the possibility of human extinction and ways to mitigate risk.

Steven Johnson, writer, was introduced next (from his TED biography),

Steven Berlin Johnson examines the intersection of science, technology and personal experience.

A dynamic writer and speaker, Johnson crafts captivating theories that draw on a dizzying array of disciplines, without ever leaving his audience behind. Author Kurt Anderson described Johnson’s book Emergence as “thoughtful and lucid and charming and staggeringly smart.” The same could be said for Johnson himself. His big-brained, multi-disciplinary theories make him one of his generation’s more intriguing thinkers. His books take the reader on a journey — following the twists and turns his own mind makes as he connects seemingly disparate ideas: ants and cities, interface design and Victorian novels.

He will be hosting a new PBS (Public Broadcasting Service) series, ‘How We Got to Now’ (mentioned in Hector Tobar’s Aug. 7, 2013 article about the PBS series in the Los Angeles Times) and this talk sounds like it might be a preview of sorts. Johnson plays a recording made 20 years before Thomas Edison ‘first’ recorded sound. The story he shares is about an inventor who didn’t think to include a playback feature for his recordings. He simply didn’t think about it as he was interested in doing something else (I can’t quite remember what that was now) and, consequently, his invention and work got lost for decades. Despite that, it forms part of the sound recording story. Thankfully, modern sound recording engineers have developed a technique which allows us to hear those ‘lost’ sounds today.

The UK’s Futurefest and an interview with Sue Thomas

Futurefest, with “some of the planet’s most radical thinkers, makers and performers,” is taking place in London next weekend on Sept. 28 – 29, 2013, and I am very pleased to be featuring an interview with one of Futurefest’s speakers, Sue Thomas, who, amongst many other accomplishments, was the founder of the Creative Writing and New Media programme at De Montfort University, UK, where I got my master’s degree.

Here’s Sue,


Sue Thomas was formerly Professor of New Media at De Montfort University. Now she writes and consults on digital well-being. Her new book ‘Technobiophilia: nature and cyberspace’ explains how contact with the natural world can help soothe our connected lives. http://www.suethomas.net @suethomas

  • I understand you are participating in Futurefest’s SciFi Writers’ Parliament; could you explain what that is and what the nature of your participation will be?

The premise of the session is to invite Science Fiction writers to play with the idea that they have been given the power to realise the kinds of new societies and cultures they imagine in their books. Each of us will present a brief proposal for the audience to vote on. The panel will be chaired by Robin Ince, a well-known comedian, broadcaster, and science enthusiast. The presenters are Cory Doctorow, Pat Cadigan, Ken MacLeod, Charles Stross, Roz Kaveney and myself.

  • Do you have expectations for who will be attending ‘Parliament’ and will they be participating as well as watching?

I’m expecting the audience for FutureFest http://www.futurefest.org/ to be people interested in future forecasting across the four themes of the event: Well-becoming, In the imaginarium, We are all gardeners now, and The value of everything. There are plenty of opportunities for them to participate, not just in discussing and voting in panels like ours, but also in The Daily Future, a Twitter game, and Playify, which will run around and across the weekend.

  • How are you preparing for ‘Parliament’?

I will propose A Global Environmental Protection Act for Cyberspace. The full text of the proposal is on my blog here: http://suethomasnet.wordpress.com/2013/09/05/futurefest/ It’s based on the thinking and research around my new book Technobiophilia: nature and cyberspace http://suethomasnet.wordpress.com/technobiophilia/ which coincidentally comes out in the UK two days before FutureFest. In the runup to the event I’ll also be gathering people’s views and refining my thoughts.


  • Is there any other event you’re looking forward to in particular and why would that be?

The whole of FutureFest looks great and I’m excited about being there all weekend to enjoy it. The following week I’m doing a much smaller but equally interesting event at my local Cafe Scientifique, which is celebrating its first birthday with a talk from me about Technobiophilia. I’ve only recently moved to Bournemouth so this will be a great chance to meet the kinds of interesting local people who come to Cafe Scientifique in all parts of the world. http://suethomasnet.wordpress.com/2013/09/12/cafe-scientifique/

 

I’ll also be launching the book in North America with an online lecture in the Metaliteracy MOOC at SUNY Empire State University. The details are yet to be released but it’s booked for 18 November. http://metaliteracy.cdlprojects.com/index.html

  • Is there anything you’d like to add?

I’m also doing another event at FutureFest which might be of interest, especially to people interested in the future of death. It’s called xHumed and this is what it’s about: If we can archive and store our personal data, media, DNA and brain patterns, the question of whether we can bring back the dead is almost redundant. The right question is should we? It is the year 2050AD and great thought leaders from history have been “xHumed”. What could possibly go wrong? Through an interactive performance Five10Twelve will provoke and encourage the audience to consider the implications via soundbites and insights from eminent experts – both living and dead. I’m expecting some lively debate!

Thank you, Sue, for bringing Futurefest to life and congratulations on your new book!

You can find out more about Futurefest and its speakers here at the Futurefest website. I found Futurefest’s ticket webpage (which is associated with the National Theatre) a little more informative about the event as a whole,

Some of the planet’s most radical thinkers, makers and performers are gathering in East London this September to create an immersive experience of what the world will feel like over the next few decades.

From the bright and uplifting to the dark and dystopian, FutureFest will present a weekend of compelling talks, cutting-edge shows, and interactive performances that will inspire and challenge you to change the future.

Enter the wormhole in Shoreditch Town Hall on the weekend of 28 and 29 September 2013 and experience the next phase of being human.

FutureFest is split into four sessions, Saturday Morning, Saturday Afternoon, Sunday Morning and Sunday Afternoon. You can choose to come to one, two, three or all sessions. They all have a different flavour, but each one will immerse you deep in the future.

Please note that FutureFest is a living, breathing festival so sessions are subject to change. We’ll keep you up to date on our FutureFest website.

Saturday Morning will feature The Blind Giant author Nick Harkaway, bionic man Bertolt Meyer and techno-cellist Peter Gregson. There will also be secret agents, villages of the future and a crowd-sourced experiment in futurology with some dead futurists.

Saturday Afternoon has forecaster Tamar Kasriel helping to futurescape your life, and gamemaker Alex Fleetwood showing us what life will be like in the Gameful century. We’ve got top political scientists David Runciman and Diane Coyle exploring the future of democracy. There will also be a mass-deception experiment, more secret agents and a look forward to what the weather will be like in 2100.

Sunday Morning sees Sermons of the Future. Taking the pulpit will be Wikipedia’s Jimmy Wales, social entrepreneur and model Lily Cole, and Astronomer Royal Martin Rees. Meanwhile the comedian Robin Ince will be chairing a Science Fiction Parliament with top SF authors, Roberto Unger will be analysing the future of religion and one of the world’s top chefs, Andoni Aduriz, will be exploring how food will make us feel in the future.

Sunday Afternoon will feature a futuristic take on the Sunday lunch, with food futurologist Morgaine Gaye inviting you for lunch in the Gastrodome with insects and 3D meat print-outs on the menu. Smari McCarthy, founder of Iceland’s Pirate Party and Wikileaks worker, will be exploring life in a digitised world, and Charlie Leadbeater, Diane Coyle and Mark Stevenson will be imagining cities and states of the future.

I noticed that a few Futurefest speakers have been featured here:

Eric Drexler, ‘Mr. Nano’, was last mentioned in a May 6, 2013 posting about a talk he was giving in Seattle, Washington to promote his new book, Radical Abundance.

Martin Rees, Emeritus Professor of Cosmology and Astrophysics, was mentioned in a Nov. 26, 2012 posting about the Cambridge Project for Existential Risk (humans relative to robots).

Bertolt Meyer, a young researcher from Zurich University and a lifelong user of prosthetic technology, in a Jan. 30, 2013 posting about building a bionic man.

Cory Doctorow, a science fiction writer who ran afoul of James Moore, then Minister of Canadian Heritage and now Minister of Industry Canada, who accused him of being a ‘radical extremist’ prior to new copyright legislation for Canadians, was mentioned in a June 25, 2010 posting.

Wish I could be at London’s Futurefest; in lieu of that, I will wish the organizers and participants all the best.

* On a purely cosmetic note, on Dec. 5, 2013, I changed the paragraph format in the responses.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (Co-Founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article for the Australia-based, The Conversation website, about their concerns,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who’s looking for more information about the ‘intelligence explosion’ or ‘singularity’, as it’s also known, there’s a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog) as I expect to have some news about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience, later this week.