
Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor technology (then known as GPT-2) first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

Here’s a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard) in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
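For readers who want a concrete picture of the “one directive” Dyer describes, here is a toy sketch of my own (in Python; it is not from Ornes’s article, and a real LLM is a neural network with billions of parameters, not a word-count table). Still, the autoregressive loop (predict the next token, append it, repeat) has the same shape:

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for an LLM: a bigram model that, given the current word,
# predicts the next word from counts observed in a tiny "training" corpus.
corpus = "the movie is a movie about a man who is a man".split()

counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    choices = list(followers)
    weights = [followers[w] for w in choices]
    return random.choices(choices, weights=weights)[0]

# The autoregressive loop: predict one token, append it, repeat.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

The point of the Quanta article is that nothing in this recipe obviously promises emoji-riddle solving; the surprises only show up when the predict-and-append loop is scaled to enormous sizes.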

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
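That one-sentence description compresses a lot. At bottom, a neural network “learns” by nudging numeric weights to shrink its error on labelled examples. Here is a minimal sketch of my own (Python/NumPy, emphatically not Hinton’s code) showing the mechanism at the smallest possible scale: a single artificial neuron learning logical OR by gradient descent.

```python
import numpy as np

# "Learning by analyzing data" in miniature: a single artificial neuron
# adjusts its weights to reduce its error on labelled examples.
# Task: learn the logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # weights, initialized randomly
b = 0.0                 # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    pred = sigmoid(X @ w + b)           # forward pass: current guesses
    error = pred - y                    # how wrong each guess is
    w -= 0.5 * (X.T @ error) / len(X)   # nudge weights downhill (gradient descent)
    b -= 0.5 * error.mean()

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]
```

The networks behind Hinton’s breakthroughs use millions, and now billions, of such weights, but the adjust-the-weights-to-reduce-error idea is the same.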

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10-minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review, finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023. Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting “Racist and sexist robots have flawed AI” and in a little more detail in an August 30, 2022 posting “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015, according to Wikipedia’s “Open Letter on Artificial Intelligence” entry. Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI: while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content piece by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF). Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts,” similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses a Discord server as its source of service and, with a huge 15M+ membership, it is the biggest Discord server in the world. In the two-things-at-once department, Max Sills is also the owner of Open Advisory Services, a firm set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block and ex-general manager of the Crypto Open Patent Alliance. Prior to that, Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This one was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Stephen Hawking comic updates ‘Stephen Hawking: Riddles of Time & Space’ and adds life story for a tribute issue

Artist: Robert Aragon. Courtesy: TidalWave Productions

It would seem I wasn’t having one of my brighter days today (Feb. 7, 2019) and it took me a while to decode the messaging about this Stephen Hawking comic book. Briefly, they’ve (TidalWave Productions; Note: The company seems to have more than one name) repackaged an old title (Stephen Hawking: Riddles of Time & Space) and included new material in the form of his life story. After some searching, as best as I can tell, the ‘Tribute’ was originally released sometime in 2018 in a digital version. This latest push for publicity was likely occasioned by the release of a print version.

Here’s more from a February 7, 2019 TidalWave Entertainment/Bluewater Productions news release (received via email),

TidalWave Comics, applauded for illustrated biographies featuring the famous and infamous who influence our politics, entertainment, and social justice, is proud to present its newest comic book release this week. Telling the life story of world-renowned physicist, cosmologist, and author Stephen Hawking, “Tribute: Stephen Hawking” is written by Michael Lent, Brian McCarthy and Michael Frizell with art by Zach Bassett. The comic book features a cover by famed artist Robert Aragon.

“Tribute: Stephen Hawking” is out this week in print and digital. With the passing of the English cosmologist, theoretical physicist, and author, the world has lost one of the greatest scientific minds of the 20th and 21st centuries. Hawking united the general theory of relativity with quantum mechanics but may be better known for his rare, early-onset and slow-progressing battle with Lou Gehrig’s disease. Hawking believed in the concept of an infinite multiverse. Perhaps he’s watching us mourn his loss.

Stephen Hawking was one of the most brilliant minds of this century. The comic explores his brilliance while revealing some surprises.

Hawking’s life has been the subject of several movies, including the 2014 hit “The Theory of Everything,” starring Eddie Redmayne, who received an Oscar and a Golden Globe for his performance as the scientist dealing with an early-onset, slow-progressing form of Lou Gehrig’s disease. The comic seeks to add to Hawking’s story.

“I learned a lot from reading the script and doing the research for the issue. The very concept of making an engaging comic book where the protagonist is essentially immobile is a pretty tall order, but I think the key to us keeping it exciting was being able to get inside his mind (one of the greatest of our time) and show some of his most abstract concepts in a visual and dynamic way,” said artist Bassett.

Darren G. Davis, publisher and creative force behind TidalWave, believes as Bassett does that the visual storytelling model is a good way to tell the stories of real people. “I was a reluctant reader when I was a kid. The colorful pages and interesting narrative I found in comic books drew me in and made me want to read.” In a market crowded with superheroes, the publisher’s work is embraced by major media outlets, libraries, and schools.

Michael Frizell, one of TidalWave’s writers and the author of the Bettie Page comic, enjoys writing for TidalWave’s biography lines Political Power, Orbit, Female Force, Tribute, and Fame because of the publisher’s approach to the books. “Darren asks us to focus on the positive and to dig deep to explore the things that make the subject tick – the things that drive them,” Frizell said.

The comic is in print on Amazon and available on your e-reader from iTunes, Kindle, Nook, ComiXology, DriveThru Comics, Google Play, Overdrive, IVerse, Biblioboard, Madefire, Axis360, Blio, Entitle, EPIC!, Trajectory, SpinWhiz, Smash Words, Kobo and wherever eBooks are sold.

TidalWave’s recent partnership with Ingram allows them to produce high-quality books on demand – a boon for the independent publisher. The comic book will feature a heavy-stock cover and bright, clean colors in the interior. Ingram works across the full publishing spectrum, aiding everyone from the largest names in the business to local indie authors.

Comic book and book stores can order these titles in print at INGRAM.

TidalWave’s biography comic book series has been embraced by the media and featured on television news outlets including The Today Show and CNN. The series has also been featured in many publications such as The Los Angeles Times, MTV, Time Magazine, and People Magazine.


For more information about the company, visit www.tidalwavecomics.com
 
About TidalWave Comics
TidalWave delivers a multimedia experience unparalleled in the burgeoning graphic fiction and nonfiction marketplace. Dynamic storytelling coupled with groundbreaking art delivers an experience like no other. Stories are told through multiple platforms and genres, gracing the pages of graphic novels, novelizations, engaging audio dramas, cutting-edge film projects, and more. Diversity defines Storm’s offerings in the burgeoning pop culture marketplace, offering fresh voices and innovative storytellers.

As one of the top independent publishers of comic book and graphic novels, TidalWave unites cutting-edge art and engaging stories produced by the publishing industry’s most exciting artists and writers. Its extensive catalog of comic book titles includes the bestsellers “10th Muse” and “The Legend of Isis,” complemented by a line of young adult books and audiobooks. TidalWave’s publishing partnerships include legendary filmmaker Ray Harryhausen (“Wrath of the Titans,” “Sinbad: Rogue of Mars,” “Jason and the Argonauts,” and more), novelists S.E. Hinton (“The Puppy Sister”) and William F. Nolan (“Logan’s Run”), and celebrated actors Vincent Price (“Vincent Price Presents”), and Adam West of 1966’s “Batman” fame (“The Mis-Adventures of Adam West”). TidalWave also publishes a highly-successful line of biographical comics under the titles “Orbit,” “Fame,” “Beyond,” “Tribute,” “Female Force,” and “Political Power.”

Should you happen to operate a comic and/or book store, I have found the Ingram (Content Group) website. Happy ordering!

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. In a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal, physicists at the University of Alberta announced their hopes to be just as successful as their AI brethren,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.
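For anyone wondering what those marketable quantum behaviours look like in computing terms: the basic unit of quantum computing is the qubit, which, unlike a classical bit, can sit in a weighted blend (superposition) of 0 and 1 until it is measured. Here is a hedged NumPy sketch of the textbook picture (mine, not anything specific to the Quanta programme):

```python
import numpy as np

# A qubit state is a length-2 complex vector a|0> + b|1> with |a|^2 + |b|^2 = 1.
# Measuring it yields 0 with probability |a|^2 and 1 with probability |b|^2.
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate turns a definite |0> into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ zero

probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]: equal chance of measuring 0 or 1

# Simulate 1,000 measurements; the superposition only shows up statistically.
rng = np.random.default_rng(1)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(outcomes))  # roughly [500 500]
```

Quantum algorithms gain their power by steering the amplitudes of many entangled qubits at once, which is the part Davis describes as having moved from science fiction to engineering.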

East vs. West—Again?

Ivan Semeniuk, in his article “Quantum Supremacy,” ignores any quantum research effort not located in either Waterloo, Ontario, or metro Vancouver, British Columbia, to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society. As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research. That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo. Success in recruiting, and the resulting space requirements, led to an expansion of the Perimeter facility. A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5. They are also co-founders of BlackBerry (formerly Research In Motion Limited). Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million. Since that time Doug has donated a total of $30 million to Perimeter Institute. Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million. As suggested by its name, WIN is devoted to research in the area of nanotechnology. It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries. Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries. Local experimentalists are very much playing a leading role in this regard. It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing, based on so-called gate-model architecture, was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
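Since Semeniuk’s article never quite says what “quantum annealing” is, here is the standard textbook formulation (general physics, not a claim about D-Wave’s proprietary design): the machine prepares qubits in the easily found ground state of a “driver” term, then slowly interpolates toward a “problem” term whose lowest-energy configuration encodes the answer,

$$H(s) = -\frac{A(s)}{2}\sum_i \sigma_x^{(i)} + \frac{B(s)}{2}\left(\sum_i h_i\,\sigma_z^{(i)} + \sum_{i<j} J_{ij}\,\sigma_z^{(i)}\,\sigma_z^{(j)}\right),$$

where the anneal parameter $s$ runs from 0 to 1, $A(s)$ shrinks as $B(s)$ grows, and the coefficients $h_i$ and $J_{ij}$ encode the optimization problem being solved. If the sweep is slow enough, and the hardware quiet enough, the system ends in, or near, the minimum-energy assignment.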

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …
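It’s worth noting that the demonstration problems mentioned above (Sudoku puzzles, wedding seating, taxi routing) all share one mathematical shape: they can be recast as quadratic unconstrained binary optimization (QUBO), minimizing an energy over 0/1 variables, which is exactly the kind of input an annealer samples. Here is a minimal, purely illustrative Python sketch with an invented four-guest seating problem; a brute-force loop stands in for the annealer, and nothing here is D-Wave’s actual API:

```python
from itertools import product

# Toy QUBO: seat four guests at two tables (x_i = 0 or 1 gives guest i's table).
# Energy to minimize: sum of Q[i, j] * x_i * x_j (i == j gives the linear terms).
guests = ["Alice", "Bob", "Carol", "Dave"]

# Invented pairwise preferences: +1 = the pair feuds, -1 = the pair are friends.
prefs = {(0, 1): 1, (2, 3): 1, (0, 2): -1, (1, 3): -1}

# "Same table" for pair (i, j) is x_i*x_j + (1-x_i)*(1-x_j), which expands to
# 2*x_i*x_j - x_i - x_j + 1. Fold weight w times that into Q, dropping the
# constant +w (it shifts every energy equally, so it can't change the winner).
Q = {}
for (i, j), w in prefs.items():
    Q[(i, j)] = Q.get((i, j), 0) + 2 * w
    Q[(i, i)] = Q.get((i, i), 0) - w
    Q[(j, j)] = Q.get((j, j), 0) - w

def energy(x):
    """QUBO objective; note x_i * x_i == x_i for binary variables."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Brute force over all 2^n assignments -- the job the annealer does physically.
best = min(product((0, 1), repeat=len(guests)), key=energy)
print("energy:", energy(best))
for name, table in zip(guests, best):
    print(f"{name}: table {table}")
```

The catch, and the whole reason for the hardware, is that brute force scans all 2^n assignments; an annealer is a physical bet that quantum dynamics can reach low-energy assignments much faster as n grows.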

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company, and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet, and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

Prime Minister Trudeau, the quantum physicist

Prime Minister Justin Trudeau’s apparently extemporaneous response to a joking (non)question about quantum computing by a journalist during an April 15, 2016 press conference at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada has created a buzz online, made international news, and caused Canadians to sit taller.

For anyone who missed the moment, here’s a video clip from the Canadian Broadcasting Corporation (CBC),

Aaron Hutchins in an April 15, 2016 article for Maclean’s magazine digs deeper to find out more about Trudeau and quantum physics (Note: A link has been removed),

Raymond Laflamme knows the drill when politicians visit the Perimeter Institute. A photo op here, a few handshakes there and a tour with “really basic, basic, basic facts” about the field of quantum mechanics.

But when the self-described “geek” Justin Trudeau showed up for a funding announcement on Friday [April 15, 2016], the co-founder and director of the Institute for Quantum Computing at the University of Waterloo wasn’t met with simple nods of the Prime Minister pretending to understand. Trudeau immediately started talking about things being waves and particles at the same time, like cats being dead and alive at the same time. It wasn’t just nonsense—Trudeau was referencing the famous thought experiment of the late legendary physicist Erwin Schrödinger.

“I don’t know where he learned all that stuff, but we were all surprised,” Laflamme says. Soon afterwards, as Trudeau met with one student talking about superconductivity, the Prime Minister asked her, “Why don’t we have high-temperature superconducting systems?” something Laflamme describes as the institute’s “Holy Grail” quest.

“I was flabbergasted,” Laflamme says. “I don’t know how he does in other subjects, but in quantum physics, he knows the basic pieces and the important questions.”

Strangely, Laflamme was not nearly as excited (tongue in cheek) when I demonstrated my understanding of quantum physics during our interview (see my May 11, 2015 posting; scroll down about 40% of the way to the Raymond Laflamme subhead).

As Jon Butterworth comments in his April 16, 2016 posting on the Guardian science blog, the response says something about our expectations regarding politicians,

This seems to have enhanced Trudeau’s reputation no end, and quite right too. But it is worth thinking a bit about why.

The explanation he gives is clear, brief, and understandable to a non-specialist. It is the kind of thing any sufficiently engaged politician could pick up from a decent briefing, given expert help. …

Butterworth also goes on to mention journalists’ expectations,

The reporter asked the question in a joking fashion, not unkindly as far as I can tell, but not expecting an answer either. If this had been an announcement about almost any other government investment, wouldn’t the reporter have expected a brief explanation of the basic ideas behind it? …

As for the announcement being made by Trudeau, there is this April 15, 2016 Perimeter Institute press release (Note: Links have been removed),

Prime Minister Justin Trudeau says the work being done at Perimeter and in Canada’s “Quantum Valley” [emphasis mine] is vital to the future of the country and the world.

Prime Minister Justin Trudeau became both teacher and student when he visited Perimeter Institute today to officially announce the federal government’s commitment to support fundamental scientific research at Perimeter.

Joined by Minister of Science Kirsty Duncan and Small Business and Tourism Minister Bardish Chagger, the self-described “geek prime minister” listened intensely as he received brief overviews of Perimeter research in areas spanning from quantum science to condensed matter physics and cosmology.

“You don’t have to be a geek like me to appreciate how important this work is,” he then told a packed audience of scientists, students, and community leaders in Perimeter’s atrium.

The Prime Minister was also welcomed by 200 teenagers attending the Institute’s annual Inspiring Future Women in Science conference, and via video greetings from cosmologist Stephen Hawking [he was Laflamme’s PhD supervisor], who is a Perimeter Distinguished Visiting Research Chair. The Prime Minister said he was “incredibly overwhelmed” by Hawking’s message.

“Canada is a wonderful, huge country, full of people with big hearts and forward-looking minds,” Hawking said in his message. “It’s an ideal place for an institute dedicated to the frontiers of physics. In supporting Perimeter, Canada sets an example for the world.”

The visit reiterated the Government of Canada’s pledge of $50 million over five years announced in last month’s [March 2016] budget [emphasis mine] to support Perimeter research, training, and outreach.

It was the Prime Minister’s second trip to the Region of Waterloo this year. In January [2016], he toured the region’s tech sector and universities, and praised the area’s innovation ecosystem.

This time, the focus was on the first link of the innovation chain: fundamental science that could unlock important discoveries, advance human understanding, and underpin the groundbreaking technologies of tomorrow.

As for the “quantum valley” in Ontario, I think there might be some competition here in British Columbia with D-Wave Systems (first commercially available quantum computing, of a sort; my Dec. 16, 2015 post is the most recent one featuring the company) and the University of British Columbia’s Stewart Blusson Quantum Matter Institute.

Getting back to Trudeau, it’s exciting to have someone who seems so interested in at least some aspects of science that he can talk about it with a degree of understanding. I knew he had an interest in literature but there is also this (from his Wikipedia entry; Note: Links have been removed),

Trudeau has a bachelor of arts degree in literature from McGill University and a bachelor of education degree from the University of British Columbia…. After graduation, he stayed in Vancouver and he found substitute work at several local schools and permanent work as a French and math teacher at the private West Point Grey Academy … . From 2002 to 2004, he studied engineering at the École Polytechnique de Montréal, a part of the Université de Montréal.[67] He also started a master’s degree in environmental geography at McGill University, before suspending his program to seek public office.[68] [emphases mine]

Trudeau is not the only political leader to have a strong interest in science. In our neighbour to the south, there’s President Barack Obama who has done much to promote science since he was elected in 2008. David Bruggeman in an April 15, 2016 post (Obama hosts DNews segments for Science Channel week of April 11-15, 2016) and an April 17, 2016 post (Obama hosts White House Science Fair) describes two of Obama’s most recent efforts.

ETA April 19, 2016: I’ve found confirmation that this Q&A was somewhat staged as I hinted in the opening with “Prime Minister Justin Trudeau’s apparently extemporaneous response … .” Will Oremus’s April 19, 2016 article for Slate.com breaks the whole news cycle down and points out (Note: A link has been removed),

Over the weekend, even as latecomers continued to dine on the story’s rapidly decaying scraps, a somewhat different picture began to emerge. A Canadian blogger pointed out that Trudeau himself had suggested to reporters at the event that they lob him a question about quantum computing so that he could knock it out of the park with the newfound knowledge he had gleaned on his tour.

The Canadian blogger who tracked this down is J. J. McCullough (Jim McCullough) and you can read his April 16, 2016 posting on the affair here. McCullough has a rather harsh view of the media response to Trudeau’s lecture. Oremus is a bit more measured,

… Monday brought the countertake parade—smaller and less pompous, if no less righteous—led by Gawker with the headline, “Justin Trudeau’s Quantum Computing Explanation Was Likely Staged for Publicity.”

But few of us in the media today are immune to the forces that incentivize timeliness and catchiness over subtlety, and even Gawker’s valuable corrective ended up meriting a corrective of its own. Author J.K. Trotter soon updated his post with comments from Trudeau’s press secretary, who maintained (rather convincingly, I think) that nothing in the episode was “staged”—at least, not in the sinister way that the word implies. Rather, Trudeau had joked that he was looking forward to someone asking him about quantum computing; a reporter at the press conference jokingly complied, without really expecting a response (he quickly moved on to his real question before Trudeau could answer); Trudeau responded anyway, because he really did want to show off his knowledge.

Trotter deserves credit, regardless, for following up and getting a fuller picture of what transpired. He did what those who initially jumped on the story did not, which was to contact the principals for context and comment.

But my point here is not to criticize any particular writer or publication. The too-tidy Trudeau narrative was not the deliberate work of any bad actor or fabricator. Rather, it was the inevitable product of today’s inexorable social-media machine, in which shareable content fuels the traffic-referral engines that pay online media’s bills.

I suggest reading both McCullough’s and Oremus’s posts in their entirety should you find debates about the role of media compelling.

Informal roundup of robot movies and television programmes and a glimpse into our robot future

David Bruggeman has written an informal series of posts about robot movies. The latest, a June 27, 2015 posting on his Pasco Phronesis blog, highlights the latest Terminator film and opines that the recent interest could be traced back to the rebooted Battlestar Galactica television series (Note: Links have been removed),

I suppose this could be traced back to the reboot of Battlestar Galactica over a decade ago, but robots and androids have become an increasing presence on film and television, particularly in the last 2 years.

In the movies, the new Terminator film comes out next week, and the previews suggest we will see a new generation of killer robots traveling through time and space.  Chappie is now out on your digital medium of choice (and I’ll post about any science fiction science policy/SciFiSciPol once I see it), so you can compare its robot police to those from either edition of Robocop or the 2013 series Almost Human.  Robots also have a role …

The new television series he mentions, Humans (click on About), debuted on the US tv channel, AMC, on Sunday, June 28, 2015 (yesterday).

HUMANS is set in a parallel present, where the latest must-have gadget for any busy family is a Synth – a highly-developed robotic servant, eerily similar to its live counterpart. In the hope of transforming the way his family lives, father Joe Hawkins (Tom Goodman-Hill) purchases a Synth (Gemma Chan) against the wishes of his wife (Katharine Parkinson), only to discover that sharing life with a machine has far-reaching and chilling consequences.

Here’s a bit more information from its Wikipedia entry,

Humans (styled as HUM∀NS) is a British-American science fiction television series, debuted in June 2015 on Channel 4 and AMC.[2] Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines. The series is produced jointly by AMC, Channel 4 and Kudos.[3] The series will consist of eight episodes.[4]

David also wrote about Ex Machina, a recent robot film with artistic ambitions, in an April 26, 2015 posting on his Pasco Phronesis blog,

I finally saw Ex Machina, which recently opened in the United States.  It’s a minimalist film, with few speaking roles and a plot revolving around an intelligence test.  Of the robot movies out this year, it has received the strongest reviews, and it may take home some trophies during the next awards season.  Shot in Norway, the film is both lovely to watch and tricky to engage.  I finished the film not quite sure what the characters were thinking, and perhaps that’s a lesson from the film.

Unlike Chappie and Automata, the intelligent robot at the center of Ex Machina is not out in the world. …

He started the series with a Feb. 8, 2015 posting which previews the movies in his later postings but also includes a couple of others not mentioned in either the April or June posting, Avengers: Age of Ultron and Spare Parts.

It’s interesting to me that these robots are mostly unrelated to the benign robots of the movie ‘Forbidden Planet’ (a reworking of Shakespeare’s The Tempest in outer space), of ‘Lost in Space’, a 1960s television programme, and of the Jetsons animated tv series of the 1960s. As far as I can tell, not having seen the new movies in question, the only benign robot in the current crop would be ‘Chappie’. It should be mentioned that the ‘Terminator’, in the person of Arnold Schwarzenegger, has, over the course of three or four movies, evolved from a destructive robot bent on evil to a destructive robot working on behalf of good.

I’ll add one more television programme, and I’m not sure if the robot boy is good or evil, but there’s Extant, where Halle Berry’s robot son seems to be living a version of the Pinocchio story (an ersatz child wants to become human); the programme is enjoying its second season on US television as of July 1, 2015.

Regardless of one or two ‘sweet’ robots, there seems to be a trend toward ominous robots and perhaps, in addition to Battlestar Galactica, the concerns being raised by prominent scientists such as Stephen Hawking and those associated with the Centre for Existential Risk at the University of Cambridge have something to do with this trend and may partially explain why Chappie did not do as well at the box office as hoped. Thematically, it was swimming against the current.

As for a glimpse into the future, there’s this Children’s Hospital of Los Angeles June 29, 2015 news release,

Many hospitals lack the resources and patient volume to employ a round-the-clock, neonatal intensive care specialist to treat their youngest and sickest patients. Telemedicine–with real-time audio and video communication between a neonatal intensive care specialist and a patient–can provide access to this level of care.

A team of neonatologists at Children’s Hospital Los Angeles investigated the use of robot-assisted telemedicine in performing bedside rounds and directing daily care for infants with mild-to-moderate disease. They found no significant differences in patient outcomes when telemedicine was used and noted a high level of parent satisfaction. This is the first published report of using telemedicine for patient rounds in a neonatal intensive care unit (NICU). Results will be published online first on June 29 in the Journal of Telemedicine and Telecare.

Glimpse into the future?

The part I find most fascinating is that there was no difference in outcomes; moreover, the parents’ satisfaction rate was high when robots (telemedicine) were used. Finally, of the families who completed the after-care survey (45%), all indicated they would be comfortable with another telemedicine (robot) experience. My comment: should robots prove to be cheaper in the long run and the research results hold as more studies are done, I imagine that hospitals will introduce them as a means of cost cutting.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew research and the response to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, Manish Singh has written about some of his concerns as a writer who could be displaced in a Jan. 31, 2015 (?) article for Beta News (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software tools. Narrative Science, a Chicago-based company, offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.
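Neither AP’s vendor (Automated Insights) nor Narrative Science publishes its pipeline, but the core technique Singh describes is, at its simplest, structured data poured into prose templates. A toy, entirely hypothetical Python sketch of the idea (invented company names and figures):

```python
# Toy illustration of template-driven news writing. This is NOT Automated
# Insights' or Narrative Science's actual system, just the basic pattern:
# structured data in, formulaic prose out.

def earnings_story(company: str, quarter: str, eps: float, forecast: float) -> str:
    verb = "beat" if eps >= forecast else "missed"
    return (
        f"{company} {verb} analysts' expectations in {quarter}, reporting "
        f"earnings of ${eps:.2f} per share against a forecast of ${forecast:.2f}."
    )

# Invented sample data: one row per corporate filing. AP scaled this pattern
# to thousands of filings per quarter, hence 3,000 stories instead of 300.
filings = [
    ("Acme Corp", "Q4 2014", 1.32, 1.25),
    ("Globex Inc", "Q4 2014", 0.87, 0.95),
]

for row in filings:
    print(earnings_story(*row))
```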

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists.  C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic that some suggest is superior to human art critics, in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.

Science and the arts: a science rap promotes civil discussion about science and religion; a science movie and a play; and a chemistry article about authenticating a Lawren Harris painting

Canadian-born rapper of science and many other topics, Baba Brinkman, sent me an update about his current doings (first mentioned in an Aug. 1, 2014 posting featuring his appearances at the 2014 Edinburgh Fringe Festival, his Rap Guide to Religion being debuted at the Fringe, and his Kickstarter campaign to raise money for the creation of an animated rap album of his new Rap Guide to Religion), Note: Links have been removed,

Greetings from Edinburgh! In the past two and half weeks I’ve done fifteen performances of The Rap Guide to Religion for a steadily building audience here at the Fringe, and we recently had a whole pile of awesome reviews published, which I will excerpt below, but first a funny story.

Yesterday [August 14, 2014] BBC [British Broadcasting Corporation] Sunday Morning TV was in to film my performance. They had a scheme to send a right wing conservative Christian to the show and then film us having an argument afterwards. The man they sent certainly has the credentials. Reverend George Hargreaves is a Pentecostal Minister and former leader of the UK Christian Party, as well as a young earth creationist and strong opponent of abortion and homosexuality. He led the protests that got “Jerry Springer the Opera” shut down in London a few years back, and is on record as saying that religion is not an appropriate subject for comedy. Before he converted to Christianity, the man was also a DJ and producer of pop music for the London gay scene, interesting background.

So after an hour of cracking jokes at religion’s expense, declaring myself an unapologetic atheist, and explaining why evolutionary science gives a perfectly satisfying naturalistic account of where religion comes from, I sat down with Reverend George and was gobsmacked when he started the interview with: “I don’t know if we’re going to have anything to debate about… I LOVED your show!” We talked for half an hour with the cameras rolling and at one point George said “I don’t know what we disagree about,” so I asked him: “Do you think one of your ancestors was a fish?” He declared that statement a fishy story and denied it, and then we found much to disagree about.

I honestly thought I had written a hard-hitting, provocative and controversial show, but it turns out the religious are loving it as much as the nonbelievers – and I’m not sure how I feel about that. I asked Reverend George why he wasn’t offended, even though he’s officially against comedy that targets religion, and he told me it’s because I take the religious worldview seriously, instead of lazily dismissing it as delusional. The key word here is “lazily” rather than “delusional” because I don’t pull punches about religion being a series of delusions, but I don’t think those delusions are pointless. I think they have evolved (culturally and genetically) to solve adaptive problems in the past, and for religious people accustomed to atheists being derisive and dismissive that’s a (semi) validating perspective.

To listen to songs from The Rap Guide to Religion, you need to back my Kickstarter campaign so I can raise the money to produce a proper record. To check out what the critics here in Edinburgh have to say about my take on religion, read on. And if you want to help organize a gig somewhere, just let me know. The show is open for bookings.

On Sunday Morning [August 17, 2014 GMT] my segment with Reverend George will air on BBC One, so we’ll see what a million British people think of the debate.

All the best from the religious fringe,

Baba

Here’s a link to the BBC One Sunday Morning Live show, where hopefully you’ll be able to catch the segment featuring Baba and Reverend George Hargreaves either livestreamed or shortly thereafter.

A science movie and a science play

Onto the science movie and the play: David Bruggeman on his Pasco Phronesis blog writes about two upcoming biopics featuring Alan Turing and Stephen Hawking, respectively, in an Aug. 8, 2014 posting. Having covered the Turing movie here (at length) in a July 22, 2014 posting, here’s the new information about the Hawking movie from David’s Aug. 8, 2014 posting,

Alan Turing and Stephen Hawking are noted British scientists, well recognized for their work and for having faced significant challenges in their lives.  While they were in different fields and productive in different parts of the 20th century (Hawking is still with us), their stories will compete in movieplexes (at least in the U.S.) this November.

The Theory of Everything is scheduled for release on November 7 and focuses on the early career and life of Hawking.  He’s portrayed by Eddie Redmayne, and the film is directed by James Marsh.  Marsh has several documentaries to his credit, including the Oscar-winning Man on Wire.  Theory is the third film project on Hawking since 2004, but the first to get much attention outside of the United Kingdom (this might explain why it won’t debut in the U.K. until New Year’s Day).  It premieres at the Toronto International Film Festival next month [Sept. 2014].

David features some trailers for both movies and additional information.

Interestingly, the science play focuses on the friendship between a female UK scientist and her former student, Margaret Thatcher (a UK Prime Minister). From an Aug. 13, 2014 Alice Bell posting on the Guardian science blog network (Note: Links have been removed),

Adam Ganz’s new play – The Chemistry Between Them, to be broadcast on Radio 4 this month – explores one of the most intriguing friendships in the history of science and politics: Margaret Thatcher and Dorothy Hodgkin.

As well as winning the Nobel Prize in Chemistry for her pioneering scientific work on the structures of proteins, Hodgkin was a left-wing peace campaigner who was awarded the Soviet equivalent of the Nobel Peace Prize, the Order of Lenin. Hardly Thatcher’s type, you might think. But Hodgkin was Thatcher’s tutor at university, and the relationships between science, politics and women in high office are anything but straightforward.

I spoke to Ganz about his interest in the subject, and started by asking him to tell us more about the play.

… they stayed friends throughout Dorothy’s life. Margaret Thatcher apparently had a photo of Dorothy Hodgkin in Downing Street, and they maintained a kind of warm relationship. The play happens in two timescales – one is a meeting in 1983 in Chequers where Dorothy came to plead with Margaret to take nuclear disarmament more seriously at a time when Cruise missiles and SS20s were being stationed in Europe. In fact I’ve set it – I’m not sure of the exact date – shortly after the Korean airliner was shot down, when the Russians feared Nato were possibly planning a first strike. And that is intercut with the time when Margaret is studying chemistry and looking at her journey; what she learned at Somerville, but especially what she learned from Dorothy.

Here’s a link to the BBC Radio 4 webpage for The Chemistry Between Them. I gather the broadcast will be Weds., Aug. 20, 2014 at 1415 hours GMT.

Chemistry and authentication of a Lawren Harris painting

The final item for this posting concerns Canadian art, chemistry, and the quest to prove the authenticity of a painting. Roberta Staley, editor of Canadian Chemical News (ACCN), has written a concise technical story about David Robertson’s quest to authenticate a painting he purchased some years ago,

Fourteen years ago, David Robertson of Delta, British Columbia was holidaying in Ontario when he stopped at a small antique shop in the community of Bala, two hours north of Toronto in cottage country. An unsigned 1912 oil painting caught his attention. Thinking it evocative of a Group of Seven painting, Robertson paid the asking price of $280 and took it home to hang above his fireplace.

Roberta has very kindly made it available as a PDF: ChemistryNews_Art.Mystery.Group.7. It will also be available online at the Canadian Chemical News website soon. (It’s not in the July/August 2014 issue.)

For anyone who might recognize the topic, I wrote a sprawling five-part series (over 5000 words) on the story starting with part one. Roberta’s piece is 800 words and offers her account of the tests for both Autumn Harbour and the authentic Harris painting, Hurdy Gurdy. I was able to attend only one of them (Autumn Harbour).

David William Robertson, Autumn Harbour’s owner, has recently (I received a notice on Aug. 13, 2014) updated his website with all of the scientific material and points of authentication that he feels prove his case.

Have a very nice weekend!

Where’s the science? Stephen Hawking’s Brave New World debuts Nov. 15, 2013

Yesterday, Nov. 14, 2013, I happened to catch Dr. Carin Bondar being interviewed on a local (Vancouver, Canada) television (tv) programme about her upcoming appearances as one of the hosts of the Stephen Hawking’s Brave New World series (season two), which debuts tonight (Nov. 15, 2013). While enthusiastic about this latest venture, Dr. Bondar didn’t offer much science information during the interview, where she focused on her adventures as part of a virtual military team and her surprise at some of the work being done in the field of prosthetics. There’s a bit more detail about the programme (not the science) in Bondar’s Nov. 12, 2013 blog entry on the Huffington Post website,

One of the highlights of my career thus far was being involved in a groundbreaking television series Stephen Hawking’s Brave New World premiering on Discovery World. A co-operative project between Handel Productions (Canada) and IWC (England), the series showcases some of the most mind-blowing new technologies that will impact our daily lives in the not-too-distant future.

Each of the six, one-hour episodes is narrated by Professor Stephen Hawking, world-renowned physicist and author of the best-seller A Brief History of Time, and is comprised of the investigations of a team of five scientists who travel the world — Myself and Professor Chris Eliasmith from Canada, Dr. Daniel Kraft from the US, and Professor Jim Al-Khalili and Dr. Aarathi Prasad from the UK.

The premiere episode, called Inspired by Nature, is all about how we need only to look to the natural world for some of the most awe-inspiring inventions. Millions of years of evolution have resulted in some highly complex and innovative strategies for life across the animal kingdom…and this episode shows us how humans are attempting to re-create them for our own purposes.

Stephen Hawking’s Brave New World premieres Friday, November 15 at 8 p.m. ET/10 p.m. PT on Discovery World.

Bondar’s personal blog offers very little more, from a Nov. 1, 2013 posting,

Hi Everyone! I’m thrilled to be one of the presenters on season two of ‘Brave New World with Stephen Hawking’, which will premiere on November 15th. Shooting took place last spring all over the states. It was a crazy, exhausting whirlwind from Atlanta to San Diego, LA, Houston, Pittsburgh and Boston, but it was one of the coolest experiences of my life. I love this promo image of me in a Faraday (bird) cage at the Boston Museum of Science.

The Discovery World website’s programme webpage provides a bit more detail (where’s the science?) about the first three shows in the series,

STEPHEN HAWKING’S BRAVE NEW WORLD: “Inspired by Nature”
Hawking and his team investigate groundbreaking innovations in science inspired by nature. Aarathi Prasad road tests two of the most advanced all-terrain robots in the world designed to go where humans and vehicles can’t; Chris Eliasmith examines an extraordinary new fabric that mimics the adhesive ability of gecko feet and bonds to any surface; Daniel Kraft visits Vancouver-based Nuytco Research where underwater subs are used to simulate zero gravity to train astronauts for deep space exploration; Jim Al-Khalili examines how re-engineering a virus can prevent pandemics; and Carin Bondar discovers how Nikola Tesla’s remarkable dream of wireless power is finally being realized.

STEPHEN HAWKING’S BRAVE NEW WORLD: “Code Red”
Hawking and his team examine new inventions that will change how humans deal with crises in the future. Chris Eliasmith looks into a revolutionary pilotless helicopter (the K-Max) that can fly and perform complex manoeuvres on its own; Daniel Kraft tests out the latest high-tech bomb disposal robot; Jim Al-Khalili checks out a sniper rifle equipped with jet fighter target tracking technology; Carin Bondar examines face recognition binoculars that can identify criminals within 15 seconds; then, Aarathi Prasad examines a lifesaving breakthrough that allows oxygen to be injected directly into the bloodstream.

STEPHEN HAWKING’S BRAVE NEW WORLD: “Virtual World”
Hawking and his team investigate technology transforming the idea of reality. Carin Bondar takes part in a remarkable 3D virtual training program created for the military; Aarathi Prasad tests a new system that maps locations inaccessible by GPS; Daniel Kraft investigates 3D bio-printing where computer designs can be turned into living tissue; Chris Eliasmith tests the latest in gaming technology – a breakthrough in virtual reality that promises the most immersive experience yet; and Jim Al-Khalili tests a computer that can read the human mind.

It would have been nice to find out a little more about the science and a little less about the exciting aspects of these adventures. Perhaps the producers thought it best to confine the science to the broadcast.

The local tv programme where Dr. Bondar was interviewed is called The Rush and while the Nov. 14, 2013 interview has yet (as of Nov. 15, 2013, 13H30 or 1:30 pm PDT) to be posted online, you should be able to find it shortly.

I have mentioned Chris Eliasmith (University of Waterloo, Ontario, Canada) here before, notably in my November 29, 2012 posting about his work simulating neurons in the virtual world.