Tag Archives: Susan Baxter

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor technology (GPT-2) first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard) in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.
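As an aside, the “simple mathematical code” in the Linux terminal anecdote above wasn’t published, so what follows is only an illustrative guess (in Python) at the sort of first-10-primes request the engineer might have made,

def first_primes(n):
    """Return the first n prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no previously found prime divides it
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

The point of the anecdote, of course, is not that this calculation is hard but that a text-prediction engine carried it out while pretending to be a computer.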

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10-minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis, in his May 4, 2023 business analysis for CBC news online, offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, from May 5, 2023, is “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues, also published on May 5, 2023: “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project, and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This one was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

COVID-19: caution and concern not panic

There’s a lot of information being pumped out about COVID-19 and not all of it is as helpful as it might be. In fact, the sheer volume can seem overwhelming despite one’s best efforts to be calm.

Here are a few things I’ve used to help relieve some of the pressure as numbers in Canada keep rising.

Inspiration from the Italians

I was thrilled to find Emily Rumball’s March 18, 2020 article titled, “Italians making the most of quarantine is just what the world needs right now (VIDEOS),” on the Daily Hive website. The couple dancing on the balcony while Ginger Rogers and Fred Astaire are shown dancing on the wall above is my favourite.

As the Italians practice social distancing and exercise caution, they are also demonstrating that “life goes on” even while struggling as one of the countries hit hardest by COVID-19.

Investigating viruses and the 1918/19 pandemic vs. COVID-19

There has been some mention of and comparison to the 1918/19 pandemic (also known as the Spanish flu) in articles by people who don’t seem to be particularly well informed about that earlier pandemic. Susan Baxter offers a concise and scathing explanation for why the 1918/19 situation deteriorated as much as it did in her February 8, 2010 posting. As for this latest pandemic (COVID-19), she explains what a virus actually is and suggests we all calm down in her March 17, 2020 posting. BTW, she has an interdisciplinary PhD for work largely focused on health economics. She is also a lecturer in the health sciences programme at Simon Fraser University (Vancouver, Canada). Full disclosure: She and I have a longstanding friendship.

Marilyn J. Roossinck, a professor of Plant Pathology and Environmental Microbiology at Pennsylvania State University, wrote a February 20, 2020 essay for The Conversation titled, “What are viruses anyway, and why do they make us so sick? 5 questions answered,”

4. SARS was a formidable foe, and then seemed to disappear. Why?

Measures to contain SARS started early, and they were very successful. The key is to stop the chain of transmission by isolating infected individuals. SARS had a short incubation period; people generally showed symptoms in two to seven days. There were no documented cases of anyone being a source of SARS without showing symptoms.

Stopping the chain of transmission is much more difficult when the incubation time is much longer, or when some people don’t get symptoms at all. This may be the case with the virus causing CoVID-19, so stopping it may take more time.

1918/19 pandemic vs. COVID-19

Angela Betsaida B. Laguipo, who has a Bachelor of Nursing degree from the University of Baguio, Philippines, and is currently completing her Master’s degree, has written a March 9, 2020 article for News Medical comparing the two pandemics,

The COVID-19 is fast spreading because traveling is an everyday necessity today, with flights from one country to another accessible to most.

Some places did manage to keep the virus at bay in 1918 with traditional and effective methods, such as closing schools, banning public gatherings, and locking down villages, which has been performed in Wuhan City, in Hubei province, China, where the coronavirus outbreak started. The same method is now being implemented in Northern Italy, where COVID-19 had killed more than 400 people.

The 1918 Spanish flu has a higher mortality rate of an estimated 10 to 20 percent, compared to 2 to 3 percent in COVID-19. The global mortality rate of the Spanish flu is unknown since many cases were not reported back then. About 500 million people or one-third of the world’s population contracted the disease, while the number of deaths was estimated to be up to 50 million.

During that time, public funds are mostly diverted to military efforts, and a public health system was still a budding priority in most countries. In most places, only the middle class or the wealthy could afford to visit a doctor. Hence, the virus has [sic] killed many people in poor urban areas where there are poor nutrition and sanitation. Many people during that time had underlying health conditions, and they can’t afford to receive health services.

I recommend reading Laguipo’s article in its entirety right down to the sources she cites at the end of her article.

Ed Yong’s March 20, 2020 article for The Atlantic, “Why the Coronavirus Has Been So Successful; We’ve known about SARS-CoV-2 for only three months, but scientists can make some educated guesses about where it came from and why it’s behaving in such an extreme way,” provides more information about what is currently known about the coronavirus, SARS-CoV-2,

One of the few mercies during this crisis is that, by their nature, individual coronaviruses are easily destroyed. Each virus particle consists of a small set of genes, enclosed by a sphere of fatty lipid molecules, and because lipid shells are easily torn apart by soap, 20 seconds of thorough hand-washing can take one down. Lipid shells are also vulnerable to the elements; a recent study shows that the new coronavirus, SARS-CoV-2, survives for no more than a day on cardboard, and about two to three days on steel and plastic. These viruses don’t endure in the world. They need bodies.

But why do some people with COVID-19 get incredibly sick, while others escape with mild or nonexistent symptoms? Age is a factor. Elderly people are at risk of more severe infections possibly because their immune system can’t mount an effective initial defense, while children are less affected because their immune system is less likely to progress to a cytokine storm. But other factors—a person’s genes, the vagaries of their immune system, the amount of virus they’re exposed to, the other microbes in their bodies—might play a role too. In general, “it’s a mystery why some people have mild disease, even within the same age group,” Iwasaki [Akiko Iwasaki of the Yale School of Medicine] says.

We still have a lot to learn about this.

Going nuts and finding balance with numbers

Generally speaking, I find numbers help me to put this situation into perspective. It seems I’m not alone; Dr. Daniel Gillis’ (University of Guelph in Ontario, Canada) March 18, 2020 blog post is titled “Statistics In A Time of Crisis.”

Hearkening back in history, the Wikipedia entry for Spanish flu offers estimates ranging from a low of 17M deaths (a 2018 estimate) to a high of 100M deaths (a 2005 estimate). At this writing (Friday, March 20, 2020 at 3 pm PT), the number of coronavirus cases worldwide is 272,820 with 11,313 deaths.
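For transparency, here’s the naive arithmetic on those March 20, 2020 figures, as a quick Python sketch (as the modeling articles further down explain, this simple deaths-divided-by-cases figure is a poor estimate of the true death rate),

cases = 272_820    # worldwide reported cases, March 20, 2020 (figures above)
deaths = 11_313    # worldwide reported deaths, March 20, 2020 (figures above)
print(f"Naive death rate: {deaths / cases:.1%}")  # about 4.1%

That roughly matches the ‘around four per cent’ (of reported cases) figure Steven Lewis cites later in this post.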

Articles like Michael Schulman’s March 16, 2020 article for the New Yorker might not be as helpful as one might hope (Note: Links have been removed),

Last Wednesday night [March 11, 2020], not long after President Trump’s Oval Office address, I called my mother to check in about the, you know, unprecedented global health crisis [emphasis mine] that’s happening. She told me that she and my father were in a cab on the way home from a fun dinner at the Polo Bar, in midtown Manhattan, with another couple who were old friends.

“You went to a restaurant?!” I shrieked. This was several days after she had told me, through sniffles, that she was recovering from a cold but didn’t see any reason that she shouldn’t go to the school where she works. Also, she was still hoping to make a trip to Florida at the end of the month. My dad, a lawyer, was planning to go into the office on Thursday, but thought that he might work from home on Friday, if he could figure out how to link up his personal computer. …

… I’m thirty-eight, and my mother and father are sixty-eight and seventy-four, respectively. Neither is retired, and both are in good shape. But people sixty-five and older—more than half of the baby-boomer population—are more susceptible to COVID-19 and have a higher mortality rate, and my parents’ blithe behavior was as unsettling as the frantic warnings coming from hospitals in Italy.

Clearly, Schulman is concerned about his parents’ health and well being but the tone of near hysteria is a bit off-putting. We’re not in a crisis (exception: the Italians and, possibly, the Spanish and the French)—yet.

Tyler Dawson’s March 20, 2020 article in The Province newspaper (in Vancouver, British Columbia) lists dire consequences of COVID-19 before pivoting,

COVID-19 will leave no Canadian untouched.

Travel plans halted. First dates postponed. School semesters interrupted. Jobs lost. Retirement savings decimated. Some of us will know someone who has gotten sick, or tragically, died from the virus.

By now we know the terminology: social distancing, flatten the curve. Across the country, each province is taking measures to prepare, to plan for care, and the federal government has introduced financial measures amounting to more than three per cent of the country’s GDP to float the economy onward.

The response, says Steven Taylor, a University of British Columbia psychiatry professor and author of The Psychology of Pandemics, is a “balancing act.” [emphasis mine] Keep people alert, but neither panicked nor tuned out.

“You need to generate some degree of anxiety that gets people’s attention,” says Taylor. “If you overstate the message it could backfire.”

Prepare for uncertainty

In the same way experts still cannot come up with a definitive death rate for the 1918/19 pandemic, they are having trouble with this one too, although now they’re trying to model the future rather than establish what happened in the past. David Adam’s March 12, 2020 article for The Scientist provides some insight into the difficulties (Note: Links have been removed),

Like any other models, the projections of how the outbreak will unfold, how many people will become infected, and how many will die, are only as reliable as the scientific information they rest on. And most modelers’ efforts so far have focused on improving these data, rather than making premature predictions.

“Most of the work that modelers have done recently or in the first part of the epidemic hasn’t really been coming up with models and predictions, which is I think how most people think of it,” says John Edmunds, who works in the Centre for the Mathematical Modelling of Infectious Diseases at the London School of Hygiene & Tropical Medicine. “Most of the work has really been around characterizing the epidemiology, trying to estimate key parameters. I don’t really class that as modeling but it tends to be the modelers that do it.”

These variables include key numbers such as the disease incubation period, how quickly the virus spreads through the population, and, perhaps most contentiously, the case-fatality ratio. This sounds simple: it’s the proportion of infected people who die. But working it out is much trickier than it looks. “The non-specialists do this all the time and they always get it wrong,” Edmunds says. “If you just divide the total numbers of deaths by the total numbers of cases, you’re going to get the wrong answer.”

Earlier this month, Tedros Adhanom Ghebreyesus, the head of the World Health Organization, dismayed disease modelers when he said COVID-19 (the disease caused by the SARS-CoV-2 coronavirus) had killed 3.4 percent of reported cases, and that this was more severe than seasonal flu, which has a death rate of around 0.1 percent. Such a simple calculation does not account for the two to three weeks it usually takes someone who catches the virus to die, for example. And it assumes that reported cases are an accurate reflection of how many people are infected, when the true number will be much higher and the true mortality rate much lower.

Edmunds calls this kind of work “outbreak analytics” rather than true modeling, and he says the results of various specialist groups around the world are starting to converge on COVID-19’s true case-fatality ratio, which seems to be about 1 percent. [emphasis mine]
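Edmunds’ warning about naive division is easy to see with a toy calculation. Here’s a minimal sketch (all numbers invented: an epidemic doubling weekly, an assumed 1% true fatality ratio, and a two-week lag between case report and death) that isolates the delay bias Adam describes; undercounted infections, the other problem Tedros’ 3.4% figure ran into, would push the naive figure in the opposite direction.

```python
# Toy illustration (invented numbers, not real epidemiological data) of why
# deaths-to-date divided by cases-to-date misestimates the case-fatality ratio
# while an outbreak is still growing.

cases_by_week = [100, 200, 400, 800, 1600]  # cumulative reported cases, doubling weekly
true_cfr = 0.01                             # assume 1% of cases eventually die
death_lag_weeks = 2                         # deaths trail case reports by ~2 weeks

# Deaths observed so far come only from cases old enough to have resolved.
cumulative_cases = cases_by_week[-1]
resolved_cases = cases_by_week[-1 - death_lag_weeks]
observed_deaths = true_cfr * resolved_cases

naive_cfr = observed_deaths / cumulative_cases       # the calculation Edmunds warns against
lag_adjusted_cfr = observed_deaths / resolved_cases

print(f"naive CFR: {naive_cfr:.2%}")                 # 0.25% -- misleadingly low here
print(f"lag-adjusted CFR: {lag_adjusted_cfr:.2%}")   # recovers the assumed 1%
```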

The 1% estimate in Adam’s article accords with the estimates from Jeremy Samuel Faust (an emergency medicine physician at Brigham and Women’s Hospital in Boston, faculty in its division of health policy and public health, and an instructor at Harvard Medical School) in a March 4, 2020 article (‘COVID-19 Isn’t As Deadly As We Think’, featured in my March 9, 2020 posting).

In a March 17, 2020 article for the Canadian Broadcasting Corporation’s (CBC) news online website, Steven Lewis (a health policy consultant formerly based in Saskatchewan, Canada; now living in Australia) covers some of the same ground and offers a somewhat higher projected death rate while refusing to commit,

Imagine you’re a chief public health officer and you’re asked the question on everyone’s mind: how deadly is the COVID-19 outbreak?

With the number of cases worldwide approaching 200,000, and 1,000 or more cases in 15 countries, you’d think there would be an answer. But the more data we see, the tougher it is to come up with a hard number.

Overall, the death rate is around four per cent — of reported cases. That’s also the death rate in China, which to date accounts for just under half the total number of global cases.

China is the only country where a) the outcome of almost all cases is known (85 per cent have recovered), and b) the spread has been stopped (numbers plateaued about a month ago). 

A four per cent death rate is pretty high — about 40 times more deadly than seasonal flu — but no experts believe that is the death rate. The latest estimate is that it is around 1.5 per cent. [emphasis mine] Other models suggest that it may be somewhat lower. 

The true rate can be known only if every case is known and confirmed by testing — including the asymptomatic or relatively benign cases, which comprise 80 per cent or more of the total — and all cases have run their course (people have either recovered or died). Aside from those in China, almost all cases identified are still active. 

Unless a jurisdiction systematically tests a large random sample of its population, we may never know the true rate of infection or the real death rate. 

Yet for all this unavoidable uncertainty, it is still odd that the rates vary so widely by country.

His description of the situation in Europe is quite interesting and worth reading if you have the time.

In the last article I’m including here, a March 20, 2020 piece about the preparations being made by the Canadian Armed Forces (CAF), Murray Brewster offers some encouraging words,

The Canadian military is preparing to respond to multiple waves of the COVID-19 pandemic which could stretch out over a year or more, the country’s top military commander said in his latest planning directive.

Gen. Jonathan Vance, chief of the defence staff, warned in a memo issued Thursday that requests for assistance can be expected “from all echelons of government and the private sector and they will likely come to the Department [of National Defence] through multiple points of entry.”

The directive notes the federal government has not yet directed the military to move into response mode, but if or when it does, a single government panel — likely a deputy-minister level inter-departmental task force — will “triage requests and co-ordinate federal responses.”

It also warns that members of the military will contract the novel coronavirus, “potentially threatening the integrity” of some units.

The notion that the virus caseload could recede and then return is a feature of federal government planning.

The Public Health Agency of Canada has put out a notice looking for people to staff its Centre for Emergency Preparedness and Response during the crisis and the secondment is expected to last between 12 and 24 months.

The Canadian military, unlike those in some other nations, has high-readiness units available. Vance said they are already set to reach out into communities to help when called.

Planners are also looking in more detail at possible missions — such as aiding remote communities in the Arctic where an outbreak could cripple critical infrastructure.

Defence analyst Dave Perry said this kind of military planning exercise is enormously challenging and complicated in normal times, let alone when most of the federal civil service has been sent home.

“The idea that they’re planning to be at this for [a] year is absolutely bang on,” said Perry, a vice-president at the Canadian Global Affairs Institute.

In other words, concern and caution are called for, not panic. I realize this post has a strongly Canada-centric focus but I’m hopeful others elsewhere will find this helpful.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, notes flaws in the automated system, and explains how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
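For readers who want a concrete picture of the loop Price describes, here is a deliberately toy sketch: a monitor reading feeds a decision rule, which commands the pump. Everything in it is invented (the simulated monitor, the target value, the proportional rule and its cap); a real artificial pancreas runs clinically validated control logic, and an AI-based one would replace the simple rule with a model trained on the patient’s own monitor and pump data.

```python
import random

TARGET_MG_DL = 110  # hypothetical target blood glucose level

def read_glucose_mg_dl():
    # Stand-in for a continuous glucose monitor: returns a noisy simulated reading.
    return random.gauss(140, 25)

def decide_dose(glucose):
    # Toy proportional rule: dose scales with how far glucose sits above target,
    # capped as a crude safety limit. An ML controller would replace this rule.
    excess = max(0.0, glucose - TARGET_MG_DL)
    return round(min(excess / 50.0, 2.0), 2)

def deliver_insulin(units):
    # Stand-in for commanding the insulin pump.
    print(f"  pump: delivering {units} units")

for cycle in range(5):  # five simulated monitoring cycles
    glucose = read_glucose_mg_dl()
    dose = decide_dose(glucose)
    print(f"cycle {cycle}: glucose {glucose:5.1f} mg/dL -> dose {dose} units")
    if dose > 0:
        deliver_insulin(dose)
```

The opacity problem Price raises lives inside `decide_dose`: once a trained, proprietary model replaces the explicit rule, there is no longer a two-line function anyone can audit.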

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years’ experience, a PhD in health economics, and the author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion and the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does; geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond to and change with life in its entirety, whether it’s diet and nutrition or toxic or traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes with developers/entrepreneurs wanting to maintain their secrets for competitive advantages and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
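Adamson and Smith’s concern implies a concrete check any team building such a system could run: score the classifier separately for each skin-tone group rather than reporting one overall number. A minimal sketch with invented labels (not data from the German study or the ISIC repository) follows.

```python
# Per-subgroup evaluation sketch: the labels below are invented solely to show
# the mechanics of computing sensitivity by skin-tone group.

predictions = [  # (skin_tone_group, true_label, predicted_label)
    ("lighter", "melanoma", "melanoma"), ("lighter", "melanoma", "melanoma"),
    ("lighter", "benign",   "benign"),   ("lighter", "benign",   "benign"),
    ("darker",  "melanoma", "benign"),   ("darker",  "melanoma", "melanoma"),
    ("darker",  "benign",   "benign"),
]

groups = {}
for group, truth, predicted in predictions:
    stats = groups.setdefault(group, {"caught": 0, "melanomas": 0})
    if truth == "melanoma":
        stats["melanomas"] += 1
        if predicted == "melanoma":
            stats["caught"] += 1

for group, s in groups.items():
    sensitivity = s["caught"] / s["melanomas"]
    print(f"{group} skin: melanoma sensitivity {sensitivity:.0%} (n={s['melanomas']})")

# A single overall sensitivity (here 3/4 = 75%) hides that this invented
# classifier catches 100% of melanomas on lighter skin but only 50% on darker skin.
```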

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease, therefore providing what is at best an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

L’Oréal introduces wearable cosmetic electronic patch (my UV patch)

You don’t (well, I don’t) expect a cosmetics company such as L’Oréal to introduce products at the Consumer Electronics Show (CES), held annually in Las Vegas (Nevada, US); this year’s show ran Jan. 6 – 9, 2016.

A Jan. 6, 2016 article by Zoe Kleinman for BBC (British Broadcasting Corporation) news online explains,

Beauty giant L’Oreal has unveiled a smart skin patch that can track the skin’s exposure to harmful UV rays at the technology show CES in Las Vegas.

The product will be launched in 16 countries including the UK this summer, and will be available for free [emphasis mine].

It contains a photosensitive blue dye, which changes colour when exposed to ultraviolet light.

But the wearer must take a photo of it and then upload it to an app to see the results.

It’s a free app, eh? A cynic might suggest that the company will be getting free data in return.

A Jan. 6, 2016 L’Oréal press release, also available on PR Newswire, provides more details (Note: Links have been removed),

Today [Jan. 6, 2016] at the Consumer Electronics Show, L’Oréal unveiled My UV Patch, the first-ever stretchable skin sensor designed to monitor UV exposure and help consumers educate themselves about sun protection. The new technology arrives at a time when sun exposure has become a major health issue, with 90% of nonmelanoma skin cancers being associated with exposure to ultraviolet (UV) radiation from sun* in addition to contributing to skin pigmentation and photoaging.

To address these growing concerns, L’Oréal Group’s leading dermatological skincare brand, La Roche-Posay, is introducing a first-of-its kind stretchable electronic, My UV Patch. The patch is a transparent adhesive that, unlike the rigid wearables currently on the market, stretches and adheres directly to any area of skin that consumers want to monitor. Measuring approximately one square inch in area and 50 micrometers thick – half the thickness of an average strand of hair – the patch contains photosensitive dyes that factor in the baseline skin tone and change colors when exposed to UV rays to indicate varying levels of sun exposure.

Consumers will be able to take a photo of the patch and upload it to the La Roche-Posay My UV Patch mobile app, which analyzes the varying photosensitive dye squares to determine the amount of UV exposure the wearer has received. The My UV Patch mobile app will be available on both iOS and Android, incorporating Near Field Communications (NFC)-enabled technology into the patch-scanning process for Android. My UV Patch is expected to be made available to consumers later this year.

“Connected technologies have the potential to completely disrupt how we monitor the skin’s exposure to various external factors, including UV,” says Guive Balooch, Global Vice President of L’Oréal’s Technology Incubator. “Previous technologies could only tell users the amount of potential sun exposure they were receiving per hour while wearing a rigid, non-stretchable device. The key was to design a sensor that was thin, comfortable and virtually weightless so people would actually want to wear it. We’re excited to be the first beauty company entering the stretchable electronics field and to explore the many potential applications for this technology within our industry and beyond.”

My UV Patch was developed by L’Oréal’s U.S.-based Technology Incubator, a business division dedicated entirely to technological innovation, alongside MC10, Inc., a leading stretchable electronics company using cutting-edge innovation to create the most intelligent, stretchable systems for biometric healthcare analytics. L’Oréal also worked with PCH who design engineered the sensor. The stretchable, peel-and-stick wearable unites L’Oréal Group’s extensive scientific research on the skin and expertise with UV protection with MC10’s strong technological capabilities in physiological sensing and pattern recognition algorithms to measure skin changes over time, and PCH’s 20-year experience in product development, manufacturing and supply chain.

“With My UV Patch, L’Oréal is taking the lead in developing the next generation of smart skincare technology powered by MC10’s unique, stretchable electronics platform, that truly addresses a consumer need,” said Scott Pomerantz, CEO of MC10. “This partnership with L’Oréal marks an exciting new milestone for MC10 and underscores the intersection of tech and beauty and the boundless potential of connected devices within the beauty market.”

*Source: Skin Cancer Foundation 2015

“Together with La Roche-Posay dermatologists like myself, we share a mission to help increase sun safe behavior,” added Alysa Herman, MD.  “La Roche-Posay recently commissioned a global study in 23 countries, which surveyed 19,000 women and men and found a huge gap in consumer behavior: even though 92% were aware that unprotected sun exposure can cause health problems, only 26% of Americans protect themselves all year round, whatever the season. With the new My UV Patch, for the first time, we are leveraging technology to help incite a true behavioral change through real-time knowledge. ”

About L’Oréal

L’Oréal has devoted itself to beauty for over 105 years. With its unique international portfolio of 32 diverse and complementary brands, the Group generated sales amounting to 22.5 billion euros in 2014 and employs 78,600 people worldwide. As the world’s leading beauty company, L’Oréal is present across all distribution networks: mass market, department stores, pharmacies and drugstores, hair salons, travel retail and branded retail.

Research and innovation, and a dedicated research team of 3,700 people, are at the core of L’Oréal’s strategy, working to meet beauty aspirations all over the world and attract one billion new consumers in the years to come. L’Oréal’s new sustainability commitment for 2020 “Sharing Beauty With All” sets out ambitious sustainable development objectives across the Group’s value chain. www.loreal.com

About LA ROCHE-POSAY and ANTHELIOS

Recommended by more than 25,000 dermatologists worldwide, La Roche-Posay offers a unique range of daily skincare developed with dermatologists to meet their standards in efficacy, tolerance and elegant textures for increased compliance. The products, which are developed using a strict formulation charter, include a minimal number of ingredients to reduce side effects and reactivity and are formulated with effective ingredients at optimal concentrations for increased efficacy. Additionally, La Roche-Posay products undergo stringent clinical testing to guarantee efficacy and safety, even on sensitive skin.

About MC10

MC10’s mission is to improve human health through digital healthcare solutions. The company combines its proprietary ultra-thin, stretchable body-worn sensors with advanced analytics to unlock health insights from physiological data. MC10 partners with healthcare organizations and researchers to advance medical knowledge and create monitoring and diagnostic solutions for patients and physicians. Backed by a strong syndicate of financial and strategic investors, MC10 has received widespread recognition for its innovative technology, including being named a 2014 CES Innovation in Design Honoree. MC10 is headquartered in Lexington, MA.  Visit MC10 online at www.mc10inc.com.

About PCH

PCH designs custom product solutions for startups and Fortune 500 companies. Whether design engineering and development, manufacturing and fulfilment, distribution or retail, PCH takes on the toughest challenges. If it can be imagined, it can be made. At PCH, we make. www.pchintl.com. Twitter: @PCH_Intl
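Neither the press release nor the coverage explains how the app turns a photo of dye squares into an exposure number, so what follows is only a guess at the general approach: compare each photographed square’s colour against a calibration scale and interpolate a dose. All values are invented.

```python
# Speculative sketch of dye-square analysis (L'Oreal hasn't published its
# algorithm): interpolate a UV dose from the blue-channel intensity of a
# photosensitive square, using an invented calibration table.

# (blue intensity 0-255, UV dose in arbitrary units); the dye darkens with exposure.
CALIBRATION = [(220, 0.0), (180, 1.0), (140, 2.0), (100, 3.0), (60, 4.0)]

def estimate_uv_dose(blue_intensity):
    """Linearly interpolate a dose from one square's measured blue intensity."""
    for (b_hi, dose_lo), (b_lo, dose_hi) in zip(CALIBRATION, CALIBRATION[1:]):
        if b_lo <= blue_intensity <= b_hi:
            fraction = (b_hi - blue_intensity) / (b_hi - b_lo)
            return dose_lo + fraction * (dose_hi - dose_lo)
    # Outside the calibrated range: clamp to the nearest endpoint.
    return CALIBRATION[-1][1] if blue_intensity < CALIBRATION[-1][0] else 0.0

# Blue intensities sampled from three dye squares in the uploaded photo:
squares = [150, 145, 160]
doses = [estimate_uv_dose(b) for b in squares]
print(f"estimated UV dose: {sum(doses) / len(doses):.2f} (arbitrary units)")
```

The press release’s mention of factoring in “baseline skin tone” suggests the real app also calibrates against the unexposed squares in the same photo, which would make the estimate robust to lighting; that detail is inferred, not confirmed.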

Ryan O’Hare’s Jan. 6, 2016 article for the UK’s DailyMailOnline provides some additional technology details and offers images of the proposed patch, which are not reproduced here (Note: A link has been removed),

The patch and free app, which will be launched in the summer, have been welcomed by experts.

Dr Christopher Rowland Payne, consultant dermatologist to The London Clinic, said: ‘This is an exciting device that will motivate people in a positive way to take control of their sun exposure and will encourage them to know when it is time to leave the sun or to reapply their sunscreen.

‘It is an ingenious way of giving people the information they need. I hope it will also get people talking to each other about safe sun exposure.’

The technology used in the UV patches is based on ‘biostamps’ designed by tech firm MC10.

They were originally designed to help medical teams measure the health of their patients either remotely, or without the need for large expensive machinery.

Motorola were exploring the patches as an alternative to using traditional passwords for security and access to devices.

Getting back to this ‘free app’ business, the data gathered could be used to help the company create future skincare products. If they are planning to harvest your data, there’s nothing inherently wrong with the practice, but the company isn’t being as straightforward as it could be. In any event, you may want to take a good look at the user agreement and decide for yourself.

Finally, I think it’s time to acknowledge medical writer, Dr. Susan Baxter, (not for the first time and not the last either) as I likely wouldn’t have thought past my general cynicism to data harvesting as a reason, additional to any humanitarian motivations L’Oréal might have, for offering a free mobile app. She doesn’t post on her blog that frequently but it’s always worth taking a look (http://www.susanbaxter.ca/blog-page/) and I recommend this July 30, 2014 post titled, ‘Civil Scientific Discourse RIP’, which focuses on vaccination and anti-vaccination positions. Do not expect a comfortable read.

More about MUSE, a Canadian company and its brain sensing headband; women and startups; Canadianness

I first wrote about Ariel Garten and her Toronto-based (Canada) company, InteraXon, in a Dec. 5, 2012 posting where I featured a product, MUSE (Muse), then described as a brainwave controller. A March 5, 2015 article by Lydia Dishman for Fast Company provides an update on the product now described as a brainwave-sensing headband and on the company (Note: Links have been removed),

The technology that had captured the imagination of millions was then incorporated to develop a headband called Muse. It sells at retail stores like BestBuy for about $300 and works in conjunction with an app called Calm as a tool to increase focus and reduce stress.

If you always wanted to learn to meditate without those pesky distracting thoughts commandeering your mind, Muse can help by taking you through a brief exercise that translates brainwaves into the sound of wind. Losing focus or getting antsy brings on the gales. Achieving calm rewards you with a flock of birds across your screen.

The company has grown to 50 employees and has raised close to $10 million from investors including Ashton Kutcher. Garten [Ariel Garten, founder and chief executive officer] says they’re about to close on a Series B round, “which will be significant.”

She says that listening plays an important role at InteraXon. Reflecting back on what you think you heard is an exercise she encourages, especially in meetings. When the development team is building a tool, for example, they use their Muses to meditate and focus, which then allows for listening more attentively and nonjudgmentally.
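Dishman’s description of the feedback (losing focus brings wind, calm brings birds) is, at its core, a neurofeedback loop: derive a calm score from the EEG signal and map it to sound. InteraXon hasn’t published Muse’s actual signal processing, so the band-power ratio and mapping below are invented purely to show the shape of such a loop.

```python
# Invented neurofeedback sketch; not InteraXon's algorithm. A common (assumed)
# heuristic reads a higher ratio of alpha to beta band power as a calmer state.

def calm_score(alpha_power, beta_power):
    # Returns a 0-1 score: higher alpha relative to beta -> calmer.
    return alpha_power / (alpha_power + beta_power)

def wind_volume(score):
    # Less calm -> louder wind; a fully calm wearer hears silence (and, in
    # the app, earns the flock of birds).
    return round(1.0 - score, 2)

for alpha, beta in [(2.0, 6.0), (4.0, 4.0), (7.0, 1.0)]:  # simulated band powers
    score = calm_score(alpha, beta)
    print(f"calm score {score:.2f} -> wind volume {wind_volume(score)}")
```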

Women and startups

Dishman references gender and high tech financing in her article about Garten,

Garten doesn’t dwell on her status as a woman in a mostly male-dominated sector. That goes for securing funding for the startup too, despite the notorious bias venture-capital investors have against women startup founders.

“I am sure I lost deals because I am a woman, but also because the idea didn’t resonate,” she says, adding, “I’m sure I gained some because I am a woman, so it is unfair to put a blanket statement on it.”

Yet Garten is the only female member of her C-suite, something she says “is just the way it happened.” Casting the net recently to fill the role of chief operating officer [COO], Garten says there weren’t any women in the running, in part because the position required hardware experience as well as knowledge of working with the Chinese.

She did just hire a woman to be senior vice president of sales and marketing, and says, “When we are hiring younger staff, we are gender agnostic.”

I can understand wanting to introduce nuance into the ‘gender bias and tech startup’ discussion by noting that some rejections could have been due to issues with the idea or its implementation. But describing her status as the only female member of her C-suite as “just the way it happened” suggests she is extraordinarily naïve or willfully blind. Given her followup statement about her hiring practices, I’m inclined to go with willfully blind. It’s hard to believe she couldn’t find any woman with both hardware experience and experience working with the Chinese. It seems more likely she needed a male COO to counterbalance a company with a female CEO. As for being gender agnostic where younger staff are concerned, that’s nice but it’s not reassuring, as women have always been able to get the more junior positions. It’s the senior positions, such as COO, which remain out of reach and, troublingly, Garten seems to have blown off the question with a weak explanation and a glib assurance of equality at the lower levels of the company.

For more about gender, high tech companies, and hiring/promoting practices, you can read a March 5, 2015 article titled, Ellen Pao Trial Reveals the Subtle Sexism of Silicon Valley, by Amanda Marcotte for Slate.

Getting back to MUSE, you can find out more here. You can find out more about InteraXon here. Unusually, there doesn’t seem to be any information about the management team on the website.

Canadianness

I thought it was interesting that InteraXon’s status as a Canada-based company was mentioned nowhere in Dishman’s article. This is in stark contrast to Nancy Owano’s Dec. 5, 2012 article for phys.org,

A Canadian company is talking about having a window, aka computer screen, into your mind. … InteraXon, a Canadian company, is focused on making a business out of mind-control technology via a headband device, and they are planning to launch this as a $199 brainwave computer controller called Muse. … [emphases mine]

This is not the only recent instance I’ve noticed. My Sept. 1, 2014 posting mentions what was then an upcoming Margaret Atwood event at Arizona State University,

… (from the center’s home page [Note: The center is ASU’s Center for Science and the Imagination]),

Internationally renowned novelist and environmental activist Margaret Atwood will visit Arizona State University this November [2014] to discuss the relationship between art and science, and the importance of creative writing and imagination for addressing social and environmental challenges.

Atwood’s visit will mark the launch of the Imagination and Climate Futures Initiative … Atwood, author of the MaddAddam trilogy of novels that have become central to the emerging literary genre of climate fiction, or “CliFi,” will offer the inaugural lecture for the initiative on Nov. 5.

“We are proud to welcome Margaret Atwood, one of the world’s most celebrated living writers, to ASU and engage her in these discussions around climate, science and creative writing,” …  “A poet, novelist, literary critic and essayist, Ms. Atwood epitomizes the creative and professional excellence our students aspire to achieve.”

There’s not a single mention that she is Canadian there, nor in a recent posting by Martin Robbins about a word purge from the Oxford Junior Dictionary, published on the Guardian science blog network (March 3, 2015 posting). In fact, Atwood was initially described by Robbins as one of Britain’s literary giants. I assume there were howls of anguish once Canadians woke up to read the article since the phrase was later amended to “a number of the Anglosphere’s literary giants.”

The omission of InteraXon’s Canadianness in Dishman’s article for an American online magazine, of Atwood’s Canadianness on the Arizona State University website, and Martin Robbins’ initial appropriation and later change to the vague-sounding “Anglosphere” in his post for the British newspaper, The Guardian, means the bulk of their readers will likely assume InteraXon is American and that Margaret Atwood, depending on where you read about her, is either an American or a Brit.

It’s flattering that others want to grab a little bit of Canada for themselves.

Coda: The Oxford Junior Dictionary and its excision of ‘nature’ words

Robbins’ March 3, 2015 posting focused on a heated literary discussion about the excision of these words from the Oxford Junior Dictionary (Note: A link has been removed),

“The deletions,” according to Robert Macfarlane in another article on Friday, “included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail.”

I’m surprised the ‘junior’ dictionary didn’t have “attachment,” “celebrity,” and “committee” prior to the 2007 purge. By the way, it seems no one noticed the purge till recently. Robbins has an interesting take on the issue, one with which I do not entirely agree. I understand needing to purge words but what happens when a child reading a classic such as ‘The Wind in the Willows’ attempts to look up the word ‘willows’? (Thanks to Susan Baxter who in a private communication pointed out the problems inherent in reading new and/or classic books and not being able to find basic vocabulary.)

Is it Nature or is it Henry Gee? Science’s woman wars continue (or start up again)

I was thinking we’d get a few more months before another ‘how women are treated in science circles’ or gender issues (as it is sometimes known) story erupted. Our last cycle was featured in my Oct. 18, 2013 posting and mentioned again in my Dec. 31, 2013 posting titled: 2013: women, science, gender, and sex. (Note: I will be referring to these postings and the Oct. scandals again in this posting but first, I have to lay the groundwork.)

It seems Henry Gee, a senior editor at Nature magazine, disagreed with my preference for waiting a few more months and decided to start a new cycle on Jan. 17, 2014 when he outed (revealed the personal name of) the pseudonymous blogger and online presence, Dr. Isis, on his Twitter feed. Here’s the nature (pun noted) of the offence (from Michael Eisen’s Jan. 20, 2014 posting on his ‘it is NOT junk’ blog),

In addition to Dr. Isis’ personal name, Gee describes her as an “inconsequential sports physio” which seems to have disturbed some folks at least as much as the outing. Dr. Isis describes herself this way (from the Isis the Scientist blog About page),

Dr. Isis is an exercise physiologist at a major research university working on some terribly impressive stuff. …

In the Jan. 20, 2014 posting on her blog, Dr. Isis responds to Gee’s action on Twitter (partial excerpt from the posting; Note: Links have been removed),

So, while I am “ok”, were his actions “ok?” Of course not, and they give me pause. I have undoubtedly been vocal over the last four years of the fact that I believe Nature, the flagship of our profession, does not have a strong track record of treating women fairly. I believe that Henry Gee, a representative of the journal, is responsible for some of that culture. That’s not “vitriolic” and it’s not “bullying”. That is me saying, as a woman, that there is something wrong with how this journal and its editors engage 50% of the population (or 20% of scientists) and I believe in my right to say “this is not ‘ok’.” Henry Gee responded by skywriting my real name because he believed that would hurt me personally – my career, my safety, my family. Whatever. Regardless of the actual outcome, the direct personal nature of the attack is highlighted by its support from some that I “had it coming.” [emphasis mine]

Henry Gee’s actions were meant to intimidate me into silence. He took this approach likely with the thought that it was the most powerful way he could hurt me. Nothing more. Although I am ok, there are some recent victims of outing behavior that are not. That’s frightening. To think that the editor of a journal would respond to criticism of his professional conduct regarding the fair treatment of women by attempting to personally injure and damage…

I recommend reading the post in its entirety as she also addresses the adjective ‘inconsequential’ and expands further on the issues she has with Nature (magazine). As for the emphasis I’ve added to the phrase “… had it coming …”, it reminded me of this passage in my Dec. 31, 2013 posting,

I think we (men and women) are obliged to take a good look at sexism around us and within us and if you still have any doubts about the prevalence of sexism and gender bias against women, take a look at Sydney Brownstone’s Oct. 22, 2013 article for Fast Company,

These ads for U.N. Women show what happens if you type things like “women need to” into Google. The autocomplete function will suggest ways to fill in the blank based on common search terms such as “know their place” and “shut up.”

A quick, unscientific study of men-based searches comes up with very different Autocomplete suggestions. Type in “men need to,” and you’ll get “feel needed,” “grow up,” or “ejaculate.” Type in “men shouldn’t,” and you might get, “wear flip flops.”

Those searches were made in March 2013.

Gee managed to fuse two prevailing attitudes toward women in a single tweet: rage when women aren’t ‘nice’ or ‘don’t know their place’ (apparently, Dr. Isis can be quite stinging in her criticisms and so he outs her) and dismissiveness (she’s an “inconsequential sports physio”), while showcasing Nature’s (his employer) and by extension his own importance in the world of science (“Nature quakes in its boots”).

Michael Eisen in his Jan. 20, 2014 posting explains why he thinks this situation is important and unpacks some of the reasons why a young scientist might wish to operate with a pseudonym (Note: A link has been removed),

Gee and Dr. Isis have apparently had issues in the past. I don’t know the full history, but I was witness to some of it after Gee published a misogynistic short story in Nature several years back. Gee behaved like an asshole back then, and apparently he has not stopped.

Think about what happened here. A senior figure at arguably the most important journal in science took it upon himself to reveal the name of a young, female, Latina scientist with whom he has fought and whom he clearly does not like. …

Having myself come under fairly withering criticism from Dr. Isis, I feel somewhat qualified to speak to this. She has a sharp tongue. She speaks with righteous indignation. I don’t always think she’s being fair. And, to be honest, her words hurt. But you know what? She was also right. I have learned a lot from my interactions with Dr. Isis – albeit sometimes painfully. I reflected on what she had to say – and why she was saying it. I am a better person for it. I have to admit that her confrontational style is effective.

If our conflicts had existed in the “real world” where I’m a reasonably well known, male tenured UC [University of California] Berkeley professor and HHMI  [Howard Hughes Medical Institute] Investigator and she’s a young, female, Latina woman at the beginning of her research career, the deck is stacked against her. Whatever the forum, odds are I’m going to come out ahead, not because I’m right, but because that’s just the way this world works. And I think we can all agree that this is a very bad thing. This kind of power imbalance is toxic and distorting. It infuses every interaction. The worst part of it is obvious – it serves to keep people who start down, down. But it also gives people on the other side the false sense that they are right. It prevents them from learning and growing.

But when my interlocutor is anonymous, the balance of power shifts. Not completely. But it does shift. And it was enough, I think, to fundamentally change the way the conversations ended. And that was a good thing. I know I’m not going to convince many people that they should embrace this feeling of discomfort – this loss of power. But I hope, at least, people can appreciate why some amongst us feel so strongly about protecting this tool in their arsenal, and why what Gee did is more fundamental and reprehensible than the settling of a grudge.

I recommend reading Eisen’s posting in its entirety and this Jan. 21, 2014 posting by Dr. Julienne Rutherford on her Biological Anthropology Developing Investigators Troop blog. She provides more context for this situation and a personal perspective as an untenured professor herself (Note: Links have been removed),

As a biological anthropologist working toward tenure, a paper in Nature could “make” my career. I have as-yet-untenured colleagues at Ivies who get tsked-tsked for NOT submitting to Nature. The reverence for impact factors requires us to consider this the pinnacle of scientific publishing, at the same time that senior representatives of that very same journal with public platforms show absolutely no shame in trivializing our efforts as scientists or our very real struggles as outsiders in the Old White Boys Club. Struggles that make me feel like this a lot, and I actually have it pretty easy.

This continued outsider existence is what leads many to seek the clearly imperfect protection of an online pseudonym. Pseudonymity on the internet has a long and defensible history, largely as protection of some kind, often against reprisals by employers. Sometimes as protection against cyber-stalking and sometimes real-life stalking and physical assault. But another reason is that it can offer protection against the clubbishness and bullying of privileged scholars with powers to hire, publish, grant funds. The power to deem one as a scientist of consequence. The power to refuse the pervasive poison that is their privilege and blindness. …

Interestingly, the same day Gee lashed out at Dr. Isis, Nature issued an apology for a letter they had recently published. Here’s an excerpt from the letter that was published online on Jan. 15, 2014,

Research: Publish on the basis of quality, not gender by Lukas Koube. Nature 505, 291 (16 January 2014) doi:10.1038/505291e Published online 15 January 2014

The publication of research papers should be based on quality and merit, so the gender balance of authors is not relevant in the same way as it might be for commissioned writers (see Nature 504, 188; 2013) [a special issue on women and gender issues in science]. Neither is the disproportionate number of male reviewers evidence of gender bias. …

Koube’s letter is behind a paywall but I gather the rest of it continues in a similarly incendiary and uninformed fashion.

Kelly Hills writes about the letter and Nature’s apology in a Jan. 17, 2014 posting on her Life As An Extreme Sport blog (Note: Links have been removed),

While Nature’s apology is better than a nonpology, it’s not actually a full apology, and it doesn’t surprise me that it’s not being as well-received as the editors likely hoped. I detailed some of my issues with the apology on Twitter this morning, but I wanted to take the time to actually expand on what is necessary for a complete apology.

You can find quite a few different opinions on what constitutes an actual apology. I am fond of a four stage approach: Recognition, Responsibility, Remorse/Regret, Remedy. I think it’d be easiest to go through each of these and the Nature apology, to see where they succeed, and where they fail. Hopefully this will be illustrative not only to them now, but others in the future.

… When you recognize your mistake, you need to be specific. This is what Nature said:

On re-examining the letter and the process, we consider that it adds no value to the discussion and unnecessarily inflames it, that it did not receive adequate editorial attention, and that we should not have published it.

This isn’t a bad start. Ultimately, there is recognition that the commentary was inflammatory and it shouldn’t have been published. That said, what would have made it a good example of recognition is acknowledgement that the commentary that was published was offensive, as well. It’s not about adding no value, or even being inflammatory–it’s that it’s a point of view that has been systematically deconstructed and debunked over years, to the point that people who hold it are actually advocating biased, if not complete misogynistic, positions.

I found this a very interesting read as Hills elucidates one of my pet peeves, the non-apology apology, and something I recognize as one of my own faults: offering a non-apology, i.e., offering excuses for my behaviour along with “I’m sorry.”

Before finishing this post, I want to include a little more information about Henry Gee (from his Wikipedia essay; Note: Links have been removed),

Dr Henry Gee (born 1962 in London, England) is a British paleontologist and evolutionary biologist. He is a senior editor of Nature, the scientific journal.[1]

Gee earnt his B.Sc. at the University of Leeds and completed his Ph.D. at Fitzwilliam College, Cambridge, where, in his spare time, he played keyboard for a jazz band fronted by Sonita Alleyne, who went on to establish the TV and radio production company Somethin’ Else.[2] Gee joined Nature as a reporter in 1987 and is now Senior Editor, Biological Sciences.[citation needed] He has published a number of books, including Before the Backbone: Views on the Origin of the Vertebrates (1996), In Search of Deep Time (1999),[3][4] A Field Guide to Dinosaurs (illustrated by Luis Rey) (2003) and Jacob’s Ladder (2004).

On January 17th, 2014, Gee became embroiled in internet controversy by revealing the identity of an anonymous science blogger, Melissa Bates.[7] Bates was an open critic of the scientific journal Nature, where Gee is a senior editor. Gee’s comments were an apparent attempt to discredit the blogger’s reputation, but many felt his doxing went too far.[8] It was later revealed that Gee is not unfamiliar with pseudonyms himself, using the pseudonym “Cromercrox” to curate his own Wikipedia entry.

I am a bit surprised by the lack of coverage on the Guardian science blogs where a number of essays were featured during the Oct. 2013 ‘sex scandals’. Perhaps no one has had enough time to write it up or perhaps the Guardian editors feel that enough has been written about gender and science. Note, Henry Gee writes for the Guardian.

It’s hard for me to tell whether or not Henry Gee’s Twitter feed (@HenryGeeBooks) is a personal account or a business account (access seems to be restricted as of Jan. 22, 2014 12:40 pm PDT; you can access this) but it does seem that Gee has conflated his professional and personal lives in such a way that one may not be easily distinguishable from the other. This does leave me with a question: is Nature responsible for comments made on their employee’s personal Twitter feed (assuming HenryGeeBooks is a personal feed)? No and yes.

As far as I’m concerned, no employer has a right to control any aspect of an employee’s personal life unless it impacts their work, e.g., pedophiles should not be employed to work with young children. In Henry Gee’s case, he invoked his employer and his professional authority as one of their editors with “Nature quakes in its boots” and that means I expect to see some sort of response from NPG.

I’ve mentioned the October 2013 scandals because Nature Publishing Group (NPG) owns Scientific American, one of the publications at the centre of those scandals. The Scientific American/NPG response was found to be lacking that time too. At this point, we have two lacking responses (the excuses over the Scientific American aspects of the October 2013 scandals and the apology over the Koube letter published in January 2014) and a nonresponse with regard to Gee’s tweet.

Regarding Henry Gee, perhaps the massive indignation that has caused his Twitter page to be made inaccessible, at this time, will also cause him to reconsider his attitudes about women and about the power he wields (or wielded?). I fear that won’t be the case and that he’s more likely building resentment. Ultimately, this is what confounds me about these situations: how does one confront a bully without driving them into more extreme forms of the behaviour and attitudes that led to the confrontation? I don’t believe there’s a ‘one size fits all’ answer, but I do wish there was more discussion about the issue. I speak here as a Canadian who is still haunted by the École Polytechnique massacre in Montréal (from the Wikipedia essay; Note: Links have been removed),

The École Polytechnique Massacre, also known as the Montreal Massacre, occurred on December 6, 1989 at the École Polytechnique in Montreal, Quebec, Canada. Twenty-five-year-old Marc Lépine, armed with a legally obtained Mini-14 rifle and a hunting knife, shot twenty-eight people before killing himself. He began his attack by entering a classroom at the university, where he separated the male and female students. After claiming that he was “fighting feminism” and calling the women “a bunch of feminists,” he shot all nine women in the room, killing six. He then moved through corridors, the cafeteria, and another classroom, specifically targeting women to shoot. Overall, he killed fourteen women and injured ten other women and four men in just under twenty minutes before turning the gun on himself.[1][2]

I applaud the women who have spoken up and continue to speak up, and I hope we all, men and women, can work towards ways of confronting bullies while also allowing for the possibility of change.

Finally, thanks to Susan Baxter for alerting me to this latest gender and science story cycle. Here’s Susan’s blog, where she writes (mostly) about medical matters. Her latest post concerns Lyme disease.

2012 Canadian science blog roundup and some thoughts on a Canadian science blog network

This is my 3rd annual roundup of Canadian science blogs, and the science blogging scene in Canada seems to be getting more lively (see my Dec. 31, 2010 posting and Dec. 29, 2011 posting to compare).

As I did last year, I will start with

Goodbyes

Don’t leave Canada appears to be gone, as there hasn’t been a posting there since May 4, 2011. I’m sorry to see it go, as Rob Annan provided thoughtful commentary on science policy on a regular basis for years. Thank you, Rob. (BTW, he’s now the director of policy, research and evaluation at MITACS.)

Cool Science, John McKay’s blog, was shut down as of Oct. 24, 2012,

Hi everyone. This will mark the final post of the CoolScience.ca site and it will be quietly taken offline in November. I will also be closing down the Twitter and Facebook accounts and moving everything over to my professional accounts that are all focused on communicating science, technology, engineering and medicine.

The Dark Matter science blog by Tom Spears, which I reluctantly included last year (as it was a ‘newspaper blog’ from the Ottawa Citizen), has since disappeared, as has NeuroDojo, a blog written by a Canadian scientist in Texas.

Goodbye-ish

Marc Leger’s Atoms and Numbers blog’s latest posting is dated Oct. 23, 2012, but the pattern here seems similar to Marie-Claire’s (see the next one): posting that is erratic but relatively regular (once or twice per month) until October of this year.

Marie-Claire Shanahan is posting less frequently on her Boundary Vision blog, with the last posting there on Oct. 9, 2012.

The Bubble Chamber blog from the University of Toronto’s Science Policy Work Group seems to be fading away, with only one posting for 2012: Reply to Wayne Myrvold on the Higgs Boson.

Colin Schultz’s CMBR blog hasn’t had a new posting since July 13, 2012’s 11 Things You Didn’t Know About Canada. In any event, it looks like the blog is no longer primarily focused on science.

The Exponential Book blog by Massimo Boninsegni features an Oct. 24, 2012 posting and a posting pattern similar to Marie-Claire’s and Marc’s.

exposure/effect, which was new last year, has gone into a fairly lengthy hiatus; its last posting is dated Jan. 30, 2012.

Theoretical biologist Mario Pineda-Krch of Mario’s Entangled Bank is also taking a lengthy hiatus, as the last posting on that blog was June 11, 2012.

Nicole Arbour’s Canadian science blog for the UK High Commission in Ottawa hasn’t featured a posting since Oct. 15, 2012’s The Power of We: Adapting to climate change.

Gregor Wolbring’s Nano and Nano- Bio, Info, Cogno, Neuro, Synbio, Geo, Chem… features an Aug. 4, 2012 posting, which links to one of his nano articles (Nanoscale Science and Technology and People with Disabilities in Asia: An Ability Expectation Analysis) published elsewhere.

Jeff Sharom’s Science Canada blog highlights links to editorials and articles on Canadian science policy but doesn’t seem to feature original writing by Sharom or anyone else; consequently, it functions more as a reader/aggregator than a blog.

The Black Hole blog, which was always more focused on prospects for Canadian science graduates than on Canadian science (hence always a bit of a stretch for inclusion here), has moved to the University Affairs website, where it focuses more exclusively on the Canadian academic scene with posts such as Free journal access for postdocs in between positions from Dec. 12, 2012.

Returning to the roundup:

John Dupuis’ Confessions of a Science Librarian features a Dec. 26, 2012 posting, Best Science (Fiction) Books 2012: io9, which seems timely for anyone taking a break at this time of year and looking for some reading material.

Daniel Lemire’s blog is known simply as Daniel Lemire. He’s a computer scientist in Montréal who writes one of the more technical blogs I’ve come across, and his focus seems to be databases, although his Dec. 10, 2012 posting covers how to get things accomplished when you’re already busy.

Dave Ng, a professor with the Michael Smith Laboratories at the University of British Columbia, is a very active science communicator who maintains the Popperfont blog. The latest posting (Dec. 24, 2012) features Sciencegeek Advent Calendar Extravaganza! – Day 24.

Eric Michael Johnson continues with The Primate Diaries blog on the Scientific American blog network. His Dec. 6, 2012 posting is a reposted article, but he has kept up a regular (once per month, more or less) posting schedule,

Author’s Note: The following originally appeared at ScienceBlogs.com and was subsequently a finalist in the 3 Quarks Daily Science Prize judged by Richard Dawkins. Fairness is the basis of the social contract. As citizens we expect that when we contribute our fair share we should receive our just reward. When social benefits are handed out …

Rosie Redfield is keeping up with both her blogs, RRTeaching (latest posting, Dec. 6, 2012) and RRResearch (Nov. 17, 2012).

Sci/Why is a science blog being written by Canadian children’s writers who discuss science, words, and the eternal question – why?

Mathematician Nassif Ghoussoub’s Piece of Mind blog continues to feature incisive writing about science, science funding, policy and academe.

Canadian science writer Heather Pringle continues to post on The Last Word on Nothing, a blog shared collectively by a number of well-known science writers. Her next posting is scheduled for Jan. 3, 2013, according to the notice on the blog.

A little off my usual beat, but I included these last year as they do write about science, albeit medical and/or health science:

Susan Baxter’s blog Curmudgeon’s Corner features her insights into various medical matters; for example, there’s her Dec. 1, 2012 posting on stress, the immune system, and the French antipathy towards capitalism.

Peter Janiszewski and Travis Saunders co-own two different blogs: Obesity Panacea, which is part of the PLoS (Public Library of Science) blogs network, and Science of Blogging, which features very occasional postings but is worth a look for nuggets like this Oct. 12, 2012 (?) posting on social media for scientists.

After posting the 2011 roundup, I had a number of suggestions for more Canadian science blogs, such as these four bloggers who are part of the Scientific American (SA) blogging network (in common with Eric Michael Johnson),

Dr. Carin Bondar posts on the SA blog PsiVid along with Joanne Manaster. (There’s more than one Canadian science blogger who co-writes a blog.) This one is self-described as “A cross section of science on the cyberscreen.”

Glendon Mellow, a professional science illustrator, posts on The Flying Trilobite (his own blog) and Symbiartic: the art of science and the science of art, an SA blog he shares with Kalliopi Monoyios.

Larry Moran, a biochemist at the University of Toronto, posts on science and anything else that tickles his fancy on his Sandwalk blog.

Eva Amsen posts on a number of blogs, including the Node, the community site for developmental biologists (which she also manages), but the best place to find a listing of her many blogs and interests is easternblot.net, where she includes this self-description on the About page,

Online Projects

  • Musicians and Scientists – Why are so many people involved in both music and science? I’m on a mission to find out.
  • the Node – My day job is managing a community site for developmental biologists around the world. The site is used by equal numbers of postdocs, PhD students, and lab heads.
  • SciBarCamp/SciBarCamb – I co-instigated SciBarCamp, an unconference for scientists, in Toronto in 2008. Since then I have co-organized five similar events in three countries, and have advised others on how to run science unconferences.
  • You Learn Something New Every Day – a Tumblr site that automatically aggregates tweets with the hashtag #ylsned, and Flickr photos tagged ylsned, to collect the interesting bits of trivia that people come across on a daily basis.
  • Lab Waste – During my last months in the lab as a PhD student, I made a mini-documentary (using CC-licensed materials) about the excessive amount of disposable plastics used in research labs. It screened in 2009 in the “Quirky Shorts” program of the Imagine Science Film Festival in New York.
  • Expression Patterns – In 2007 I was invited to blog on Nature Network. The complete archives from 2007-2012 are now on this site.
  • easternblot.net – Confusingly, my other science blog was named after this entire domain. It ran from 2005 to 2010, and can be found at science.easternblot.net

I believe Amsen is Canadian and working in the UK, but if anyone could confirm that, I would be much relieved.

Someone who, according to their About page, prefers to remain anonymous but lives in Victoria, BC, posts (somewhat irregularly; the last posting is dated Nov. 10, 2012) on The Olive Ridley Crawl,

I am an environmental scientist blogging about environmental and development issues that interest me. I prefer to be anonymous(e) because I work with some of the companies I may talk about and I want to avoid conflict of interest issues at work. This gets tricky because I am at the periphery of a lot of events happening in the world of my greatest expertise, persistent organic pollutants, endocrine disrupting compounds, their effects on health and the policy fights around chemicals, their use the controversies! So, I’ve reluctantly moved away from writing about what I know most about, which means this blog suffers severely. I still soldier on, though!

I was born, and grew up in India, so I am interested in all things South Asian and tend to view most all Western government and Western institution actions through a colonialist scratched lens! I am also becoming much more active about my feminism, so who knows what that will do to this blog. I have been meaning to write a monstrous essay about women, the environment and justice, but that’s a task!

I used to live in Chapel Hill, NC with a partner of long vintage (the partnership, that is, not her!) and a crazy cat who thinks he’s a dog. We moved to Victoria, BC in 2008 and I’ve been busy learning about Canadian policy, enjoying this most beautiful town I live in.

Why Olive Ridley? Well, the Olive Ridley sea turtle (Lepidochelys Olivacea) nests on the coasts of Madras, India and I got my start in the wonderful world of conservation working on the Olive Ridley with the Students’ Sea Turtle Conservation Network. So, I do have fond memories for this beautiful creature. And yes, as my dear partner reminds me, I did meet her on the beach when I was doing this work.

Agence Science-Presse (based in Québec and headed by Pascal Lapointe) features three blogs of its own:

Blogue ta science: posts aimed at young people.

Discutez avec notre expert: have you followed our CSI investigation?

Autour des Blogues: news from our bloggers and the community.

There’s also a regular podcast under the Je vote pour la science banner.

genegeek appears to be Canadian (it has a Canadian domain), but the blog owner doesn’t really identify herself on the About page: there’s a photo but no name and no biographical details. I did receive a tweet last year about genegeek from C. Anderson, who I imagine is the blog owner.

There’s also the Canadian BioTechnologist2.0 blog, which is sponsored by Bio-Rad Canada and is written by an employee.

These next ones were added later in the year:

Chuck Black writes two blogs, as he noted in June 2012,

I write two blogs which, while they focus more on space than science, do possess strong science components and overlap with some of the other blogs here.

They are: Commercial Space and Space Conference News.

Andy Park also came to my attention in June 2012. He writes the It’s the Ecology, Stupid! blog.

Something About Science is a blog I featured in an Aug. 17, 2012 posting, and I’m glad to see its blogger, Lynn K, is still blogging.

New to the roundup in 2012:

SSChow, Sarah Chow’s blog, focuses on science events in Vancouver (Canada), science events at the University of British Columbia, and miscellaneous matters pertinent to her many science communication efforts.

The Canadian federal government seems to be trying its hand at science blogging with the Science.gc.ca Blogs (http://www.science.gc.ca/Blogs-WSE6EBB690-1_En.htm), an anemic effort given that it boasts a total of six (or perhaps five) postings in two or three years.

The Canadian Science Writers Association (CSWA) currently features a blog roll of its members’ blogs. This is a new initiative from the association and one I’m glad to see. Here’s the list (from the CSWA member blog page),

Anne Steinø (Research Through the Eyes of a Biochemist)
Arielle Duhaime-Ross (Salamander Hours)
Bob McDonald (I’m choking on this one since it’s a CBC [Canadian Broadcasting Corporation] blog for its Quirks and Quarks science program)
Cadell Last (The Ratchet)
Edward Willett
Elizabeth Howell (she seems to be blogging again; the easiest way for me to get to her postings was to click on the Archives link [I clicked on December 2012 to get the latest]; after doing that, I realized that the images on the page link to postings)
Heather Maughan
Justin Joschko
Kimberly Gerson (Endless Forms Most Beautiful)
Mark Green (a CSWA member, he was born and educated in the US where he lives and works; ordinarily I would not include him, even with his CSWA membership status, but he writes a monthly science column for a Cape Breton newspaper, which has made me pause)
Pamela Lincez (For the Love of Science)
Sarah Boon (Watershed Moments)
Susan Eaton (she seems to be reposting articles written [presumably by her] for the AAPG [American Association of Petroleum Geologists] Explorer and other organizations on her blog)

Barry Shell’s site (listed as a CSWA member blog) doesn’t match my admittedly foggy notion of a blog. It seems more of an all-round Canadian science resource, featuring profiles of Canadian scientists, a regularly updated news archive, and more. Science.ca is extraordinary and I’m thankful to have finally stumbled across it, but it doesn’t feature dated posts the way the other blogs listed here do, even the most commercial ones.

Tyler Irving (I had no idea he had his own blog when I mentioned him in my Sept. 25, 2012 posting about Canadian chemists and the Chemical Institute of Canada’s publications) posts at the Scientific Canadian.

I choke again, as I do when mentioning corporate media blogs, but in the interest of being as complete as possible: Julia Belluz writes the Scien-ish blog about health for Maclean’s magazine.

Genome Alberta hosts a couple of blogs: Genomics and Livestock News & Views.

Occam’s Typewriter is an informal network of science bloggers, two of whom are Canadian:

Cath Ennis (VWXYNot?) and Richard Wintle (Adventures in Wonderland). Note: The Guardian Science Blogs network seems to have some sort of relationship with Occam’s Typewriter, as you will see postings from the Occam’s network featured as part of Occam’s Corner on the Guardian website.

My last blogger in this posting is James Colliander from the University of Toronto’s Mathematics Department. He and Nassif (of the Piece of Mind blog mentioned previously) seem to share a similar interest in science policy and funding issues.

ETA Jan. 2, 2013: This is a social-science-oriented blog maintained by an SSHRC-funded (Social Sciences and Humanities Research Council) network cluster called the Situating Science Cluster; the blog’s official name is Cluster Blog. This is where you go to find out about Science and Technology Studies (STS), History of Science Studies, etc., and events associated with those studies.

I probably should have started with this definition of a Canadian blogger, from the Wikipedia entry,

A Canadian blogger is the author of a weblog who lives in Canada, has Canadian citizenship, or writes primarily on Canadian subjects. One could also be considered a Canadian blogger if one has a significant Canadian connection, though this is debatable.

Given how lively the Canadian science blogging scene has become, I’m not sure I can continue with these roundups, as they take more time each year. At the very least, I’ll need to define the term ‘Canadian science blogger’, in the hope of reducing the workload, if I decide to continue after this year.

There’s a rather interesting Nov. 26, 2012 article by Stephanie Taylor for the McGill Daily about the Canadian public’s science awareness and a dearth of Canadian science communication,

Much of the science media that Canadians consume and have access to is either American or British: both nations have a robust, highly visible science media sector. While most Canadians wouldn’t look primarily to American journalism for political news and analysis, science doesn’t have the same inherent national boundaries that politics does. While the laws of physics don’t change depending on which side of the Atlantic you’re on, there are scientific endeavours that are important to Canadians but have little importance to other nations. It’s unlikely that a British researcher would investigate the state of the Canadian cod fishery, or that the British press would cover it, but that research is critical to a substantial number of Canadians’ livelihoods.

On the other hand, as Canadian traditional media struggles to consistently cover science news, there’s been an explosion of scientists of all stripes doing a lot of the necessary big picture, broad context, critical analysis on the internet. The lack of space restrictions and accessibility of the internet (it’s much easier to start a blog than try to break in to traditional media) mean that two of the major barriers to complex discussion of science in the media are gone. Blogs struggle to have the same reach as newspapers and traditional media, though, and many of the most successful science blogs are under the online umbrella of mainstream outlets like Scientific American and Discover. Unfortunately and perhaps unsurprisingly, there is currently no Canadian science blog network like this. [emphasis mine]

Yes, let’s create a Canadian science blog network. I have been talking to various individuals about this over the last year (2012), and while there’s interest, someone offered to help and then changed their mind. Plus, I was hoping to persuade the Canadian Science Writers Association to take it on, but I think they were too far advanced in their planning for a members’ network to consider something more generalized (and far more expensive). So, if anyone out there has ideas about how to do this, please do comment, and perhaps we can get something launched in 2013.

2011 roundup and thoughts on the Canadian science blogging scene

Last year I found about a dozen of us Canadians blogging about science, and this year (2011) I count approximately 20 of us. Sadly, one blog has disappeared; Elizabeth Howell has removed her PARS3C blog from her website. Others appear to be in pause mode: Rob Annan at the Researcher Forum: Don’t leave Canada behind (no posts since May 4, 2011), The Bubble Chamber at the University of Toronto (no posts since Aug. 12, 2011), Gregor Wolbring’s Nano and Nano- Bio, Info, Cogno, Neuro, Synbio, Geo, Chem… (no new posts since Oct. 2010; I’m about ready to give up on this one) and Je vote pour la science (no posts since May 2011).

I’ve been fairly catholic in my approach to including blogs on this list, although I do have a preference for blogs with an individual voice that focus primarily on science (for example, explaining the science you’re writing about rather than complaining about a professor’s marking of your science paper).

Piece of Mind is Nassif Ghoussoub’s blog (he is a professor of mathematics at the University of British Columbia), which is largely about academe, science, and grants. Nassif does go much further afield in some of his posts, as do we all from time to time. He’s quite outspoken and always interesting.

Cool Science is John McKay’s blog, which he describes this way: “This site is about raising a creative rationalist in an age of nonsense. It is about parents getting excited about science, learning and critical thinking. It is about smart parents raising smart kids who can think for themselves, make good decisions and discern the credible from the incredible.” His posts cover a wide range of topics, from the paleontology museum in Alberta to a space shuttle launch to the science of good decisions and more.

Dave Ng makes me dizzy. A professor with the Michael Smith Laboratories at the University of British Columbia, he’s a very active science communicator who has started blogging again on the Popperfont blog. This looks like a compilation of bits from Twitter, some very brief postings, and bits from other sources. I’m seeing this style of blogging more frequently these days.

The queen of Canadian science blogging, Rosie Redfield, was just acknowledged as a ‘newsmaker of the year’ by Nature magazine. The Dec. 22, 2011 Vancouver Sun article by Margaret Munro had this to say,

A critical thinker in Vancouver has been named one of the top science newsmakers of the year.

“She appeared like a shot out of the blogosphere: a wild-haired Canadian microbiologist with a propensity to say what was on her mind,” the leading research journal Nature says of Rosie Redfield, a professor at the University of B.C.

The journal editors say Redfield is one of 10 individuals who “had an impact, good or bad, on the world of science” in 2011. She was chosen for her “critical” inquiry and “remarkable experiment in open science” that challenged a now-infamous “arsenic life” study funded by NASA.

Rosie has two blogs, RRResearch and RRTeaching. She used to say she wasn’t a blogger, but I rather think she’s changed her tune.

Jeff Sharom’s Science Canada blog isn’t, strictly speaking, a blog so much as an aggregator of Canadian science policy news, and a good one at that. There are also some very useful resources on the site. (I shamelessly plundered Jeff’s list to add more blogs to this posting.)

The Black Hole is owned by Beth Swan and David Kent (although they often have guest posters too). Here’s a description from the About page,

I have entered the Post Doctoral Fellow Black Hole… I’ve witnessed a lot and heard about much more and, while this is the time in academic life when you’re meant to be the busiest, I have begun this blog. Just as a black hole is difficult to define, the label Post Doc is bandied about with recklessness by university administrators, professors, and even PDFs themselves. One thing is certain though… once you get sucked in, it appears to be near impossible to get back out.

David, Beth, and their contributors offer extensive discussions about the opportunities and the failings of the postgraduate science experience.

Nicole Arbour, a Science and Innovation Officer at the British High Commission Office in Ottawa, Canada, blogs regularly about Canadian science policy and more on the Foreign and Commonwealth Office blogs.

Colin Schultz, a freelance science journalist, blogs at his website CMBR. He focuses largely on climate change, environmental research, space, and science communication.

exposure/effect is a blog about toxicology, chemical exposures, health and more, which is written by a scientist who chooses to use a pseudonym, ashartus.

Mario’s Entangled Bank is written by theoretical biologist Mario Pineda-Krch at the University of Alberta. One of Pineda-Krch’s most recent postings was about a special section on Reproducible Research in a recent issue of Science Magazine.

Boundary Vision is written by Marie-Claire Shanahan, a professor of science education at the University of Alberta. She not only writes a science blog, she also researches the language and the social spaces of science blogs.

Eric Michael Johnson writes The Primate Diaries blog, which is now part of the Scientific American blog network. With a master’s degree in evolutionary anthropology, Johnson examines the interplay between evolutionary biology and politics both on his blog and as part of his PhD work (he’s a student at the University of British Columbia).

The Atoms and Numbers blog is written by Marc Leger. From the About Marc page,

I am a scientist who has always been curious and fascinated by how our universe works.  I love discovering the mysteries and surprises of our World.  I want to share this passion with others, and make science accessible to anyone willing to open their minds.

Many people have appreciated my ability to explain complex scientific ideas in simple terms, and this is one motivation behind my website, Atoms and Numbers.  I taught chemistry in universities for several years, and I participated in the Scientists in the Schools program as a graduate student at Dalhousie University, presenting chemistry magic shows to children and teenagers from kindergarten to grade 12.  I’ve also given presentations on chemistry and forensics to high school students.  I’m even acknowledged in a cookbook for providing a few morsels of information about food chemistry.

Massimo Boninsegni writes about science-related topics (some are about the academic side of science; some physics; some personal items) on his Exponential Book blog.

The Last Word on Nothing is a group blog that features Heather Pringle, a well-known Canadian science writer, on some posts. Pringle’s latest posting is Absinthe and the Corpse Reviver, all about a legendary cure for hangovers. While this isn’t, strictly speaking, a Canadian science blog, there is a Canadian science blogger in the group and the topics are quite engaging.

Daniel Lemire’s blog is known simply as Daniel Lemire. He’s a computer scientist in Montréal who writes one of the more technical blogs I’ve come across, and his focus seems to be databases. He does cover other topics too, notably in this post titled Where do debt, credit and currencies come from?

Confessions of a Science Librarian by John Dupuis (head of the Steacie Science & Engineering Library at York University) is a blog I missed mentioning last year, and I’m very glad I remembered it this year. As you might expect from a librarian, the last few postings have consisted of lists of the best science books of 2011.

Sci/Why is a science blog being written by Canadian children’s writers who discuss science, words, and the eternal question – why?

I have mixed feelings about including this blog, the Dark Matter science blog by Tom Spears, as it is a ‘newspaper blog’ from the Ottawa Citizen.

Similarly, the MaRS blog is a corporate initiative from the Toronto area science and technology business incubator, MaRS Discovery District.

The last three blogs I’m mentioning are from medical and health science writers.

Susan Baxter’s blog Curmudgeon’s Corner features her insights into various medical matters; for example, there’s her Dec. 5, 2011 posting on mammograms, along with her opinions on spandex, travel, and politics.

Peter Janiszewski and Travis Saunders co-own two different blogs, Obesity Panacea, which is part of the PLoS (Public Library of Science) blogs network, and Science of Blogging (nothing posted since July 2011 but it’s well worth a look).

I don’t have anything particularly profound to say about the state of Canadian science blogging this year. It does look to be getting more populous online, and I hope that trend continues. I do have a wish for the New Year: I think it should be easier to find Canadian science blogs, and I would like to see some sort of network or aggregated list.

Nanotechnology enables robots and human enhancement: part 1

I’m doing something a little different, as I’m going to be exploring some ideas about robots and AI today and human enhancement technologies over the next day or so. I have never been particularly interested in these topics, but after studying and thinking about nanotechnology I have found that I can’t ignore them, since nanotech is being used to enable these, for want of a better word, innovations. I have deep reservations about these areas of research, especially human enhancement, but I imagine I would have had deep reservations about electricity had I been around in the days when it was first being commercialized.

This item, Our Metallic Reflection: Considering Future Human-android Interactions, in Science Daily is what set me off,

Everyday human interaction is not what you would call perfect, so what if there was a third party added to the mix – like a metallic version of us? In a new article in Perspectives on Psychological Science, psychologist Neal J. Roese and computer scientist Eyal Amir from the University of Illinois at Urbana-Champaign investigate what human-android interactions may be like 50 years into the future.

As I understand the rough classifications, there are robots (machines that look like machines), androids (machines that look and act like humans), and cyborgs (part human/part machine). By the way, my mother can be designated a cyborg since she had her hip replacement a few years ago; it’s a pretty broad designation, including people with pacemakers, joint replacements, and any other implanted objects not native to a human body.

The rest of the Science Daily article goes on to state that by 2060 androids will be able to speak in human-like voices, answer questions, and more. The scientists studying the potential interactions are trying to understand how people will react psychologically to the androids of 2060.

For an alternative discussion about robots, AI, etc., you can take a look at a project where Mary King, a colleague and fellow classmate (we completed an MA programme at De Montfort University), compares Western and Japanese responses to them.

This research project explores the theories and work of Japanese and Western scientists in the field of robotics and AI. I ask what differences exist in the approach and expectations of Japanese and Western AI scientists, and I show how these variances came about.

Because the Western media often cites Shinto as the reason for the Japanese affinity for robots, I ask what else has shaped Japan’s harmonious feelings for intelligent machines. Why is Japan eager to develop robots, and particularly humanoid ones? I also aim to discover if religion plays a role in shaping AI scientists’ research styles and perspectives. In addition, I ask how Western and Japanese scientists envision robots/AI playing a role in our lives. Finally, I enquire how the issues of roboethics and rights for robots are perceived in Japan and the West.

You can go here for more. Amongst other gems, you’ll find this,

Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.

It seems our androids are here already, despite what the article in Science Daily indicates. More tomorrow.

Book launch announcement: Susan Baxter, guest blogger here and lead author of The Estrogen Errors: Why Progesterone is Better for Women’s Health, is having a book launch tomorrow, Thursday, July 23, 2009, from 6 to 8 pm, at Strands Hair and Skin Treatment Centre, #203 – 131 Water St. (in the same complex as the kite store), Vancouver.