Vancouver and other Canadian cities are participating in an international culture event, Night of Ideas/Nuit des idées, organized by the French Institute (Institut de France), a French learned society first established in 1795, during the French Revolution of 1789 to 1799 (Wikipedia entry).
Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,
Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.
For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.
According to the About Us page, the 2021 event drew participants in 104 countries and 190 cities, with over 200 events.
Vancouver: (Re)building together with NFTs [non-fungible tokens]
NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.
Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.
This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A, Philippe Pasquier, Professor at School of Interactive Arts & Technology (SFU) and Rhea Myers, artist, hacker and writer will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.
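To make the "blockchain-based proofs of ownership" idea a little more concrete, here's a deliberately simplified sketch in Python. This is not how any real blockchain implements NFTs (real ones live on-chain, e.g., as ERC-721 tokens); it only mimics the bookkeeping: a unique token ID tied to an owner, with an append-only transfer history. The class and field names are my own invention.

```python
import hashlib

class ToyNFTLedger:
    """An in-memory sketch of NFT-style ownership bookkeeping.
    Real NFTs are recorded on a blockchain; this toy only keeps
    unique token IDs mapped to owners plus a transfer log."""

    def __init__(self):
        self.owners = {}    # token_id -> current owner
        self.history = []   # append-only log of mints and transfers

    def mint(self, content_url, creator):
        # Derive a stable token ID from the digital object's URL.
        token_id = hashlib.sha256(content_url.encode()).hexdigest()[:16]
        if token_id in self.owners:
            raise ValueError("token already minted")
        self.owners[token_id] = creator
        self.history.append(("mint", token_id, creator))
        return token_id

    def transfer(self, token_id, seller, buyer):
        if self.owners.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        self.owners[token_id] = buyer
        self.history.append(("transfer", token_id, buyer))

ledger = ToyNFTLedger()
tid = ledger.mint("https://example.com/crypto-art.png", "artist")
ledger.transfer(tid, "artist", "collector")
print(ledger.owners[tid])  # the token now belongs to "collector"
```

The "millions of dollars on the line" part comes from buyers and sellers trusting a shared, tamper-resistant version of exactly this kind of record.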
One last thing, if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”
This is from a January 13, 2022 SFU Café Scientifique notice (received via email),
Happy New Year! We are excited to announce our next virtual SFU Café Scientifique!
Thursday January 27, 2022, 5:00-6:30 pm
Dr. Jonathan Choy, SFU Molecular Biology and Biochemistry
The Immune System: Our Great Protector Against Dangerous Stuff
Our bodies are constantly in contact with material in the environment, such as microbes, that are harmful to our health. Despite this, most people are healthy because the immune system patrols our bodies and protects us from these harmful environmental components. In this Cafe Scientifique, Dr. Jonathan Choy from the Department of Molecular Biology and Biochemistry will discuss how the immune system does this.
I found Dr. Choy’s profile page on the SFU website and found this description for his research interests,
T Cell Biology
T cells are specialized cells of the immune system that protect host organisms from infection but that also contribute to a wide array of human diseases. Research in my laboratory is focused on understanding the mechanisms by which T cells become inappropriately activated in disease settings and how they cause organ damage. We have provided particular attention to how innate immune signals, such as cytokines secreted by innate immune cells and vascular cells, control the outcome of T cell responses. Within this context, processes that inhibit the activation of T cells are also being studied in order to potentially prevent disease-causing immune responses. Our studies on this topic are applied most directly to inflammatory vascular diseases, such as transplant arteriosclerosis and giant cell arteritis.
Nitric Oxide Signaling and Production
Nitric oxide (NO) is a bioactive gas that controls many cell biological responses. Dysregulation of its production and/or bioactivity is involved in many diseases. My laboratory is interested in understanding how NO effects cell signaling and how its production is controlled by NO synthases. We are specifically interested in how NO-mediated protein S-nitrosylation, a post-translational modification caused by NO, affects cell signaling pathways and cellular functions.
I gather from the Café Scientifique writeup that Dr. Choy’s talk is intended for a general audience, as opposed to the description of his research interests, which is intended for students of molecular biology and biochemistry.
For those who are unfamiliar with it, Simon Fraser University is located in the Vancouver area (Canada).
Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.
Max Planck Centre for Humans and Machines Seminars
Max Planck Institute Seminar – The rise of Creative AI & its ethics January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST
Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].
The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.
The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.
The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,
Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications on workflows reminiscent of the industrial revolution:
– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.
– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.
Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?
Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions range from theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.
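The abstract’s augmentation/automation distinction can be illustrated with a toy generative system. The sketch below uses a first-order Markov chain over words (a far cruder model than anything the Metacreation Lab works with, and the tiny corpus is invented): the same model either proposes one continuation for a human to accept or reject (augmentation), or completes the whole sequence on its own (automation).

```python
import random

# Toy "creative AI": a first-order Markov chain over words.
# The corpus is made up; real creative-AI systems use far richer models.
corpus = "the machine dreams the machine sings the artist dreams".split()
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def suggest(word, rng):
    """Augmentation: propose one continuation; a human decides."""
    return rng.choice(table.get(word, corpus))

def generate(start, n, rng):
    """Automation: the algorithm completes the whole sequence itself."""
    out = [start]
    for _ in range(n):
        out.append(suggest(out[-1], rng))
    return " ".join(out)

rng = random.Random(0)
print(generate("the", 5, rng))  # a six-word machine-made phrase
```

The code makes the workflow point vivid: `suggest` keeps the human in the loop, while `generate` removes them from the generation process entirely, which is exactly the shift with "implications reminiscent of the industrial revolution" that the abstract flags.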
Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,
Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state of the art algorithms for sound retrieval, segmentation, background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.
We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.
Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.
We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.
As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.
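AuMe’s actual pipeline (state-of-the-art retrieval, segmentation, and background/foreground classification over the Freesound library) is far more sophisticated, but the core step of matching a natural-language query against tagged sound recordings can be sketched in a few lines. The mini sound library below is entirely made up for illustration.

```python
# Hypothetical mini sound library; the real Audio Metaphor queries the
# much larger freesound.org collection and also segments and classifies
# the audio before composing a soundscape.
library = {
    "rain_on_tent.wav": {"rain", "water", "storm", "camping"},
    "city_traffic.wav": {"city", "cars", "traffic", "urban"},
    "forest_birds.wav": {"forest", "birds", "nature", "morning"},
}

def retrieve(query, top_n=2):
    """Rank sounds by how many query words overlap their tags."""
    words = set(query.lower().split())
    scored = [(len(words & tags), name) for name, tags in library.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(retrieve("a rainy storm while camping"))
```

Note that the query word “rainy” fails to match the tag “rain” in this naive version; handling that kind of semantic and metaphorical slippage in language is precisely what makes the real system's research interesting.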
Casting an eye back isn’t one of my strong points. Thankfully I can’t be forced into making a top 10 list of some kind. Should someone be deeply disappointed (tongue in cheek) that I failed to mention one of the big 2021 stories featured here, please leave a note in the Comments for this blog and I’ll do my best to add it.
Note: I very rarely feature space exploration unless there’s a nanotechnology or other emerging technology angle to it. There are a lot of people who do a much better job of covering space exploration than I can. (If you’re interested in an overview from a Canadian on the international race to space, you can start with this December 29, 2021 posting “Looking back at a booming year in space” by Bob McDonald of CBC’s [Canadian Broadcasting Corporation] Quirks & Quarks science radio programme.)
Now, onto FrogHeart’s latest year.
One of the standout stories in 2020/21 here and many, many places was the rise of the biotechnology community in British Columbia and elsewhere in Canada. Lipid nanoparticles used in COVID-19 vaccines became far better known than they ever had before and AbCellera took the business world by storm as its founder became a COVID billionaire.
Here is a sampling of the BC biotechnology/COVID-19 stories featured here,
“Why is Precision Nanosystems Inc. in the local (Vancouver, Canada) newspaper?” January 22, 2021 posting Note: The company is best known for its work on lipid nanoparticles
“mRNA, COVID-19 vaccines, treating genetic diseases before birth, and the scientist who started it all” March 5, 2021 posting Note: This posting also notes a Canadian connection in relation mRNA in the subsection titled “Entrepreneurs rush in”
“Getting erased from the mRNA/COVID-19 story” August 20, 2021 posting Note: This features a fascinating story from Nathan Vardi (for Forbes) of professional jealousies, competitiveness, and a failure to recognize opportunity when she comes visiting.
“Who’s running the life science companies’ public relations campaign in British Columbia (Vancouver, Canada)?” August 23, 2021 posting Note: This explores the biotech companies, the network, and provincial and federal funding, as well as, municipal (City of Vancouver) support and more.
Dolgin starts the story in 1987 and covers many players who were new to me, although I did recognize some of the more recent Canadian players, such as Pieter Cullis and Ian MacLachlan. *ETA January 3, 2022: Cullis and MacLachlan are both mentioned in my ‘Getting erased …’ August 20, 2021 posting.* Fun fact: Pieter Cullis was just named an Officer of the Order of Canada (from the Governor General’s December 29, 2021 news release),
Pieter Cullis, O.C. Vancouver, British Columbia
For his contributions to the advancement of biomedical research and drug development, and for his mentorship of the next generation of scientists and entrepreneurs.
Back to this roundup, I got interested in greener lithium mining, given its importance for batteries in electric vehicles and elsewhere,
2021 seems to have been the year when the science community started podcasting in a big way. Either the podcast was started this year or I stumbled across it this year (meaning it’s likely a podcast that is getting publicized because they had a good first year and they want more listeners for their second year),
“New podcast—Mission: Interplanetary and Event Rap: a one-stop custom rap shop Kickstarter” April 30, 2021 posting
“Nerdin’ About and Science Diction: a couple of science podcasts” Note: Not posted but maybe one day. Meanwhile, here they are:
Nerdin’ About describes itself as, “… a podcast where passionate nerds tell us about their research, their interests, and what they’ve been Nerdin’ About lately. A spin-off of Nerd Nite Vancouver, a community lecture series held in a bar, Nerdin’ About is here to explore these questions with you. Hosted by rat researcher Kaylee Byers (she/her) and astronomy educator Michael Unger (he/him). Elise Lane (she/her) is our Mixing Engineer. Music by Jay Arner. Artwork by Armin Mortazavi.”
Science Diction is a podcast offshoot of Science Friday (SciFri), a US National Public Radio (NPR) programme. “… Hosted by SciFri producer and self-proclaimed word nerd Johanna Mayer, each episode of Science Diction digs into the origin of a single word or phrase, and, with the help of historians, authors, etymologists, and scientists, reveals a surprising science connection. Did you know the origin of the word meme has more to do with evolutionary biology than lolcats? Or that the element cobalt takes its name from a very cheeky goblin from German folklore? …”
Integrating the body with machines is an ongoing interest of mine, these particular 2021 postings stood out but there are other postings (click on the Human Enhancement category or search the tag ‘machine/flesh’),
“Interior Infinite: carnival & chaos, a June 26 – September 5, 2021 show at Polygon Art Gallery (North Vancouver, Canada)” July 26, 2021 posting Note: While this isn’t an art/sci posting it does touch on a topic near and dear to my heart, writers. In particular, the literary theorist, Mikhail Mikhailovich Bakhtin.
“True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” December 3, 2021 posting
2022 and contronyms
I don’t make psychic predictions. As far as I’m concerned, 2022 will be a continuation of 2021, albeit with a few surprises.
My focus on nanotechnology and emerging technologies will remain. I expect artificial intelligence, CRISPR and gene editing (in general), quantum computing (technical work and commercialization), and neuromorphic computing will continue to make news. As for anything else, well, it wouldn’t be a surprise if you knew it was coming.
With regard to this blog, I keep thinking about cutting back so I can focus on other projects. Whether I finally follow through this year is a mystery to me.
Because words and writing are important to me, I’d like to end the year with this, which I found in early December 2021: “25 Words That Are Their Own Opposites” on getpocket.com by Judith Herman, originally written for Mental Floss and published June 15, 2018,
Here’s an ambiguous sentence for you: “Because of the agency’s oversight, the corporation’s behavior was sanctioned.” Does that mean, “Because the agency oversaw the company’s behavior, they imposed a penalty for some transgression,” or does it mean, “Because the agency was inattentive, they overlooked the misbehavior and gave it their approval by default”? We’ve stumbled into the looking-glass world of contronyms—words that are their own antonyms.
1. Sanction (via French, from Latin sanctio(n-), from sancire ‘ratify’) can mean “give official permission or approval for (an action)” or, conversely, “impose a penalty on.”
2. Oversight is the noun form of two verbs with contrary meanings, “oversee” and “overlook.” Oversee, from Old English ofersēon (“look at from above”), means “supervise” (medieval Latin for the same thing: super-, “over,” plus videre, “to see”). Overlook usually means the opposite: “to fail to see or observe; to pass over without noticing; to disregard, ignore.”
3. Left can mean either remaining or departed. If the gentlemen have withdrawn to the drawing room for after-dinner cigars, who’s left? (The gentlemen have left and the ladies are left.)
4. Dust, along with the next two words, is a noun turned into a verb meaning either to add or to remove the thing in question. Only the context will tell you which it is. When you dust, are you applying dust or removing it? It depends whether you’re dusting the crops or the furniture.
The contronym (also spelled “contranym”) goes by many names, including auto-antonym, antagonym, enantiodrome, self-antonym, antilogy and Janus word (from the Roman god of beginnings and endings, often depicted with two faces looking in opposite directions). …
Herman made liberal use, which she acknowledged, of Mark Nichol’s article/list, “75 Contronyms (Words with Contradictory Meanings)” on Daily Writing Tips (Note: Based on the comments, Nichol’s list appears to have been posted sometime in 2011),
3. Bill: A payment, or an invoice for payment
4. Bolt: To secure, or to flee
46. Quantum: Significantly large, or a minuscule part
47. Quiddity: Essence, or a trifling point of contention
The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,
The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.
As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.
Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.
What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.
In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.
In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.
At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).
The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.
The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?
I’ll get back to that last bit, “… what does it mean to be human?” later.
There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.
In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.
David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.
I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.
In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).
Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.
The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)
In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman with an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head.’ In one chat, her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.
The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.
Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.
Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”
AI and creativity
The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post, ‘Finishing Beethoven’s unfinished 10th Symphony,’ for more information from Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University, on the project’s technical side. Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI specialists, and musicians collaborated to finish it.)
The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: a set of probabilities in which an algorithm chooses which note comes next based on likelihood.
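That probabilistic note-picking can itself be sketched in a few lines. The transition table below is invented, not derived from Beethoven, and the Beethoven X project used far more sophisticated models; but the core move, sampling the next note from a probability distribution conditioned on the current one, looks like this:

```python
import random

# Invented transition probabilities: chance of each next note given the
# current note (each row sums to 1.0).
transitions = {
    "C": [("E", 0.5), ("G", 0.3), ("C", 0.2)],
    "E": [("G", 0.6), ("C", 0.4)],
    "G": [("C", 0.7), ("E", 0.3)],
}

def next_note(current, rng):
    """Sample the next note from the current note's distribution."""
    r = rng.random()
    cumulative = 0.0
    for note, p in transitions[current]:
        cumulative += p
        if r < cumulative:
            return note
    return transitions[current][-1][0]  # guard against rounding

rng = random.Random(42)
melody = ["C"]
for _ in range(7):
    melody.append(next_note(melody[-1], rng))
print(" ".join(melody))
```

Seen this way, Mayer's objection has some force: at this level the process is dice-rolling over a learned table, and the open question is whether enough such tables, trained on enough Beethoven, add up to music.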
There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.
What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),
… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI.
This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]
Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]
“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]
So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed it with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),
The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.
[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.
“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),
You can hear Burroughs talk about the technique and how he started using it in 1959.
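Out of curiosity, the mechanics of the cut-up are simple enough to sketch in a few lines of code. This is a minimal software analogue (the fragment size, seed, and sample text are my own choices, not anything Burroughs prescribed): slice a passage into short runs of words, shuffle the pieces, and paste them back together,

```python
import random

def cut_up(text, fragment_size=4, seed=None):
    """Crude software analogue of Burroughs's cut-up method:
    slice the text into short runs of words, shuffle the pieces,
    and paste them back together in a new order."""
    words = text.split()
    fragments = [words[i:i + fragment_size]
                 for i in range(0, len(words), fragment_size)]
    random.Random(seed).shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

source = ("Cutting and rearranging a page of written words "
          "introduces a new dimension into writing")
print(cut_up(source, fragment_size=3, seed=1))
```

No words are added or lost; with a fixed seed the rearrangement is repeatable, which is where the analogy to scissors and paste breaks down.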
There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?
AI and emotions
The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.
Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.
(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)
While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?
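To make the bias problem concrete: one simple diagnostic is to compare a model's accuracy across demographic groups rather than in aggregate. The sketch below uses invented data and labels (nothing from Dr. Howard's lab); the point is only that a single overall accuracy number can hide a large per-group gap,

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label) tuples.
    Returns the fraction of correct predictions for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical emotion-recognition results, grouped by skin tone.
sample = [
    ("lighter", "happy", "happy"), ("lighter", "sad", "sad"),
    ("lighter", "angry", "angry"), ("lighter", "happy", "happy"),
    ("darker", "happy", "happy"), ("darker", "sad", "angry"),
    ("darker", "angry", "sad"), ("darker", "happy", "happy"),
]
print(accuracy_by_group(sample))  # {'lighter': 1.0, 'darker': 0.5}
```

Here the overall accuracy is 75%, which sounds respectable until you see that it is 100% for one group and 50% for the other.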
While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).
“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”
This brings the question back to, what is consciousness?
What scientists aren’t taught
Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation: did science have any morality? (I said no.)
My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.
Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values: if your important and worthwhile research is harming people, you should ‘do no harm’.
The experts, the connections, and the Canadian content
It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.
Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).
I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.
Google invests $4.5 Million in Montreal AI Research
A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].
Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:
Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),
Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.
In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.
“COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.“
– Yoshua Bengio, for Google’s Official Canada Blog
Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.
My hat’s off to Google’s marketing communications and public relations teams.
Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”
Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”
There is this from his LinkedIn profile,
I develop, create and host engaging live experiences & media to foster critical thinking.
I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.
There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)
Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.
Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.
Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.
I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,
Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.
Get ready for Xenobots 2.0.
Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”
Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,
While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.
And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.
Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?
Although the story about the xenobots doesn’t say so, we could also take the evolution of another species into our own hands.
David Suzuki, where are you?
Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.
Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight is a senior writer covering artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.
Knight was using the term in its humorous, derogatory form.
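For what it’s worth, the technical sense of the term is easy to illustrate: take an otherwise competent program and deliberately inject errors into its behaviour. The game-AI sketch below is my own invented example, not drawn from the programme or from Knight’s reporting,

```python
import random

def best_move(scores):
    """A 'perfect' player: always pick the highest-scoring move."""
    return max(scores, key=scores.get)

def dumbed_down_move(scores, error_rate=0.3, rng=random):
    """Artificial stupidity in its technical sense: with some
    probability, ignore the best move and play a random one,
    making the program a more forgiving opponent."""
    if rng.random() < error_rate:
        return rng.choice(list(scores))
    return best_move(scores)

moves = {"block": 9, "advance": 6, "retreat": 2}
print(best_move(moves))         # always "block"
print(dumbed_down_move(moves))  # usually "block", occasionally not
```

Game developers have long used exactly this kind of dumbing down so that computer opponents are beatable.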
The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.
To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.
Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.
For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.
*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence of Joseph Weizenbaum, the programme’s creator, that it could not.
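For the curious, the flavour of ELIZA’s DOCTOR script can be suggested in a few lines: match a keyword pattern, swap the pronouns, and hand the statement back as a question. The handful of rules below are invented for illustration; Weizenbaum’s original script was considerably more elaborate,

```python
import re

# Pronoun swaps so the reply points back at the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(statement):
    """A toy DOCTOR-style responder: two keyword rules plus a
    generic fallback, in the spirit (only) of ELIZA."""
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I am worried about my job"))
# → How long have you been worried about your job?
```

Even this tiny version hints at why users projected understanding onto the programme: the echoed pronoun swap feels attentive despite there being no comprehension at all.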
A quick reminder, ARPICO stands for the Society of Italian Researchers & Professionals in Western Canada and while the upcoming speaker, Jason Halter, doesn’t seem to be Italian, his topic is quintessentially so.
From a November 5, 2021 ARPICO announcement (received via email),
After an extended break since our last (virtual) public event of last April and an unusually difficult summer, for BC in general and ARPICO in particular, we are happy to announce that our activity is restarting this fall. Our next event features a very enticing lecture presenting us with a story that neatly straddles art, science, and history, around one of the most intriguing portraits of the Renaissance, if not ever, by the great Leonardo Da Vinci. Modern day Renaissance man, designer, architect, historian and lover of Italia, Prof. Jason Halter will give us an account of his role, in collaboration with experts in other fields, in the uncovering of the so-called Earlier Mona Lisa, and verifying its authenticity. …
The lecture will take place on November 18th, 2021 at 7:00PM and will be hosted virtually, as our last few events have been. We continue to use BlueJeans as our videoconferencing platform, for which you will only require a web browser (Chrome, Firefox, Edge, Safari, Opera are all supported). Full detailed instructions on how the virtual event will unfold are available on the EventBrite listing here in the Technical Instruction section.
If participants wish to donate to ARPICO, this can be done within EventBrite; this would be greatly appreciated in order to help us continue to build upon our scholarship fund, and to defray the cost of the videoconferencing license.
The announcement goes on to provide details about the topic and the speaker,
Leonardo da Vinci’s Earlier (Isleworth) Mona Lisa:
Time Travel, Pattern Recognition, and the Scientific Method
A fascinating presentation and discussion of Da Vinci’s Earlier Mona Lisa in the context of the paper of the same title that was published in Leonardo Da Vinci’s Mona Lisa: New Perspectives, by Jean-Pierre Isbouts (Ed). Art historians have long debated the question why sources about the origin of the Mona Lisa portrait provide conflicting information. This monograph presents a solution for this quandary: those 16th-century sources don’t agree because they are not talking about the same painting. Jason Halter is one of a team of leading scholars and experts who have contributed to the veracity and authentication of this painting and the process has necessitated embracing technology and methods offered by science, which had not been uncovered before.
Design, Art & Architecture occupy a central position in the practice of Jason Halter & Wonder Inc. Having gained his formative experience under the tutelage of one of the world’s most important designers, Bruce Mau, Jason has won international acclaim for his innovative approach to design & art production. His unfettered curiosity & ability to realize ideas have made him intuit & manifest design solutions in new & novel ways.
As a Renaissance scholar, Halter spent several years teaching art & architecture of the late Gothic and early & late Renaissance in Florence and Rome, having held faculty positions with the University of Toronto & the University of British Columbia. He holds several degrees in history & architecture, & was awarded the prestigious Syracuse Fellowship during his post graduate work in Italy.
Halter was recently invited by the Mona Lisa Foundation, a consortium based in Zurich, Switzerland, to assist in the marketing and research for the ‘Earlier Mona Lisa’, 1503, by Leonardo Da Vinci. Contributing an article entitled ‘Time Travel. Pattern Recognition & the Scientific Method’, to the recent book entitled ‘Earlier Mona Lisa – New Perspectives’, published by the Fielding Graduate University, this new scholarship has established a series of insights and theories regarding this incredibly important artwork by Da Vinci, engaging new vital scientific investigation with critical cultural expertise on the work. The book was released in April 2019, ahead of an exhibition of the ‘Earlier Mona Lisa’ at Palazzo Bastogi in Florence, Italy in June of 2019, corresponding with the 500th anniversary of the passing of Leonardo da Vinci in 1519.
ARPICO offers an overview for how the night will proceed,
WHEN (EVENT): Thurs, November 18th, 2021 at 7:00PM (BlueJeans link active at 6:45PM)
WHERE: Online using the BlueJeans Conferencing platform.
The evening agenda is as follows:
6:45PM – BlueJeans Presentation link becomes active and registrants may join.
7:00pm – Start of the evening Event with introductions & lecture by Prof. Jason Halter
8:00 pm – Q & A Period via BlueJeans Chat Interface
Tickets are Needed
Tickets for this event are FREE. Due to limited seating at the venue, we ask that each household register once and watch the presentation together on a single device. You will receive the event videoconferencing invite link via email in your registration confirmation.
Can I update my registration information? Yes. If you have any questions, contact us at firstname.lastname@example.org
I am having trouble using EventBrite and cannot reserve my ticket(s). Can someone at ARPICO help me with my ticket reservation? Of course, simply send your ticket request to us at email@example.com so we can help you.
I found this about the BlueJeans Conferencing platform on the ‘Leonardo da Vinci’s Earlier (Isleworth) Mona Lisa: Time Travel, Pattern Recognition, and the Scientific Method’ registration page,
The event will be managed via the videoconferencing platform BlueJeans Meetings, by clicking on the link that will be emailed to each registered individual (to the email address provided). Please, note that the event will not be active until 6:45 pm on the day of the event.
At that time, clicking on the link will automatically let you join the event via your web browser (Chrome, Firefox, Safari, Opera should all work smoothly). You are NOT required to download or install anything to your computer. The entire video stream will occur inside your web browser window just like any other website you might visit. There is no security risk or risk to your personal information. You can always join the event late, as this will not interfere with the presentation.
When you open the link you will be prompted to input a guest name. Please use your name that will allow us to identify you, and continue. On the following screen you may be prompted by your browser (depending on your settings) to allow access to use your computer’s microphone and camera. You do not need to approve these if you do not plan to talk or be seen at any time during the Q&A segment. Upon joining you should see a screen similar to the sample image seen below where the various icons superimposed on the pictures of participants will show when you hover the mouse pointer over the BlueJeans browser window.
By default, your system’s camera will be turned on and your microphone will be turned off. If you do not wish to show your face, you can of course do that by clicking on the camera icon like the one on the bottom right of the sample screenshot provided. We ask that you keep your microphone muted, since any background sounds and noises from your environment will be audible and may interfere with the speaker’s voice.
As we have done for the in-person events, we will be recording our virtual ones for future reference.
At any time during the lecture, participants will be able to post comments or questions for the speaker via the “chat” button also visible by hovering over the BlueJeans browser window. The moderator will read them for the speaker by way of a Q&A session at the end of the lecture.
In the days following the event we will be sending all participants a succinct feedback form, which we encourage you to fill in and send back to us.
A little background
It seems this talk is the outcome of a Mona Lisa Foundation initiative, which resulted in a 2019 book (mentioned earlier), Leonardo Da Vinci’s Mona Lisa: New Perspectives, by Jean-Pierre Isbouts (editor).
Few works of art have garnered as much attention from experts and the public as the ‘Mona Lisa’ in the Louvre Museum. By contrast, the ‘Earlier Mona Lisa’ has spent much of its existence hidden from view. Despite this, on the few occasions the painting has been available to be viewed, significant expert opinion has been recorded.
It is probably fair to say that attributing a painting to Leonardo da Vinci with certainty is one of the most difficult tasks in the field of Old Master paintings. To date there are about 18 to 20 paintings “more or less” attributed to Leonardo. One states “more or less” since there is not even one painting about which all the recognized Da Vinci experts agree. It is even disputed whether some parts of the famous ‘Mona Lisa’ portrait in the Louvre are not by the master. One famous expert said that attributing a painting to Leonardo is like “holding in one’s hand a burning iron rod.”
Nonetheless, it is generally accepted that he did paint all or at least the essential parts of those paintings currently attributed to him. In the case of a Da Vinci portrait, an attribution is generally agreed upon if the artist painted only the face, while some experts argue that it is even enough if Da Vinci had simply conceived the structure of the painting. It should be noted that some of his pupils and followers had great talent. The well-known ‘Lady with an Ermine’ and ‘La Belle Ferronière’ represent only recent attributions to Leonardo, having been attributed to pupils for almost 400 years.
Professor Jean-Pierre Isbouts says, “Every interpretation is subject to subsequent dispute. When you look at dating, when you look at authorship, when you look at provenance. So I think it’s just part of the world we live in that Leonardo scholarship happens to be a debating society whether you like it or not.”
The essay goes on to detail the key elements for establishing attribution and presents some contrarian views.
Jason Halter and science
I wish there was a little more detail about the science that Halter will be discussing. Halter’s science background seems to be confined to his work in architecture, which suggests material science. On the other hand, pattern recognition suggests algorithms and artificial intelligence.
As for Bruce Mau’s influence, mentioned as a colleague and mentor in Halter’s biographical details, Mau is a big deal in Canadian design circles who has an amateur’s interest (like mine) in science if his 2004 Massive Change show at the Vancouver Art Gallery is an accurate indicator. The show featured a bioengineered nose being grown in a beaker. (More about Massive Change and bioengineering in my February 21, 2013 posting.)
Every cancer arises following the accumulation of genetic changes known as mutations. Dr. Ryan Morin will discuss how genomics can allow us to understand how specific mutations influence the onset of lymphoma (and other common cancers) and can lead to new and more effective therapies.
There’s a little more detail about Morin’s work on his profile page on the BC Cancer Research Institute website,
Dr. Ryan Morin has been studying the genetic nature of lymphoid cancers using genomic methods for more than a decade. During his doctoral training at the University of British Columbia and BC Cancer, he pioneered the use of transcriptome and whole genome sequencing to identify driver mutations in non-Hodgkin lymphomas. Over the course of his training, he published a series of papers describing some of the most common genetic features of diffuse large B-cell (DLBCL) and follicular lymphomas including EZH2, KMT2D, CREBBP and MEF2B. Following his transition to an independent position at SFU, Dr. Morin has continued to identify genetic features of these and other aggressive lymphomas including non-coding (silent) regulatory drivers of cancer. His laboratory has implemented novel assays for the sensitive detection and genetic characterization of circulating tumour DNA (ctDNA). These “liquid biopsy” approaches continue to be developed as non-invasive methods for monitoring treatment response and resistance. Using these and other modern genomics tools and bioinformatics techniques, his team continues to explore the genetics of relapsed and refractory DLBCL with an ultimate goal of identifying novel biomarkers that predict treatment failure on specific therapies. This work has helped refine our understanding of genetic and gene expression differences that predict poor outcome in DLBCL.
Hopefully, Morin will be talking about the liquid biopsies and other non-invasive methods he and his team use in their work.
Nuclear energy is not usually of much interest to me but there is a Canadian company doing some interesting work in that area. So, before getting to the news about the company’s move, here’s a general description of fusion energy and how General Fusion (the company) is approaching the clean energy problem, from a June 18, 2021 posting by Bob McDonald on the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks blog (Note: Links have been removed),
Vancouver-based fusion energy company General Fusion has entered an agreement with the United Kingdom Atomic Energy Authority to build a nuclear fusion demonstration plant to be operational in 2025. It will take a unique approach to generating clean energy.
There is an industry joke that fusion energy has been 20 years away for 50 years. The quest to produce clean energy by duplicating the processes happening at the centre of the sun has been a difficult and expensive challenge.
It has yet to be accomplished on anything like a commercial scale. That is partly because on Earth the fusion process involves handling materials at extreme pressures and temperatures many times hotter than the surface of the sun.
The nuclear technology that has provided electricity for decades around the world relies on fission, which splits heavy atoms such as uranium into lighter elements, releasing energy. However, this produces hazardous and durable radioactive waste that must be stored, and more catastrophically has led to major accidents at Chernobyl and Fukushima.
Fusion is the opposite of fission. Lighter elements such as hydrogen are heated and compressed to fuse into heavier ones. This releases energy, but with a much smaller legacy of radioactive waste, and no risk of meltdown.
The world’s largest fusion reactor experiment, ITER (Latin for “the way”) [International Thermonuclear Experimental Reactor] is currently under construction in southern France. It’s a massive international collaboration developing fusion technology that’s been explored since it was invented in the Soviet Union in the 1950s. It involves a doughnut-shaped metallic chamber called a tokamak that is surrounded by incredibly powerful superconducting magnets.
An electrically charged gas, or plasma, will be injected into the chamber where the magnets hold it, compressed and suspended, so it does not touch the walls and burn through them. The plasma will be heated to the unbelievable temperature of 150 million C, when fusion begins to take place.
And therein lies the problem. So far, experimental fusion reactors have required more energy to heat the plasma to start the fusion reaction than can be harvested from the reaction itself. Size is part of the problem. Demonstration reactors are small and meant to test equipment and materials, not produce power. ITER is supposed to be large enough to produce 10 times as much power as is required to heat up its plasma.
And that’s the holy grail of fusion: to produce enough power that the nuclear fusion reaction can become self-sustaining.
General Fusion takes a completely different approach by using mechanical pressure to contain and heat the plasma, rather than gigantic electromagnets. A series of powerful pistons surround a container of liquid metal with the hydrogen plasma in the centre. The pistons mechanically squeeze the liquid on all sides at once, heating the fuel by compression the way fuel in a diesel engine is compressed and heated in a cylinder until it ignites.
Exciting, eh? If you have time, you may want to read McDonald’s June 18, 2021 posting for a few more details about General Fusion’s technology and for some embedded images.
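For readers who want a little more of the physics: the reaction most fusion efforts (ITER and General Fusion included) are chasing is deuterium-tritium fusion, and the ‘holy grail’ of a self-sustaining reaction is usually expressed through the energy gain factor Q,

```latex
\begin{align}
  {}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} &\longrightarrow
    {}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}) \\
  Q &= \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}}
\end{align}
```

Q = 1 is breakeven, ITER is designed to reach roughly Q = 10 (ten times as much fusion power out as heating power in, as McDonald notes), and a fully self-sustaining ‘ignited’ plasma corresponds to Q approaching infinity.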
At one point I was under the impression that General Fusion was involved with ITER but that seems to have been a misunderstanding on my part.
I first wrote about General Fusion in a December 2, 2011 posting titled: Burnaby-based company (Canada) challenges fossil fuel consumption with nuclear fusion. (For those unfamiliar with the Vancouver area, there’s the city of Vancouver and there’s Metro Vancouver, which includes the city of Vancouver and other municipalities in the region. Burnaby is part of Metro Vancouver; General Fusion is moving to Sea Island (near Vancouver Airport), in Richmond, which is also in Metro Vancouver.) Kenneth Chan’s October 20, 2021 article for the Daily Hive gives more detail about General Fusion’s new facilities (Note: A link has been removed),
The new facility will span two buildings at 6020 and 6082 Russ Baker Way, near YVR’s [Vancouver Airport] South Terminal. This includes a larger building previously used for aircraft engine maintenance and repair.
The relocation process could start before the end of 2021, allowing the company to more than quadruple its workforce over the coming years. Currently, it employs about 140 people.
The Sea Island [in Richmond] facility will house its corporate offices, primary fusion technology development division, and many of its engineering laboratories. This new facility provides General Fusion with the ability to build a new demonstration prototype to support the commercialization of its magnetized target fusion technology.
The company’s research and development into practical fusion technology as a zero-carbon power solution to address the world’s growing energy needs, while fighting climate change, is supported by the federal governments of Canada, US, and UK.
General Fusion is backed by dozens of large global private investors, including Bezos Expeditions, which is the personal investment entity for Amazon founder Jeff Bezos. It has raised a total of about USD$200 million in financing to date.
“British Columbia is at the centre of a thriving, world-class technology innovation ecosystem, just the right place for us to continue investing in our growing workforce and the future of our company,” said Christofer Mowry, CEO of General Fusion, in a statement.
Earlier this year, YVR also indicated it is considering allowing commercial and industrial developments on several hundred acres of under-utilized parcels of land next to the north and south runways, for uses that complement airport activities. This would also provide the airport with a new source of revenue, after major financial losses from the years-long impact of COVID-19.
Last year (in my November 5, 2020 posting about this event), it was ‘art and design’ and now, it’s ‘art + design’. I wonder if this is a provincial (British Columbia) choice or if this is being adopted generally. I’ll keep an eye out for it.
In the meantime, there’s a November 6, 2021 Science World event as noted by Rebecca Bollwitt in an October 20, 2021 posting on her miss604.com website,
On Saturday, November 6, 2021, Science World will livestream Girls and STEAM for free, with the aim of preparing British Columbia’s youth for the STEAM (Science, Technology, Engineering, Art + Design, Math) heavy job landscape of the future. The event will be opened by keynote speaker and popular science communicator, Dr. Samantha Yammine and runs from 9:00am to 12:00pm.
Girls and STEAM gives girls dreaming of future careers in STEAM a space to learn about a variety of scientific topics and careers from experts and mentors. Last year’s digital event saw 2,500 attendees from across the country participate.
With Canada facing a major gap in gender diversity in STEM careers, Girls and STEAM endeavours to inspire, engage and empower girls to pursue research-focused and technical careers by connecting them with female professionals and learning opportunities to help address the major gap in gender diversity that Canada is facing in these fields.
It’s been too long since I last featured an SFU Café Scientifique here (my last was in a January 19, 2015 posting). I’m glad to have the opportunity to do it again, just before COP26, the United Nations (UN) Climate Change Conference being held in Glasgow, Scotland from October 31 – November 12, 2021.
From an October 19, 2021 SFU Café Scientifique announcement (received via email), Note: I have made some formatting changes,
We are excited to announce that our fall sessions are back by popular demand, and we look forward to having you join us for our next virtual [on Zoom] SFU Café Scientifique!
“Our climate is changing too fast for forests to adapt. Can we help them?”
Rapid climate change has resulted in the decline of tree species, as the spread of naturally resistant genetic variants to combat new conditions is too low. Dr. Jim Mattsson from the Department of Biological Sciences will give local examples of such losses and present research that identifies genetic variants with a high tolerance to new climate conditions that could potentially be used for reforesting affected areas.
According to Mattsson’s SFU profile page, he’s listed as an Associate Professor, Plant Functional Genomics, with an undergraduate degree and PhD obtained at Uppsala University, Sweden. The page gives a brief description of his interests,
Our research focuses on the genetic regulation of vascular tissue development in plants. Specific questions are (1) what is the molecular mechanism behind vascular strand formation? (2) which genes regulate fiber differentiation? (3) which genes regulate the rate of wood formation and the cellular composition of wood? … We are also setting up induced mutant populations of western red cedar and hybrid poplars for identification of mutations in genes of interest through so called TILLING technology. …
Given the focus on forests and trees, I’m a little surprised Mattsson isn’t at the University of British Columbia (UBC), which has a faculty of forestry. As for western red cedar (redcedar), I found a Genome BC (British Columbia) project, “Healthy Future Forests,” which focuses on western red cedars,
University of British Columbia researchers and British Columbia’s government are joining forces to protect and enhance one of the province’s most iconic symbols and valuable resources.
The life of the majestic western redcedar and the history of British Columbia have been intertwined for as long as humans have walked, fished and forested the West Coast. Known as arborvitae – the tree of life – the redcedar is both British Columbia’s official tree and a $1-billion annual industry. (Scientists spell “redcedar” as one word to indicate a false classification [emphasis mine]; the redcedar is actually a member of the cypress family.)
Industry will face a challenge as they transition from old growth forests, with trees more than 250 years old, to younger second-growth forests that have sprung up following human-caused events like logging or natural disturbances such as wildfire. Because of their size and age, second growth forests are less productive than old growth forests, generating less wood with lower durability.
In addition, the health of redcedar forests can be negatively impacted by shifts in the quantity and types of pests influenced by climate change.
Traditional breeding strategies for western redcedar can take decades to produce the desired traits of wood durability inherent in old growth trees. Dr. Joerg Bohlmann of the University of British Columbia is working with Dr. John H. Russell of British Columbia’s Ministry of Forests, Lands and Natural Resource Operations to apply genomic selection to reduce that time by up to 30 years. Genomic selection will accelerate the development of tree populations that are resistant to multiple pests and reduce the need for time-consuming and costly phenotyping (this involves observing the characteristics of an organism resulting from the interaction of its genes with the environment). Because key industry producers and users of these trees are actively participating in the project, technology transfer and commercialization will be seamless.
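For readers curious what “genomic selection” means in practice: the core idea is to fit a statistical model that predicts a slow-to-measure trait (here, wood durability) directly from DNA markers, so promising seedlings can be ranked without waiting decades for phenotypes. Here is a minimal simulated sketch of the ridge-regression (“RR-BLUP”-style) flavour of the idea; the marker counts, effects, and trait values are all invented for illustration and are not data from the actual UBC/BC Ministry project:

```python
import numpy as np

rng = np.random.default_rng(0)

n_trees, n_markers = 200, 500
# Simulated SNP genotypes for a "training" population of phenotyped trees,
# coded 0/1/2 (copies of the minor allele at each marker).
X = rng.integers(0, 3, size=(n_trees, n_markers)).astype(float)

# Hypothetical truth: only a handful of markers influence durability.
true_effects = np.zeros(n_markers)
true_effects[:20] = rng.normal(0.0, 1.0, 20)
y = X @ true_effects + rng.normal(0.0, 1.0, n_trees)  # phenotype = genetics + noise

# Ridge regression estimate of all marker effects at once (works even though
# there are more markers than trees, because the penalty stabilises the fit).
lam = 10.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Predict breeding values for new, unphenotyped seedlings from DNA alone,
# then advance the top candidates -- no decades-long field trial required.
X_new = rng.integers(0, 3, size=(50, n_markers)).astype(float)
gebv = X_new @ beta_hat               # genomic estimated breeding values
best = np.argsort(gebv)[::-1][:5]     # indices of the top 5 seedlings
```

The time saving the article describes comes from exactly this substitution: a genotyping assay on a seedling replaces decades of growing the tree and measuring its wood.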
I haven’t forgotten the poplars: an April 7, 2014 posting features three different projects on poplars (scroll down about 40% of the way for the UBC work).
Perhaps Mattsson is involved in either or both of the western redcedar and poplar projects at UBC.
Coming up in November 2021
SAVE THE DATE: REGISTRATION OPENING SOON. Thursday, November 25, 2021, 5:00 pm – 6:30 pm
Dr. Ryan Morin, SFU Department of Molecular Biology and Biochemistry
“How genome research is influencing our understanding of B-cell lymphomas”
I gather the next one will be about cancer.
*October 28, 202 changed to October 28, 2021 on November 9, 2021.