Category Archives: pop culture

‘Six’ degrees of Kevin Bacon gene

It must have been a lighthearted moment that led to this new gene being called “degrees of Kevin Bacon” (dokb). Here’s more about the gene and the research from a May 24, 2024 University of Toronto (UofT) news release by Chris Sasaki, Note: Links have been removed,

A team of researchers from the University of Toronto has identified a gene in fruit flies that regulates the types of connections between flies within their “social network.”

The researchers studied groups of two distinct strains of Drosophila melanogaster fruit flies and found that one strain showed different types or patterns of connections within their networks than the other strain.

The connectivity-associated gene in the first strain was then isolated. When it was swapped with the other strain, the flies exhibited the connectivity of the first strain.

The researchers named the gene “degrees of Kevin Bacon” (dokb), for the prolific Hollywood star of such films as Footloose and Apollo 13. Bacon’s wide-ranging connections to other actors are the subject of the parlour game called “The Six Degrees of Kevin Bacon,” which plays on the popular idea that any two people on Earth can be linked through six or fewer mutual acquaintances.

“There’s been a lot of research around whether social network structure is inherited, but that question has been poorly understood,” says Rebecca Rooke, a post-doctoral fellow in the department of ecology and evolutionary biology in the Faculty of Arts & Science and lead author of the paper, published in Nature Communications. “But what we’ve now done is find the gene and proven there is a genetic component.”

The work was carried out as part of Rooke’s PhD thesis in Professor Joel Levine’s laboratory at U of T Mississauga before he moved to the department of ecology and evolutionary biology, where he is currently chair.

“This gives us a genetic perspective on the structure of a social group,” says Levine. “This is amazing because it says something important about the structure of social interactions in general and about the species-specific structure of social networks.

“It’s exciting to be thinking about the relationship between genetics and the group in this way. It may be the first time we’ve been able to do this.”

The researchers measured the type of connection by observing and recording on video groups of a dozen male flies placed in a container. Using software previously developed by Levine and post-doctoral researcher Jon Schneider, the team tracked the distance between flies, their relative orientation and the time they spent in close proximity. Using these criteria as measures of interaction, the researchers calculated the type of connection or “betweenness centrality” of each group.

Rooke, Levine and their colleagues point out that individual organisms with high betweenness centrality within a social network can act as “gatekeepers” who play an important role in facilitating interactions within their group.
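Betweenness centrality has a precise graph-theoretic meaning: for each node, the fraction of shortest paths between every other pair of nodes that pass through it. As an illustrative sketch only (not the tracking software the Levine lab actually used), a brute-force version in Python shows why a “gatekeeper” node scores highest:

```python
from itertools import permutations

def all_shortest_paths(graph, s, t):
    """Enumerate every shortest path from s to t by breadth-first search."""
    paths, frontier, seen = [], [[s]], {s}
    while frontier and not paths:
        next_frontier, next_seen = [], set()
        for path in frontier:
            for nb in graph[path[-1]]:
                if nb == t:
                    paths.append(path + [nb])        # shortest length reached
                elif nb not in seen:
                    next_frontier.append(path + [nb])
                    next_seen.add(nb)
        frontier, seen = next_frontier, seen | next_seen
    return paths

def betweenness(graph):
    """Unnormalized betweenness: for each ordered pair (s, t), credit each
    intermediate node with the fraction of shortest s-t paths through it."""
    score = {v: 0.0 for v in graph}
    for s, t in permutations(graph, 2):
        paths = all_shortest_paths(graph, s, t)
        for v in graph:
            if v not in (s, t) and paths:
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# Toy "fly network": node c bridges the {a, b} and {d, e} cliques,
# so c is the gatekeeper and gets all the betweenness.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d", "e"],
     "d": ["c", "e"], "e": ["c", "d"]}
print(betweenness(g))  # c scores 8.0; every other node scores 0.0
```

In the toy network every shortest path between the two cliques must pass through c, which is the position Kevin Bacon is said to occupy in Hollywood.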

Gatekeepers can influence factors like the distribution of food or the spread of disease. They also play a role in maintaining cohesion, enhancing communication and ensuring better overall health of their group.

In humans, betweenness centrality can even affect the spread of behaviours such as smoking, drug use and divorce.

At the same time, the researchers point out that social networks are unbiased and favour neither “good” nor “bad” outcomes. For example, high betweenness centrality in a network of scientists can increase potential collaborators; on the other hand, high betweenness centrality in another group can lead to the spread of a disease like COVID-19.

“You don’t get a good or a bad outcome from the structure of a network,” explains Levine. “The structure of a network could carry happiness or a disease.”

Rooke says an important next step will be to identify the overall molecular pathway that the gene and its protein are involved in “to try to understand what the protein is doing and what pathways it’s involved in – the answers to those questions will really give us a lot of insight into how these networks work.”

And while the dokb gene has only been found in flies so far, Rooke, Levine and their colleagues anticipate that similar molecular pathways between genes and social networks will be found in other species.

“For example, there’s a subset of cells in the human brain whose function relates to social experience – what in the popular press might be called the ‘social brain,’” says Levine.

“Getting from the fly to the human brain – that’s another line of research. But it almost has to be true that the things that we’re observing in insects will be found in a more nuanced, more dispersed way in the mammalian brain.”

Katie Hunt wrote a May 2, 2024 article for CNN about the research shortly after the paper was published; it included some intriguing personal details and a good explanation of why fruit flies are used in genetic research. Note: Links have been removed,

Many species of animals form social groups and behave collectively: An elephant herd follows its matriarch, flocking birds fly in unison, humans gather at concert events. Even humble fruit flies organize themselves into regularly spaced clusters, researchers have found.

…

And now, scientists believe there is evidence that how central you are to your social network, a concept they call “high betweenness centrality,” could have a genetic basis. New research published Tuesday in the journal Nature Communications has identified a gene responsible for regulating the structure of social networks in fruit flies.

The study’s authors named the gene in question “degrees of Kevin Bacon,” or dokb, after a game that requires players to link celebrities to actor Bacon in as few steps as possible via the movies they have in common.

Inspired by “six degrees of separation,” the theory that nobody is more than six relationships away from any other person in the world, the game became a viral phenomenon three decades ago.

Senior author Joel Levine, a professor of biology at the University of Toronto who went to high school with Bacon in Philadelphia [emphases mine], said the actor was a good human example of “high betweenness centrality.”

Aware of Levine’s link with Bacon, study lead author Rebecca Rooke, a postdoctoral fellow of biology at the University of Toronto Mississauga, suggested the gene’s name.

Levine said that the “degrees of Kevin Bacon” gene was specific to fruit flies’ central nervous systems, but he thought similar genetic pathways would exist in other animals, including humans. The study opened up new opportunities for exploring the molecular evolution of social networks and collective behavior in other animals.

Drosophila melanogaster, best known for hovering around fruit bowls, has been a model organism to explore genetics for more than 100 years. The insects breed quickly and are easy to keep.

While flies are very different from humans, the creatures have long been central to biological and genetic discovery.

“Fruit flies are useful because of the power of manipulation. We can investigate things experimentally in Drosophila that we can only examine indirectly in most organisms,” Moore said.

The tiny creatures share nearly 60% of our genes, including those responsible for Alzheimer’s, Parkinson’s, cancer and heart disease. Research involving fruit flies has previously shed light on the mechanisms of inheritance, circadian rhythms and mutation-causing X-rays.

Here’s a link to and a citation for the paper,

The gene “degrees of kevin bacon” (dokb) regulates a social network behaviour in Drosophila melanogaster by Rebecca Rooke, Joshua J. Krupp, Amara Rasool, Mireille Golemiec, Megan Stewart, Jonathan Schneider & Joel D. Levine. Nature Communications volume 15, Article number: 3339 (2024)
DOI: https://doi.org/10.1038/s41467-024-47499-8 Published online: 30 April 2024

This paper is open access.

h/t Rae Hodge’s May 30, 2024 article on Salon.com. Otherwise, I would have missed this ‘science meets pop culture’ story.

Happy Canada Day! Breakdancing at the 2024 Paris Summer Olympics: physics in action + heat, mosquitoes, and sports

Happy July 1, 2024, also known as, Canada Day!

Onto breakdancing (or breaking), which for the first time will be an official event at the 2024 Paris Summer Olympics. Amy Pope, principal lecturer, physics and astronomy, Clemson University (South Carolina, US), has written a June 12, 2024 essay for The Conversation that describes breakdancing as physics in action, (h/t June 13, 2024 news item in phys.org), Note: Links have been removed,

Two athletes square off for an intense dance battle. The DJ starts spinning tunes, and the athletes begin twisting, spinning and seemingly defying gravity, respectfully watching each other and taking turns showing off their skill.

The athletes converse through their movements, speaking through a dance that celebrates both athleticism and creativity. While the athletes probably aren’t consciously thinking about the physics behind their movements, these complex and mesmerizing dances demonstrate a variety of different scientific principles.

Breaking, also known as breakdancing, originated in the late 1970s in the New York City borough of the Bronx. Debuting as an Olympic sport in the 2024 Summer Olympics, breaking will showcase its dynamic moves on a global stage. This urban dance style combines hip-hop culture, acrobatic moves and expressive footwork.

Since its inception, breaking has evolved into a competitive art form. An MC narrates the movements, while a DJ mixes songs to create a dynamic atmosphere. The Olympics will feature two events: one for men, called B-boys, and one for women, called B-girls. In these events, athletes will face off in dance battles.

… Success in this sport requires combining dance moves from three basic categories: top rock, down rock and freeze.

And now for the physics of it all, from Pope’s June 12, 2024 essay, Note: Links have been removed,

Top rock moves [emphasis mine] are performed while standing up, focusing on fancy footwork and hand movements. These movements are reminiscent of hip-hop dancing.

Top rock moves rely on having lots of friction between an athlete’s shoes and the floor. Friction is the force [emphasis mine] that resists when you slide something across a surface.

This friction allows the athlete to take very quick steps and to stop abruptly. The dancers must intuitively understand inertia, or the fact that their bodies will continue in the direction they’re moving unless they are acted upon by an external force. To stop abruptly, athletes need to engage their muscles, getting their shoes to grip the ground to stop themselves from continuing forward.
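The stop-on-a-dime footwork described above can be put in rough numbers. This is a back-of-the-envelope sketch, not anything from Pope’s essay: the friction coefficient of 1.0 is an assumed sneaker-on-wood value, and the dancer is treated as a sliding block.

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed, mu=1.0):
    """Distance needed to stop if friction alone decelerates the dancer.

    Friction force f = mu*m*g gives deceleration a = mu*g (mass cancels),
    so the stopping distance is v**2 / (2*mu*g). mu = 1.0 is an assumed
    sneaker-on-wood coefficient, not a measured value.
    """
    return speed**2 / (2 * mu * G)

# A dancer moving at 2 m/s on a grippy floor stops in roughly 0.2 m;
# halve the friction and the stopping distance doubles.
print(stopping_distance(2.0), stopping_distance(2.0, mu=0.5))
```

The mass cancelling out is why a heavier and a lighter dancer with the same shoes can stop equally sharply.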

Down rock moves [emphasis mine] are performed while on the floor. Athletes may spin in circles with their head, back, elbows or shoulders touching the ground and their feet in the air. B-boys and B-girls rely heavily on an internal knowledge of physics to complete these moves.

Consider the physics of a backspin. A backspin occurs when the athlete is on their back with their feet lifted in the air, rotating around a specific area of their back.

Sitting on the floor, the athlete’s left foot stays in contact with the floor while they spread their right leg wide, gathering linear momentum [emphasis mine] as they sweep their right leg toward their left foot in a wide arc. Then, they release their left leg from contact with the ground and roll onto their back.

Now that only their back is in contact with the ground, the linear momentum from their leg turns into angular momentum [emphasis mine], which rotates the athlete around an axis that extends upward from their back’s contact point with the ground. This move turns magical when they bring their legs and arms inward, toward the axis of rotation. This principle is called conservation of angular momentum.

When an athlete brings their mass in more closely to the axis of rotation, the athlete’s rotations speed up. Extending their legs and arms once again and moving their mass away from the axis of rotation will cause the competitor to slow their rotation speed down. Once they slow down, they can transition to another move.
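The speed-up from tucking in follows directly from conservation of angular momentum, L = Iω. A minimal sketch, using made-up moments of inertia for illustration (the real values depend on the dancer’s body):

```python
def spin_rate_after_tuck(I_extended, omega, I_tucked):
    """Conservation of angular momentum: L = I * omega stays constant with
    the back as a near-frictionless pivot, so omega_new = I_old*omega/I_new."""
    return I_extended * omega / I_tucked

# Illustrative (assumed) moments of inertia: limbs spread ~8 kg*m^2,
# tucked ~2 kg*m^2. Pulling everything in quadruples the spin rate.
print(spin_rate_after_tuck(8.0, 1.0, 2.0))  # 1 rev/s becomes 4 rev/s
```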

Freeze [emphasis mine] occurs when athletes come to a stop in a funky pose, often in time to the music and in an upside-down position. To freeze effectively, the athlete must have full control over their center of mass, placing it right above the point of their body that is in contact with the floor. The center of mass is the average position of all the parts of an athlete’s body, weighted according to their masses – the “balance point” where the athlete’s entire mass seems to be concentrated.

Athletes are most stable when their center of mass is as close to the ground as possible. You will see many competitors freeze with arms bent in an effort to lower their center of mass. This lowered center of mass reduces their distance from the floor and minimizes the tendency of their body to rock to one side or the other due to torque.

Torque is a twisting force [emphasis mine], like the force used to turn a wrench. The torque depends on two things: the amount of force you apply, and how far from the pivot point you apply the force. With an athlete’s center of mass closer to the ground, the athlete decreases the distance between the pivot point – the ground – and where the force of gravity is applied – the athlete’s center of mass.
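The payoff of a low center of mass can be seen in the torque formula τ = m·g·h·sin θ. A sketch with hypothetical numbers (the 70 kg athlete and 5° tilt are assumptions for illustration):

```python
import math

def toppling_torque(mass, com_height, tilt_deg):
    """Torque of gravity about the floor contact point when the athlete
    tilts by tilt_deg degrees: tau = m*g*h*sin(theta). Lowering the
    center-of-mass height h proportionally lowers the torque trying to
    topple a freeze."""
    return mass * 9.81 * com_height * math.sin(math.radians(tilt_deg))

# Hypothetical 70 kg athlete tilted 5 degrees off balance: halving the
# center-of-mass height exactly halves the destabilizing torque.
high = toppling_torque(70, 1.0, 5)
low = toppling_torque(70, 0.5, 5)
print(high, low)
```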

Athletes need great strength to halt their motion mid-movement because they have to apply a force to resist the change in inertia.

It’s not just about the moves – clothing is a factor too. From Pope’s June 12, 2024 essay,

Many sports require a specific uniform. Breaking doesn’t – an athlete can wear whatever they want – but the right outfit will maximize their chance of success.

The athlete wants a shirt that minimizes the friction between their body and the ground during a spin. Lettering or images on the back of the shirt will add friction, which hinders an athlete’s ability to perform some down rock moves. An athlete may choose to wear long sleeves if they plan to slide on their elbows, as bare skin in contact with the floor provides more friction.

Athletes also have to think about the headgear they wear. …

There’s a bit more information about the breakdancing competition on the 2024 Olympics website. I cannot find a full list of athletes for the August 9, 2024 (B-Girls) and August 10, 2024 (B-Boys) competitions. There is this June 2, 2024 article (from the Associated Press) on the CBC (Canadian Broadcasting Corporation) online news website,

Victor Montalvo (B-boy Victor), United States: A breaker who describes himself as a student of old school b-boys from the founding era of hip-hop, the 30-year-old Montalvo, who is from Kissimmee, Florida, qualified for Paris by besting all other b-boys at the 2023 WDSF World Breaking Championship in Belgium.

Sunny Choi (B-girl Sunny), United States: The 35-year-old Choi, a cheerful Queens, New York-bred breaker, has long been an ambassador for b-girls globally. She qualified for the Paris Games with her win at the 2023 Pan American Games in Chile.

Philip Kim (B-boy Phil Wizard), Vancouver, Canada: Consistently ranked in the top three b-boys in the international breaking competitive community, Kim secured a spot for Paris when he came out on top at last year’s Pan American Games.

Dominika Banevič (B-girl Nicka), Lithuania: Banevič was the youngest in her category at last year’s WDSF World Breaking Championship, when she punched her ticket to Paris. Banevič turns 17 this month.

I thought the competition would be dominated by Americans and certainly wasn’t expecting to see a Lithuanian (Dominika Banevič or ‘Nicka’) listed as a competitor to watch. The Canadian (Philip Kim or ‘Phil Wizard’) is also a surprise. Who knew Vancouver was home to a leading B-boy?

Two comments: heat and mosquitoes (dengue and other fevers)

The organizers of the Paris 2024 Summer Olympics are to be complimented for their work towards making the games ‘green’ but that is a complex process.

Heat

For example, the Canadian Broadcasting Corporation (CBC) ran a news item on The National news telecast on June 17, 2024 (see telecast for embedded video clip) regarding concerns about and preparations for heat,

Preparing for extreme heat at the Paris Olympics

Paris Olympic organizers plan to make this summer’s games the greenest ever, but that includes offering less air conditioning to cut down on energy use. [emphases mine] As temperatures rise globally, some suggest the organizers should take extreme heat into account when awarding cities with the next big Olympic games.

Some of the reporting in the CBC news item is based on information from a June 18, 2024 University of Portsmouth (UK) press release, Note: Links have been removed,

Leading athletes are warning that intense heat at the Paris Olympics in July-August 2024 could lead to competitors collapsing and in worst case scenarios dying during the Games. [emphasis mine]

Eleven Olympians, including winners of five World Championships and six Olympic medals, have come together with climate scientists and leading heat physiologists Professor Mike Tipton and Dr Jo Corbett from the University of Portsmouth to unpack the serious threat extreme heat poses for athletes in a new Rings of Fire report.

Dr Corbett, Associate Professor of Environmental Physiology in the School of Sport, Health and Exercise Science at the University of Portsmouth, said: “A warming planet will present an additional challenge to athletes, which can adversely impact on their performance and diminish the sporting spectacle of the Olympic Games. Hotter conditions also increase the potential for heat illness amongst all individuals exposed to high thermal stress, including officials and spectators, as well as athletes.”

“For athletes, from smaller performance-impacting issues like sleep disruption and last-minute changes to event timings, to exacerbated health impacts and heat related stress and injury, the consequences can be varied and wide-ranging. With global temperatures continuing to rise, climate change should increasingly be viewed as an existential threat to sport,” said Lord Sebastian Coe, President of World Athletics and four-time Olympic medallist.

The Tokyo Games became known as the “hottest in history,” with temperatures exceeding 34°C and humidity reaching nearly 70 per cent, leading to severe health risks for competitors. The Paris Games have the potential to surpass that, with climate change driven by the burning of fossil fuels contributing to record heat streaks during the past months.

2023 was the hottest year on record according to the EU’s [European Union] Copernicus Climate Change Service and 2024 has continued this streak. April 2024 was warmer globally than any previous April in the record books, said experts at Copernicus.

The Rings of Fire report discusses the deadly heatwave in France in 2003 – which killed over 14,000 people – and subsequent years of record-breaking temperatures, exceeding 42°C. It underscores the heightened risk of extreme heat during the Paris Olympics, especially considering the significant rise in the region’s temperatures since the city last hosted the Games a century ago.

You can find the Rings of Fire report here and the Copernicus Climate Change Service here.

Mosquitoes and dengue and other fevers

Obviously, the world is changing as you can see in this June 18, 2024 Institut Pasteur press release (also on EurekAlert),

Olympics: how many days does it take for mosquitoes in Greater Paris to transmit arboviruses, and what preventive measures are needed?

The number of imported cases of dengue in the Greater Paris region increased significantly in the first few months of 2024. In the run-up to the Olympic Games, with huge numbers of international visitors set to come to Paris – especially from dengue-endemic countries – we need to be vigilant. Scientists from the Institut Pasteur, in collaboration with the Regional Mosquito Control Agency (ARD) and the National Reference Center for Arboviruses (Inserm-Irba), have demonstrated that the tiger mosquito, now present in Greater Paris, is capable of transmitting five viruses (West Nile, chikungunya, Usutu, Zika and dengue) within different time frames ranging from 3 to 21 days, at an external temperature of 28°C. These results highlight the importance of stepping up surveillance of imported cases of arboviruses this summer. The study was published on May 16 [2024] in Eurosurveillance.

Between January 1 and April 19, 2024, 1,679 imported dengue cases were reported in mainland France, 13 times more than the number reported over the same period the previous year (source SPF). It is likely that this number will increase during the Olympic Games, as more people come to Paris from countries that are endemic regions for other arboviruses. The vector for dengue transmission is Aedes albopictus, more commonly known as the tiger mosquito. Arboviruses are transmitted when a female mosquito bites a virus carrier and ingests viral particles. One particular feature of arboviruses is that they can replicate in mosquitoes (unlike other viruses such as influenza, which are destroyed when ingested by mosquitoes). The viral particles multiply and spread within the mosquito, reaching the salivary glands in a few days. When the female mosquito bites another human, she injects the virus while taking her blood meal.

The tiger mosquito is now present in 78 départements in mainland France, and this together with other climate change-related factors is facilitating vector-borne transmission. Scientists from the Institut Pasteur’s Arboviruses and Insect Vectors Unit, in collaboration with the Regional Vector Control Agency (ARD) and the National Reference Center for Arboviruses (Inserm-Irba), therefore decided to analyze the ability of Aedes albopictus in Greater Paris to transmit five arboviruses at a temperature of 28°C, which is likely in the region at this time of year, and counted the number of days between initial infection and the possibility of the virus being transmitted through a further mosquito bite. As well as the dengue, chikungunya and Zika viruses, which we already know can be transmitted by the tiger mosquito, the scientists studied the Usutu and West Nile viruses, which are naturally transmitted by another mosquito species, Culex pipiens (known as the “common mosquito”). Culex pipiens mosquitoes transmit viruses to humans after feeding on birds, which act as viral reservoirs.

Tiger mosquito susceptible to five arboviruses

Working in a BSL3 laboratory, the scientists studied the ability of tiger mosquitoes to transmit these five viruses and determined the extrinsic incubation period required for the virus to reach the mosquito’s salivary glands in sufficient quantities to infect a human. At 28°C, West Nile virus needs three days before it can be transmitted to humans by mosquitoes. The incubation period is 3 to 7 days for chikungunya and Usutu, and 14 to 21 days for dengue and Zika.(1) 

This information is crucial to gauge the additional risk represented by the upcoming Olympic Games in Paris, which will see significant intermingling of populations combined with the return of travelers from endemic regions and a season conducive to mosquito proliferation. The findings can also be used to develop suitable control strategies.

“If a case of dengue is detected in the Greater Paris region, we now know that disinsection is required within 21 days. We can use these results to adjust our time frame for action and optimize our approach,” explains Anna-Bella Failloux, Head of the Institut Pasteur’s Arboviruses and Insect Vectors Unit, who led the study. “Depending on the temperatures we experience in and around Paris this summer, our findings will be essential for adjusting control measures as needed.”
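The incubation figures lend themselves to a simple lookup. A minimal sketch, using only the extrinsic incubation periods reported in the press release for 28°C – the upper bound of each range is what sets the intervention window Failloux describes:

```python
# Extrinsic incubation periods (min_days, max_days) at 28 C for the tiger
# mosquito, as reported in the Institut Pasteur press release above.
EIP_DAYS = {
    "West Nile": (3, 3),
    "chikungunya": (3, 7),
    "Usutu": (3, 7),
    "dengue": (14, 21),
    "Zika": (14, 21),
}

def disinsection_window(virus):
    """Days after a detected human case within which vector control should
    act: the longest time the virus needs to reach a mosquito's saliva."""
    return EIP_DAYS[virus][1]

print(disinsection_window("dengue"))  # 21 days, matching the quote above
```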

What precautions should be taken in the run-up to the Olympics?

Health care professionals are trained to detect the symptoms of arboviruses if people indicate that they have recently been to an endemic country. The difficulty of surveillance is that many cases are asymptomatic: although dengue is a notifiable disease, up to 80% of cases lead to few or no symptoms. If a diagnosis of one of these diseases is confirmed, an inquiry is carried out by France’s Regional Health Agencies to determine where the individuals live or spent time in the days before the diagnosis, so that they can identify the areas where disinsection is needed. Anyone coming back from a foreign trip who experiences fever or aches is advised to see their family physician immediately and indicate the region they recently returned from.

“The alert system in France is effective. The applicable procedure and measures are already well established because France’s overseas territories in endemic regions have provided us with expertise in these diseases and know-how on epidemiological monitoring. My team is affiliated with the Arbo-France network, and we are contacted as soon as an arbovirus is detected,” continues Anna-Bella Failloux.

Since 2006, vector control measures in France have led to increased surveillance of tiger mosquitoes between May 1 and November 30 each year. This involves monitoring mosquito populations in areas where they are likely to be present; disease surveillance coordinated by Santé publique France based on reporting of viruses such as dengue, chikungunya and Zika by health care professionals; and raising awareness among people living in areas where mosquitoes have been reported. France’s Regional Health Agencies (ARS) and their operators are responsible for managing reporting, monitoring the presence of mosquitoes and taking rapid action in response to human cases of infection (vector control).

This research, which focused on mosquitoes in the Greater Paris region for this first study, will soon be extended to the rest of mainland France. Extrinsic incubation periods vary from one tiger mosquito population to the next because of differences in their genetic makeup and in local temperatures. 

Find out more:

Video: “We are going to have to learn to live with tiger mosquitoes” – Anna-Bella Failloux

Disease-carrying mosquitoes – French Ministry of Employment, Health and Solidarity (sante.gouv.fr)

  1. It is important to point out that for Usutu and West Nile, the ability of tiger mosquitoes to transmit these viruses to humans in real-life conditions, outside the experimental setting, is yet to be demonstrated, as they are naturally transmitted by Culex pipiens, another mosquito species.

Here’s a link to and a citation for the paper,

Aedes albopictus is a competent vector of five arboviruses affecting human health, greater Paris, France, 2023 by Chloé Bohers, Marie Vazeille, Lydia Bernaoui, Luidji Pascalin, Kevin Meignan, Laurence Mousson, Georges Jakerian, Anaïs Karchh, Xavier de Lamballerie, Anna-Bella Failloux. Euro Surveill. 2024; 29 (20): pii=2400271. DOI: https://doi.org/10.2807/1560-7917.ES.2024.29.20.2400271

This paper is open access.

I covered the movement of dengue fever and malaria into the Northern Hemisphere in an August 10, 2023 posting,

The World Health Organization (WHO) notes that dengue fever cases have increased exponentially since 2000 (from the March 17, 2023 version of the WHO’s “Dengue and severe dengue” fact sheet),

Global burden

The incidence of dengue has grown dramatically around the world in recent decades, with cases reported to WHO increasing from 505 430 cases in 2000 to 5.2 million in 2019. A vast majority of cases are asymptomatic or mild and self-managed, and hence the actual numbers of dengue cases are under-reported. Many cases are also misdiagnosed as other febrile illnesses (1).

One modelling estimate indicates 390 million dengue virus infections per year of which 96 million manifest clinically (2). Another study on the prevalence of dengue estimates that 3.9 billion people are at risk of infection with dengue viruses.

The disease is now endemic in more than 100 countries in the WHO Regions of Africa, the Americas, the Eastern Mediterranean, South-East Asia and the Western Pacific. The Americas, South-East Asia and Western Pacific regions are the most seriously affected, with Asia representing around 70% of the global disease burden.

Dengue is spreading to new areas including Europe, [emphasis mine] and explosive outbreaks are occurring. Local transmission was reported for the first time in France and Croatia in 2010 [emphasis mine] and imported cases were detected in 3 other European countries.
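The WHO figures quoted above work out to roughly a tenfold rise over 19 years. A quick back-of-the-envelope check (the case counts are the fact sheet’s; the implied growth rate is my arithmetic, not a WHO figure):

```python
cases_2000, cases_2019 = 505_430, 5_200_000  # WHO-reported dengue cases

fold_increase = cases_2019 / cases_2000        # roughly 10x overall
annual_growth = fold_increase ** (1 / 19) - 1  # implied compound growth rate
print(f"{fold_increase:.1f}x overall, about {annual_growth:.0%} per year")
```

A sustained growth rate of that magnitude, compounding year over year, is what the WHO means by “grown dramatically.”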

The researchers from the University of Central Florida (UCF) couldn’t have known when they began their project to study mosquito bites and disease that Florida would register its first malaria cases in 20 years this summer, …

It seems pretty clear that there’s increasing concern about mosquito-borne diseases no matter where you live.

It looks like mega-sports events attract more visitors than you might expect.

Hype, hype, hype: Vancouver’s Frontier Collective represents local tech community at SxSW (South by Southwest®) 2024 + an aside

I wonder if Vancouver’s Mayor Ken Sim will be joining the folks at the giant culture/tech event known as South by Southwest® (SxSW) later in 2024. Our peripatetic mayor seems to enjoy traveling to sports events (the 2022 FIFA World Cup in Qatar), to Los Angeles to convince producers of the hit television series “The Last of Us” to film its second season in Vancouver, and to Austin, Texas for SxSW 2023. Note: FIFA is Fédération internationale de football association or ‘International Association Football Federation’.

It’s not entirely clear why Mayor Sim’s presence was necessary at any of these events. In October 2023, he finished his first year in office; a business owner and accountant, Sim is best known for his home care business, “Nurse Next Door” and his bagel business, “Rosemary Rocksalt,” meaning he wouldn’t seem to have much relevant experience with sports and film events.

I gather Mayor Sim’s presence was part of the 2023 hype (for those who don’t know, it’s from ‘hyperbole’) where SxSW was concerned, from the Vancouver Day at SxSW 2023 event page,

Vancouver Day

Past(03/12/2023) 12:00PM – 6:00PM

FREE W/ RSVP | ALL AGES

Swan Dive

The momentum and vibrancy of Vancouver’s innovation industry can’t be stopped!

The full day event will see the Canadian city’s premier technology innovators, creative tech industries, and musical artists show why Vancouver is consistently voted one of the most desirable places to live in the world.

We will have talks/panels with the biggest names in VR/AR/Metaverse, AI, Web3, premier technology innovators, top startups, investors and global thought-leaders. We will keep Canada House buzzing throughout the day with activations/demos from top companies from Vancouver and based on our unique culture of wellness and adventure will keep guests entertained, and giveaways will take place across the afternoon.

The Canadian city is showing why Vancouver has become the second largest AR/VR/Metaverse ecosystem globally (with a higher concentration of 3D talent than anywhere else in the world), a leader in Web3 with companies like Dapper Labs leading the way and becoming a hotbed in technology like artificial intelligence.

The Frontier Collective’s Vancouver’s Takeover of SXSW is a signature event that will enhance Vancouver as the Innovation and Creative Tech leader on the world stage. It is an opportunity for the global community to encounter cutting-edge ideas, network with other professionals who share a similar appetite for a forward focused experience and define their next steps.

Some of our special guests include City of Vancouver Mayor Ken Sim [emphasis mine], Innovation Commissioner of the Government of BC- Gerri Sinclair, Amy Peck of Endeavor XR, Tony Parisi of Lamina1 and many more.

In the evening, guests can expect a special VIP event with first-class musical acts, installations, wellness activations and drinks, and the chance to mingle with investors, top brands, and top business leaders from around the world.

To round out the event, a hand-picked roster of Vancouver musicians will keep guests dancing late into the night.

This is from Mayor Sim’s Twitter (now X) feed, Note: The photographs have not been included,

Mayor Ken Sim@KenSimCity Another successful day at #SXSW2023 showcasing Vancouver and British Columbia while connecting with creators, innovators, and entrepreneurs from around the world! #vanpoli#SXSW

Last edited from Austin, TX·13.3K Views

Did he really need to be there?

2024 hype at SxSW and Vancouver’s Frontier Collective

New year and same hype but no Mayor Sim? From a January 22, 2024 article by Daniel Chai for the Daily Hive, Note: A link has been removed,

Frontier Collective, a coalition of Vancouver business leaders, culture entrepreneurs, and community builders, is returning to the South by Southwest (SXSW) Conference next month to showcase the city’s tech innovation on the global stage.

The first organization to formally represent and promote the region’s fastest-growing tech industries, Frontier Collective is hosting the Vancouver Takeover: Frontiers of Innovation from March 8 to 12 [2024].

According to Dan Burgar, CEO and co-founder of Frontier Collective, the showcase is not just about presenting new advancements but is also an invitation to the world to be part of a boundary-transcending journey.

“This year’s Vancouver Takeover is more than an event; it’s a beacon for the brightest minds and a celebration of the limitless possibilities that emerge when we dare to innovate together.”

Speakers lined up for the SXSW Vancouver Takeover in Austin, Texas, include executives from Google, Warner Bros, Amazon, JP Morgan, Amazon, LG, NTT, Newlab, and the Wall Street Journal.

“The Frontier Collective is excited to showcase a new era of technological innovation at SXSW 2024, building on the success of last year’s Takeover,” added Natasha Jaswal, VP of operations and events of Frontier Collective, in a statement. “Beyond creating a captivating event; its intentional and curated programming provides a great opportunity for local companies to gain exposure on an international stage, positioning Vancouver as a global powerhouse in frontier tech innovation.”

Here’s the registration page if you want to attend the Frontiers of Innovation Vancouver Takeover at SxSW 2024,

Join us for a curated experience of music, art, frontier technologies and provocative panel discussions. We are organizing three major events, designed to ignite conversation and turn ideas into action.

We’re excited to bring together leaders from Vancouver and around the world to generate creative thinking at the biggest tech festival.

Let’s create the future together!

You have a choice of two parties and a daylong event. Enjoy!

Who is the Frontier Collective?

The group announced itself in 2022, from a February 17, 2022 article in techcouver, Note: Links have been removed,

The Frontier Collective is the first organization to formally represent and advance the interests of the region’s fastest-growing industries, including Web3, the metaverse, VR/AR [virtual reality/augmented reality], AI [artificial intelligence], climate tech, and creative industries such as eSports [electronic sports], NFTs [non-fungible tokens], VFX [visual effects], and animation.

Did you know the Vancouver area currently boasts the world’s second largest virtual and augmented reality sector and hosts the globe’s biggest cluster of top VFX, video games and animation studios, as well as the highest concentration of 3D talent?

Did you know NFT technology was created in Vancouver and the city remains a top destination for blockchain and Web3 development?

Frontier Collective’s coalition of young entrepreneurs and business leaders wants to raise awareness of Vancouver’s greatness by promoting the region’s innovative tech industry on the world stage, growing investment and infrastructure for early-stage companies, and attracting diverse talent to Vancouver.

“These technologies move at an exponential pace. With the right investment and support, Vancouver has an immense opportunity to lead the world in frontier tech, ushering in a new wave of transformation, economic prosperity and high-paying jobs. Without backing from governments and leaders, these companies may look elsewhere for more welcoming environments.” said Dan Burgar, Co-founder and Head of the Frontier Collective. Burgar heads the local chapter of the VR/AR Association.

Their plan includes the creation of a 100,000-square-foot innovation hub in Vancouver to help incubate startups in Web3, VR/AR, and AI, and to establish the region as a centre for metaverse technology.

Frontier Collective’s team includes industry leaders at the Vancouver Economic Commission [emphasis mine; Under Mayor Sim and his majority City Council, the commission has been dissolved; see September 21, 2023 Vancouver Sun article “Vancouver scraps economic commission” by Tiffany Crawford], Collision Conference, Canadian incubator Launch, Invest Vancouver, and the BDC Deep Tech Fund.  These leaders continue to develop and support frontier technology in their own organizations and as part of the Collective.

Interestingly, a February 7, 2023 article by the editors of BC Business magazine seems to presage the Vancouver Economic Commission’s demise. Note: Links have been removed,

Last year, tech coalition Frontier Collective announced plans to position Vancouver as Canada’s tech capital by 2030. Specializing in subjects like Web3, the metaverse, VR/AR, AI and animation, it seems to be following through on its ambition, as the group is about to place Vancouver in front of a global audience at SXSW 2023, a major conference and festival celebrating tech, innovation and entertainment.  

Taking place in Austin, Texas from March 10-14 [2023], Vancouver Takeover is going to feature speakers, stories and activations, as well as opportunities for companies to connect with industry leaders and investors. Supported by local businesses like YVR Airport, Destination Vancouver, Low Tide Properties and others, Frontier is also working with partners from Trade and Invest BC, Telefilm and the Canadian Consulate. Attendees will spot familiar faces onstage, including the likes of Minister of Jobs, Economic Development and Innovation Brenda Bailey, Vancouver mayor Ken Sim [emphasis mine] and B.C. Innovation Commissioner Gerri Sinclair. 

That’s right, no mention of the Vancouver Economic Commission.

As for the Frontier Collective Team (accessed January 29, 2024), the list of ‘industry leaders’ (18 people with a gender breakdown that appears to be 10 male and 8 female) and staff members (a Senior VP who appears to be male and the other seven staff members who appear to be female) can be found here. (Should there be a more correct way to do the gender breakdown, please let me know in the Comments.)

I find the group’s name a bit odd; ‘frontier’ is something I associate with the US. Americans talk about frontiers, Canadians not so much.

If you are interested in attending the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event on March 10, 2024, just click here.

Aside: swagger at Vancouver City Hall, economic prosperity, & more?

What follows is not germane to the VR/AR community, SxSW of any year, or the Frontier Collective but it may help to understand why the City of Vancouver’s current mayor is going to events where he would seem to have no useful role to play.

Matt O’Grady’s October 4, 2023 article for Vancouver Magazine offers an eye-opening review of Mayor Ken Sim’s first year in office.

Ken Sim swept to power a year ago promising to reduce waste, make our streets safer and bring Vancouver’s “swagger” back. But can his open-book style win over the critics?

I’m sitting on a couch in the mayor’s third-floor offices, and Ken Sim is walking over to his turntable to put on another record. “How about the Police? I love this album.”

With the opening strains of  “Every Breath You Take” crackling to life, Sim is explaining his approach to conflict resolution, and how he takes inspiration from the classic management tome Getting to Yes: Negotiating Agreement Without Giving In.

Odd choice for a song to set the tone for an interview. Here’s more about the song and its origins, according to its Wikipedia entry,

To escape the public eye, Sting retreated to the Caribbean. He started writing the song at Ian Fleming’s writing desk on the Goldeneye estate in Oracabessa, Jamaica.[14] The lyrics are the words of a possessive lover who is watching “every breath you take; every move you make”. Sting recalled:

“I woke up in the middle of the night with that line in my head, sat down at the piano and had written it in half an hour. The tune itself is generic, an aggregate of hundreds of others, but the words are interesting. It sounds like a comforting love song. I didn’t realise at the time how sinister it is. I think I was thinking of Big Brother, surveillance and control.”[15][emphasis mine]

The interview gets odder, from O’Grady’s October 4, 2023 article,

Suddenly, the office door swings open and Sim’s chief of staff, Trevor Ford, pokes his head in (for the third time in the past 10 minutes). “We have to go. Now.”

“Okay, okay,” says Sim, turning back to address me. “Do you mind if I change while we’re talking?” And so the door closes again—and, without further ado, the Mayor of Vancouver drops trou [emphasis mine] and goes in search of a pair of shorts, continuing with a story about how some of his west-side friends are vocally against the massive Jericho Lands development promising to reshape their 4th and Alma neighbourhood.

“And I’m like, ‘Let me be very clear: I 100-percent support it, this is why—and we’ll have to agree to disagree,’” he says, trading his baby-blue polo for a fitted charcoal grey T-shirt. Meanwhile, as Sim does his wardrobe change, I’m doing everything I can to keep my eyes on my keyboard—and hoping the mayor finds his missing shorts.

It’s fair to assume that previous mayors weren’t in the habit of getting naked in front of journalists. At least, I can’t quite picture Kennedy Stewart doing so, or Larry or Gordon Campbell either. 

But it also fits a pattern that’s developing with Ken Sim as a leader entirely comfortable in his own skin. He’s in a hurry to accomplish big things—no matter who’s watching and what they might say (or write). And he eagerly embraces the idea of bringing Vancouver’s “swagger” back—outlined in his inaugural State of the City address, and underlined when he shotgunned a beer at July’s [2023] Khatsahlano Street Party.

O’Grady’s October 4, 2023 article goes on to mention some of the more practical initiatives undertaken by Mayor Sim and his supermajority of ABC (Sim’s party, A Better City) city councillors in their efforts to deal with some of the city’s longstanding and intractable problems,

For a reminder of Sim’s key priorities, you need only look at the whiteboard in the mayor’s office. At the top, there’s a row labelled “Daily Focus (Top 4)”—which are, in order, 3-3-3-1 (ABC’s housing program); Chinatown; Business Advocacy; and Mental Health/Safety.

On some files, like Chinatown, there have been clear advances: council unanimously approved the Uplifting Chinatown Action Plan in January, which devotes more resources to cleaning and sanitation services, graffiti removal, beautification and other community supports. The plan also includes a new flat rate of $2 per hour for parking meters throughout Chinatown (to encourage more people to visit and shop in the area) and a new satellite City Hall office, to improve representation. And on mental health and public safety, the ABC council moved quickly in November to take action on its promise to fund 100 new police officers and 100 new mental health professionals [emphasis mine]—though the actual hiring will take time.

O’Grady likely wrote his article a few months before its October 2023 publication date (a standard practice for magazine articles), which may explain why he didn’t mention this, from an October 10, 2023 article by Michelle Gamage and Jen St. Denis for The Tyee,

100 Cops, Not Even 10 Nurses

One year after Mayor Ken Sim and the ABC party swept into power on a promise to hire 100 cops and 100 mental health nurses to address fears about crime and safety in Vancouver, only part of that campaign pledge has been fulfilled.

At a police board meeting in September, Chief Adam Palmer announced that 100 new police officers have now joined the Vancouver Police Department.

But just 9.5 full-time equivalent positions have been filled to support the mental health [emphasis mine] side of the promise.

In fact, Vancouver Coastal Health says it’s no longer aiming [emphasis mine] to hire 100 nurses. Instead, it’s aiming for 58 staff and specialists [emphasis mine], including social workers, community liaison workers and peers, as well as other disciplines alongside nurses to deliver care.

At the police board meeting on Sept. 21 [2023], Palmer said the VPD has had no trouble recruiting new police officers and has now hired 70 new recruits who are first-time officers, as well as at least 24 experienced officers from other police services.

In contrast, it’s been a struggle for VCH to recruit nurses specializing in mental health.

BC Nurses’ Union president Adriane Gear said she remembers wondering where Sim was planning on finding 100 nurses [emphasis mine] when he first made the campaign pledge. In B.C. there are around 5,000 full-time nursing vacancies, she said. Specialized nurses are an even more “finite resource,” she added.

I haven’t seen any information as to why the number was reduced from 100 mental health positions to 58. I’m also curious as to how Mayor Ken Sim, whose business is called ‘Nurse Next Door’, doesn’t seem to know there’s a shortage of nurses in the province and elsewhere.

Last year, the World Economic Forum, in collaboration with Quartz, published a January 28, 2022 article by Aurora Almendral about the worldwide nursing shortage and the effects of the COVID-19 pandemic,

The report’s [from the International Council of Nurses (ICN)] survey of nurse associations around the world painted a grim picture of strained workforce. In Spain, nurses reported a chronic lack of PPE, and 30% caught covid. In Canada, 52% of nurses reported inadequate staffing, and 47% met the diagnostic cut-off for potential PTSD [emphasis mine].

Burnout plagued nurses around the world: 40% in Uganda, 60% in Belgium, and 63% in the US. In Oman, 38% nurses said they were depressed, and 73% had trouble sleeping. Fifty-seven percent of UK nurses planned to leave their jobs in 2021, up from 36% in 2020. Thirty-eight percent of nurses in Lebanon did not want to be nurses anymore, but stayed in their jobs because their families needed the money.

In Australia, 17% of nurses had sought mental health support. In China, 6.5% of nurses reported suicidal thoughts.

Moving on from Mayor Sim’s odd display of ignorance (or was it cynical calculation from a candidate determined to win over a more centrist voting population?), O’Grady’s October 4, 2023 article ends on this note,

When Sim runs for reelection in 2026, as he promises to do, he’ll have a great backdrop for his campaign—the city having just hosted several games for the FIFA World Cup, which is expected to bring in $1 billion and 900,000 visitors over five years.

The renewed swagger of Sim’s city will be on full display for the world to see. So too—if left unresolved—will some of Vancouver’s most glaring and intractable social problems.

I was born in Vancouver and don’t recall the city as having swagger, at any time. As for the economic prosperity that’s always promised with big events like the FIFA World Cup, I’d like to see how much the 2010 Olympic Games held in Vancouver cost taxpayers and whether or not there were long-lasting economic benefits. From a July 9, 2022 posting on Bob Mackin’s thebreaker.news,

The all-in cost to build and operate the Vancouver 2010 Games was as much as $8 billion, but the B.C. Auditor General never conducted a final report. The organizing committee, VANOC, was not covered by the freedom of information law and its records were transferred to the Vancouver Archives after the Games with restrictions not to open the board minutes and financial ledgers before fall 2025.

Mayor Sim will have two more big opportunities to show off his swagger in 2025. (1) The Invictus Games come to Vancouver and Whistler in February 2025 and will likely bring Prince Harry and the Duchess of Sussex, Meghan Markle to the area (see the April 22, 2022 Associated Press article by Gemma Karstens-Smith on the Canadian Broadcasting Corporation website) and (2) The 2025 Junos (the Canadian equivalent to the Grammys) run from March 26 – 30, 2025, with the awards show being held on March 30, 2025 (see the January 25, 2024 article by Daniel Chai for the Daily Hive website).

While he waits, Sim may have a ‘swagger’ opportunity later this month (February 2024) when Prince Harry and the Duchess of Sussex (Meghan Markle) visit Vancouver and Whistler for “a three-day Invictus Games’ One Year to Go event in Vancouver and Whistler,” see Daniel Chai’s February 2, 2024 article for more details.

Don’t forget, should you be in Austin, Texas for the 2024 SxSW, the daylong (11 am – 9 pm) Vancouver Takeover at SxSW 2024 event is on March 10, 2024, just click here to register. Who knows? You might get to meet Vancouver’s Mayor Ken Sim. Or, if you can’t make it to Austin, Texas, O’Grady’s October 4, 2023 article offers an unusual political profile.

Superheroes in college/university anatomy classes

Credit: Pixabay/CC0 Public Domain [downloaded from https://phys.org/news/2023-08-anatomy-superheroic-science-class.html]

An August 9, 2023 news item on phys.org highlights how superhero anatomy is being employed in human anatomy courses, Note: A link has been removed,

What do superheroes Deadpool and Elastigirl have in common? Each was used in a college anatomy class to add relevance to course discussions—Deadpool to illustrate tissue repair, and Elastigirl, aka Mrs. Incredible, as an example of hyperflexibility.

Instructors at The Ohio State University College of Medicine created a “SuperAnatomy” course in an attempt to improve the experience of undergraduate students learning the notoriously difficult—and for some, scary or gross—subject matter of human anatomy.

An August 9, 2023 Ohio State University news release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

Surveys showed that most students who took the class found the use of superheroes increased their motivation to learn, fostered deeper understanding of the material, and made the content more approachable and enjoyable.

A few of the many content examples also included considering how Wolverine’s claws would affect his musculoskeletal system and citing Groot in a discussion of skin disorders. The effort was aimed at bringing creativity to the classroom – in the form of outside-the-box instruction and as a way to inspire students’ imagination and keep them engaged, said Melissa Quinn, associate professor of anatomy at Ohio State and senior author of a study on the course’s effectiveness.

“In these introductory courses, it’s a little tougher to talk about clinical relevance because students don’t fully understand a lot of the mechanics,” Quinn said. “But if you bring in pop culture, which everybody is inundated with in some way, shape or form, and tie it to the foundational sciences, then that becomes a way to apply it a little bit more.”

The study was published recently in the journal Anatomical Sciences Education.

First author Jeremy Grachan, the mastermind behind the course’s creation, led design of the curriculum as an Ohio State PhD student and is now an assistant professor of anatomy at Rutgers New Jersey Medical School.

SuperAnatomy was created as a 1000-level three-credit-hour undergraduate course open to students of all majors. The class consisted of three 55-minute lectures each week and lab sessions offered twice in the semester. The course’s curriculum borrowed heavily from Human Anatomy 2300, a four-credit-hour course taken primarily by pre-health profession majors, consisting of live and recorded lectures, review sessions and one lab per week.

Students from both classes were invited to join the study over three semesters in 2021 and 2022; 36 students in SuperAnatomy and 442 students in Human Anatomy participated. Researchers collected data from 50-question quizzes given during the first week of classes and at the end of the semester intended to gauge how well students learned and applied course content. The students also completed pre- and post-course surveys.

The quiz results showed that student learning and application of material in the two courses was essentially the same. And to be clear, the SuperAnatomy content was not all cartoons and comic books.

“We looked at courses already running in our anatomy curriculum and took the relevant parts of those courses and added in the superheroes,” Quinn said. “So we actually elevated the curriculum.”

The follow-up survey of SuperAnatomy participants suggested the inclusion of superheroes strengthened their class experience, with nearly all students reporting that pop culture and superhero references expanded their understanding of course material and boosted their motivation to do well in the class.

“Collectively, if the students are enjoying the course and motivated to learn the material it could be better not only for their academic success, but their mental health and social wellbeing too,” the authors wrote.

Human anatomy is tough stuff – on top of the high volume of unfamiliar medical terms rooted in Latin, it can be unsettling to learn about the body in such a scientific, yet intimate, way.

“If you don’t have a good tour guide to help you, you might be inclined to give up pretty quickly,” Quinn said. “And none of us wants to be stale in our teaching.

“Here, we’ve seen that you can take a course like anatomy, which has been around forever, and bring it very much to whatever generation that we’re going to be teaching. And it’s not just about having fun – but a way to really make anatomy very interesting.”

Mason Marek and James Cray Jr. of Ohio State also co-authored the study.

Here’s a link to and a citation for the paper,

Effects of using superheroes in an undergraduate human anatomy curriculum by Jeremy J. Grachan, Mason Marek, James Cray Jr., Melissa M. Quinn. Anatomical Sciences Education. DOI: https://doi.org/10.1002/ase.2312 First published: 25 June 2023

This paper is open access.

Graphic novels for teaching math, physics, and more

I’m going to start with the fun, i.e., “Max the Demon Vs Entropy of Doom”,

Found on Assa Auerbach @AssaAuerbach·Twitter feed: 6:57 AM · Feb 17, 2018 [downloaded from: https://theconversation.com/3-reasons-we-use-graphic-novels-to-teach-math-and-physics-211171]

It’s an engaging introduction to James Clerk Maxwell and his thought experiment concerning entropy, “Maxwell’s demon.”

It’s one of the points that Sarah Klanderman and Josha Ho (both from Marian University; Indiana, US) make in their co-authored August 17, 2023 essay (on The Conversation) about using graphic novels to teach STEM (science, technology, engineering, and mathematics) topics in the classroom, Note: Links have been removed,

Graphic novels – offering visual information married with text – provide a means to engage students without losing all of the rigor of textbooks. As two educators in math and physics, we have found graphic novels to be effective at teaching students of all ability levels. We’ve used graphic novels in our own classes, and we’ve also inspired and encouraged other teachers to use them. And we’re not alone: Other teachers are rejuvenating this analog medium with a high level of success.

In addition to covering a wide range of topics and audiences, graphic novels can explain tough topics without alienating students averse to STEM – science, technology, engineering and math. Even for students who already like math and physics, graphic novels provide a way to dive into topics beyond what is possible in a time-constrained class. In our book “Using Graphic Novels in the STEM Classroom,” we discuss the many reasons why graphic novels have a unique place in math and physics education. …

Klanderman and Ho share some information that was new to me, from the August 17, 2023 essay, Note: Links have been removed,

Increasingly, schools are moving away from textbooks, even though studies show that students learn better using print rather than digital formats [emphasis mine]. Graphic novels offer the best of both worlds: a hybrid between modern and traditional media.

This integration of text with images and diagrams is especially useful in STEM disciplines that require quantitative reading and data analysis skills, like math and physics.

For example, our collaborator Jason Ho, an assistant professor at Dordt University, uses “Max the Demon Vs Entropy of Doom” to teach his physics students about entropy. This topic can be particularly difficult for students because it’s one of the first times when they can’t physically touch something in physics. Instead, students have to rely on math and diagrams to fill in their knowledge.

Rather than stressing over equations, Ho’s students focus on understanding the subject more conceptually. This approach helps build their intuition before diving into the algebra. They get a feeling for the fundamentals before they have to worry about equations.

After having taken Ho’s class, more than 85% of his students agreed that they would recommend using graphic novels in STEM classes, and 90% found this particular use of “Max the Demon” helpful for their learning. When strategically used, graphic novels can create a dynamic, engaging teaching environment even with nuanced, quantitative topics.

I encourage you to read the essay in its entirety if you have the time and the interest.

Here’s a link to the publisher’s website, a citation for and description of the book along with a Table of Contents, Note: it seems to be available in the UK only,

Using Graphic Novels in the STEM Classroom by William Boerman-Cornell, Josha Ho, David Klanderman, Sarah Klanderman. Published: 30 Nov 2023; Format: Paperback; Edition: 1st; Extent: 168 [pp?]; ISBN: 9781350279186; Imprint: Bloomsbury Academic; Illustrations: 5 bw illus.; Dimensions: 234 x 156 mm; Publisher: Bloomsbury Publishing. Pre-order; available 30 Nov 2023.

Description

This book provides everything STEM teachers need to use graphic novels in order to engage students, explain difficult concepts, and enrich learning. Drawing upon the latest educational research and over 60 years of combined teaching experience, the authors describe the multimodal affordances and constraints of each element of the STEM curriculum. Useful for new and seasoned teachers alike, the chapters provide practical guidance for teaching with graphic novels, with a section each for Science, Technology, Engineering, and Mathematics. An appendix provides nearly 100 short reviews of graphic novels arranged by topic, such as cryptography, evolution, computer coding, skyscraper design, nuclear physics, auto repair, meteorology, and human physiology, allowing the teacher to find multiple graphic novels to enhance almost any unit. These include graphic novel biographies of Stephen Hawking, Jane Goodall, Alan Turing, Rosalind Franklin, as well as popular titles such as T-Minus by Jim Ottaviani, Brooke Gladstone’s The Influencing Machine, Theodoris Andropoulos’s Who Killed Professor X, and Gene [Luen] Yang’s Secret Coders series.

Table of Contents

List of Figures
Foreword, Jay Hosler
Acknowledgements
1. What Research Tells us about Teaching Science, Technology, Engineering, and Mathematics with Graphic Novels
2. Teaching Life Science and Earth Science with Graphic Novels
3. Teaching Physical Science with Graphic Novels
4. Teaching Technology with Graphic Novels
5. Using Graphic Novels to Teach Engineering
6. Teaching Mathematics with Graphic Novels
7. Unanswered Questions and Concluding Thoughts
Appendix: List of STEM Graphic Novels
References
Notes
Index

Finally, h/t August 20, 2023 news item on phys.org

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard) in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.
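For readers curious about the Linux-terminal anecdote, the task itself is tiny. Here’s a minimal Python sketch of a “first 10 prime numbers” program of the sort the DeepMind engineer describes asking ChatGPT to ‘run’ in its simulated terminal (this is my own illustrative version, not the actual code from that experiment):

```python
def first_primes(n):
    """Return the first n prime numbers using simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no previously found prime divides it
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The point of the anecdote isn’t the arithmetic, of course; it’s that a model trained only to predict the next piece of text could convincingly play the part of a machine executing such a program.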

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan writing for Fast Company has two articles, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts” on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting “Racist and sexist robots have flawed AI” and in a little more detail in an August 30, 2022 posting “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP (born 23 June 1942) is a British cosmologist and astrophysicist. He is the fifteenth Astronomer Royal, appointed in 1995, and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton who was born in December 1947 will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence,] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas) was involved in the comic book project, and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Clichés are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine, where metaphors such as talk of “replacing” organs through transplants make people believe it’s akin to changing the oil filter in your car. Or whatever it is EVs have these days that needs replacing.)

Should you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to, “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts.

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm. Ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF). Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses a Discord server as its source of service and, with a huge 15M+ membership, it is the biggest Discord server in the world. In the two-things-at-once department, Max Sills is also known as the owner of Open Advisory Services, a firm set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise-level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is ex-counsel at Block and ex-general manager of the Crypto Open Patent Alliance. Prior to that, Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.