Category Archives: social implications

Night of ideas/Nuit des idées 2022: (Re)building Together on January 27, 2022 (7th edition in Canada)

Vancouver and other Canadian cities are participating in an international cultural event, Night of Ideas/Nuit des idées, organized by the Institut français, the agency responsible for France’s cultural action abroad (not to be confused with the Institut de France, the French learned society first established in 1795, during the French Revolution, which ran from 1789 to 1799 [Wikipedia entry]).

Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,

Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.

(Re)building together

For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.

According to the About Us page, the 2021 event counted participants in 104 countries and 190 cities, with over 200 events.

The French embassy in Canada (Ambassade de France au Canada) has a Night of Ideas/Nuit des idées 2022 webpage listing the Canadian events (Note: The times are local, e.g., 5 pm in Ottawa),

Ottawa: (Re)building through the arts, together

Moncton: (Re)building Together: How should we (re)think and (re)habilitate the post-COVID world?

Halifax: (Re)building together: Climate change — Building bridges between the present and future

Toronto: A World in Common

Edmonton: Introduction of the neutral pronoun “iel” — Can language influence the construction of identity?

Vancouver: (Re)building together with NFTs

Victoria: Committing in a time of uncertainty

Here’s a little more about the Vancouver event, from the Night of Ideas/Nuit des idées 2022 webpage,

Vancouver: (Re)building together with NFTs [non-fungible tokens]

NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.

Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.

This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A, Philippe Pasquier, Professor at School of Interactive Arts & Technology (SFU) and Rhea Myers, artist, hacker and writer will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.

- 7 p.m. on Zoom (registration here); the event will be broadcast online on France Canada Culture’s Facebook page. In English.
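As an aside for anyone wondering what “blockchain-based proofs of ownership” actually involve: at its simplest, an NFT is a unique token ID recorded on a shared ledger, mapped to an owner and to a pointer at the digital object (the object itself usually lives elsewhere). Here’s a minimal sketch in Python of that idea only; it is an illustration, not any real blockchain’s API, and all names and methods are hypothetical.

```python
# Minimal sketch of the idea behind an NFT ledger (hypothetical; real NFTs
# live on a blockchain where transfers are validated by network consensus).
class NFTLedger:
    def __init__(self):
        self.tokens = {}   # token_id -> {"owner": ..., "object_uri": ...}
        self.next_id = 0

    def mint(self, owner: str, object_uri: str) -> int:
        """Create a token recording who owns a given digital object."""
        token_id = self.next_id
        self.tokens[token_id] = {"owner": owner, "object_uri": object_uri}
        self.next_id += 1
        return token_id

    def transfer(self, token_id: int, seller: str, buyer: str) -> None:
        """Change ownership; this is where the speculation happens."""
        if self.tokens[token_id]["owner"] != seller:
            raise PermissionError("seller does not own this token")
        self.tokens[token_id]["owner"] = buyer

ledger = NFTLedger()
art = ledger.mint("alice", "ipfs://example-hash/crypto-art.png")
ledger.transfer(art, "alice", "bob")
print(ledger.tokens[art]["owner"])  # -> bob
```

Note that the token points at the artwork; it doesn’t contain it, which is part of what makes the market so speculative.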

Not all of the events are in both languages.

One last thing: if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”

Congratulations! Noēma magazine’s first anniversary

Apparently, I am an idiot—if the folks at Expunct and other organizations passionately devoted to their own viewpoints are to be believed.

To be specific, the Berggruen Institute (which publishes Noēma magazine) has attracted remarkably sharp criticism that, by implication, seems to extend to anyone examining, listening to, or reading the institute’s various communication efforts.

Perhaps you’d like to judge the quality of the ideas for yourself?

About the Institute and about the magazine

The institute is a think tank founded in 2010 by Nicolas Berggruen, US-based billionaire investor and philanthropist, and Nathan Gardels, journalist and editor-in-chief of Noēma magazine. Before moving on to the magazine’s first anniversary, here’s more about the Institute from its About webpage,

Ideas for a Changing World

We live in a time of great transformations. From capitalism, to democracy, to the global order, our institutions are faltering. The very meaning of the human is fragmenting.

The Berggruen Institute was established in 2010 to develop foundational ideas about how to reshape political and social institutions in the face of these great transformations. We work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century.

As for the magazine, here’s more from the About Us webpage (Note: I have rearranged the paragraph order),

In ancient Greek, noēma means “thinking” or the “object of thought.” And that is our intention: to delve deeply into the critical issues transforming the world today, at length and with historical context, in order to illuminate new pathways of thought in a way not possible through the immediacy of daily media. In this era of accelerated social change, there is a dire need for new ideas and paradigms to frame the world we are moving into.

Noema is a magazine exploring the transformations sweeping our world. We publish essays, interviews, reportage, videos and art on the overlapping realms of philosophy, governance, geopolitics, economics, technology and culture. In doing so, our unique approach is to get out of the usual lanes and cross disciplines, social silos and cultural boundaries. From artificial intelligence and the climate crisis to the future of democracy and capitalism, Noema Magazine seeks a deeper understanding of the most pressing challenges of the 21st century.

Published online and in print by the Berggruen Institute, Noema grew out of a previous publication called The WorldPost, which was first a partnership with HuffPost and later with The Washington Post. Noema publishes thoughtful, rigorous, adventurous pieces by voices from both inside and outside the institute. While committed to using journalism to help build a more sustainable and equitable world, we do not promote any particular set of national, economic or partisan interests.

First anniversary

Noēma’s anniversary is being marked by its second paper publication (the first was produced for the magazine’s launch). From a July 1, 2021 announcement received via email,

June 2021 marked one year since the launch of Noema Magazine, a crucial milestone for the new publication focused on exploring and amplifying transformative ideas. Noema is working to attract audiences through longform perspectives and contemporary artwork that weave together threads in philosophy, governance, geopolitics, economics, technology, and culture.

“What began more than seven years ago as a news-driven global voices platform for The Huffington Post known as The WorldPost, and later in partnership with The Washington Post, has been reimagined,” said Nathan Gardels, editor-in-chief of Noema. “It has evolved into a platform for expansive ideas through a visual lens, and a timely and provocative portal to plumb the deeper issues behind present events.”

The magazine’s editorial board, involved in the genesis and as content drivers of the magazine, includes Orhan Pamuk, Arianna Huffington, Fareed Zakaria, Reid Hoffman, Dambisa Moyo, Walter Isaacson, Pico Iyer, and Elif Shafak. Pieces by thinkers cracking the calcifications of intellectual domains include, among many others:

- Francis Fukuyama on the future of the nation-state

- A collage of commentary on COVID with Yuval Harari and Jared Diamond

- An interview with economist Mariana Mazzucato on “mission-oriented government”

- Taiwan’s Digital Minister Audrey Tang on digital democracy

- Hedge-fund giant Ray Dalio in conversation with Nobel laureate Joe Stiglitz

- Shannon Vallor on how AI is making us less intelligent and more artificial

- Former Governor Jerry Brown in conversation with Stewart Brand

- Ecologist Suzanne Simard on the intelligence of forest ecosystems

- A discussion on protecting the biosphere with Bill Gates’s guru Vaclav Smil

- An original story by Chinese science-fiction writer Hao Jingfang

Noema seeks to highlight how the great transformations of the 21st century are reflected in the work of today’s artistic innovators. Most articles are accompanied by an original illustration, melding together an aesthetic experience with ideas in social science and public policy. Among others, in the past year, the magazine has featured work from multimedia artist Pierre Huyghe, illustrator Daniel Martin Diaz, painter Scott Listfield, graphic designer and NFT artist Jonathan Zawada, 3D motion graphics artist Kyle Szostek, illustrator Moonassi, collage artist Lauren Lakin, and aerial photographer Brooke Holm. Additional contributions from artists include Berggruen Fellows Agnieszka Kurant and Anicka Yi discussing how their work explores the myth of the self.

Noema is available online and annually in print; the magazine’s second print issue will be released on July 13, 2021. The theme of this issue is “planetary realism,” which proposes to go beyond the exhausted notions of globalization and geopolitical competition among nation-states to a new “Gaiapolitik.” It addresses the existential challenge of climate change across all borders and recognizes that human civilization is but one part of the ecology of being that encompasses multiple intelligences from microbes to forests to the emergent global exoskeleton of AI and internet connectivity (more on this in the letter from the editors below).

Published by the Berggruen Institute, Noema is an incubator for the Institute’s core ideas, such as “participation without populism,” “pre-distribution” and universal basic capital (vs. income), and the need for dialogue between the U.S. and China to avoid an AI arms race or inadvertent war.

“The world needs divergent thinking on big questions if we’re going to meet the challenges of the 21st century; Noema publishes bold and experimental ideas,” said Kathleen Miles, executive editor of Noema. “The magazine cross-fertilizes ideas across boundaries and explores correspondences among them in order to map out the terrain of the great transformations underway.”  

I notice Suzanne Simard (from the University of British Columbia and author of “Finding the Mother Tree: Discovering the Wisdom of the Forest”) on the list of essayists, along with a story by Chinese science-fiction writer Hao Jingfang.

Simard was mentioned here in a May 12, 2021 posting (scroll down to the “UBC forestry professor, Suzanne Simard’s memoir going to the movies?” subhead) when it was announced that her then-unpublished memoir would become a film starring Amy Adams (or so they hope).

Hao Jingfang was mentioned here in a November 16, 2020 posting titled: “Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event” (co-hosted by the Berggruen Institute and University of Cambridge’s Leverhulme Centre for the Future of Intelligence [CFI]).

A month after Noēma’s second paper issue appeared on July 13, 2021, the theme and topics seem especially timely in light of the extensive news coverage in Canada and many other parts of the world given to the Monday, August 9, 2021 release of the sixth UN climate report, which raises alarms over irreversible impacts. (Emily Chung’s August 12, 2021 analysis for the Canadian Broadcasting Corporation [CBC] offers a little good news for those severely alarmed by the report.) Note: The Intergovernmental Panel on Climate Change (IPCC) is the UN body tasked with assessing the science related to climate change.

New US regulations exempt many gene-edited crops from government oversight

A June 1, 2020 essay by Maywa Montenegro (Postdoctoral Fellow, University of California at Davis) for The Conversation posits that new regulations (which in fact result in deregulation) are likely to create problems,

In May [2020], federal regulators finalized a new biotechnology policy that will bring sweeping changes to the U.S. food system. Dubbed “SECURE,” the rule revises U.S. Department of Agriculture regulations over genetically engineered plants, automatically exempting many gene-edited crops from government oversight. Companies and labs will be allowed to “self-determine” whether or not a crop should undergo regulatory review or environmental risk assessment.

Initial responses to this new policy have followed familiar fault lines in the food community. Seed industry trade groups and biotech firms hailed the rule as “important to support continuing innovation.” Environmental and small farmer NGOs called the USDA’s decision “shameful” and less attentive to public well-being than to agribusiness’s bottom line.

But the gene-editing tool CRISPR was supposed to break the impasse in old GM wars by making biotechnology more widely affordable, accessible and thus democratic.

In my research, I study how biotechnology affects transitions to sustainable food systems. It’s clear that since 2012 the swelling R&D pipeline of gene-edited grains, fruits and vegetables, fish and livestock has forced U.S. agencies to respond to the so-called CRISPR revolution.

Yet this rule change has a number of people in the food and scientific communities concerned. To me, it reflects the lack of accountability and trust between the public and government agencies setting policies.

Is there a better way?

… I have developed a set of principles and practices for governing CRISPR based on dialogue with front-line communities who are most affected by the technologies others usher in. Communities don’t just have to adopt or refuse technology – they can co-create [emphasis mine] it.

One way to move forward in the U.S. is to take advantage of common ground between sustainable agriculture movements and CRISPR scientists. The struggle over USDA rules suggests that few outside of industry believe self-regulation is fair, wise or scientific.

h/t: June 1, 2020 news item on phys.org

If you have the time and the inclination, do read the essay in its entirety.

Anyone who has read my COVID-19 op-ed for the Canadian Science Policy Centre may see some similarity between Montenegro’s “co-create” and this excerpt from my May 15, 2020 posting (which included my reference materials) or this version on the Canadian Science Policy Centre website (where you can find many other COVID-19 op-eds),

In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.

To be clear, I think times of crises are when a lot of people call for more co-creation and input. Here’s more about Montenegro’s work on her profile page (which includes her academic credentials, research interests and publications) on the University of California at Berkeley’s Department of Environmental Science, Policy, and Management webspace. She seems to have been making the call for years.

I am a US-Dutch-Peruvian citizen who grew up in Appalachia, studied molecular biology in the Northeast, worked as a journalist in New York City, and then migrated to the left coast to pursue a PhD. My indigenous ancestry, smallholder family history, and the colonizing/decolonizing experiences of both the Netherlands and Peru informs my personal and professional interests in seeds and agrobiodiversity. My background engenders a strong desire to explore synergies between western science and the indigenous/traditional knowledge systems that have historically been devalued and marginalized.

Trained in molecular biology, science writing, and now, a range of critical social and ecological theory, I incorporate these perspectives into research on seeds.

I am particularly interested in the relationship between formal seed systems – characterized by professional breeding, certification, intellectual property – and commercial sale and informal seed systems through which farmers traditionally save, exchange, and sell seeds. …

You can find more on her Twitter feed, which is where I discovered a call for papers for a “Special Feature: Gene Editing the Food System” in the journal Elementa: Science of the Anthropocene. The feature has a rolling deadline, which opened in February 2020. At this time, there is one paper in the series,

Democratizing CRISPR? Stories, practices, and politics of science and governance on the agricultural gene editing frontier by Maywa Montenegro de Wit. Elementa: Science of the Anthropocene, 8(1), p. 9. DOI: https://doi.org/10.1525/elementa.405. Published February 25, 2020

The paper is open access. Interestingly, the guest editor is Elizabeth Fitting of Dalhousie University in Nova Scotia, Canada.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones featured in this posting are the first I’ve stumbled across suggesting the hype is even more exaggerated than the most cynical might have thought. (BTW, the 2019 material comes later, as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new, according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or the Mechanical Turk, and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th-century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …
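To make Shane’s contrast concrete, here’s a minimal sketch in Python of the older, rule-based style of chatbot: prewritten bits of dialogue mixed and matched according to built-in rules. The rules are hypothetical, but the “precise formality” giveaway is easy to see.

```python
# Minimal sketch of an old-style rule-based chatbot (hypothetical rules):
# prewritten responses selected by pattern matching, no learning involved.
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     "Good day. How may I assist you?"),
    (re.compile(r"\brefund\b", re.I),
     "I can help with refunds. Please provide your order number."),
    (re.compile(r"\bhours\b", re.I),
     "Our offices are open 9 a.m. to 5 p.m., Monday to Friday."),
]

def reply(message: str) -> str:
    # The first matching rule wins; anything unanticipated gets a fallback.
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "I am sorry, I did not understand. Could you rephrase?"

print(reply("Hi, what are your hours?"))  # the greeting rule fires first
```

A machine-learned bot trained on casual crowdsourced text, by contrast, would happily answer with the “ums,” typos, and regionalisms of its training data.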

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th-century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)
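The Stanford car study quoted above is also a tidy example of the label-then-train workflow that keeps Mechanical Turk busy. Here’s a hedged sketch in Python of that pattern; the numbers stand in for images, and everything (features, labels, model) is made up for illustration.

```python
# Sketch of the crowdsource-label-then-train workflow: humans label a small
# set of examples; a model trained on those labels then scales to millions
# of unlabeled examples. Toy features stand in for real images here.
from sklearn.linear_model import LogisticRegression

# Step 1: crowdworkers attach labels to raw examples.
# Each "image" is reduced to two toy features: vehicle length and height (m).
examples = [(4.5, 1.4), (4.7, 1.5), (5.9, 1.9),
            (6.1, 2.0), (4.4, 1.4), (5.8, 1.9)]
labels = ["sedan", "sedan", "pickup", "pickup", "sedan", "pickup"]

# Step 2: train a classifier on the human-labeled data.
model = LogisticRegression().fit(examples, labels)

# Step 3: the model does the rest of the work at scale.
print(model.predict([(6.0, 1.95)]))  # -> ['pickup']
```

The point of the “last mile” paradox is in step 1: the automation only works because humans quietly did the categorizing first.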

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

S.NET (Society for the Study of New and Emerging Technologies) 2019 conference in Quito, Ecuador: call for abstracts

Why isn’t the S.NET abbreviation SSNET? That’s what it should be, given the organization’s full name: Society for the Study of New and Emerging Technologies. S.NET smacks of a compromise or consensus decision of some kind. Also, the ‘New’ in its name was ‘Nanoscience’ at one time (see my Oct. 22, 2013 posting).

Now onto 2019 and the conference, which, for the first time ever, is being held in Latin America. Here’s more from a February 4, 2019 S.Net email about the call for abstracts,

2019 Annual S.NET Meeting
Contrasting Visions of Technological Change

The 11th Annual S.NET meeting will take place November 18-20, 2019, at the Latin American Faculty of Social Sciences in Quito, Ecuador.

This year’s meeting will provide rich opportunities to reflect on technological change by establishing a dialogue between contrasting visions on how technology becomes closely intertwined with social orders.  We aim to open the black box of technological change by exploring the sociotechnical agreements that help to explain why societies follow certain technological trajectories. Contributors are invited to explore the ramifications of technological change, reflect on the policy process of technology, and debate whether or why technological innovation is a matter for democracy.

Following the transnational nature of S.NET, the meeting will highlight the diverse geographical and cultural approaches to technological innovation, the forces driving sociotechnical change, and social innovation.  It is of paramount importance to question the role of technology in the shaping of society and the outcomes of these configurations.  What happens when these arrangements come into being, are transformed or fall apart?  Does technology create contestation?  Why and how should we engage with contested visions of technology change?

This is the first time that the S.NET Meeting will take place in Latin America and we encourage panels and presentations with contrasting voices from both the Global North and the Global South. 

Topics of interest include, but are not limited to:

Sociotechnical imaginaries of innovation
The role of technology on shaping nationhood and nation identities
Decision-making processes on science and technology public policies
Co-creation approaches to promote public innovation
Grassroots innovation, sustainability and democracy
Visions and cultural imaginaries
Role of social sciences and humanities in processes of technological change
In addition, we welcome contributions on:
Research dynamics and organization
Innovation and use
Governance and regulation
Politics and ethics
Roles of publics and stakeholders

Keynote Speakers
TBA (check the conference website for updates!)

Deadlines & Submission Instructions
The program committee invites contributions from scholars, technology developers and practitioners, and welcomes presentations from a range of disciplines spanning the humanities, social and natural sciences.  We invite individual paper submissions, open panel and closed session proposals, student posters, and special format sessions, including events that are innovative in form and content.

The deadline for abstract submissions is *April 18, 2019* [extended to May 12, 2019].  Abstracts should be approximately 250 words in length, emailed in PDF format to 2019snet@gmail.com.  Notifications of acceptance can be expected by May 30, 2019.

Junior scholars and those with limited resources are strongly encouraged to apply, as the organizing committee is actively investigating potential sources of financial support.

Details on the conference can be found here: https://www.flacso.edu.ec/snet2019/

Local Organizing Committee
María Belén Albornoz, Isarelis Pérez, Javier Jiménez, Mónica Bustamante, Jorge Núñez, Maka Suárez.

Venue
FLACSO Ecuador is located in the heart of Quito.  Most hotels, museums, shopping centers and other cultural hotspots in the city are located near the campus and are easily accessible by public or private transportation.  Due to its proximity and easy access, Meeting participants would be able to enjoy Quito’s rich cultural life during their stay.  

About S.NET
S.NET is an international association that promotes intellectual exchange and critical inquiry about the advancement of new and emerging technologies in society.  The aim of the association is to advance critical reflection from various perspectives on developments in a broad range of new and emerging fields, including, but not limited to, nanoscale science and engineering, biotechnology, synthetic biology, cognitive science, ICT and Big Data, and geo-engineering.  Current S.NET board members are: Michael Bennett (President), Maria Belen Albornoz, Claire Shelley-Egan, Ana Delgado, Ana Viseu, Nora Vaage, Chris Toumey, Poonam Pandey, Sylvester Johnson, Lotte Krabbenborg, and Maria Joao Ferreira Maia.

Don’t forget, the deadline for your abstract is *April 18, 2019* [extended to May 12, 2019].

For anyone curious about what Quito might look like, there’s this from Quito’s Wikipedia entry,

[Image caption] Clockwise from top: Calle La Ronda, Iglesia de la Compañía de Jesús, El Panecillo as seen from Northern Quito, Carondelet Palace, Central-Northern Quito, Parque La Carolina and Iglesia y Monasterio de San Francisco. Credit: a montage of landmarks of the City of Quito, Ecuador, assembled from files on Wikimedia Commons (File:Montaje Quito.png, created 24 December 2012), CC BY-SA 3.0.

Good luck to everyone submitting an abstract.

*Date for abstract submissions changed from April 18, 2019 to May 12, 2019 on April 24, 2019

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) School of Law news release (also on globenewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? What about South American, Asian, Middle Eastern, and other thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

Scientometrics and science typologies

Caption: As of 2013, there were 7.8 million researchers globally, according to UNESCO. This means that 0.1 percent of the people in the world professionally do science. Their work is largely financed by governments, yet public officials are not themselves researchers. To help governments make sense of the scientific community, Russian mathematicians have devised a researcher typology. The authors initially identified three clusters, which they tentatively labeled as “leaders,” “successors,” and “toilers.” Credit: Lion_on_helium/MIPT Press Office

A June 28, 2018 Moscow Institute of Physics and Technology (MIPT; Russia) press release (also on EurekAlert) announces some intriguing research,

Researchers in various fields, from psychology to economics, build models of human behavior and reasoning to categorize people. But it does not happen as often that scientists undertake an analysis to classify their own kind.

However, research evaluation, and therefore scientist stratification as well, remain highly relevant. Six years ago, the government outlined the objective that Russian scientists should have 50 percent more publications in Web of Science- and Scopus-indexed journals. As of 2011, papers by researchers from Russia accounted for 1.66 percent of publications globally. By 2015, this number was supposed to reach 2.44%. It did grow but this has also sparked a discussion in the scientific community about the criteria used for evaluating research work.

The most common way of gauging the impact of a researcher is in terms of his or her publications. Namely, whether they are in a prestigious journal and how many times they have been cited. As with any good idea, however, one runs the risk of overdoing it. In 2005, U.S. physicist Jorge Hirsch proposed his h-index, which takes into account the number of publications by a given researcher and the number of times they have been cited. Now, scientists are increasingly doubting the adequacy of using bibliometric data as the sole independent criterion for evaluating research work. One obvious example of a flaw of this metric is that a paper can be frequently cited to point out a mistake in it.

Scientists are increasingly under pressure to publish more often. Research that might have reasonably been published in one paper is being split up into stages for separate publication. This calls for new approaches to the evaluation of work done by research groups and individual authors. Similarly, attempts to systematize the existing methods in scientometrics and stratify scientists are becoming more relevant, too. This is arguably even more important for Russia, where the research reform has been stretching for years.

One of the challenges in scientometrics is identifying the prominent types of researchers in different fields. A typology of scientists has been proposed by Moscow Institute of Physics and Technology Professor Pavel Chebotarev, who also heads the Laboratory of Mathematical Methods for Multiagent Systems Analysis at the Institute of Control Sciences of the Russian Academy of Sciences, and Ilya Vasilyev, a master’s student at MIPT.

In their paper, the two authors determined distinct types of scientists based on an indirect analysis of the style of research work, how papers are received by colleagues, and what impact they make. A further question addressed by the authors is to what degree researcher typology is affected by the scientific discipline.

“Each science has its own style of work. Publication strategies and citation practices vary, and leaders are distinguished in different ways,” says Chebotarev. “Even within a given discipline, things may be very different. This means that it is, unfortunately, not possible to have a universal system that would apply to anyone from a biologist to a philologist.”

“All of the reasonable systems that already exist are adjusted to particular disciplines,” he goes on. “They take into account the criteria used by the researchers themselves to judge who is who in their field. For example, scientists at the Institute for Nuclear Research of the Russian Academy of Sciences are divided into five groups based on what research they do, and they see a direct comparison of members of different groups as inadequate.”

The study was based on the citation data from the Google Scholar bibliographic database. To identify researcher types, the authors analyzed citation statistics for a large number of scientists, isolating and interpreting clusters of similar researchers.

Chebotarev and Vasilyev looked at the citation statistics for four groups of researchers returned by a Google Scholar search using the tags “Mathematics,” “Physics,” and “Psychology.” The first 515 and 556 search hits were considered in the case of physicists and psychologists, respectively. The authors studied two sets of mathematicians: the top 500 hits and hit Nos. 199-742. The four sets thus included frequently cited scientists from three disciplines indicating their general field of research in their profiles. Citation dynamics over each scientist’s career were examined using a range of indexes.

The authors initially identified three clusters, which they tentatively labeled as “leaders,” “successors,” and “toilers.” The leaders are experienced scientists widely recognized in their fields for research that has secured an annual citation count increase for them. The successors are young scientists who have more citations than toilers. The latter earn their high citation metrics owing to yearslong work, but they lack the illustrious scientific achievements.

Among the top 500 researchers indicating mathematics as their field of interest, toilers accounted for 52 percent, with successors and leaders making up 25.8 and 22.2 percent, respectively.

For physicists, the distribution was slightly different, with 48.5 percent of the set classified as toilers, 31.7 percent as successors, and 19.8 percent as leaders. That is, there were more successful young scientists, at the expense of leaders and toilers. This may be seen as a confirmation of the solitary nature of mathematical research, as compared with physics.

Finally, in the case of psychologists, toilers made up 47.7 percent of the set, with successors and leaders accounting for 18.3 and 34 percent. Comparing the distributions for the three disciplines investigated in the study, the authors conclude that there are more young achievers among those doing mathematical research.

A closer look enabled the authors to determine a more fine-grained cluster structure, which turned out to be remarkably similar for mathematicians and physicists. In particular, they identified a cluster of the youngest and most successful researchers, dubbed “precocious,” making up 4 percent of the mathematicians and 4.3 percent of the physicists in the set, along with the “youth” — successful researchers whose debuts were somewhat less dramatic: 29 and 31.7 percent of scientists doing math and physics research, respectively. Two further clusters were interpreted as recognized scientific authorities, or “luminaries,” and experienced researchers who have not seen an appreciable growth in the number of citations recently. Luminaries and the so-called inertia accounted for 52 and 15 percent of mathematicians and 50 and 14 percent of physicists, respectively.

There is an alternative way of clustering physicists, which recognizes a segment of researchers, who “caught the wave.” The authors suggest this might happen after joining major international research groups.

Among psychologists, 18.3 percent have been classified as precocious, though not as young as the physicists and mathematicians in the corresponding group. The most experienced and respected psychology researchers account for 22.5 percent, but there is no subdivision into luminaries and inertia, because those actively cited generally continue to be. Relatively young psychologists make up 59.2 percent of the set. The borders between clusters are relatively blurred in the case of psychology, which might be a feature of the humanities, according to the authors.

“Our pilot study showed even more similarity than we’d expected in how mathematicians and physicists are clustered,” says Chebotarev. “Whereas with psychology, things are noticeably different, yet the breakdown is slightly closer to math than physics. Perhaps, there is a certain connection between psychology and math after all, as some people say.”

“The next stage of this research features more disciplines. Hopefully, we will be ready to present the new results soon,” he concludes.
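An aside on the h-index mentioned in the press release: the definition is precise enough to compute in a few lines. A researcher’s h-index is the largest number h such that h of their papers have each been cited at least h times. Here’s a minimal sketch in Python with made-up citation counts.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

# Made-up citation counts for one researcher's papers:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers cited 3+ times)
```

The flaw noted above is visible here too: the function can’t tell whether those citations praise the paper or point out its mistakes.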

I think they are attempting to create a new way of measuring scientific progress (scientometrics): a more representative means of assessing individual contributions, based on their analysis of how these ‘typologies’ are expressed across the various disciplines.
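The paper itself is in Russian (more on that below), but based on the press release’s description of the method, the general approach looks like standard clustering of citation statistics. Here’s a hedged sketch in Python of what that might look like; the features and the choice of three clusters are my assumptions for illustration, not the authors’ actual indexes.

```python
# Sketch of clustering researchers by citation dynamics, in the spirit of
# the study described above. Features and k=3 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per researcher: [career length in years, total citations,
# citations gained in the last two years]. Made-up numbers.
researchers = np.array([
    [30, 12000, 900],   # long career, wide recognition ("leader"?)
    [8,   3000, 1100],  # young, rapidly cited ("successor"?)
    [25,  2500, 120],   # long career, modest citations ("toiler"?)
    [32, 15000, 1000],
    [6,   2600, 950],
    [28,  2100, 100],
])

# Standardize the features, then group similar citation profiles.
features = StandardScaler().fit_transform(researchers)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(clusters.labels_)  # cluster assignment for each researcher
```

The hard part, as the authors note, is interpreting the clusters afterwards, and doing it separately for each discipline.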

For anyone who wants to investigate further, you will need to be able to read Russian. You can download the paper from here on MathNet.ru.

Here’s my best attempt at a citation for the paper,

Making a typology of scientists on the basis of bibliometric data by I. Vasilyev, P. Yu. Chebotarev. Large-Scale Systems Control (UBS), 2018, Issue 72, Pages 138–195 (Mi ubs948)

I’m glad to see this as there is a fair degree of dissatisfaction about the current measures for scientific progress used in any number of reports on the topic. As far as I can tell, this dissatisfaction is felt internationally.

The Center for Nanotechnology in Society at the University of California at Santa Barbara offers a ‘swan song’ in three parts

I gather the University of California at Santa Barbara’s (UCSB) Center for Nanotechnology in Society is ‘sunsetting’ as its funding runs out. A Nov. 9, 2016 UCSB news release by Brandon Fastman describes the center’s ‘swan song’,

After more than a decade, the UCSB Center for Nanotechnology in Society’s research has provided new and deep knowledge of how technological innovation and social change impact one another. Now, as the national center reaches the end of its term, its three primary research groups have published synthesis reports that bring together important findings from their 11 years of activity.

The reports, which include policy recommendations, are available for free download at the CNS web site at

http://www.cns.ucsb.edu/irg-synthesis-reports.

The ever-increasing ability of scientists to manipulate matter on the molecular level brings with it the potential for science fiction-like technologies such as nanoelectronic sensors that would entail “merging tissue with electronics in a way that it becomes difficult to determine where the tissue ends and the electronics begin,” according to a Harvard chemist in a recent CQ Researcher report. While the life-altering ramifications of such technologies are clear, it is less clear how they might impact the larger society to which they are introduced.

CNS research, as detailed in the reports, addresses such gaps in knowledge. For instance, when anthropologist Barbara Herr Harthorn and her collaborators at the UCSB Center for Nanotechnology in Society (CNS-UCSB) convened public deliberations to discuss the promises and perils of health and human enhancement nanotechnologies, they thought that participants might be concerned about medical risks. However, that is not exactly what they found.

Participants were less worried about medical or technological mishaps than about the equitable distribution of the risks and benefits of new technologies and fair procedures for addressing potential problems. That is, they were unconvinced that citizens across the socioeconomic spectrum would share equal access to the benefits of therapies or equal exposure to their pitfalls.

In describing her work, Harthorn explained, “Intuitive assumptions of experts and practitioners about public perceptions and concerns are insufficient to understanding the societal contexts of technologies. Relying on intuition often leads to misunderstandings of social and institutional realities. CNS-UCSB has attempted to fill in the knowledge gaps through methodologically sophisticated empirical and theoretical research.”

In her role as Director of CNS-UCSB, Harthorn has overseen a larger effort to promote the responsible development of sophisticated materials and technologies seen as central to the nation’s economic future. By pursuing this goal, researchers at CNS-UCSB, which closed its doors at the end of the summer, have advanced the role for the social, economic, and behavioral sciences in understanding technological innovation.

Harthorn has spent the past 11 years trying to understand public expectations, values, beliefs, and perceptions regarding nanotechnologies. Along with conducting deliberations, she has worked with toxicologists and engineers to examine the environmental and occupational risks of nanotechnologies, determine gaps in the U.S. regulatory system, and survey nanotechnology experts. Work has also expanded to comparative studies of other emerging technologies such as shale oil and gas extraction (fracking).

Along with Harthorn’s research group on risk perception and social response, CNS-UCSB housed two other main research groups. One, led by sociologist Richard Appelbaum, studied the impacts of nanotechnology on the global economy. The other, led by historian Patrick McCray, studied the technologies, communities, and individuals that have shaped the direction of nanotechnology research.

Appelbaum’s research program included studying how state policies regarding nanotechnology – especially in China and Latin America – have impacted commercialization. Research trips to China elicited a great understanding of that nation’s research culture and its capacity to produce original intellectual property. He also studied the role of international collaboration in spurring technological innovation. As part of this research, his collaborators surveyed and interviewed international STEM graduate students in the United States in order to understand the factors that influence their choice whether to remain abroad or return home.

In examining the history of nanotechnology, McCray’s group explained how the microelectronics industry provided a template for what became known as nanotechnology, examined educational policies aimed at training a nano-workforce, and produced a history of the scanning tunneling microscope. They also penned award-winning monographs including McCray’s book, The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and Limitless Future.

Reaching the Real World

Funded as a National Center by the US National Science Foundation in 2005, CNS-UCSB was explicitly intended to enhance the understanding of the relationship between new technologies and their societal context. After more than a decade of funding, CNS-UCSB research has provided a deep understanding of the relationship between technological innovation and social change.

New developments in nanotechnology, an area of research that has garnered $24 billion in funding from the U.S. federal government since 2001, impact sectors as far ranging as agriculture, medicine, energy, defense, and construction, posing great challenges for policymakers and regulators who must consider questions of equity, sustainability, occupational and environmental health and safety, economic and educational policy, disruptions to privacy, security and even what it means to be human. (A nanometer is roughly 100,000 times smaller than the diameter of a human hair.)  Nanoscale materials are already integrated into food packaging, electronics, solar cells, cosmetics, and pharmaceuticals. Still in development are drugs that can target specific cells, microscopic spying devices, and quantum computers.

Given such real-world applications, it was important to CNS researchers that the results of their work not remain confined within the halls of academia. Therefore, they have delivered testimony to Congress, federal and state agencies (including the National Academies of Science, the Centers for Disease Control and Prevention, the Presidential Council of Advisors on Science and Technology, the U.S. Presidential Bioethics Commission and the National Nanotechnology Initiative), policy outfits (including the Washington Center for Equitable Growth), and international agencies (including the World Bank, European Commission, and World Economic Forum). They’ve collaborated with nongovernmental organizations. They’ve composed policy briefs and op eds, and their work has been covered by numerous news organizations including, recently, NPR, The New Yorker, and Forbes. They have also given many hundreds of lectures to audiences in community groups, schools, and museums.

Policy Options

Most notably, in their final act before the center closed, each of the three primary research groups published synthesis reports that bring together important findings from their 11 years of activity. Their titles are:

Exploring Nanotechnology’s Origins, Institutions, and Communities: A Ten Year Experiment in Large Scale Collaborative STS Research

Globalization and Nanotechnology: The Role of State Policy and International Collaboration

Understanding Nanotechnologies’ Risks and Benefits: Emergence, Expertise and Upstream Participation.

A sampling of key policy recommendations follows:

1.     Public acceptability of nanotechnologies is driven by: benefit perception, the type of application, and the risk messages transmitted from trusted sources and their stability over time; therefore transparent and responsible risk communication is a critical aspect of acceptability.

2.     Social risks, particularly issues of equity and politics, are primary, not secondary, drivers of perception and need to be fully addressed in any new technology development. We have devoted particular attention to studying how gender and race/ethnicity affect both public and expert risk judgments.

3.     State policies aimed at fostering science and technology development should clearly continue to emphasize basic research, but not to the exclusion of supporting promising innovative payoffs. The National Nanotechnology Initiative, with its overwhelming emphasis on basic research, would likely achieve greater success in spawning thriving businesses and commercialization by investing more in capital programs such as the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, self-described as “America’s seed fund.”

4.     While nearly half of all international STEM graduate students would like to stay in the U.S. upon graduation, fully 40 percent are undecided — and a main barrier is current U.S. immigration policy.

5.     Although representatives from the nanomaterials industry demonstrate relatively high perceived risk regarding engineered nanomaterials, they likewise demonstrate low sensitivity to variance in risks across type of engineered nanomaterials, and a strong disinclination to regulation. This situation puts workers at significant risk and probably requires regulatory action now (beyond the currently favored voluntary or ‘soft law’ approaches).

6.     The complex nature of technological ecosystems translates into a variety of actors essential for successful innovation. One species is the Visioneer, a person who blends engineering experience with a transformative vision of the technological future and a willingness to promote this vision to the public and policy makers.

Leaving a Legacy

Along with successful outreach efforts, CNS-UCSB also flourished when measured by typical academic metrics, including nearly 400 publications and 1,200 talks.

In addition to producing groundbreaking interdisciplinary research, CNS-UCSB also produced innovative educational programs, reaching 200 professionals-in-training from the undergraduate to postdoctoral levels. The Center’s educational centerpiece was a graduate fellowship program, referred to as “magical” by an NSF reviewer, that integrated doctoral students from disciplines across the UCSB campus into ongoing social science research projects.

For social scientists, working side-by-side with science and engineering students gave them an appreciation for the methods, culture, and ethics of their colleagues in different disciplines. It also led to methodological innovation. For their part, scientists and engineers were able to understand the larger context of their work at the bench.

UCSB graduates who participated in CNS’s educational programs have gone on to work as postdocs and professors at universities (including MIT, Stanford, U Penn), policy experts (at organizations like the Science Technology and Policy Institute and the Canadian Institute for Advanced Research), researchers at government agencies (like the National Institute for Standards and Technology), nonprofits (like the Kauffman Foundation), and NGOs. Others work in industry, and some have become entrepreneurs, starting their own businesses.

CNS has spawned lines of research that will continue at UCSB and the institutions of collaborators around the world, but its most enduring legacy will be the students it trained. They bring a true understanding of the complex interconnections between technology and society — along with an intellectual toolkit for examining them — to every sector of the economy, and they will continue to pursue a world that is as just as it is technologically advanced.

I found the policy recommendations interesting especially this one:

5.     Although representatives from the nanomaterials industry demonstrate relatively high perceived risk regarding engineered nanomaterials, they likewise demonstrate low sensitivity to variance in risks across type of engineered nanomaterials, and a strong disinclination to regulation. This situation puts workers at significant risk and probably requires regulatory action now (beyond the currently favored voluntary or ‘soft law’ approaches).

Without having read the documents, I’m not sure how to respond, but I do have a question: just how much regulation are they suggesting?

I offer all of the people associated with the center my thanks for all their hard work and my gratitude for the support I received from the center when I presented at the Society for the Study of Nanoscience and Emerging Technologies (S.NET) conference in 2012. I’m glad to see they’re going out with a bang.