Category Archives: social implications

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured on this blog, but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical might have thought. (BTW, the 2019 material appears later, as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future,] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk, and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax suggests a dark sense of humour somewhere in the corporation. (You can find out more about Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

S.NET (Society for the Study of New and Emerging Technologies) 2019 conference in Quito, Ecuador: call for abstracts

Why isn’t the S.NET abbreviation SSNET? That’s what it should be, given the organization’s full name: Society for the Study of New and Emerging Technologies. S.NET smacks of a compromise or consensus decision of some kind. Also, the ‘New’ in its name was ‘Nanoscience’ at one time (see my Oct. 22, 2013 posting).

Now onto 2019 and the conference, which, for the first time ever, is being held in Latin America. Here’s more from a February 4, 2019 S.Net email about the call for abstracts,

2019 Annual S.NET Meeting
Contrasting Visions of Technological Change

The 11th Annual S.NET meeting will take place November 18-20, 2019, at the Latin American Faculty of Social Sciences in Quito, Ecuador.

This year’s meeting will provide rich opportunities to reflect on technological change by establishing a dialogue between contrasting visions on how technology becomes closely intertwined with social orders.  We aim to open the black box of technological change by exploring the sociotechnical agreements that help to explain why societies follow certain technological trajectories. Contributors are invited to explore the ramifications of technological change, reflect on the policy process of technology, and debate whether or why technological innovation is a matter for democracy.

Following the transnational nature of S.NET, the meeting will highlight the diverse geographical and cultural approaches to technological innovation, the forces driving sociotechnical change, and social innovation.  It is of paramount importance to question the role of technology in the shaping of society and the outcomes of these configurations.  What happens when these arrangements come into being, are transformed or fall apart?  Does technology create contestation?  Why and how should we engage with contested visions of technology change?

This is the first time that the S.NET Meeting will take place in Latin America and we encourage panels and presentations with contrasting voices from both the Global North and the Global South. 

Topics of interest include, but are not limited to:

Sociotechnical imaginaries of innovation
The role of technology on shaping nationhood and nation identities
Decision-making processes on science and technology public policies
Co-creation approaches to promote public innovation
Grassroots innovation, sustainability and democracy
Visions and cultural imaginaries
Role of social sciences and humanities in processes of technological change
In addition, we welcome contributions on:
Research dynamics and organization
Innovation and use
Governance and regulation
Politics and ethics
Roles of publics and stakeholders

Keynote Speakers
TBA (check the conference website for updates!)

Deadlines & Submission Instructions
The program committee invites contributions from scholars, technology developers and practitioners, and welcomes presentations from a range of disciplines spanning the humanities, social and natural sciences.  We invite individual paper submissions, open panel and closed session proposals, student posters, and special format sessions, including events that are innovative in form and content.

The deadline for abstract submissions is *April 18, 2019* [extended to May 12, 2019].  Abstracts should be approximately 250 words in length, emailed in PDF format to 2019snet@gmail.com.  Notifications of acceptance can be expected by May 30, 2019.

Junior scholars and those with limited resources are strongly encouraged to apply, as the organizing committee is actively investigating potential sources of financial support.

Details on the conference can be found here: https://www.flacso.edu.ec/snet2019/

Local Organizing Committee
María Belén Albornoz, Isarelis Pérez, Javier Jiménez, Mónica Bustamante, Jorge Núñez, Maka Suárez.

Venue
FLACSO Ecuador is located in the heart of Quito.  Most hotels, museums, shopping centers and other cultural hotspots in the city are located near the campus and are easily accessible by public or private transportation.  Due to its proximity and easy access, Meeting participants would be able to enjoy Quito’s rich cultural life during their stay.  

About S.NET
S.NET is an international association that promotes intellectual exchange and critical inquiry about the advancement of new and emerging technologies in society.  The aim of the association is to advance critical reflection from various perspectives on developments in a broad range of new and emerging fields, including, but not limited to, nanoscale science and engineering, biotechnology, synthetic biology, cognitive science, ICT and Big Data, and geo-engineering.  Current S.NET board members are: Michael Bennett (President), Maria Belen Albornoz, Claire Shelley-Egan, Ana Delgado, Ana Viseu, Nora Vaage, Chris Toumey, Poonam Pandey, Sylvester Johnson, Lotte Krabbenborg, and Maria Joao Ferreira Maia.

Don’t forget, the deadline for your abstract is *April 18, 2019* [extended to May 12, 2019].

For anyone curious about what Quito might look like, there’s this from Quito’s Wikipedia entry,

Clockwise from top: Calle La Ronda, Iglesia de la Compañía de Jesús, El Panecillo as seen from Northern Quito, Carondelet Palace, Central-Northern Quito, Parque La Carolina and Iglesia y Monasterio de San Francisco. Credit: montage of landmarks of the City of Quito, Ecuador, assembled by various authors from files found in Wikimedia Commons. CC BY-SA 3.0

Good luck to everyone submitting an abstract.

*Date for abstract submissions changed from April 18, 2019 to May 12, 2019 on April 24, 2019

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? And what about South American, Asian, Middle Eastern, etc. thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

Scientometrics and science typologies

Caption: As of 2013, there were 7.8 million researchers globally, according to UNESCO. This means that 0.1 percent of the people in the world professionally do science. Their work is largely financed by governments, yet public officials are not themselves researchers. To help governments make sense of the scientific community, Russian mathematicians have devised a researcher typology. The authors initially identified three clusters, which they tentatively labeled as “leaders,” “successors,” and “toilers.” Credit: Lion_on_helium/MIPT Press Office

A June 28, 2018 Moscow Institute of Physics and Technology (MIPT; Russia) press release (also on EurekAlert) announces some intriguing research,

Researchers in various fields, from psychology to economics, build models of human behavior and reasoning to categorize people. But it does not happen as often that scientists undertake an analysis to classify their own kind.

However, research evaluation, and therefore scientist stratification as well, remain highly relevant. Six years ago, the government outlined the objective that Russian scientists should have 50 percent more publications in Web of Science- and Scopus-indexed journals. As of 2011, papers by researchers from Russia accounted for 1.66 percent of publications globally. By 2015, this number was supposed to reach 2.44 percent. It did grow, but this has also sparked a discussion in the scientific community about the criteria used for evaluating research work.

The most common way of gauging the impact of a researcher is in terms of his or her publications. Namely, whether they are in a prestigious journal and how many times they have been cited. As with any good idea, however, one runs the risk of overdoing it. In 2005, U.S. physicist Jorge Hirsch proposed his h-index, which takes into account the number of publications by a given researcher and the number of times they have been cited. Now, scientists are increasingly doubting the adequacy of using bibliometric data as the sole independent criterion for evaluating research work. One obvious example of a flaw of this metric is that a paper can be frequently cited to point out a mistake in it.
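To make the h-index mentioned above concrete: a researcher has index h if h of their papers have each been cited at least h times. Here is a minimal sketch in Python (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # sorted descending, so no later paper can qualify
    return h

# Hypothetical citation counts for one researcher's papers
papers = [42, 17, 9, 6, 5, 3, 1, 0]
print(h_index(papers))  # 5: five papers have at least 5 citations each
```

Note how indifferent the metric is to why a paper is cited: the 42 citations on the top paper count the same whether they praise the work or, as the press release points out, cite it to flag a mistake.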

Scientists are increasingly under pressure to publish more often. Research that might have reasonably been published in one paper is being split up into stages for separate publication. This calls for new approaches to the evaluation of work done by research groups and individual authors. Similarly, attempts to systematize the existing methods in scientometrics and stratify scientists are becoming more relevant, too. This is arguably even more important for Russia, where the research reform has been stretching for years.

One of the challenges in scientometrics is identifying the prominent types of researchers in different fields. A typology of scientists has been proposed by Moscow Institute of Physics and Technology Professor Pavel Chebotarev, who also heads the Laboratory of Mathematical Methods for Multiagent Systems Analysis at the Institute of Control Sciences of the Russian Academy of Sciences, and Ilya Vasilyev, a master’s student at MIPT.

In their paper, the two authors determined distinct types of scientists based on an indirect analysis of the style of research work, how papers are received by colleagues, and what impact they make. A further question addressed by the authors is to what degree researcher typology is affected by the scientific discipline.

“Each science has its own style of work. Publication strategies and citation practices vary, and leaders are distinguished in different ways,” says Chebotarev. “Even within a given discipline, things may be very different. This means that it is, unfortunately, not possible to have a universal system that would apply to anyone from a biologist to a philologist.”

“All of the reasonable systems that already exist are adjusted to particular disciplines,” he goes on. “They take into account the criteria used by the researchers themselves to judge who is who in their field. For example, scientists at the Institute for Nuclear Research of the Russian Academy of Sciences are divided into five groups based on what research they do, and they see a direct comparison of members of different groups as inadequate.”

The study was based on the citation data from the Google Scholar bibliographic database. To identify researcher types, the authors analyzed citation statistics for a large number of scientists, isolating and interpreting clusters of similar researchers.

Chebotarev and Vasilyev looked at the citation statistics for four groups of researchers returned by a Google Scholar search using the tags “Mathematics,” “Physics,” and “Psychology.” The first 515 and 556 search hits were considered in the case of physicists and psychologists, respectively. The authors studied two sets of mathematicians: the top 500 hits and hit Nos. 199-742. The four sets thus included frequently cited scientists from three disciplines indicating their general field of research in their profiles. Citation dynamics over each scientist’s career were examined using a range of indexes.
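The press release doesn’t spell out the clustering method, but the general approach — describing each researcher as a vector of citation statistics and grouping similar vectors into clusters — can be sketched with a standard algorithm such as k-means. Everything below (the features, the numbers, and the use of k-means itself) is illustrative, not the authors’ actual method:

```python
def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means. Returns one label per point."""
    # Farthest-point initialization: start from the first point, then
    # repeatedly add the point farthest from the centroids chosen so far.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda j: dist2(p, centroids[j]))
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centroids[j] = tuple(sum(col) / len(members) for col in zip(*members))
    return labels

# Hypothetical researchers described by (career length in years,
# total citations, citations gained last year) -- loosely mirroring
# "leaders", "successors", and "toilers" from the press release.
researchers = [
    (30, 9000, 800), (28, 8500, 700),  # leader-like: long career, still growing
    (6, 3000, 900), (5, 2800, 850),    # successor-like: young, highly cited
    (25, 1200, 60), (27, 1400, 50),    # toiler-like: long career, modest citations
]
print(kmeans(researchers, k=3))  # researchers of the same type share a label
```

The interpretive work the authors describe — deciding that one cluster is “leaders” and another “toilers” — happens after a step like this, by inspecting what the members of each cluster have in common.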

The authors initially identified three clusters, which they tentatively labeled as “leaders,” “successors,” and “toilers.” The leaders are experienced scientists widely recognized in their fields for research that has secured an annual citation count increase for them. The successors are young scientists who have more citations than toilers. The latter earn their high citation metrics owing to yearslong work, but they lack the illustrious scientific achievements.

Among the top 500 researchers indicating mathematics as their field of interest, toilers accounted for 52 percent, with successors and leaders making up 25.8 and 22.2 percent, respectively.

For physicists, the distribution was slightly different, with 48.5 percent of the set classified as toilers, 31.7 percent as successors, and 19.8 percent as leaders. That is, there were more successful young scientists, at the expense of leaders and toilers. This may be seen as a confirmation of the solitary nature of mathematical research, as compared with physics.

Finally, in the case of psychologists, toilers made up 47.7 percent of the set, with successors and leaders accounting for 18.3 and 34 percent. Comparing the distributions for the three disciplines investigated in the study, the authors conclude that there are more young achievers among those doing mathematical research.

A closer look enabled the authors to determine a more fine-grained cluster structure, which turned out to be remarkably similar for mathematicians and physicists. In particular, they identified a cluster of the youngest and most successful researchers, dubbed “precocious,” making up 4 percent of the mathematicians and 4.3 percent of the physicists in the set, along with the “youth” — successful researchers whose debuts were somewhat less dramatic: 29 and 31.7 percent of scientists doing math and physics research, respectively. Two further clusters were interpreted as recognized scientific authorities, or “luminaries,” and experienced researchers who have not seen an appreciable growth in the number of citations recently. Luminaries and the so-called inertia accounted for 52 and 15 percent of mathematicians and 50 and 14 percent of physicists, respectively.

There is an alternative way of clustering physicists, which recognizes a segment of researchers, who “caught the wave.” The authors suggest this might happen after joining major international research groups.

Among psychologists, 18.3 percent have been classified as precocious, though not as young as the physicists and mathematicians in the corresponding group. The most experienced and respected psychology researchers account for 22.5 percent, but there is no subdivision into luminaries and inertia, because those actively cited generally continue to be. Relatively young psychologists make up 59.2 percent of the set. The borders between clusters are relatively blurred in the case of psychology, which might be a feature of the humanities, according to the authors.

“Our pilot study showed even more similarity than we’d expected in how mathematicians and physicists are clustered,” says Chebotarev. “Whereas with psychology, things are noticeably different, yet the breakdown is slightly closer to math than physics. Perhaps, there is a certain connection between psychology and math after all, as some people say.”

“The next stage of this research features more disciplines. Hopefully, we will be ready to present the new results soon,” he concludes.

I think they are attempting to create a new way of measuring scientific progress (scientometrics) by establishing a more representative means of measuring individual contributions, based on their analysis of how these ‘typologies’ are expressed across various disciplines.

For anyone who wants to investigate further, you will need to be able to read Russian. You can download the paper here on MathNet.ru.

Here’s my best attempt at a citation for the paper,

Making a typology of scientists on the basis of bibliometric data by I. Vasilyev, P. Yu. Chebotarev. Large-scale System Control (UBS), 2018, Issue 72, Pages 138–195 (Mi ubs948)

I’m glad to see this as there is a fair degree of dissatisfaction about the current measures for scientific progress used in any number of reports on the topic. As far as I can tell, this dissatisfaction is felt internationally.

The Center for Nanotechnology in Society at the University of California at Santa Barbara offers a ‘swan song’ in three parts

I gather the University of California at Santa Barbara’s (UCSB) Center for Nanotechnology in Society is ‘sunsetting’ as its funding runs out. A Nov. 9, 2016 UCSB news release by Brandon Fastman describes the center’s ‘swan song’,

After more than a decade, research at the UCSB Center for Nanotechnology in Society has provided new and deep knowledge of how technological innovation and social change impact one another. Now, as the national center reaches the end of its term, its three primary research groups have published synthesis reports that bring together important findings from their 11 years of activity.

The reports, which include policy recommendations, are available for free download at the CNS web site at

http://www.cns.ucsb.edu/irg-synthesis-reports.

The ever-increasing ability of scientists to manipulate matter on the molecular level brings with it the potential for science fiction-like technologies such as nanoelectronic sensors that would entail “merging tissue with electronics in a way that it becomes difficult to determine where the tissue ends and the electronics begin,” according to a Harvard chemist in a recent CQ Researcher report. While the life-altering ramifications of such technologies are clear, it is less clear how they might impact the larger society to which they are introduced.

CNS research, as detailed in the reports, addresses such gaps in knowledge. For instance, when anthropologist Barbara Herr Harthorn and her collaborators at the UCSB Center for Nanotechnology in Society (CNS-UCSB) convened public deliberations to discuss the promises and perils of health and human enhancement nanotechnologies, they thought that participants might be concerned about medical risks. However, that is not exactly what they found.

Participants were less worried about medical or technological mishaps than about the equitable distribution of the risks and benefits of new technologies and fair procedures for addressing potential problems. That is, they were unconvinced that citizens across the socioeconomic spectrum would share equal access to the benefits of therapies or equal exposure to their pitfalls.

In describing her work, Harthorn explained, “Intuitive assumptions of experts and practitioners about public perceptions and concerns are insufficient to understanding the societal contexts of technologies. Relying on intuition often leads to misunderstandings of social and institutional realities. CNS-UCSB has attempted to fill in the knowledge gaps through methodologically sophisticated empirical and theoretical research.”

In her role as Director of CNS-UCSB, Harthorn has overseen a larger effort to promote the responsible development of sophisticated materials and technologies seen as central to the nation’s economic future. By pursuing this goal, researchers at CNS-UCSB, which closed its doors at the end of the summer, have advanced the role for the social, economic, and behavioral sciences in understanding technological innovation.

Harthorn has spent the past 11 years trying to understand public expectations, values, beliefs, and perceptions regarding nanotechnologies. Along with conducting deliberations, she has worked with toxicologists and engineers to examine the environmental and occupational risks of nanotechnologies, determine gaps in the U.S. regulatory system, and survey nanotechnology experts. Work has also expanded to comparative studies of other emerging technologies such as shale oil and gas extraction (fracking).

Along with Harthorn’s research group on risk perception and social response, CNS-UCSB housed two other main research groups. One, led by sociologist Richard Appelbaum, studied the impacts of nanotechnology on the global economy. The other, led by historian Patrick McCray, studied the technologies, communities, and individuals that have shaped the direction of nanotechnology research.

Appelbaum’s research program included studying how state policies regarding nanotechnology – especially in China and Latin America – have impacted commercialization. Research trips to China yielded a deeper understanding of that nation’s research culture and its capacity to produce original intellectual property. He also studied the role of international collaboration in spurring technological innovation. As part of this research, his collaborators surveyed and interviewed international STEM graduate students in the United States in order to understand the factors that influence their choice whether to remain abroad or return home.

In examining the history of nanotechnology, McCray’s group explained how the microelectronics industry provided a template for what became known as nanotechnology, examined educational policies aimed at training a nano-workforce, and produced a history of the scanning tunneling microscope. They also penned award-winning monographs including McCray’s book, The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and Limitless Future.

Reaching the Real World

Funded as a National Center by the US National Science Foundation in 2005, CNS-UCSB was explicitly intended to enhance the understanding of the relationship between new technologies and their societal context. After more than a decade of funding, CNS-UCSB research has provided a deep understanding of the relationship between technological innovation and social change.

New developments in nanotechnology, an area of research that has garnered $24 billion in funding from the U.S. federal government since 2001, impact sectors as far ranging as agriculture, medicine, energy, defense, and construction, posing great challenges for policymakers and regulators who must consider questions of equity, sustainability, occupational and environmental health and safety, economic and educational policy, disruptions to privacy, security and even what it means to be human. (A nanometer is roughly 10,000 times smaller than the diameter of a human hair.)  Nanoscale materials are already integrated into food packaging, electronics, solar cells, cosmetics, and pharmaceuticals. They are also in development for drugs that can target specific cells, microscopic spying devices, and quantum computers.

Given such real-world applications, it was important to CNS researchers that the results of their work not remain confined within the halls of academia. Therefore, they have delivered testimony to Congress, federal and state agencies (including the National Academies of Science, the Centers for Disease Control and Prevention, the Presidential Council of Advisors on Science and Technology, the U.S. Presidential Bioethics Commission and the National Nanotechnology Initiative), policy outfits (including the Washington Center for Equitable Growth), and international agencies (including the World Bank, European Commission, and World Economic Forum). They’ve collaborated with nongovernmental organizations. They’ve composed policy briefs and op-eds, and their work has been covered by numerous news organizations including, recently, NPR, The New Yorker, and Forbes. They have also given many hundreds of lectures to audiences in community groups, schools, and museums.

Policy Options

Most notably, in their final act before the center closed, each of the three primary research groups published synthesis reports that bring together important findings from their 11 years of activity. Their titles are:

Exploring Nanotechnology’s Origins, Institutions, and Communities: A Ten Year Experiment in Large Scale Collaborative STS Research

Globalization and Nanotechnology: The Role of State Policy and International Collaboration

Understanding Nanotechnologies’ Risks and Benefits: Emergence, Expertise and Upstream Participation.

A sampling of key policy recommendations follows:

1.     Public acceptability of nanotechnologies is driven by benefit perception, the type of application, and the risk messages transmitted from trusted sources and their stability over time; therefore, transparent and responsible risk communication is a critical aspect of acceptability.

2.     Social risks, particularly issues of equity and politics, are primary, not secondary, drivers of perception and need to be fully addressed in any new technology development. We have devoted particular attention to studying how gender and race/ethnicity affect both public and expert risk judgments.

3.     State policies aimed at fostering science and technology development should clearly continue to emphasize basic research, but not to the exclusion of supporting promising innovative payoffs. The National Nanotechnology Initiative, with its overwhelming emphasis on basic research, would likely achieve greater success in spawning thriving businesses and commercialization by investing more in capital programs such as the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, self-described as “America’s seed fund.”

4.     While nearly half of all international STEM graduate students would like to stay in the U.S. upon graduation, fully 40 percent are undecided — and a main barrier is current U.S. immigration policy.

5.     Although representatives from the nanomaterials industry demonstrate relatively high perceived risk regarding engineered nanomaterials, they likewise demonstrate low sensitivity to variance in risks across type of engineered nanomaterials, and a strong disinclination to regulation. This situation puts workers at significant risk and probably requires regulatory action now (beyond the currently favored voluntary or ‘soft law’ approaches).

6.     The complex nature of technological ecosystems translates into a variety of actors essential for successful innovation. One species is the Visioneer, a person who blends engineering experience with a transformative vision of the technological future and a willingness to promote this vision to the public and policy makers.

Leaving a Legacy

Along with successful outreach efforts, CNS-UCSB also flourished when measured by typical academic metrics, including nearly 400 publications and 1,200 talks.

In addition to producing groundbreaking interdisciplinary research, CNS-UCSB also produced innovative educational programs, reaching 200 professionals-in-training from the undergraduate to postdoctoral levels. The Center’s educational centerpiece was a graduate fellowship program, referred to as “magical” by an NSF reviewer, that integrated doctoral students from disciplines across the UCSB campus into ongoing social science research projects.

For social scientists, working side-by-side with science and engineering students gave them an appreciation for the methods, culture, and ethics of their colleagues in different disciplines. It also led to methodological innovation. For their part, scientists and engineers were able to understand the larger context of their work at the bench.

UCSB graduates who participated in CNS’s educational programs have gone on to work as postdocs and professors at universities (including MIT, Stanford, and U Penn), as policy experts (at organizations like the Science Technology and Policy Institute and the Canadian Institute for Advanced Research), and as researchers at government agencies (like the National Institute for Standards and Technology), nonprofits (like the Kauffman Foundation), and NGOs. Others work in industry, and some have become entrepreneurs, starting their own businesses.

CNS has spawned lines of research that will continue at UCSB and the institutions of collaborators around the world, but its most enduring legacy will be the students it trained. They bring a true understanding of the complex interconnections between technology and society — along with an intellectual toolkit for examining them — to every sector of the economy, and they will continue to pursue a world that is as just as it is technologically advanced.

I found the policy recommendations interesting especially this one:

5.     Although representatives from the nanomaterials industry demonstrate relatively high perceived risk regarding engineered nanomaterials, they likewise demonstrate low sensitivity to variance in risks across type of engineered nanomaterials, and a strong disinclination to regulation. This situation puts workers at significant risk and probably requires regulatory action now (beyond the currently favored voluntary or ‘soft law’ approaches).

Without having read the documents, I’m not sure how to respond but I do have a question.  Just how much regulation are they suggesting?

I offer all of the people associated with the center my thanks for all their hard work and my gratitude for the support I received from the center when I presented at the Society for the Study of Nanoscience and Emerging Technologies (S.NET) in 2012. I’m glad to see they’re going out with a bang.

Ageing population could drive progress in nanotechnology and robotics

A couple of theoreticians are proposing that a generational gap will be a key source of conflict and technological progress in the near future. From a July 27, 2016 news item on Nanotechnology Now,

The UN estimates that the number of people aged 65 and older will have reached almost a billion by 2030. The proportion of those aged over 80 will grow at particularly high rates, and their numbers are expected to reach 200 million by 2030 and triple that forty years later.

Due to a combination of an ageing population and declining birthrates, the demographic structure of most countries will change towards lower proportions of children and young people. As a result, the global division will no longer be between first- and third-world nations [also called developed and developing nations], but between old and young ones.

A July 25, 2016 National Research University Higher School of Economics [Russia] press release (also on EurekAlert), which originated the news item, expands on the theme,

According to “Global Population Ageing and the Threat of Political Risks in the Light of Radical Technological Innovation in the Coming Decades,” a report by Leonid Grinin, Senior Research Fellow at the HSE [Higher School of Economics] Laboratory for Monitoring the Risks of Socio-Political Destabilization, and Anton Grinin, Senior Research Fellow at the International Centre for Education, Social and Humanitarian Studies, an increase in the number of older people will:

  • encourage societies facing workforce shortages to seek solutions to improve older people’s employability by helping them stay healthy, fit and full of energy for much longer than today;
  • encourage societies to focus more on rehabilitation of people with disabilities and provide them with new technology to support their employment;
  • encourage the development of labour-saving technologies, such as robotics, to assist caregivers;
  • lead to breakthroughs in medicine. Indeed, medical services will be the first to enter a new phase of technological revolution, radically changing the structure of production and people’s lives. Such a breakthrough will be associated with what the authors call MANBRIC, i.e. a technological paradigm based on medicine, additive, nano- and bio- technologies, robotics, IT, and cognitive technologies;
  • boost government spending on healthcare, which today accounts for at least 10% of global GDP and can vary vastly across countries, e.g. reaching 17% in the U.S.;
  • promote the development of peripheral countries through higher spending on health care, leading to the emergence of a middle class, poverty reduction, literacy, and a better quality of life;
  • increase the demand for innovation and its financing from accumulated funds such as pensions and public allocations to medical and social needs;
  • lead to higher investment in supporting the health of ageing populations and the growing middle class.

Longevity Comes at a Cost

A confrontation between generations in the labor market and the weakening of democracy are the key risks associated with longer life expectancy.

Longer life spans and a lower proportion of young people in society may lead to the predominance of ‘third age’ voters. Politicians will need to tailor their messages to older and perhaps more conservative electorates. According to the researchers, “democracy can transform into a form of gerontocracy which may be hard to overcome; under such circumstances, competition for voters may lead to a crisis of democratic governance.”

A conflict between generations is another potential risk. As the retirement age increases, older employees will stay in the workforce longer – a situation which may hinder younger people’s careers and slow down technological progress.

A tendency towards gerontocracy has been particularly noticeable in Western Europe and the U.S., where democratic traditions are the strongest, but ethnic and cultural imbalances are increasingly visible. As a result, the U.S. may face confrontation between its younger Latinos and older white populations, and Europe may experience tensions between older white Christians and younger Muslims. Hence, globalization will inevitably cause such conflicts to transcend national borders and become global challenges.

I was not able to find the report mentioned in this release but I certainly would have liked to have looked at it. This redraws the conflict map in some interesting ways.

Korea Advanced Institute of Science and Technology (KAIST) at summer 2016 World Economic Forum in China

From the Ideas Lab at the 2016 World Economic Forum in Davos to offering expertise at the 2016 World Economic Forum in Tianjin, China, taking place from June 26 – 28, 2016.

Here’s more from a June 24, 2016 KAIST news release on EurekAlert,

Scientific and technological breakthroughs are more important than ever as a key agent to drive social, economic, and political changes and advancements in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms to address issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a., the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.

Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.

Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.

In the first session, entitled “What If Drugs Are Printed from the Internet?,” Professor Lee will discuss the future of medicine being impacted by advancements in biotechnology and 3D printing technology with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, the Director of Strategy at Wellcome Trust in the United Kingdom. The discussants will note recent developments in the way patients receive their medicine, for example, downloading drugs directly from the internet and the production of yeast strains to make opioids for pain treatment through systems metabolic engineering, and predict how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.

In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.

During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying their various cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing control sticks and guiding an airplane from takeoff to landing.

In addition, the two professors will join Professor Lee, who is also a moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields from machine learning to autonomous robotics including unmanned vehicles and drones.

Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.

KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”

I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possible other people’s) confusion as to what the Fourth Industrial revolution might be in my Dec. 3, 2015 posting.