Tag Archives: trust

Socially responsible AI—it’s time, say University of Manchester (UK) researchers

A May 10, 2018 news item on ScienceDaily describes a report on the ‘fourth industrial revolution’ being released by the University of Manchester,

The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible.

This is according to Dr Barbara Ribeiro of Manchester Institute of Innovation Research at The University of Manchester, in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and Robotics in society, being published today [May 10, 2018].

Interestingly, the US White House is hosting a summit on AI today, May 10, 2018, according to a May 8, 2018 article by Danny Crichton for TechCrunch (Note: Links have been removed),

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. …

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad, and Intel’s CEO Brian Krzanich. While the event has many tech companies present, a total of 38 companies are expected to be in attendance including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor Geoffrey Hinton, who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are outlaying billions of dollars to invest in the AI industry, and it has made leading in artificial intelligence one of the nation’s top priorities through its Made in China 2025 program and other reports. …

In comparison, the United States has been remarkably uncoordinated when it comes to AI. …

That lack of engagement from policymakers has been fine — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research in the space, with particular repercussions for the defense industry.

Interesting contrast: do we take time to consider the implications or do we engage in a race?

While it’s becoming fashionable to dismiss dichotomous questions of this nature, the two approaches (competition and reflection) are not especially compatible, and this does seem to be an either/or proposition.

A May 10, 2018 University of Manchester press release (also on EurekAlert), which originated the news item, expands on the theme of responsibility and AI,

Dr Ribeiro adds that because investment in AI will essentially be paid for by taxpayers in the long term, policymakers need to make sure that the benefits of such technologies are fairly distributed throughout society.

She says: “Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”

“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”

On AI and Robotics: Developing policy for the Fourth Industrial Revolution is a comprehensive report written, developed and published by Policy@Manchester with leading experts and academics from across the University.

The publication is designed to help employers, regulators and policymakers understand the potential effects of AI in areas such as industry, healthcare, research and international policy.

However, the report doesn’t just focus on AI. It also looks at robotics, explaining the differences and similarities between the two separate areas of research and development (R&D) and the challenges policymakers face with each.

Professor Anna Scaife, Co-Director of the University’s Policy@Manchester team, explains: “Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed.”

One particular area the report highlights where robotics can have a positive impact is hazardous working environments, such as nuclear decommissioning and clean-up.

Professor Barry Lennox, Professor of Applied Control and Head of the UOM Robotics Group, adds: “The transfer of robotics technology into industry, and in particular the nuclear industry, requires cultural and societal changes as well as technological advances.

“It is really important that regulators are aware of what robotic technology is and is not capable of doing today, as well as understanding what the technology might be capable of doing over the next 5 years.”

The report also highlights the importance of big data and AI in healthcare, for example in the fight against antimicrobial resistance (AMR).

Lord Jim O’Neill, Honorary Professor of Economics at The University of Manchester and Chair of the Review on Antimicrobial Resistance, explains: “An important example of this is the international effort to limit the spread of antimicrobial resistance (AMR). The AMR Review gave 27 specific recommendations covering 10 broad areas, which became known as the ‘10 Commandments’.

“All 10 are necessary, and none are sufficient on their own, but if there is one that I find myself increasingly believing is a permanent game-changer, it is state-of-the-art diagnostics. We need a ‘Google for doctors’ to reduce the rate of over-prescription.”

The versatile nature of AI and robotics is leading many experts to predict that the technologies will have a significant impact on a wide variety of fields in the coming years. Policy@Manchester hopes that the On AI and Robotics report will contribute to helping policymakers, industry stakeholders and regulators better understand the range of issues they will face as the technologies play ever greater roles in our everyday lives.

As far as I can tell, the report has been designed for online viewing only. There are none of the markers (imprint date, publisher, etc.) that I expect to see on a print document. There is no bibliography or list of references but there are links to outside sources throughout the document.

It’s an interesting approach to publishing a report that calls for social justice, especially since the issue of ‘trust’ is increasingly being emphasized where AI is concerned. With regard to this report, I’m not sure I can trust it. With a print document or a PDF I have markers: I can examine the index, the bibliography, etc., and determine whether the material covers the subject area with reference to well-known authorities. That’s much harder to do with this report. As well, it looks as if something in this ‘souped up’ document could easily be changed without my knowledge. With a print or PDF version I can compare documents, but not with this one.

Cyborg insects and trust

I first mentioned insect cyborgs in a July 27, 2009 posting,

One last thing, I’ve concentrated on people but animals are also being augmented. There was an opinion piece [no longer available on the Courier website] by Geoff Olson (July 24, 2009) in the Vancouver Courier, a community paper, about robotic insects. According to Olson’s research (and I don’t doubt it), scientists are fusing insects with machines so they can be used to sniff out drugs, find survivors after disasters, and perform surveillance. [emphasis mine]

Today, Nov. 23, 2011, a little over two years later, I caught this news item on Nanowerk, Insect cyborgs may become first responders, search and monitor hazardous environs,

“Through energy scavenging, we could potentially power cameras, microphones and other sensors and communications equipment that an insect could carry aboard a tiny backpack,” Najafi [Professor Khalil Najafi] said. “We could then send these ‘bugged’ bugs into dangerous or enclosed environments where we would not want humans to go.”

The original Nov. 22, 2011 news release by Matt Nixon for the University of Michigan describes some of the technology,

The principal idea is to harvest the insect’s biological energy from either its body heat or movements. The device converts the kinetic energy from wing movements of the insect into electricity, thus prolonging the battery life. The battery can be used to power small sensors implanted on the insect (such as a small camera, a microphone or a gas sensor) in order to gather vital information from hazardous environments.

A spiral piezoelectric generator was designed to maximize the power output by employing a compliant structure in a limited area. The technology developed to fabricate this prototype includes a process to machine high-aspect ratio devices from bulk piezoelectric substrates with minimum damage to the material using a femtosecond laser.
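To get a feel for the scale of energy harvesting described above, here is a back-of-the-envelope estimate of the average electrical power a wing-driven piezoelectric harvester might supply. All of the numbers (wing-beat frequency, energy per beat, conversion efficiency) are illustrative assumptions for the sketch, not measurements from the University of Michigan device:

```python
# Rough estimate of power from an insect-mounted piezoelectric harvester.
# Every figure here is an assumed, illustrative value -- not data from
# the University of Michigan prototype.

wingbeat_hz = 100.0        # assumed wing-beat frequency (beats per second)
energy_per_beat_j = 1e-7   # assumed electrical energy harvested per beat (joules)
conversion_eff = 0.2       # assumed fraction of that energy delivered to the load

# Average power = energy per beat x beats per second x efficiency
avg_power_w = wingbeat_hz * energy_per_beat_j * conversion_eff

print(f"Estimated average power: {avg_power_w * 1e6:.1f} microwatts")
```

With these assumed figures the harvester delivers on the order of a few microwatts, which is why the press release frames it as a supplement that prolongs battery life for tiny, low-power sensors rather than a primary power source.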

Here’s a model of a cyborg insect,

Through a device invented at the University of Michigan, an insect's wing movements can generate enough electricity to power small sensors such as a tiny camera, microphone or gas sensor. (Credit: Erkan Aktakka)

This project is another example of work being funded by the US Defense Advanced Research Projects Agency (DARPA). (I most recently mentioned the agency in this Nov. 22, 2011 posting which features innovation, DARPA, excerpts from an interview with Regina Dugan, DARPA’s Director, and nanotherapeutics.)

There are many cyborgs around us already. Anybody who’s received a pacemaker, deep brain stimulator, hip replacement, etc. can be considered a cyborg. Just after finding the news item about the insect cyborg, I came across a Nov. 23, 2011 posting by Torie Bosch about cyborgs for Slate Magazine,

Though the word cyborg conjures up images of exoskeletons and computers welded to bodies, the reality is far more mundane: Anyone who has a cochlear implant, for one, could be termed a cyborg. So is the resourceful fellow who made his prosthetic finger into a USB drive. In the coming decades, we’ll see more of these subtle marriages of technology and body, creating new ethical questions.

At the blog Cyborgology, P.J. Rey, a graduate student who writes about emerging technologies, examines the trust relationships we have with the technologies—and the people who develop them—that become ingrained in our daily lives. [emphasis mine]

From P. J. Rey’s Nov. 23, 2011 posting about trust and technology on Cyborgology,

In this essay, I want to continue the discussion about our relationship with the technology we use. Adapting and extending Anthony Giddens’ Consequences of Modernity, I will argue that an essential part of the cyborganic transformation we experience when we equip Modern, sophisticated technology is deeply tied to trust in expert systems. It is no longer feasible to fully comprehend the inner workings of the innumerable devices that we depend on; rather, we are forced to trust that the institutions that deliver these devices to us have designed, tested, and maintained the devices properly. This bargain—trading certainty for convenience—however, means that the Modern cyborg finds herself ever more deeply integrated into the social circuit. In fact, the cyborg’s connection to technology makes her increasingly socially dependent because the technological facets of her being require expert knowledge from others.

It’s a fascinating essay and I encourage you to read it as Rey goes on to explore social dependency, trust, and technology. On a related note, trust and/or dependency issues are likely the source of various technology panics and opposition campaigns, e.g. nuclear, GMOs (genetically modified organisms), telephone, telegraph, electricity, writing, etc.

It’s hard to appreciate now that literacy is so common, but in a society where it is rarer, the written word is not necessarily to be trusted. After all, if only one person in the room can read (or claims they can), how do you know they’re telling the truth about what’s written?

As for cyborgs, I think we’re going to have some very interesting discussions about them and these discussions may not all occur in the sanctified halls of academe or in quiet conference rooms stuffed with bureaucrats. As I’ve noted before there is a whole discussion taking place about emerging technologies in the realm of popular culture where our greatest hopes and fears are reflected and, sometimes, intensified.