Tag Archives: trust

US Army researchers’ vision for artificial intelligence and ethics

The US Army peeks into a near future where humans and some forms of artificial intelligence (AI) work together in battle and elsewhere. From a February 3, 2021 U.S. Army Research Laboratory news release (also on EurekAlert but published on February 16, 2021),

The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical.

Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making in ways that have not been pursued elsewhere.

This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, remains largely understudied.

“Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army,” said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory’s ARL West regional site in Playa Vista, California. “These machines will inevitably face moral dilemmas where they must make decisions that could very well injure humans.”

For example, de Melo said, imagine that an automated vehicle is driving in a tunnel when five pedestrians suddenly cross the street; the vehicle must decide whether to continue moving forward, injuring the pedestrians, or swerve toward the wall, risking the driver.

What should the automated vehicle do in this situation?

Prior work has framed these dilemmas in starkly simple terms, casting decisions as matters of life and death, de Melo said, while neglecting the influence that the risk of injury to the involved parties has on the outcome.

“By expanding the study of moral dilemmas to consider the risk profile of the situation, we significantly expanded the space of acceptable solutions for these dilemmas,” de Melo said. “In so doing, we contributed to the development of autonomous technology that abides by acceptable moral norms and thus is more likely to be adopted in practice and accepted by the general public.”

The researchers focused on this gap and presented experimental evidence that, in a moral dilemma with automated vehicles, the likelihood of making the utilitarian choice – which minimizes the overall injury risk to humans and, in this case, saves the pedestrians – was moderated by the perceived risk of injury to pedestrians and drivers.

In their study, participants were found to be more likely to make the utilitarian choice as risk to the driver decreased and risk to the pedestrians increased. Interestingly, however, most were willing to risk the driver (i.e., self-sacrifice), even when the risk to the pedestrians was lower than the risk to the driver.
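To make the finding concrete, here's a minimal sketch of the risk-aware utilitarian baseline against which such choices can be judged: pick whichever action minimizes the overall expected number of injuries. This is my illustration, not the authors' model, and the probabilities are hypothetical values rather than data from the study.

```python
# Minimal sketch of a risk-aware utilitarian choice for the tunnel dilemma.
# All probabilities below are hypothetical illustrative values, not study data.

def expected_injuries(action: str, p_pedestrian: float, p_driver: float,
                      n_pedestrians: int = 5) -> float:
    """Expected number of people injured under a given action."""
    if action == "continue":   # stay on course: the pedestrians are at risk
        return n_pedestrians * p_pedestrian
    if action == "swerve":     # swerve into the wall: the driver is at risk
        return p_driver
    raise ValueError(f"unknown action: {action}")

def utilitarian_choice(p_pedestrian: float, p_driver: float) -> str:
    """Pick the action that minimizes overall expected injuries."""
    return min(("continue", "swerve"),
               key=lambda a: expected_injuries(a, p_pedestrian, p_driver))

# High risk to pedestrians, low risk to driver: swerving minimizes injuries.
print(utilitarian_choice(p_pedestrian=0.8, p_driver=0.2))  # -> swerve
# Slight risk to pedestrians, near-certain injury to driver: continuing does.
print(utilitarian_choice(p_pedestrian=0.1, p_driver=0.9))  # -> continue
```

Against a baseline like this, the participants' willingness to self-sacrifice even in configurations where the expected-injury arithmetic points to continuing shows how human moral norms can depart from a purely utilitarian calculation.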

As a second contribution, the researchers also demonstrated that participants’ moral decisions were influenced by what other decision makers do – for instance, participants were less likely to make the utilitarian choice if others often chose the non-utilitarian option.

“This research advances the state-of-the-art in the study of moral dilemmas involving autonomous machines by shedding light on the role of risk on moral choices,” de Melo said. “Further, both of these mechanisms introduce opportunities to develop AI that will be perceived to make decisions that meet moral standards, as well as introduce an opportunity to use technology to shape human behavior and promote a more moral society.”

This research is particularly relevant to Army modernization, de Melo said.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”

Moving into the future, researchers will study this type of risk-benefit analysis in Army moral dilemmas and articulate the corresponding practical implications for the development of AI systems.

“When deployed at scale, the decisions made by AI systems can be very consequential, in particular for situations involving risk to human life,” de Melo said. “It is critical that AI is able to make decisions that reflect society’s ethical standards to facilitate adoption by the Army and acceptance by the general public. This research contributes to realizing this vision by clarifying some of the key factors shaping these standards. This research is personally important because AI is expected to have considerable impact to the Army of the future; however, what kind of impact will be defined by the values reflected in that AI.”

The last time I had an item on a similar topic from the US Army Research Laboratory (ARL), it was in a March 26, 2018 posting; scroll down to the subhead ‘US Army’ (about 50% of the way down),

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

This latest work also revolves around the issue of trust, according to the last sentence in the 2021 study paper (link and citation to follow),

… Overall, these questions emphasize the importance of the kind of experimental work presented here, as it has the potential to shed light on people’s preferences about moral behavior in machines, inform the design of autonomous machines people are likely to trust and adopt, and, perhaps, even introduce an opportunity to promote a more moral society. [emphases mine]

From trust to adoption to a more moral society—that’s an interesting progression. For another, more optimistic view of how robots and AI can have positive impacts, there’s my March 29, 2021 posting, ‘Little Lost Robot and humane visions of our technological future.’

Here’s a link to and a citation for the paper,

Risk of Injury in Moral Dilemmas With Autonomous Vehicles by Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Front. Robot. AI [Frontiers in Robotics and AI], 20 January 2021. DOI: https://doi.org/10.3389/frobt.2020.572529

This paper is in an open access journal.

Socially responsible AI—it’s time, say University of Manchester (UK) researchers

A May 10, 2018 news item on ScienceDaily describes a report on the ‘fourth industrial revolution’ being released by the University of Manchester,

The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible.

This is according to Dr Barbara Ribeiro of the Manchester Institute of Innovation Research at The University of Manchester, in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and robotics in society, being published today [May 10, 2018].

Interestingly, the US White House is hosting a summit on AI today, May 10, 2018, according to a May 8, 2018 article by Danny Crichton for TechCrunch (Note: Links have been removed),

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. …

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad, and Intel’s CEO Brian Krzanich. While the event has many tech companies present, a total of 38 companies are expected to be in attendance including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor Geoffrey Hinton, who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are outlaying billions of dollars to invest in the AI industry, and it has made leading in artificial intelligence one of the nation’s top priorities through its Made in China 2025 program and other reports. …

In comparison, the United States has been remarkably uncoordinated when it comes to AI. …

That lack of engagement from policymakers has been fine — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research in the space, with particular repercussions for the defense industry.

Interesting contrast: do we take time to consider the implications or do we engage in a race?

While it’s becoming fashionable to dismiss dichotomous questions of this nature, the two approaches (competition and reflection) are not especially compatible, and it does seem to be an either/or proposition.

A May 10, 2018 University of Manchester press release (also on EurekAlert), which originated the news item, expands on the theme of responsibility and AI,

Dr Ribeiro adds that because investment in AI will essentially be paid for by taxpayers in the long term, policymakers need to make sure that the benefits of such technologies are fairly distributed throughout society.

She says: “Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”

“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”

On AI and Robotics: Developing policy for the Fourth Industrial Revolution is a comprehensive report written, developed and published by Policy@Manchester with leading experts and academics from across the University.

The publication is designed to help employers, regulators and policymakers understand the potential effects of AI in areas such as industry, healthcare, research and international policy.

However, the report doesn’t just focus on AI. It also looks at robotics, explaining the differences and similarities between the two separate areas of research and development (R&D) and the challenges policymakers face with each.

Professor Anna Scaife, Co-Director of the University’s Policy@Manchester team, explains: “Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed.”

One particular area the report highlights where robotics can have a positive impact is in hazardous working environments, such as nuclear decommissioning and clean-up.

Professor Barry Lennox, Professor of Applied Control and Head of the UOM Robotics Group, adds: “The transfer of robotics technology into industry, and in particular the nuclear industry, requires cultural and societal changes as well as technological advances.

“It is really important that regulators are aware of what robotic technology is and is not capable of doing today, as well as understanding what the technology might be capable of doing over the next five years.”

The report also highlights the importance of big data and AI in healthcare, for example in the fight against antimicrobial resistance (AMR).

Lord Jim O’Neill, Honorary Professor of Economics at The University of Manchester and Chair of the Review on Antimicrobial Resistance explains: “An important example of this is the international effort to limit the spread of antimicrobial resistance (AMR). The AMR Review gave 27 specific recommendations covering 10 broad areas, which became known as the ‘10 Commandments’.

“All 10 are necessary, and none are sufficient on their own, but if there is one that I find myself increasingly believing is a permanent game-changer, it is state of the art diagnostics. We need a ‘Google for doctors’ to reduce the rate of over prescription.”

The versatile nature of AI and robotics is leading many experts to predict that the technologies will have a significant impact on a wide variety of fields in the coming years. Policy@Manchester hopes that the On AI and Robotics report will contribute to helping policymakers, industry stakeholders and regulators better understand the range of issues they will face as the technologies play ever greater roles in our everyday lives.

As far as I can tell, the report has been designed for online viewing only. There are none of the markers (imprint date, publisher, etc.) that I expect to see on a print document. There is no bibliography or list of references but there are links to outside sources throughout the document.

It’s an interesting approach to publishing a report that calls for social justice, especially since the issue of ‘trust’ is increasingly being emphasized where AI is concerned. With regard to this report, I’m not sure I can trust it. A print document or a PDF gives me markers: I can examine the index, the bibliography, etc., and determine whether the material covers the subject area with reference to well-known authorities. That’s much harder to do with this report. As well, this ‘souped up’ document looks like it might be easy to change without my knowledge; with a print or PDF version I could compare copies, but not with this one.

Cyborg insects and trust

I first mentioned insect cyborgs in a July 27, 2009 posting,

One last thing, I’ve concentrated on people but animals are also being augmented. There was an opinion piece [no longer available on the Courier website] by Geoff Olson (July 24, 2009) in the Vancouver Courier, a community paper, about robotic insects. According to Olson’s research (and I don’t doubt it), scientists are fusing insects with machines so they can be used to sniff out drugs, find survivors after disasters, and perform surveillance. [emphasis mine]

Today, Nov. 23, 2011, a little over two years later, I caught this news item on Nanowerk, Insect cyborgs may become first responders, search and monitor hazardous environs,

“Through energy scavenging, we could potentially power cameras, microphones and other sensors and communications equipment that an insect could carry aboard a tiny backpack,” Najafi [Professor Khalil Najafi] said. “We could then send these ‘bugged’ bugs into dangerous or enclosed environments where we would not want humans to go.”

The original Nov. 22, 2011 news release by Matt Nixon for the University of Michigan describes some of the technology,

The principal idea is to harvest the insect’s biological energy from either its body heat or movements. The device converts the kinetic energy from wing movements of the insect into electricity, thus prolonging the battery life. The battery can be used to power small sensors implanted on the insect (such as a small camera, a microphone or a gas sensor) in order to gather vital information from hazardous environments.

A spiral piezoelectric generator was designed to maximize the power output by employing a compliant structure in a limited area. The technology developed to fabricate this prototype includes a process to machine high-aspect ratio devices from bulk piezoelectric substrates with minimum damage to the material using a femtosecond laser.
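For a rough sense of the energy budget involved, here’s a back-of-envelope sketch of the power available from wing-beat vibration. The simple kinetic-energy model and every parameter value are hypothetical stand-ins of mine; the news release gives no specifications for the Michigan device.

```python
import math

# Back-of-envelope estimate of power scavenged from wing-beat vibration.
# Every number below is a hypothetical illustrative value, not a
# specification of the University of Michigan prototype.

proof_mass_kg = 2e-4   # 0.2 g proof mass in the harvester (assumed)
wingbeat_hz = 100.0    # wing-beat frequency (assumed; insect wing beats
                       # range from tens to hundreds of Hz)
amplitude_m = 5e-4     # effective vibration amplitude at the mount (assumed)
efficiency = 0.05      # fraction of kinetic energy converted (assumed)

# Peak velocity of a sinusoidal oscillation: v = 2 * pi * f * A
v_peak = 2 * math.pi * wingbeat_hz * amplitude_m

# Kinetic energy of the proof mass per cycle, times cycles per second,
# scaled by the conversion efficiency, gives a rough average power.
power_w = efficiency * wingbeat_hz * 0.5 * proof_mass_kg * v_peak**2

print(f"~{power_w * 1e6:.0f} microwatts")  # prints ~49 microwatts
```

Numbers on the order of tens of microwatts suggest why the scavenger is described as prolonging battery life rather than powering the sensors outright.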

Here’s a model of a cyborg insect,

Through a device invented at the University of Michigan, an insect's wing movements can generate enough electricity to power small sensors such as a tiny camera, microphone or gas sensor. (Credit: Erkan Aktakka)

This project is another example of work being funded by the US Defense Advanced Research Projects Agency (DARPA). (I most recently mentioned the agency in this Nov. 22, 2011 posting which features innovation, DARPA, excerpts from an interview with Regina Dugan, DARPA’s Director, and nanotherapeutics.)

There are many cyborgs around us already. Anybody who’s received a pacemaker, deep brain stimulator, hip replacement, etc. can be considered a cyborg. Just after finding the news item about the insect cyborg, I came across a Nov. 23, 2011 posting by Torie Bosch about cyborgs for Slate Magazine,

Though the word cyborg conjures up images of exoskeletons and computers welded to bodies, the reality is far more mundane: Anyone who has a cochlear implant, for one, could be termed a cyborg. So is the resourceful fellow who made his prosthetic finger into a USB drive. In the coming decades, we’ll see more of these subtle marriages of technology and body, creating new ethical questions.

At the blog Cyborgology, P.J. Rey, a graduate student who writes about emerging technologies, examines the trust relationships we have with the technologies—and the people who develop them—that become ingrained in our daily lives. [emphasis mine]

From P. J. Rey’s Nov. 23, 2011 posting about trust and technology on Cyborgology,

In this essay, I want to continue the discussion about our relationship with the technology we use. Adapting and extending Anthony Giddens’ Consequences of Modernity, I will argue that an essential part of the cyborganic transformation we experience when we equip Modern, sophisticated technology is deeply tied to trust in expert systems. It is no longer feasible to fully comprehend the inner workings of the innumerable devices that we depend on; rather, we are forced to trust that the institutions that deliver these devices to us have designed, tested, and maintained the devices properly. This bargain—trading certainty for convenience—however, means that the Modern cyborg finds herself ever more deeply integrated into the social circuit. In fact, the cyborg’s connection to technology makes her increasingly socially dependent because the technological facets of her being require expert knowledge from others.

It’s a fascinating essay and I encourage you to read it as Rey goes on to explore social dependency, trust, and technology. On a related note, trust and/or dependency issues are likely the source of various technology panics and opposition campaigns, e.g. nuclear, GMOs (genetically modified organisms), telephone, telegraph, electricity, writing, etc.

It’s hard to understand now that literacy is so common, but in a society where it is less common, the written word is not necessarily to be trusted. After all, if only one person in the room can read (or claims they can), how do you know they’re telling the truth about what’s written?

As for cyborgs, I think we’re going to have some very interesting discussions about them, and these discussions may not all occur in the sanctified halls of academe or in quiet conference rooms stuffed with bureaucrats. As I’ve noted before, there is a whole discussion taking place about emerging technologies in the realm of popular culture, where our greatest hopes and fears are reflected and, sometimes, intensified.