Tag Archives: US Army Research Laboratory (ARL)

A lobster’s stretch and strength in a hydrogel

An MIT team has fabricated a hydrogel-based material that mimics the structure of the lobster’s underbelly, the toughest known hydrogel found in nature. Credits: Courtesy of the researchers

I love this lobster. In most photos, they’re food. This shows off the lobster as a living entity while showcasing its underbelly, which is what this story is all about. From an April 23, 2021 news item on phys.org (Note: A link has been removed),

A lobster’s underbelly is lined with a thin, translucent membrane that is both stretchy and surprisingly tough. This marine under-armor, as MIT [Massachusetts Institute of Technology] engineers reported in 2019, is made from the toughest known hydrogel in nature, which also happens to be highly flexible. This combination of strength and stretch helps shield a lobster as it scrabbles across the seafloor, while also allowing it to flex back and forth to swim.

Now a separate MIT team has fabricated a hydrogel-based material that mimics the structure of the lobster’s underbelly. The researchers ran the material through a battery of stretch and impact tests, and showed that, similar to the lobster underbelly, the synthetic material is remarkably “fatigue-resistant,” able to withstand repeated stretches and strains without tearing.

If the fabrication process could be significantly scaled up, materials made from nanofibrous hydrogels could be used to make stretchy and strong replacement tissues such as artificial tendons and ligaments.

The team’s results are published in the journal Matter. The paper’s MIT co-authors include postdocs Jiahua Ni and Shaoting Lin; graduate students Xinyue Liu and Yuchen Sun; professor of aeronautics and astronautics Raul Radovitzky; professor of chemistry Keith Nelson; mechanical engineering professor Xuanhe Zhao; and former research scientist David Veysset Ph.D. ’16, now at Stanford University; along with Zhao Qin, assistant professor at Syracuse University, and Alex Hsieh of the Army Research Laboratory.

An April 23, 2021 MIT news release (also on EurekAlert) by Jennifer Chu, which originated the news item, offers an overview of the groundwork for this latest research along with technical detail about the latest work,

Nature’s twist

In 2019, Lin and other members of Zhao’s group developed a new kind of fatigue-resistant material made from hydrogel — a gelatin-like class of materials made primarily of water and cross-linked polymers. They fabricated the material from ultrathin fibers of hydrogel, which aligned like many strands of gathered straw when the material was repeatedly stretched. This workout also happened to increase the hydrogel’s fatigue resistance.

“At that moment, we had a feeling nanofibers in hydrogels were important, and hoped to manipulate the fibril structures so that we could optimize fatigue resistance,” says Lin.

In their new study, the researchers combined a number of techniques to create stronger hydrogel nanofibers. The process starts with electrospinning, a fiber production technique that uses electric charges to draw ultrathin threads out of polymer solutions. The team used high-voltage charges to spin nanofibers from a polymer solution, forming a flat film of nanofibers, each measuring about 800 nanometers in diameter — a fraction of the diameter of a human hair.

They placed the film in a high-humidity chamber to weld the individual fibers into a sturdy, interconnected network, and then set the film in an incubator to crystallize the individual nanofibers at high temperatures, further strengthening the material.

They tested the film’s fatigue resistance by placing it in a machine that stretched it repeatedly over tens of thousands of cycles. They also made notches in some films and observed how the cracks propagated as the films were stretched repeatedly. From these tests, they calculated that the nanofibrous films were 50 times more fatigue-resistant than conventional nanofibrous hydrogels.

Around this time, they read with interest a study by Ming Guo, associate professor of mechanical engineering at MIT, who characterized the mechanical properties of a lobster’s underbelly. This protective membrane is made from thin sheets of chitin, a natural, fibrous material that is similar in makeup to the group’s hydrogel nanofibers.

Guo found that a cross-section of the lobster membrane revealed sheets of chitin stacked at 36-degree angles, similar to twisted plywood, or a spiral staircase. This rotating, layered configuration, known as a bouligand structure, enhanced the membrane’s properties of stretch and strength.

“We learned that this bouligand structure in the lobster underbelly has high mechanical performance, which motivated us to see if we could reproduce such structures in synthetic materials,” Lin says.

Angled architecture

Ni, Lin, and members of Zhao’s group teamed up with Nelson’s lab and Radovitzky’s group in MIT’s Institute for Soldier Nanotechnologies, and Qin’s lab at Syracuse University, to see if they could reproduce the lobster’s bouligand membrane structure using their synthetic, fatigue-resistant films.

“We prepared aligned nanofibers by electrospinning to mimic the chitin fibers that exist in the lobster underbelly,” Ni says.

After electrospinning nanofibrous films, the researchers stacked each of five films at successive 36-degree angles to form a single bouligand structure, which they then welded and crystallized to fortify the material. The final product measured 9 square centimeters in area and about 30 to 40 microns in thickness — about the size of a small piece of Scotch tape.
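For anyone who likes to see the geometry spelled out, here’s a minimal sketch (my own illustration, not the team’s code) of the fiber orientations in a five-film stack rotated in successive 36-degree steps,

```python
# Fiber orientation of each layer in a five-film bouligand stack,
# rotated in successive 36-degree steps as described above.

n_layers = 5
step_deg = 36

angles = [i * step_deg for i in range(n_layers)]
print(angles)  # [0, 36, 72, 108, 144]

# A crack running along the fibers of one layer meets fibers rotated
# 36 degrees in the next layer -- the crack-arresting mechanism Lin
# describes below.
for lower, upper in zip(angles, angles[1:]):
    print(f"layer at {lower:3d} deg -> next layer at {upper:3d} deg (misorientation 36 deg)")
```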

Stretch tests showed that the lobster-inspired material performed similarly to its natural counterpart, able to stretch repeatedly while resisting tears and cracks — a fatigue-resistance Lin attributes to the structure’s angled architecture.

“Intuitively, once a crack in the material propagates through one layer, it’s impeded by adjacent layers, where fibers are aligned at different angles,” Lin explains.

The team also subjected the material to microballistic impact tests with an experiment designed by Nelson’s group. They imaged the material as they shot it with microparticles at high velocity, and measured the particles’ speed before and after tearing through the material. The difference in velocity gave them a direct measurement of the material’s impact resistance, or the amount of energy it can absorb, which turned out to be a surprisingly tough 40 kilojoules per kilogram, measured in the hydrated state.

“That means that a 5-millimeter steel ball launched at 200 meters per second would be arrested by 13 millimeters of the material,” Veysset says. “It is not as resistant as Kevlar, which would require 1 millimeter, but the material beats Kevlar in many other categories.”
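Veysset’s numbers can be roughly sanity-checked with some back-of-the-envelope arithmetic. In this sketch, the steel and hydrogel densities are my assumptions; only the ball size, speed, and the 40 kJ/kg figure come from the release,

```python
import math

# Back-of-the-envelope check of the quoted figures.
rho_steel = 7850.0   # kg/m^3, typical steel (my assumption)
rho_gel = 1100.0     # kg/m^3, rough hydrated-hydrogel density (my assumption)

d_ball = 5e-3        # 5 mm steel ball (from the quote)
v = 200.0            # m/s (from the quote)
absorb = 40e3        # J/kg, impact resistance reported in the study

# Kinetic energy of the ball.
mass_ball = rho_steel * (math.pi / 6.0) * d_ball**3
kinetic_energy = 0.5 * mass_ball * v**2   # ~10 J

# Mass of material needed to soak up that energy, and the thickness that
# implies if a ball-diameter column of material does the absorbing.
mass_needed = kinetic_energy / absorb
area = math.pi * (d_ball / 2.0)**2
thickness = mass_needed / (rho_gel * area)

print(f"kinetic energy = {kinetic_energy:.1f} J; arrest thickness = {thickness*1e3:.0f} mm")
# ~12 mm with these assumed densities, close to the 13 mm quoted above.
```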

It’s no surprise that the new material isn’t as tough as commercial antiballistic materials. It is, however, significantly sturdier than most other nanofibrous hydrogels such as gelatin and synthetic polymers like PVA. The material is also much stretchier than Kevlar. This combination of stretch and strength suggests that, if their fabrication can be sped up, and more films stacked in bouligand structures, nanofibrous hydrogels may serve as flexible and tough artificial tissues.

“For a hydrogel material to be a load-bearing artificial tissue, both strength and deformability are required,” Lin says. “Our material design could achieve these two properties.”

If you have the time and the interest, do check out the April 23, 2021 MIT news release, which features a couple of informative GIFs.

Here’s a link to and a citation for the paper,

Strong fatigue-resistant nanofibrous hydrogels inspired by lobster underbelly by Jiahua Ni, Shaoting Lin, Zhao Qin, David Veysset, Xinyue Liu, Yuchen Sun, Alex J. Hsieh, Raul Radovitzky, Keith A. Nelson, and Xuanhe Zhao. Matter, 2021. DOI: 10.1016/j.matt.2021.03.023. Published: April 23, 2021

This paper is behind a paywall.

US Army researchers’ vision for artificial intelligence and ethics

The US Army peeks into a near future where humans and some forms of artificial intelligence (AI) work together in battle and elsewhere. From a February 3, 2021 U.S. Army Research Laboratory news release (also on EurekAlert but published on February 16, 2021),

The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical.

Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making of a kind that has not been pursued elsewhere.

This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, is still mostly understudied.

“Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army,” said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory’s ARL West regional site in Playa Vista, California. “These machines will inevitably face moral dilemmas where they must make decisions that could very well injure humans.”

For example, de Melo said, imagine that an automated vehicle is driving in a tunnel and suddenly five pedestrians cross the street; the vehicle must decide whether to continue moving forward injuring the pedestrians or swerve towards the wall risking the driver.

What should the automated vehicle do in this situation?

Prior work has framed these dilemmas in starkly simple terms, casting decisions as life and death, de Melo said, neglecting how the risk of injury to the involved parties influences the outcome.

“By expanding the study of moral dilemmas to consider the risk profile of the situation, we significantly expanded the space of acceptable solutions for these dilemmas,” de Melo said. “In so doing, we contributed to the development of autonomous technology that abides by acceptable moral norms and thus is more likely to be adopted in practice and accepted by the general public.”

The researchers focused on this gap and presented experimental evidence that, in a moral dilemma with automated vehicles, the likelihood of making the utilitarian choice – which minimizes the overall injury risk to humans and, in this case, saves the pedestrians – was moderated by the perceived risk of injury to pedestrians and drivers.

In their study, participants were found to be more likely to make the utilitarian choice as the risk to the driver decreased and as the risk to the pedestrians increased. Interestingly, however, most were willing to risk the driver (i.e., self-sacrifice) even when the risk to the pedestrians was lower than the risk to the driver.

As a second contribution, the researchers also demonstrated that participants’ moral decisions were influenced by what other decision makers do; for instance, participants were less likely to make the utilitarian choice if others often chose the non-utilitarian one.
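To make the two findings concrete, here’s a toy model (entirely my own illustration; the functional form and coefficients are hypothetical, not the authors’ fitted model) of how the probability of the utilitarian choice might respond to driver risk, pedestrian risk, and what others choose,

```python
import math

# Toy logistic model of the two effects the study reports; all
# coefficients are hypothetical, chosen only to show the directions.

def p_utilitarian(risk_driver, risk_ped, peer_rate,
                  b0=0.5, b_driver=3.0, b_ped=3.0, b_peer=1.5):
    """Probability of swerving (the utilitarian choice).

    risk_driver, risk_ped: perceived injury risks, each in [0, 1]
    peer_rate: fraction of other decision makers who chose to swerve
    """
    z = (b0                             # baseline willingness to self-sacrifice
         - b_driver * risk_driver       # higher driver risk -> less likely
         + b_ped * risk_ped             # higher pedestrian risk -> more likely
         + b_peer * (peer_rate - 0.5))  # social influence of others' choices
    return 1.0 / (1.0 + math.exp(-z))

print(p_utilitarian(0.2, 0.8, 0.7))  # low driver risk, high ped risk: ~0.93
print(p_utilitarian(0.8, 0.2, 0.3))  # high driver risk, low ped risk: ~0.17
```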

“This research advances the state-of-the-art in the study of moral dilemmas involving autonomous machines by shedding light on the role of risk on moral choices,” de Melo said. “Further, both of these mechanisms introduce opportunities to develop AI that will be perceived to make decisions that meet moral standards, as well as introduce an opportunity to use technology to shape human behavior and promote a more moral society.”

For the Army, this research is particularly relevant to Army modernization, de Melo said.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”

Moving into the future, researchers will study this type of risk-benefit analysis in Army moral dilemmas and articulate the corresponding practical implications for the development of AI systems.

“When deployed at scale, the decisions made by AI systems can be very consequential, in particular for situations involving risk to human life,” de Melo said. “It is critical that AI is able to make decisions that reflect society’s ethical standards to facilitate adoption by the Army and acceptance by the general public. This research contributes to realizing this vision by clarifying some of the key factors shaping these standards. This research is personally important because AI is expected to have considerable impact to the Army of the future; however, what kind of impact will be defined by the values reflected in that AI.”

The last time I had an item on a similar topic from the US Army Research Laboratory (ARL) it was in a March 26, 2018 posting; scroll down to the subhead, US Army (about 50% of the way down),

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

This latest work also revolves around the issue of trust according to the last sentence in the 2021 study paper (link and citation to follow),

… Overall, these questions emphasize the importance of the kind of experimental work presented here, as it has the potential to shed light on people’s preferences about moral behavior in machines, inform the design of autonomous machines people are likely to trust and adopt, and, perhaps, even introduce an opportunity to promote a more moral society. [emphases mine]

From trust to adoption to a more moral society—that’s an interesting progression. For another, more optimistic view of how robots and AI can have positive impacts, there’s my March 29, 2021 posting, Little Lost Robot and humane visions of our technological future.

Here’s a link to and a citation for the paper,

Risk of Injury in Moral Dilemmas With Autonomous Vehicles by Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Front. Robot. AI [Frontiers in Robotics and AI], 20 January 2021. DOI: https://doi.org/10.3389/frobt.2020.572529

This paper is in an open access journal.

Use kombucha to produce bacterial cellulose

The combination of the US Army, bacterial cellulose, and kombucha seems a little unusual. However, this January 26, 2021 U.S. Army Research Laboratory news release (also on EurekAlert) provides some clues as to how this combination makes sense,

Kombucha tea, a trendy fermented beverage, inspired researchers to develop a new way to generate tough, functional materials using a mixture of bacteria and yeast similar to the kombucha mother used to ferment tea.

With Army funding, engineers at MIT [Massachusetts Institute of Technology] and Imperial College London used this mixture, also called a SCOBY (symbiotic culture of bacteria and yeast), to produce cellulose embedded with enzymes that can perform a variety of functions, such as sensing environmental pollutants and enabling self-healing materials.

The team also showed that they could incorporate yeast directly into the cellulose, creating living materials that could be used to purify water for Soldiers in the field or make smart packaging materials that can detect damage.

“This work provides insights into how synthetic biology approaches can harness the design of biotic-abiotic interfaces with biological organization over multiple length scales,” said Dr. Dawanne Poree, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory. “This is important to the Army as this can lead to new materials with potential applications in microbial fuel cells, sense and respond systems, and self-reporting and self-repairing materials.”

The research, published in Nature Materials, was funded by ARO [Army Research Office] and the Army’s Institute for Soldier Nanotechnologies [ISN] at the Massachusetts Institute of Technology. The U.S. Army established the ISN in 2002 as an interdisciplinary research center devoted to dramatically improving the protection, survivability, and mission capabilities of the Soldier and Soldier-supporting platforms and systems.

“We foresee a future where diverse materials could be grown at home or in local production facilities, using biology rather than resource-intensive centralized manufacturing,” said Timothy Lu, an MIT associate professor of electrical engineering and computer science and of biological engineering.

A SCOBY is essentially a fermentation factory: it usually contains one species of bacteria and one or more yeast species, which together produce ethanol, cellulose, and the acetic acid that gives kombucha tea its distinctive flavor.

Most of the wild yeast strains used for fermentation are difficult to genetically modify, so the researchers replaced them with a strain of laboratory yeast called Saccharomyces cerevisiae. They combined the yeast with a type of bacteria called Komagataeibacter rhaeticus that their collaborators at Imperial College London had previously isolated from a kombucha mother. This species can produce large quantities of cellulose.

Because the researchers used a laboratory strain of yeast, they could engineer the cells to do any of the things that lab yeast can do, such as producing enzymes that glow in the dark, or sensing pollutants or pathogens in the environment. The yeast can also be programmed to break down pollutants or pathogens after detecting them, which is highly relevant to the Army for chem/bio defense applications.

“Our community believes that living materials could provide the most effective sensing of chem/bio warfare agents, especially those of unknown genetics and chemistry,” said Dr. Jim Burgess, ISN program manager for ARO.

The bacteria in the culture produced large-scale quantities of tough cellulose that served as a scaffold. The researchers designed their system so that they can control whether the yeast themselves, or just the enzymes that they produce, are incorporated into the cellulose structure. It takes only a few days to grow the material, and if left long enough, it can thicken to occupy a space as large as a bathtub.

“We think this is a good system that is very cheap and very easy to make in very large quantities,” said MIT graduate student and the paper’s lead author, Tzu-Chieh Tang.

To demonstrate the potential of their microbe culture, which they call Syn-SCOBY, the researchers created a material incorporating yeast that senses estradiol, which is sometimes found as an environmental pollutant. In another version, they used a strain of yeast that produces a glowing protein called luciferase when exposed to blue light. These yeasts could be swapped out for other strains that detect other pollutants, metals, or pathogens.

The researchers are now looking into using the Syn-SCOBY system for biomedical or food applications; for example, the yeast cells could be engineered to produce antimicrobials or proteins that benefit human health.

Here’s a link to and a citation for the paper,

Living materials with programmable functionalities grown from engineered microbial co-cultures by Charlie Gilbert, Tzu-Chieh Tang, Wolfgang Ott, Brandon A. Dorr, William M. Shaw, George L. Sun, Timothy K. Lu & Tom Ellis. Nature Materials (2021). DOI: https://doi.org/10.1038/s41563-020-00857-5. Published: 11 January 2021

This paper is behind a paywall.

Bionanomotors for bio-inspired robots on the battlefield

An October 9, 2019 news item on ScienceDaily provides some insight into the latest US Army research into robots,

In an effort to make robots more effective and versatile teammates for Soldiers in combat, Army researchers are on a mission to understand the value of the molecular living functionality of muscle, and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction.

Caption: Army researchers are on a mission to understand the value of the molecular ‘living’ functionality of muscle, and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction. Credit: US Army-Shutterstock

An October 8, 2019 US Army Research Laboratory news release (also on EurekAlert but published on October 9, 2019), which originated the news item, delves further into the research,

Bionanomotors, like myosins that move along actin networks, are responsible for most methods of motion in all life forms. Thus, the development of artificial nanomotors could be game-changing in the field of robotics research.

Researchers from the U.S. Army Combat Capabilities Development Command’s [CCDC] Army Research Laboratory [ARL] have been looking to identify a design that would allow an artificial nanomotor to take advantage of Brownian motion, the tendency of particles to move about randomly simply because they are warm.

The CCDC ARL researchers believe understanding and developing these fundamental mechanics are a necessary foundational step toward making informed decisions on the viability of new directions in robotics involving the blending of synthetic biology, robotics, and dynamics and controls engineering.

“By controlling the stiffness of different geometrical features of a simple lever-arm design, we found that we could use Brownian motion to make the nanomotor more capable of reaching desirable positions for creating linear motion,” said Dean Culver, a researcher in CCDC ARL’s Vehicle Technology Directorate. “This nano-scale feature translates to more energetically efficient actuation at a macro scale, meaning robots that can do more for the warfighter over a longer amount of time.”

According to Culver, descriptions of protein interactions in muscle contraction are typically fairly high-level. Rather than describing the forces that act on an individual protein as it seeks its counterpart, the research community has replicated this biomechanical process with prescribed or empirical rate functions that dictate the conditions under which a binding or a release event occurs.

“These widely accepted muscle contraction models are akin to a black-box understanding of a car engine,” Culver said. “More gas, more power. It weighs this much and takes up this much space. Combustion is involved. But, you can’t design a car engine with that kind of surface-level information. You need to understand how the pistons work, and how finely injection needs to be tuned. That’s a component-level understanding of the engine. We dive into the component-level mechanics of the built-up protein system and show the design and control value of living functionality as well as a clearer understanding of design parameters that would be key to synthetically reproducing such living functionality.”

Culver stated that the capacity for Brownian motion to kick a tethered particle from a disadvantageous elastic position to an advantageous one, in terms of energy production for a molecular motor, has been illustrated by ARL at a component level, a crucial step in the design of artificial nanomotors that offer the same performance capabilities as biological ones.
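For readers who want a feel for that mechanism, here’s a minimal simulation sketch (my own, not ARL’s model): an overdamped tethered particle in a double-well elastic potential, where thermal (Brownian) kicks occasionally carry it from the disadvantageous position to the advantageous one. All parameters are made up for illustration,

```python
import numpy as np

# Overdamped Langevin (Euler-Maruyama) simulation of a tethered particle
# in a double-well potential U(x) = a*(x^2 - 1)^2, with wells at x = -1
# (the "disadvantageous" position) and x = +1 (the "advantageous" one).
# All values are nondimensional and purely illustrative.

rng = np.random.default_rng(0)

kBT = 1.0        # thermal energy
gamma = 1.0      # drag coefficient
dt = 1e-3        # time step
steps = 200_000

a = 2.0          # barrier height is a (in units of kBT) at x = 0

def force(x):
    return -4.0 * a * x * (x**2 - 1.0)   # F = -dU/dx

x = -1.0         # start in the disadvantageous well
escapes = 0
noise = np.sqrt(2.0 * kBT * dt / gamma)  # Brownian kick amplitude

for _ in range(steps):
    x += force(x) / gamma * dt + noise * rng.standard_normal()
    if x >= 1.0:   # a thermal kick carried the particle over the barrier
        escapes += 1
        x = -1.0   # re-tether and watch for the next escape

print(f"Brownian escapes over {steps} steps: {escapes}")
```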

“This research adds a key piece of the puzzle for fast, versatile robots that can perform autonomous tactical maneuver and reconnaissance functions,” Culver said. “These models will be integral to the design of distributed actuators that are silent, low thermal signature and efficient – features that will make these robots more impactful in the field.”

Culver noted that they are silent because muscles don’t make a lot of noise when they actuate, especially compared to motors or servos; cold because the amount of heat generated in a muscle is far less than in a comparable motor; and efficient because of the advantages of the distributed chemical energy model and potential escape via Brownian motion.

According to Culver, the breadth of applications for actuators inspired by the biomolecular machines in animal muscles is still unknown, but many of the existing application spaces have clear Army applications such as bio-inspired robotics, nanomachines and energy harvesting.

“Fundamental and exploratory research in this area is therefore a wise investment for our future warfighter capabilities,” Culver said.

Moving forward, there are two primary extensions of this research.

“First, we need to better understand how molecules, like the tethered particle discussed in our paper, interact with each other in more complicated environments,” Culver said. “In the paper, we see how a tethered particle can usefully harness Brownian motion to benefit the contraction of the muscle overall, but the particle in this first model is in an idealized environment. In our bodies, it’s submerged in a fluid carrying many different ions and energy-bearing molecules in solution. That’s the last piece of the puzzle for the single-motor, nano-scale models of molecular motors.”

The second extension, stated Culver, is to repeat this study with a full 3-D model, paving the way to scaling up to practical designs.

Also notable is the fact that because this research is so young, ARL researchers used this project to establish relationships with other investigators in the academic community.

“Leaning on their expertise will be critical in the years to come, and we’ve done a great job of reaching out to faculty members and researchers from places like the University of Washington, Duke University and Carnegie Mellon University,” Culver said.

According to Culver, taking this research project into the next steps with help from collaborative partners will lead to tremendous capabilities for future Soldiers in combat, a critical requirement considering the nature of the ever-changing battlefield.

Here’s a link to and a citation for the paper,

A Dynamic Escape Problem of Molecular Motors by Dean Culver, Bryan Glaz, and Samuel Stanton. J Biomech Eng [Journal of Biomechanical Engineering]. Paper No: BIO-18-1527. DOI: https://doi.org/10.1115/1.4044580. Published online: August 1, 2019

This paper is behind a paywall.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation‘ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior, and more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems, but commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned earlier is clearly one where the contrarian is wrong, but it’s not always that obvious. As well, sometimes the people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
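One way to picture the three SAT levels is as successively richer messages from agent to human. This sketch is purely illustrative; the field names are my own inventions, since the researchers define the levels conceptually rather than as a data format,

```python
from dataclasses import dataclass

# Illustrative only: one possible encoding of the three SAT levels as
# messages an agent sends its human teammate. Field names are my own
# assumptions, not from the ARL papers.

@dataclass
class SATLevel1:
    """Basic info: the agent's current state, goals, intentions, and plans."""
    current_state: str
    goals: list[str]
    plan: list[str]

@dataclass
class SATLevel2:
    """The agent's reasoning process and the constraints/affordances considered."""
    reasoning: str
    constraints: list[str]
    affordances: list[str]

@dataclass
class SATLevel3:
    """Projections: future states, predicted consequences, and uncertainty."""
    projected_outcome: str
    success_likelihood: float  # 0.0 to 1.0
    uncertainty: str           # caveats on the projection

# Example: a (hypothetical) route-planning agent reporting at all three levels.
report = (
    SATLevel1("en route", ["reach rally point"], ["take north road", "cross bridge"]),
    SATLevel2("north road is shortest clear route", ["fuel limit"], ["bridge can bear load"]),
    SATLevel3("arrive in 20 minutes", 0.85, "bridge status last confirmed 2 hours ago"),
)
print(report)
```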

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated -- accepting the agent’s plan when it is correct and rejecting it when it is incorrect -- when the agent had a higher level of transparency.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was the Autonomous Squad Member [ASM], on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward -- human and agent being mutually transparent about their reasoning process -- can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance -- just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design user interfaces, which can include visual, auditory, and other modalities, that support bidirectional transparency dynamically and in real time without overwhelming the human with too much information and burden.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: 10.1080/1463922X.2017.1315750

This paper is behind a paywall.