
Use kombucha to produce bacterial cellulose

The combination of the US Army, bacterial cellulose, and kombucha seems a little unusual. However, this January 26, 2021 U.S. Army Research Laboratory news release (also on EurekAlert) provides some clues as to how this combination makes sense,

Kombucha tea, a trendy fermented beverage, inspired researchers to develop a new way to generate tough, functional materials using a mixture of bacteria and yeast similar to the kombucha mother used to ferment tea.

With Army funding, using this mixture, also called a SCOBY, or symbiotic culture of bacteria and yeast, engineers at MIT [Massachusetts Institute of Technology] and Imperial College London produced cellulose embedded with enzymes that can perform a variety of functions, such as sensing environmental pollutants or enabling self-healing materials.

The team also showed that they could incorporate yeast directly into the cellulose, creating living materials that could be used to purify water for Soldiers in the field or make smart packaging materials that can detect damage.

“This work provides insights into how synthetic biology approaches can harness the design of biotic-abiotic interfaces with biological organization over multiple length scales,” said Dr. Dawanne Poree, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory. “This is important to the Army as this can lead to new materials with potential applications in microbial fuel cells, sense and respond systems, and self-reporting and self-repairing materials.”

The research, published in Nature Materials, was funded by ARO [Army Research Office] and the Army’s Institute for Soldier Nanotechnologies [ISN] at the Massachusetts Institute of Technology. The U.S. Army established the ISN in 2002 as an interdisciplinary research center devoted to dramatically improving the protection, survivability, and mission capabilities of the Soldier and Soldier-supporting platforms and systems.

“We foresee a future where diverse materials could be grown at home or in local production facilities, using biology rather than resource-intensive centralized manufacturing,” said Timothy Lu, an MIT associate professor of electrical engineering and computer science and of biological engineering.

The kombucha mothers that inspired the work are fermentation factories: they usually contain one species of bacteria and one or more yeast species, and they produce the ethanol, cellulose, and acetic acid that gives kombucha tea its distinctive flavor.

Most of the wild yeast strains used for fermentation are difficult to genetically modify, so the researchers replaced them with a strain of laboratory yeast called Saccharomyces cerevisiae. They combined the yeast with a type of bacteria called Komagataeibacter rhaeticus that their collaborators at Imperial College London had previously isolated from a kombucha mother. This species can produce large quantities of cellulose.

Because the researchers used a laboratory strain of yeast, they could engineer the cells to do any of the things that lab yeast can do, such as producing enzymes that glow in the dark, or sensing pollutants or pathogens in the environment. The yeast can also be programmed to break down pollutants or pathogens after detecting them, which is highly relevant to the Army for chem/bio defense applications.

“Our community believes that living materials could provide the most effective sensing of chem/bio warfare agents, especially those of unknown genetics and chemistry,” said Dr. Jim Burgess, ISN program manager for ARO.

The bacteria in the culture produced large-scale quantities of tough cellulose that served as a scaffold. The researchers designed their system so that they can control whether the yeast themselves, or just the enzymes that they produce, are incorporated into the cellulose structure. It takes only a few days to grow the material, and if left long enough, it can thicken to occupy a space as large as a bathtub.

“We think this is a good system that is very cheap and very easy to make in very large quantities,” said MIT graduate student and the paper’s lead author, Tzu-Chieh Tang. To demonstrate the potential of their microbe culture, which they call Syn-SCOBY, the researchers created a material incorporating yeast that senses estradiol, which is sometimes found as an environmental pollutant. In another version, they used a strain of yeast that produces a glowing protein called luciferase when exposed to blue light. These yeasts could be swapped out for other strains that detect other pollutants, metals, or pathogens.

The researchers are now looking into using the Syn-SCOBY system for biomedical or food applications; for example, the yeast cells could be engineered to produce antimicrobials or proteins that benefit human health.

Here’s a link to and a citation for the paper,

Living materials with programmable functionalities grown from engineered microbial co-cultures by Charlie Gilbert, Tzu-Chieh Tang, Wolfgang Ott, Brandon A. Dorr, William M. Shaw, George L. Sun, Timothy K. Lu & Tom Ellis. Nature Materials (2021) DOI: https://doi.org/10.1038/s41563-020-00857-5 Published: 11 January 2021

This paper is behind a paywall.

Bionanomotors for bio-inspired robots on the battlefield

An October 9, 2019 news item on ScienceDaily provides some insight into the latest US Army research into robots,

In an effort to make robots more effective and versatile teammates for Soldiers in combat, Army researchers are on a mission to understand the value of the molecular living functionality of muscle, and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction.

Caption: Army researchers study the molecular ‘living’ functionality of muscle and the mechanics that would need to be replicated to achieve it artificially. Credit: US Army-Shutterstock

An October 8, 2019 US Army Research Laboratory news release (also on EurekAlert but published on October 9, 2019), which originated the news item, delves further into the research,

Bionanomotors, like myosins that move along actin networks, are responsible for most methods of motion in all life forms. Thus, the development of artificial nanomotors could be game-changing in the field of robotics research.

Researchers from the U.S. Army Combat Capabilities Development Command’s [CCDC] Army Research Laboratory [ARL] have been looking to identify a design that would allow an artificial nanomotor to take advantage of Brownian motion, the tendency of particles to jiggle about randomly simply because they are warm.

The CCDC ARL researchers believe understanding and developing these fundamental mechanics are a necessary foundational step toward making informed decisions on the viability of new directions in robotics involving the blending of synthetic biology, robotics, and dynamics and controls engineering.

“By controlling the stiffness of different geometrical features of a simple lever-arm design, we found that we could use Brownian motion to make the nanomotor more capable of reaching desirable positions for creating linear motion,” said Dean Culver, a researcher in CCDC ARL’s Vehicle Technology Directorate. “This nano-scale feature translates to more energetically efficient actuation at a macro scale, meaning robots that can do more for the warfighter over a longer amount of time.”

According to Culver, descriptions of the protein interactions in muscle contraction are typically fairly high-level. Rather than describing the forces that drive an individual protein toward its counterpart, the research community has replicated this biomechanical process with prescribed or empirical rate functions that dictate the conditions under which a binding or release event occurs.

“These widely accepted muscle contraction models are akin to a black-box understanding of a car engine,” Culver said. “More gas, more power. It weighs this much and takes up this much space. Combustion is involved. But, you can’t design a car engine with that kind of surface-level information. You need to understand how the pistons work, and how finely injection needs to be tuned. That’s a component-level understanding of the engine. We dive into the component-level mechanics of the built-up protein system and show the design and control value of living functionality as well as a clearer understanding of design parameters that would be key to synthetically reproducing such living functionality.”

Culver stated that ARL has illustrated, at a component level, the capacity of Brownian motion to kick a tethered particle from a disadvantageous elastic position to an advantageous one in terms of energy production for a molecular motor. That demonstration is a crucial step in the design of artificial nanomotors that offer the same performance capabilities as biological ones.
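For readers who want a feel for what “escape” means here, a minimal numerical sketch follows. This is my own illustration, not the researchers’ model: it simulates an overdamped, thermally agitated particle held by an elastic tether and counts how often Brownian kicks carry it past an energy barrier into a more advantageous position. All parameter values are arbitrary, chosen only to make the trend visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped Langevin dynamics (Euler-Maruyama) for a particle on an
# elastic tether. All units and parameter values are arbitrary.
gamma = 1.0      # drag coefficient
kBT = 0.5        # thermal energy; sets the strength of the Brownian kicks
D = kBT / gamma  # diffusion coefficient (Einstein relation)
dt = 1e-3        # integration time step
barrier = 2.0    # position of the energetically advantageous site

def escape_time(k, n_steps=100_000):
    """Step a particle tethered by a spring of stiffness k until thermal
    noise carries it past the barrier; return the elapsed time."""
    x = 0.0
    for step in range(n_steps):
        force = -k * x  # elastic restoring force from the tether
        x += (force / gamma) * dt + np.sqrt(2 * D * dt) * rng.normal()
        if x >= barrier:
            return step * dt
    return np.inf  # no escape within the simulated window

# Stiffness is a design knob: softer tethers let Brownian motion do more work.
for k in (0.5, 1.0, 2.0):
    times = [escape_time(k) for _ in range(50)]
    escaped = [t for t in times if np.isfinite(t)]
    note = f"mean escape time {np.mean(escaped):.1f}" if escaped else "no escapes"
    print(f"stiffness k={k}: {len(escaped)}/50 escaped, {note}")
```

Softer tethers escape more often within the same time window, which is the kind of stiffness-dependent behavior Culver describes exploiting in the lever-arm design.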

“This research adds a key piece of the puzzle for fast, versatile robots that can perform autonomous tactical maneuver and reconnaissance functions,” Culver said. “These models will be integral to the design of distributed actuators that are silent, low thermal signature and efficient – features that will make these robots more impactful in the field.”

Culver noted that they are silent because muscles don’t make much noise when they actuate, especially compared to motors or servos; cold because a muscle generates far less heat than a comparable motor; and efficient because of the advantages of the distributed chemical energy model and of potential escape via Brownian motion.

According to Culver, the breadth of applications for actuators inspired by the biomolecular machines in animal muscles is still unknown, but many of the existing application spaces have clear Army applications such as bio-inspired robotics, nanomachines and energy harvesting.

“Fundamental and exploratory research in this area is therefore a wise investment for our future warfighter capabilities,” Culver said.

Moving forward, there are two primary extensions of this research.

“First, we need to better understand how molecules, like the tethered particle discussed in our paper, interact with each other in more complicated environments,” Culver said. “In the paper, we see how a tethered particle can usefully harness Brownian motion to benefit the contraction of the muscle overall, but the particle in this first model is in an idealized environment. In our bodies, it’s submerged in a fluid carrying many different ions and energy-bearing molecules in solution. That’s the last piece of the puzzle for the single-motor, nano-scale models of molecular motors.”

The second extension, stated Culver, is to repeat this study with a full 3-D model, paving the way to scaling up to practical designs.

Also notable: because this research is so young, ARL researchers used the project to establish relationships with other investigators in the academic community.

“Leaning on their expertise will be critical in the years to come, and we’ve done a great job of reaching out to faculty members and researchers from places like the University of Washington, Duke University and Carnegie Mellon University,” Culver said.

According to Culver, taking this research project into the next steps with help from collaborative partners will lead to tremendous capabilities for future Soldiers in combat, a critical requirement considering the nature of the ever-changing battlefield.

Here’s a link to and a citation for the paper,

A Dynamic Escape Problem of Molecular Motors by Dean Culver, Bryan Glaz, Samuel Stanton. J Biomech Eng. Paper No: BIO-18-1527 DOI: https://doi.org/10.1115/1.4044580 Published Online: August 1, 2019

This paper is behind a paywall.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. In addition, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bilingual book ‘Inspire a Nation’ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting doctors to admit that their approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, to the institution of medicine. Also, one of the biggest problems in any field is getting people to change, and it’s not always about trust. In this instance, you’re asking doctors to back someone else’s opinion after they have rendered their own. This is difficult even when the other party is another human doctor, let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does the doctor need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior, and were more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
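To make the three SAT levels concrete, here is a minimal sketch of how an agent might package that information for an operator display. This is my own illustration; the field names are hypothetical, since the SAT model specifies only what kind of information each level conveys, not a concrete schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SATLevel1:
    """Level 1: the agent's current state, goals, intentions, and plans."""
    current_state: str
    goals: List[str]
    planned_actions: List[str]

@dataclass
class SATLevel2:
    """Level 2: the reasoning behind the plan, with its constraints and affordances."""
    rationale: str
    constraints: List[str]
    affordances: List[str]

@dataclass
class SATLevel3:
    """Level 3: projected future states, predicted consequences, and uncertainty."""
    projected_outcome: str
    success_likelihood: float  # 0.0 to 1.0
    uncertainty_notes: str

@dataclass
class TransparencyReport:
    """What the agent exposes; lower transparency settings omit higher levels."""
    level1: SATLevel1
    level2: Optional[SATLevel2] = None
    level3: Optional[SATLevel3] = None

# A hypothetical report from an unmanned vehicle to its operator.
report = TransparencyReport(
    level1=SATLevel1("en route to waypoint 3", ["reach objective"], ["follow route A"]),
    level2=SATLevel2("route A avoids a known obstacle", ["fuel limit"], ["clear terrain"]),
    level3=SATLevel3("arrival in 12 minutes", 0.85, "obstacle map is two hours old"),
)
```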

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated (accepting the agent’s plan when it is correct and rejecting it when it is incorrect) when the agent had a higher level of transparency.
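“Calibrated trust” has a simple operational reading: the operator should accept exactly the plans that turn out to be correct. A minimal scoring sketch (my own illustration, not the study’s analysis code):

```python
def calibration_score(decisions):
    """decisions: list of (plan_was_correct, operator_accepted) boolean pairs.
    Returns the fraction of trials where acceptance matched correctness;
    1.0 means perfectly calibrated trust."""
    return sum(c == a for c, a in decisions) / len(decisions)

# Hypothetical operator: accepts 3 of 4 correct plans, rejects 1 of 2 incorrect ones.
trials = [(True, True), (True, True), (True, True), (True, False),
          (False, False), (False, True)]
print(f"calibration: {calibration_score(trials):.2f}")  # 4 matches / 6 trials = 0.67
```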

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was the Autonomous Squad Member [ASM], on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module in which user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they perceived the ASM as more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward (human and agent being mutually transparent about their reasoning process), can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance, just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design user interfaces, potentially spanning visual, auditory, and other modalities, that support bidirectional transparency dynamically and in real time without overwhelming the human with information.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: https://doi.org/10.1080/1463922X.2017.1315750

This paper is behind a paywall.