Tag Archives: US Department of Defense

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but also in ethics.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses
all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million
program in collaboration with five philanthropic partners that seeks to
ensure ethical, legal, community and societal considerations are
embedded in the lifecycle of technology’s creation and use. The
Responsible Design, Development and Deployment of Technologies (ReDDDoT)
program aims to help create technologies that promote the public’s
wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad
impacts on society,” said NSF Director Sethuraman Panchanathan. “As
discoveries and innovations are translated to practice, it is essential
that we engage and enable diverse communities to participate in this
work. NSF and its philanthropic partners share a strong commitment to
creating a comprehensive approach for co-design through soliciting
community input, incorporating community values and engaging a broad
array of academic and professional voices across the lifecycle of
technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary,
multi-sector teams that examine and demonstrate the principles,
methodologies and impacts associated with responsible design,
development and deployment of technologies, especially those specified
in the “CHIPS and Science Act of 2022.” In addition to NSF, the
program is funded and supported by the Ford Foundation, the Patrick J.
McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the
Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to
advance human progress, and the danger unaccountable technology poses to
social justice, the ReDDDoT program serves as both a collaboration and a
covenant between philanthropy and government to center public interest
technology into the future of progress,” said Darren Walker, president
of the Ford Foundation. “This $16 million initiative will cultivate
expertise from public interest technologists across sectors who are
rooted in community and grounded by the belief that innovation, equity
and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:  

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring
together key stakeholders to better inform practices for the design,
development, and deployment of technologies.
* Educating and training the science, technology, engineering, and
mathematics workforce on approaches to responsible design,
development, and deployment of technologies. 
* Accelerating pathways to societal and economic benefits while
developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and
marginalized populations, to participate in all stages of technology
development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning
Grants, or the creation of Translational Research Coordination Networks,
while Phase 2 solicits full project proposals. The initial areas of
focus for 2024 include artificial intelligence, biotechnology or natural
and anthropogenic disaster prevention or mitigation. Future iterations
of the program may consider other key technology focus areas enumerated
in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be
shaped by ethical, legal, and societal considerations as well as
community values – also drives the work of the Patrick J. McGovern
Foundation to build a human-centered digital future for all. We’re
pleased to support this partnership, committed to advancing the
development of AI, biotechnology, and climate technologies that advance
equity, sustainability, and justice.” – Vilas Dhar, President, Patrick
J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology
development is only accelerating. Too often, technological advances are
not accompanied by discussion and design that considers negative impacts
or unrealized potential. We’re excited to support ReDDDoT as an
opportunity to uplift new and often forgotten perspectives that
critically examine technology’s impact on civic life, and advance Siegel
Family Endowment’s vision of technological change that includes and
improves the lives of all people.” – Katy Knight, President and
Executive Director of Siegel Family Endowment

Only eight months ago, another big NSF funding project was announced, this time focused on AI and promoting trust. From a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’; here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI.  This investment means  NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for bringing communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology as seen in the British television hit series, Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

Graphene and smart textiles

Here’s one of the more recent efforts to create fibres that are electronic and capable of being woven into a smart textile. (Details about a previous effort can be found at the end of this post.) Now for this one, from a Dec. 3, 2018 news item on ScienceDaily,

The quest to create affordable, durable and mass-produced ‘smart textiles’ has been given fresh impetus through the use of the wonder material Graphene.

An international team of scientists, led by Professor Monica Craciun from the University of Exeter Engineering department, has pioneered a new technique to create fully electronic fibres that can be incorporated into the production of everyday clothing.

A Dec. 3, 2018 University of Exeter press release (also on EurekAlert), provides more detail about the problems associated with wearable electronics and the solution being offered (Note: A link has been removed),

Currently, wearable electronics are achieved by essentially gluing devices to fabrics, which can mean they are too rigid and susceptible to malfunctioning.

The new research instead integrates the electronic devices into the fabric of the material, by coating electronic fibres with light-weight, durable components that will allow images to be shown directly on the fabric.

The research team believe that the discovery could revolutionise the creation of wearable electronic devices for use in a range of every day applications, as well as health monitoring, such as heart rates and blood pressure, and medical diagnostics.

The international collaborative research, which includes experts from the Centre for Graphene Science at the University of Exeter, the Universities of Aveiro and Lisbon in Portugal, and CenTexBel in Belgium, is published in the scientific journal Flexible Electronics.

Professor Craciun, co-author of the research, said: “For truly wearable electronic devices to be achieved, it is vital that the components are able to be incorporated within the material, and not simply added to it.”

Dr Elias Torres Alonso, Research Scientist at Graphenea and former PhD student in Professor Craciun’s team at Exeter, added: “This new research opens up the gateway for smart textiles to play a pivotal role in so many fields in the not-too-distant future. By weaving the graphene fibres into the fabric, we have created a new technique to allow the full integration of electronics into textiles. The only limits from now are really within our own imagination.”

At just one atom thick, graphene is the thinnest substance capable of conducting electricity. It is very flexible and is one of the strongest known materials. The race has been on in recent years for scientists and engineers to adapt graphene for use in wearable electronic devices.

This new research used existing polypropylene fibres – typically used in a host of commercial applications in the textile industry – to attach the new, graphene-based electronic fibres to create touch-sensor and light-emitting devices.

The new technique means that the fabrics can incorporate truly wearable displays without the need for electrodes, wires or additional materials.

Professor Saverio Russo, co-author and from the University of Exeter Physics department, added: “The incorporation of electronic devices on fabrics is something that scientists have tried to produce for a number of years, and is a truly game-changing advancement for modern technology.”

Dr Ana Neves, co-author and also from Exeter’s Engineering department, added: “The key to this new technique is that the textile fibres are flexible, comfortable and light, while being durable enough to cope with the demands of modern life.”

In 2015, an international team of scientists, including Professor Craciun, Professor Russo and Dr Ana Neves from the University of Exeter, pioneered a new technique to embed transparent, flexible graphene electrodes into fibres commonly associated with the textile industry.

Here’s a link to and a citation for the paper,

Graphene electronic fibres with touch-sensing and light-emitting functionalities for smart textiles by Elias Torres Alonso, Daniela P. Rodrigues, Mukond Khetani, Dong-Wook Shin, Adolfo De Sanctis, Hugo Joulie, Isabel de Schrijver, Anna Baldycheva, Helena Alves, Ana I. S. Neves, Saverio Russo & Monica F. Craciun. Flexible Electronics, volume 2, Article number: 25 (2018). DOI: https://doi.org/10.1038/s41528-018-0040-2. Published 25 September 2018.

This paper is open access.

I have an earlier post about an effort to weave electronics into textiles for soldiers, from an April 5, 2012 posting,

I gather that today’s soldier (aka warfighter) is carrying as many batteries as weapons. Apparently, the average soldier carries a couple of kilos worth of batteries and cables to keep their various pieces of equipment operational. The UK’s Centre for Defence Enterprise (part of the Ministry of Defence) has announced that this situation is about to change as a consequence of a recently funded research project with a company called Intelligent Textiles. From Bob Yirka’s April 3, 2012 news item for physorg.com,

To get rid of the cables, a company called Intelligent Textiles has come up with a type of yarn that can conduct electricity, which can be woven directly into the fabric of the uniform. And because they allow the uniform itself to become one large conductive unit, the need for multiple batteries can be eliminated as well.

I dug down to find more information about this UK initiative and the Intelligent Textiles company but the trail seems to end in 2015. Still, I did find a Canadian connection (for those who don’t know, I’m a Canuck) and more about Intelligent Textiles’ work with the British military in this Sept. 21, 2015 article by Barry Collins for alphr.com (Note: Links have been removed),

A two-person firm operating from a small workshop in Staines-upon-Thames, Intelligent Textiles has recently landed a multimillion-pound deal with the US Department of Defense, and is working with the Ministry of Defence (MoD) to bring its potentially life-saving technology to British soldiers. Not bad for a company that only a few years ago was selling novelty cushions.

Intelligent Textiles was born in 2002, almost by accident. Asha Peta Thompson, an arts student at Central Saint Martins, had been using textiles to teach children with special needs. That work led to a research grant from Brunel University, where she was part of a team tasked with creating a “talking jacket” for the disabled. The garment was designed to help cerebral palsy sufferers to communicate, by pressing a button on the jacket to say “my name is Peter”, for example, instead of having a Stephen Hawking-like communicator in front of them.

Another member of that Brunel team was engineering lecturer Dr Stan Swallow, who was providing the electronics expertise for the project. Pretty soon, the pair realised the prototype waistcoat they were working on wasn’t going to work: it was cumbersome, stuffed with wires, and difficult to manufacture. “That’s when we had the idea that we could weave tiny mechanical switches into the surface of the fabric,” said Thompson.

The conductive weave had several advantages over packing electronics into garments. “It reduces the amount of cables,” said Thompson. “It can be worn and it’s also washable, so it’s more durable. It doesn’t break; it can be worn next to the skin; it’s soft. It has all the qualities of a piece of fabric, so it’s a way of repackaging the electronics in a way that’s more user-friendly and more comfortable.” The key to Intelligent Textiles’ product isn’t so much the nature of the raw materials used, but the way they’re woven together. “All our patents are in how we weave the fabric,” Thompson explained. “We weave two conductive yarns to make a tiny mechanical switch that is perfectly separated or perfectly connected. We can weave an electronic circuit board into the fabric itself.”

Intelligent Textiles’ big break into the military market came when they met a British textiles firm that was supplying camouflage gear to the Canadian armed forces. [emphasis mine] The firm was attending an exhibition in Canada and invited the Intelligent Textiles duo to join them. “We showed a heated glove and an iPod controller,” said Thompson. “The Canadians said ‘that’s really fantastic, but all we need is power. Do you think you could weave a piece of fabric that distributes power?’ We said, ‘we’re already doing it’.” Before long it wasn’t only power that the Canadians wanted transmitted through the fabric, but data.

“The problem a soldier faces at the moment is that he’s carrying 60 AA batteries [to power all the equipment he carries],” said Thompson. “He doesn’t know what state of charge those batteries are at, and they’re incredibly heavy. He also has wires and cables running around the system. He has snag hazards – when he’s going into a firefight, he can get caught on door handles and branches, so cables are a real no-no.”

The Canadians invited the pair to speak at a NATO conference, where they were approached by military brass with more familiar accents. “It was there that we were spotted by the British MoD, who said ‘wow, this is a British technology but you’re being funded by Canada’,” said Thompson. That led to £235,000 of funding from the Centre for Defence Enterprise (CDE) – the money they needed to develop a fabric wiring system that runs all the way through the soldier’s vest, helmet and backpack.

There are more details about the 2015 state of affairs, textiles-wise, in a March 11, 2015 article by Richard Trenholm for CNET.com (Note: A link has been removed),

Speaking at the Wearable Technology Show here, Swallow describes ITL [Intelligent Textiles] as a textile company that “pretends to be a military company…it’s funny how you slip into these domains.”

One domain where this high-tech fabric has seen frontline action is in the Canadian military’s IAV Stryker armoured personnel carrier. ITL developed a full QWERTY keyboard in a single piece of fabric for use in the Stryker, replacing a traditional hardware keyboard that involved 100 components. Multiple components allow for repair, but ITL knits in redundancy so the fabric can “degrade gracefully”. The keyboard works the same as the traditional hardware, with the bonus that it’s less likely to fall on a soldier’s head, and with just one glaring downside: troops can no longer use it as a step for getting in and out of the vehicle.

An armoured car with knitted controls is one thing, but where the technology comes into its own is when used about the person. ITL has worked on vests like the JTAC, a system “for the guys who call down airstrikes” and need “extra computing oomph.” Then there’s SWIPES, a part of the US military’s Nett Warrior system — which uses a chest-mounted Samsung Galaxy Note 2 smartphone — and British military company BAE’s Broadsword system.

ITL is currently working on Spirit, a “truly wearable system” for the US Army and United States Marine Corps. It’s designed to be modular, scalable, intuitive and invisible.
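For readers who, like me, wondered how a piece of woven fabric can act as a keyboard: the generic electrical idea is a row/column switch matrix, where each ‘key’ is simply a crossing point that is either connected or separated. Here’s a minimal sketch of that scanning logic. It’s my own toy simulation of the standard technique, not ITL’s patented weave, and all names in it are hypothetical,

```python
# Toy simulation of scanning a row/column switch matrix -- the generic way a
# grid of simple make-or-break contacts (woven from conductive yarn or soldered
# onto a circuit board) can be read with rows + columns wires instead of one
# wire per key. Purely illustrative.

ROWS, COLS = 4, 10                      # a 40-key grid
KEYMAP = [[f"K{r}{c}" for c in range(COLS)] for r in range(ROWS)]

# Set of (row, col) contacts currently closed, e.g. where the wearer presses.
pressed = {(1, 3), (2, 7)}

def scan(pressed_contacts):
    """Check one row at a time and see which column lines respond."""
    keys = []
    for r in range(ROWS):               # energize row r
        for c in range(COLS):           # read back each column line
            if (r, c) in pressed_contacts:
                keys.append(KEYMAP[r][c])
    return keys

print(scan(pressed))  # ['K13', 'K27']
```

The appeal of the matrix approach is that a 40-key grid needs only 14 lines (4 rows plus 10 columns) rather than 40 separate wires, which is presumably part of what makes replacing a 100-component hardware keyboard with a single piece of fabric attractive.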

While this isn’t an ITL product, this video about Broadsword technology from BAE does give you some idea of what wearable technology for soldiers is like,

From the BAE Systems YouTube channel (baesystemsinc), uploaded July 8, 2014:

Broadsword™ delivers groundbreaking technology to the 21st Century warfighter through interconnecting components that inductively transfer power and data via The Spine™, a revolutionary e-textile that can be inserted into any garment. This next-generation soldier system offers enhanced situational awareness when used with the BAE Systems’ Q-Warrior® see-through display.

If anyone should have the latest news about Intelligent Textiles’ efforts, please do share in the comments section.

I do have one other posting about textiles and the military, which is dated May 9, 2012, but while it does reference US efforts it is not directly related to weaving electronics into soldiers’ (warfighters’) gear.

You can find CenTexBel (Belgian Textile Research Centre) here and Graphenea here. Both are mentioned in the University of Exeter press release.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long-held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation‘ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior and were more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated — accepting the agent’s plan when it is correct and rejecting it when it is incorrect– when the agent had a higher level of transparency.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design the user interfaces, which can include visual, auditory, and other modalities, that can support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: 10.1080/1463922X.2017.1315750

This paper is behind a paywall.
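Since the SAT model and the notion of ‘calibrated trust’ are described only in prose in the ARL news release, here’s a minimal sketch of how they might be represented, just to make the ideas concrete. This is my own illustration, not ARL’s software; the class, field and function names are all hypothetical,

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the three SAT (Situation awareness-based Agent
# Transparency) levels described in the ARL release. Field names are mine.
@dataclass
class SATReport:
    # Level 1: the agent's current state, goals, intentions and plans
    current_state: str = ""
    goals: list = field(default_factory=list)
    plan: list = field(default_factory=list)
    # Level 2: the reasoning behind the plan and the constraints/affordances considered
    reasoning: str = ""
    constraints: list = field(default_factory=list)
    # Level 3: projected future states, likelihood of success and uncertainty
    projected_outcome: str = ""
    success_likelihood: float = 0.0   # 0.0 to 1.0
    uncertainty: float = 0.0          # e.g. width of a confidence interval

def trust_calibration(decisions):
    """Toy 'calibrated trust' score: the fraction of trials in which the human
    accepted a correct plan or rejected an incorrect one.
    `decisions` is a list of (plan_was_correct, operator_accepted) pairs."""
    if not decisions:
        return 0.0
    hits = sum(1 for correct, accepted in decisions if correct == accepted)
    return hits / len(decisions)

# A well-calibrated operator accepts good plans and rejects bad ones.
print(trust_calibration([(True, True), (False, False), (True, True)]))  # 1.0
# An operator who always defers to the agent ('overtrust') scores poorly
# whenever the agent is wrong.
print(trust_calibration([(True, True), (False, True), (False, True)]))  # ~0.33
```

The point of the three-level structure, as I read the release, is that each level adds information the operator can use to decide whether a particular recommendation deserves acceptance, which is what the toy calibration measure above tries to capture.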

‘Mother of all bombs’ is a nanoweapon?

According to physicist Louis A. Del Monte, in an April 14, 2017 opinion piece for HuffingtonPost.com, the ‘mother of all bombs’ is a nanoweapon (Note: Links have been removed),

The United States military dropped its largest non-nuclear bomb, the GBU-43/B Massive Ordnance Air Blast Bomb (MOAB), nicknamed the “mother of all bombs,” on an ISIS cave and tunnel complex in the Achin District of the Nangarhar province, Afghanistan [on Thursday, April 13, 2017]. The Achin District is the center of ISIS activity in Afghanistan. This was the first use in combat of the GBU-43/B Massive Ordnance Air Blast (MOAB).

… Although it carries only about 8 tons of explosives, the explosive mixture delivers a destructive impact equivalent of 11 tons of TNT.

There is little doubt the United States Department of Defense is likely using nanometals, such as nanoaluminum (alternately spelled nano-aluminum) mixed with TNT, to enhance the detonation properties of the MOAB. The use of nanoaluminum mixed with TNT has been known to boost the explosive power of TNT since the early 2000s. If true, this means that the largest known United States non-nuclear bomb is a nanoweapon. When most of us think about nanoweapons, we think small, essentially invisible weapons, like nanobots (i.e., tiny robots made using nanotechnology). That can often be the case. But, as defined in my recent book, Nanoweapons: A Growing Threat to Humanity (Potomac 2017), “Nanoweapons are any military technology that exploits the power of nanotechnology.” This means even the largest munition, such as the MOAB, is a nanoweapon if it uses nanotechnology.

… The explosive is H6, which is a mixture of five ingredients (by weight):

  • 44.0% RDX & nitrocellulose (RDX is a well-known explosive, more powerful than TNT, often used with TNT and other explosives. Nitrocellulose is a propellant or low-order explosive, originally known as gun-cotton.)
  • 29.5% TNT
  • 21.0% powdered aluminum
  • 5.0% paraffin wax as a phlegmatizing (i.e., stabilizing) agent.
  • 0.5% calcium chloride (to absorb moisture and eliminate the production of gas)

Note, the TNT and powdered aluminum account for over half the explosive payload by weight. It is highly likely that the “powdered aluminum” is nanoaluminum, since nanoaluminum can enhance the destructive properties of TNT. This argues that H6 is a nano-enhanced explosive, making the MOAB a nanoweapon.

The United States GBU-43/B Massive Ordnance Air Blast Bomb (MOAB) was the largest non-nuclear bomb known until Russia detonated the Aviation Thermobaric Bomb of Increased Power, termed the “father of all bombs” (FOAB), in 2007. It is reportedly four times more destructive than the MOAB, even though it carries only 7 tons of explosives versus the 8 tons of the MOAB. Interestingly, the Russians claim to achieve the more destructive punch using nanotechnology.

If you have the time, I encourage you to read the piece in its entirety.
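
One more thing before moving on: Del Monte's figures are easy to sanity-check. Here is a quick back-of-the-envelope calculation (my arithmetic, using only the numbers quoted above) confirming that TNT plus powdered aluminum make up just over half of H6 by weight, and that roughly 8 tons of explosive delivering the impact of 11 tons of TNT implies a relative effectiveness factor of about 1.4.

```python
# Back-of-the-envelope checks on the figures quoted above.
# My arithmetic, not Del Monte's; the input numbers come straight from the excerpt.

h6_by_weight = {
    "RDX & nitrocellulose": 44.0,
    "TNT": 29.5,
    "powdered aluminum": 21.0,
    "paraffin wax": 5.0,
    "calcium chloride": 0.5,
}

tnt_plus_aluminum = h6_by_weight["TNT"] + h6_by_weight["powdered aluminum"]
print(f"TNT + aluminum: {tnt_plus_aluminum:.1f}% of H6 by weight")  # 50.5%

# MOAB: ~8 tons of explosive said to deliver the impact of ~11 tons of TNT.
relative_effectiveness = 11 / 8
print(f"Implied relative effectiveness vs. TNT: {relative_effectiveness:.2f}")  # ~1.38
```

The output, 50.5% and roughly 1.38, matches the "over half the explosive payload by weight" and "equivalent of 11 tons of TNT" claims in the excerpt.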

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long-time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain, but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, has developed new probes that have mechanical compliances approaching that of brain tissue and are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is a growing interest in developing long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often cause damage around the tissue they encompass. Additionally, while it is possible for the conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET-enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and the National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances  15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).
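
To get a rough sense of just how small these probes are, here is a quick calculation (mine, not the researchers') comparing the probe's cross-section with that of a small blood capillary. The probe dimensions come from the news release; the roughly 8 micrometre capillary diameter is an assumed, typical value used only for illustration.

```python
# Rough size comparison based on the dimensions quoted in the news release.
# The probe is described as about 10 micrometres wide and under 1 micrometre thick;
# the ~8 micrometre capillary diameter is my assumption, used only for illustration.

import math

probe_width_um = 10.0
probe_thickness_um = 1.0  # "below 1 micron", so this is an upper bound
probe_cross_section_um2 = probe_width_um * probe_thickness_um  # rectangular approximation

capillary_diameter_um = 8.0  # assumed typical capillary diameter
capillary_cross_section_um2 = math.pi * (capillary_diameter_um / 2) ** 2

ratio = probe_cross_section_um2 / capillary_cross_section_um2
print(f"Probe cross-section:     ~{probe_cross_section_um2:.0f} square micrometres (upper bound)")
print(f"Capillary cross-section: ~{capillary_cross_section_um2:.0f} square micrometres")
print(f"The probe is at most ~{ratio:.0%} of the capillary's cross-sectional area")
```

Even taking the 1 micron thickness as an upper bound, the probe's cross-section works out to roughly one fifth of the capillary's, consistent with the release's "fraction of a neuron or blood capillary" description.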

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what is called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative, established on July 29, 2015, whose first anniversary results were announced one year to the day later. Within that initiative, a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has been released (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks. Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may as well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]

As government documents go, this is quite readable.
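
If the distinction the white paper draws between neuromorphic computing and artificial neural networks feels abstract, a toy example may help. The sketch below (my own, in Python; it is not from the white paper) contrasts a conventional ANN-style neuron, which computes a weighted sum once per input, with a leaky integrate-and-fire neuron of the kind neuromorphic hardware typically emulates, which integrates input over time and emits discrete spikes.

```python
# Toy contrast between an ANN-style neuron and a spiking (leaky integrate-and-fire)
# neuron, to illustrate the neuromorphic-vs-ANN distinction drawn in the white paper.
# Purely an illustrative sketch.

import math


def ann_neuron(inputs, weights, bias=0.0):
    """Conventional artificial neuron: weighted sum passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))


def lif_neuron(input_current, steps=50, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks each step,
    integrates the input, and emits a spike (then resets) when it crosses threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v = leak * v + input_current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after spiking
    return spikes


if __name__ == "__main__":
    print("ANN output:", round(ann_neuron([0.2, 0.7, 0.1], [0.5, 0.8, -0.3]), 3))
    print("LIF spike times:", lif_neuron(input_current=0.15))
```

Neuromorphic hardware aims to implement something like the second function directly in devices and circuits, rather than simulating it in software, which is where the materials and device research described in the white paper comes in.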

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).

“Breaking Me Softly” at the nanoscale

“Breaking Me Softly” sounds like a song title but in this case the phrase has been coined to describe a new technique for controlling materials at the nanoscale according to a June 6, 2016 news item on ScienceDaily,

A finding by a University of Central Florida researcher that unlocks a means of controlling materials at the nanoscale and opens the door to a new generation of manufacturing is featured online in the journal Nature.

Using a pair of pliers in each hand and gradually pulling taut a piece of glass fiber coated in plastic, associate professor Ayman Abouraddy found that something unexpected and never before documented occurred — the inner fiber fragmented in an orderly fashion.

“What we expected to see happen is NOT what happened,” he said. “While we thought the core material would snap into two large pieces, instead it broke into many equal-sized pieces.”

He referred to the technique in the Nature article title as “Breaking Me Softly.”

A June 6, 2016 University of Central Florida (UCF) news release (also on EurekAlert) by Barbara Abney, which originated the news item, expands on the theme,

The process of pulling fibers to force the realignment of the molecules that hold them together, known as cold drawing, has been the standard for mass production of flexible fibers like plastic and nylon for most of the last century.

Abouraddy and his team have shown that the process may also be applicable to multi-layered materials, a finding that could lead to the manufacturing of a new generation of materials with futuristic attributes.

“Advanced fibers are going to be pursuing the limits of anything a single material can endure today,” Abouraddy said.

For example, packaging together materials with optical and mechanical properties, along with sensors that could monitor such vital signs as blood pressure and heart rate, would make it possible to create clothing capable of transmitting vital data to a doctor’s office via the Internet.

The ability to control breakage in a material is critical to developing computerized processes for potential manufacturing, said Yuanli Bai, a fracture mechanics specialist in UCF’s College of Engineering and Computer Science.

Abouraddy contacted Bai, who is a co-author on the paper, about three years ago and asked him to analyze the test results on a wide variety of materials, including silicon, silk, gold and even ice.

He also contacted Robert S. Hoy, a University of South Florida physicist who specializes in the properties of materials like glass and plastic, for a better understanding of what he found.

Hoy said he had never seen the phenomena Abouraddy was describing, but that it made great sense in retrospect.

The research takes what has traditionally been a problem in materials manufacturing and turns it into an asset, Hoy said.

“Dr. Abouraddy has found a new application of necking” – a process that occurs when cold drawing causes non-uniform strain in a material, Hoy said. “Usually you try to prevent necking, but he exploited it to do something potentially groundbreaking.”

The necking phenomenon was discovered decades ago at DuPont and ushered in the age of textiles and garments made of synthetic fibers.

Abouraddy said that cold-drawing is what makes synthetic fibers like nylon and polyester useful. While those fibers are initially brittle, once cold-drawn they toughen up and become useful in everyday commodities. That discovery, made at DuPont at the end of the 1920s, is what launched the era of synthetic textiles and garments.

Only recently have fibers made of multiple materials become possible, he said. That research will be the centerpiece of a $317 million U.S. Department of Defense program focused on smart fibers that Abouraddy and UCF will assist with. The Revolutionary Fibers and Textiles Manufacturing Innovation Institute (RFT-MII), led by the Massachusetts Institute of Technology, will incorporate research findings published in the Nature paper, Abouraddy said.

The implications for manufacturing of the smart materials of the future are vast.

By controlling the mechanical force used to pull the fiber, and therefore the breakage patterns, materials can be developed with customized properties that allow them to interact with each other, with external forces such as the sun (for harvesting energy), and with the internet in customizable ways.

A co-author on the paper, Ali P. Gordon, an associate professor in the Department of Mechanical & Aerospace Engineering and director of UCF’s Mechanics of Materials Research Group, said that the finding is significant because it shows that, by carefully controlling the loading condition imparted to the fiber, materials can be developed with tailored performance attributes.

“Processing-structure-property relationships need to be strategically characterized for complex material systems. By combining experiments, microscopy, and computational mechanics, the physical mechanisms of the fragmentation process were more deeply understood,” Gordon said.

Abouraddy teamed up with seven UCF scientists from the College of Optics & Photonics and the College of Engineering & Computer Science (CECS) to write the paper. Additional authors include one researcher each from the Massachusetts Institute of Technology, Nanyang Technological University in Singapore and the University of South Florida.

Here’s a link to and a citation for the paper,

Controlled fragmentation of multimaterial fibres and films via polymer cold-drawing by Soroush Shabahang, Guangming Tao, Joshua J. Kaufman, Yangyang Qiao, Lei Wei, Thomas Bouchenot, Ali P. Gordon, Yoel Fink, Yuanli Bai, Robert S. Hoy & Ayman F. Abouraddy. Nature (2016) doi:10.1038/nature17980 Published online  06 June 2016

This paper is behind a paywall.

Harvest water from desert air with carbon nanotube cups (competition for NBD Nano?)

It’s been a while since I’ve seen Pulickel Ajayan’s name in a Rice University (Texas) news release and I wonder if this is the beginning of a series. I’ve noticed that researchers often publish a series of papers within a few months and then become quiet for two or more years as they work in their labs to gather more information.

This time the research from Ajayan’s lab has focused on the use of carbon nanotubes to harvest water from desert air. From a June 12, 2014 news item on Azonano,

If you don’t want to die of thirst in the desert, be like the beetle. Or have a nanotube cup handy.

New research by scientists at Rice University demonstrated that forests of carbon nanotubes can be made to harvest water molecules from arid desert air and store them for future use.

The invention they call a “hygroscopic scaffold” is detailed in a new paper in the American Chemical Society journal Applied Materials and Interfaces.

Researchers in the lab of Rice materials scientist Pulickel Ajayan found a way to mimic the Stenocara beetle, which survives in the desert by stretching its wings to capture and drink water molecules from the early morning fog.

Here’s more about the research from a June 11, 2014 Rice University news release (by Mike Williams?), which originated the news item,

They modified carbon nanotube forests grown through a process created at Rice, giving the nanotubes a superhydrophobic (water-repelling) bottom and a hydrophilic (water-loving) top. The forest attracts water molecules from the air and, because the sides are naturally hydrophobic, traps them inside.

“It doesn’t require any external energy, and it keeps water inside the forest,” said graduate student and first author Sehmus Ozden. “You can squeeze the forest to take the water out and use the material again.”

The forests grown via water-assisted chemical vapor deposition consist of nanotubes that measure only a few nanometers (billionths of a meter) across and about a centimeter long.

The Rice team led by Ozden deposited a superhydrophobic layer to the top of the forest and then removed the forest from its silicon base, flipped it and added a layer of hydrophilic polymer to the other side.

In tests, water molecules bonded to the hydrophilic top and penetrated the forest through capillary action and gravity. (Air inside the forest is compressed rather than expelled, the researchers assumed.) Once a little water bonds to the forest canopy, the effect multiplies as the molecules are drawn inside, spreading out over the nanotubes through van der Waals forces, hydrogen bonding and dipole interactions. The molecules then draw more water in.

The researchers tested several variants of their cup. With only the top hydrophilic layer, the forests fell apart when exposed to humid air because the untreated bottom lacked the polymer links that held the top together. With a hydrophilic top and bottom, the forest held together but water ran right through.

But with a hydrophobic bottom and hydrophilic top, the forest remained intact even after collecting 80 percent of its weight in water.

The amount of water vapor captured depends on the air’s humidity. An 8 milligram sample (with a 0.25-square-centimeter surface) pulled in 27.4 percent of its weight over 11 hours in dry air, and 80 percent over 13 hours in humid air. Further tests showed the forests significantly slowed evaporation of the trapped water.

If it becomes possible to grow nanotube forests on a large scale, the invention could become an efficient, effective water-collection device because it does not require an external energy source, the researchers said.

Ozden said the production of carbon nanotube arrays at a scale necessary to put the invention to practical use remains a bottleneck. “If it becomes possible to make large-scale nanotube forests, it will be a very easy material to make,” he said.
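
Those percentages translate into very small absolute amounts of water for the test sample, which is worth keeping in mind when thinking about scale-up. Here is the arithmetic (mine), using only the figures quoted in the release.

```python
# Absolute water uptake implied by the figures quoted above (my arithmetic only).

sample_mass_mg = 8.0
sample_area_cm2 = 0.25

dry_air_uptake_mg = 0.274 * sample_mass_mg    # 27.4% of its weight over 11 hours
humid_air_uptake_mg = 0.80 * sample_mass_mg   # 80% of its weight over 13 hours

print(f"Dry air:   ~{dry_air_uptake_mg:.2f} mg of water over 11 h "
      f"(~{dry_air_uptake_mg / sample_area_cm2:.1f} mg per square centimetre)")
print(f"Humid air: ~{humid_air_uptake_mg:.2f} mg of water over 13 h "
      f"(~{humid_air_uptake_mg / sample_area_cm2:.1f} mg per square centimetre)")
```

In other words, a few milligrams of water per quarter square centimetre over half a day, which is why the scale-up question Ozden raises is the crucial one.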

This is not the first time researchers have used the Stenocara beetle (also known as the Namib desert beetle) as inspiration for a water-harvesting material. In a Nov. 26, 2012 posting I traced the inspiration back to 2001 while featuring the announcement of a new startup company,

… US startup company, NBD Nano, which aims to bring a self-filling water bottle based on Namib desert beetle to market,

NBD Nano, which consists of four recent university graduates and was formed in May [2012], looked at the Namib Desert beetle that lives in a region that gets about half an inch of rainfall per year.

Using a similar approach, the firm wants to cover the surface of a bottle with hydrophilic (water-attracting) and hydrophobic (water-repellent) materials.

The work is still in its early stages, but it is the latest example of researchers looking at nature to find inspiration for sustainable technology.

“It was important to apply [biomimicry] to our design and we have developed a proof of concept and [are] currently creating our first fully-functional prototype,” Miguel Galvez, a co-founder, told the BBC.

“We think our initial prototype will collect anywhere from half a litre of water to three litres per hour, depending on local environments.”

You can find out more about NBD Nano here although they don’t give many details about the material they’ve developed. Given that MIT (Massachusetts Institute of Technology) researchers published a paper about a polymer-based material laced with silicon nanoparticles inspired by the Namib beetle in 2006 and that NBD Nano is based in Massachusetts, I believe NBD Nano is attempting to commercialize the material or some variant developed at MIT.

Getting back to Rice University and carbon nanotubes, this is a different material attempting to achieve the same goal, harvesting water from desert air. Here’s a link to and a citation for the latest paper inspired by the Stenocara beetle (Namib beetle),

Anisotropically Functionalized Carbon Nanotube Array Based Hygroscopic Scaffolds by Sehmus Ozden, Liehui Ge, Tharangattu N. Narayanan, Amelia H. C. Hart, Hyunseung Yang, Srividya Sridhar, Robert Vajtai, and Pulickel M. Ajayan. ACS Appl. Mater. Interfaces, DOI: 10.1021/am5022717. Publication Date (Web): June 4, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

One final note: the research at MIT was funded by DARPA (US Defense Advanced Research Projects Agency). According to the news release, the Rice University research was funded by similar agencies,

The U.S. Department of Defense and the U.S. Air Force Office of Scientific Research Multidisciplinary University Research Initiative supported the research.

US Air Force wants to merge classical and quantum physics

The US Air Force wants to merge classical and quantum physics for practical purposes, according to a May 5, 2014 news item on Azonano,

The Air Force Office of Scientific Research has selected the Harvard School of Engineering and Applied Sciences (SEAS) to lead a multidisciplinary effort that will merge research in classical and quantum physics and accelerate the development of advanced optical technologies.

Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, will lead this Multidisciplinary University Research Initiative [MURI] with a world-class team of collaborators from Harvard, Columbia University, Purdue University, Stanford University, the University of Pennsylvania, Lund University, and the University of Southampton.

The grant is expected to advance physics and materials science in directions that could lead to very sophisticated lenses, communication technologies, quantum information devices, and imaging technologies.

“This is one of the world’s strongest possible teams,” said Capasso. “I am proud to lead this group of people, who are internationally renowned experts in their fields, and I believe we can really break new ground.”

A May 1, 2014 Harvard University School of Engineering and Applied Sciences news release, which originated the news item, provides a description of project focus: nanophotonics and metamaterials along with some details of Capasso’s work in these areas (Note: Links have been removed),

The premise of nanophotonics is that light can interact with matter in unusual ways when the material incorporates tiny metallic or dielectric features that are separated by a distance shorter than the wavelength of the light. Metamaterials are engineered materials that exploit these phenomena, producing strange effects, enabling light to bend unnaturally, twist into a vortex, or disappear entirely. Yet the fabrication of thick, or bulk, metamaterials—that manipulate light as it passes through the material—has proven very challenging.

Recent research by Capasso and others in the field has demonstrated that with the right device structure, the critical manipulations can actually be confined to the very surface of the material—what they have dubbed a “metasurface.” These metasurfaces can impart an instantaneous shift in the phase, amplitude, and polarization of light, effectively controlling optical properties on demand. Importantly, they can be created in the lab using fairly common fabrication techniques.

At Harvard, the research has produced devices like an extremely thin, flat lens, and a material that absorbs 99.75% of infrared light. But, so far, such devices have been built to order—brilliantly suited to a single task, but not tunable.
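
For anyone who wants the "instantaneous phase shift" idea in slightly more quantitative form: Capasso's group is known for the generalized law of refraction, in which a phase gradient imposed along a surface steers the refracted beam away from the angle ordinary Snell's law would predict. The sketch below is my own toy calculation with made-up numbers (the wavelength and phase gradient are assumptions), not anything taken from the Harvard release.

```python
# Toy calculation of anomalous refraction at a phase-gradient metasurface
# (generalized Snell's law). All numbers are made up for illustration.

import math

wavelength_m = 8e-6                      # 8 micrometre (mid-infrared) light
n_incident, n_transmitted = 1.0, 1.0     # air on both sides of the surface
theta_incident = math.radians(0.0)       # normal incidence

# Assumed linear phase profile: 2*pi of phase accumulated over 60 micrometres.
phase_gradient = 2 * math.pi / 60e-6     # radians per metre

# Generalized Snell's law:
#   n_t * sin(theta_t) - n_i * sin(theta_i) = (wavelength / (2*pi)) * dPhi/dx
sin_theta_t = (n_incident * math.sin(theta_incident)
               + (wavelength_m / (2 * math.pi)) * phase_gradient) / n_transmitted

print(f"Anomalous refraction angle: {math.degrees(math.asin(sin_theta_t)):.1f} degrees")
# With zero phase gradient this reduces to ordinary Snell's law (0 degrees here).
```

With a uniform surface (zero phase gradient) the formula collapses back to ordinary Snell's law, which is the sense in which a metasurface adds a new, designable degree of freedom for steering light.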

This project, however, is focused on the future (Note: Links have been removed),

“Can we make a rapidly configurable metasurface so that we can change it in real time and quickly? That’s really a visionary frontier,” said Capasso. “We want to go all the way from the fundamental physics to the material building blocks and then the actual devices, to arrive at some sort of system demonstration.”

The proposed research also goes further. A key thrust of the project involves combining nanophotonics with research in quantum photonics. By exploiting the quantum effects of luminescent atomic impurities in diamond, for example, physicists and engineers have shown that light can be captured, stored, manipulated, and emitted as a controlled stream of single photons. These types of devices are essential building blocks for the realization of secure quantum communication systems and quantum computers. By coupling these quantum systems with metasurfaces—creating so-called quantum metasurfaces—the team believes it is possible to achieve an unprecedented level of control over the emission of photons.

“Just 20 years ago, the notion that photons could be manipulated at the subwavelength scale was thought to be some exotic thing, far fetched and of very limited use,” said Capasso. “But basic research opens up new avenues. In hindsight we know that new discoveries tend to lead to other technology developments in unexpected ways.”

The research team includes experts in theoretical physics, metamaterials, nanophotonic circuitry, quantum devices, plasmonics, nanofabrication, and computational modeling. Co-principal investigator Marko Lončar is the Tiantsai Lin Professor of Electrical Engineering at Harvard SEAS. Co-PI Nanfang Yu, Ph.D. ’09, developed expertise in metasurfaces as a student in Capasso’s Harvard laboratory; he is now an assistant professor of applied physics at Columbia. Additional co-PIs include Alexandra Boltasseva and Vladimir Shalaev at Purdue, Mark Brongersma at Stanford, and Nader Engheta at the University of Pennsylvania. Lars Samuelson (Lund University) and Nikolay Zheludev (University of Southampton) will also participate.

The bulk of the funding will support talented graduate students at the lead institutions.

The project, titled “Active Metasurfaces for Advanced Wavefront Engineering and Waveguiding,” is among 24 planned MURI awards selected from 361 white papers and 88 detailed proposals evaluated by a panel of experts; each award is subject to successful negotiation. The anticipated amount of the Harvard-led grant is up to $6.5 million for three to five years.

For anyone who’s not familiar (that includes me, anyway) with MURI awards, there’s this from Wikipedia (Note: links have been removed),

Multidisciplinary University Research Initiative (MURI) is a basic research program sponsored by the US Department of Defense (DoD). Currently each MURI award is about $1.5 million a year for five years.

I gather that in addition to the Air Force, the Army and the Navy also award MURI funds.