Tag Archives: Cornell University

First round of seed funding announced for NSF (US National Science Foundation) Institute for Trustworthy AI in Law & Society (TRAILS)

Having published a post yesterday (February 21, 2024) about an earlier, January 2024 US National Science Foundation (NSF) funding announcement for the TRAILS (Trustworthy AI in Law & Society) Institute, I’m following up with news of the initiative’s first round of seed funding.

From a TRAILS undated ‘story’ by Tom Ventsias on the initiative’s website (and published January 24, 2024 as a University of Maryland news release on EurekAlert),

The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the artificial intelligence (AI) development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants announced on January 24, 2024—ranging from $100K to $150K apiece and totaling just over $1.5 million—were awarded to interdisciplinary teams of faculty associated with the institute. Funded projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots that can improve autism-specific support at home, and exploring how people use and rely upon AI-generated language translation systems.

All eight projects fall under the broader mission of TRAILS, which is to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact.

They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions. This includes the University of Maryland’s expertise in computing and human-computer interaction; George Washington University’s strengths in systems engineering and AI as it relates to law and governance; Morgan State University’s work in addressing bias and inequity in AI; and Cornell University’s research in human behavior and decision-making.

“NIST and NSF’s support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at George Washington University who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

Further discussions were held at a TRAILS faculty retreat to identify seed grant guidelines and collaborative themes that mirror TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.

“Some of the funded projects are taking a fresh look at ideas we may have already been working on individually, and others are taking an entirely new approach to timely, pressing issues involving AI and machine learning,” said Virginia Byrne, an assistant professor of higher education & student affairs at Morgan State who is leading TRAILS activities on that campus and who served on the seed grant review committee.

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

The eight projects selected for the first round of TRAILS seed funding are:

Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, while provider shortages and financial burdens have raised challenges—particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the assistive technology, and assess the impact of teaming to promote more trust in human-robot collaborative settings.

Soheil Feizi from UMD and Robert Brauneis from GW will investigate various issues surrounding text-to-image [emphasis mine] generative AI models like Stable Diffusion, DALL-E 2, and Midjourney, focusing on myriad legal, aesthetic and computational aspects that are currently unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine if specific artworks, especially rare and unique ones, have already been used to train AI models.

Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.

Marine Carpuat and Ge Gao from UMD will explore “mental models”—how humans perceive things—for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.

Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which involves algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate fairness of datasets, and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.

Lorien Abroms and David Broniatowski from GW will recruit smokers to study the reliability of using generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine the acceptability by smokers and their perceptions of trust in using this rapidly evolving technology for help to quit smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation and/or modifying other health behaviors.

Adam Aviv from GW and Michelle Mazurek from UMD will examine bias, unfairness and untruths such as sexism, racism and other forms of misrepresentation that come out of certain AI and machine learning systems. Though some systems have public warnings of potential biases, the researchers want to explore how users understand these warnings, if they recognize how biases may manifest themselves in the AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.

Susan Ariel Aaronson and David Broniatowski from GW plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords, and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.

Yes, things are moving quickly where AI is concerned. There’s text-to-image being investigated by Soheil Feizi and Robert Brauneis and, since the funding announcement in January 2024, text-to-video has been announced (OpenAI’s Sora was previewed February 15, 2024). I wonder if that will be added to the project.

One more comment: Huaishu Peng, Ge Gao, and Malte Jung’s project for “… trust-building in embodied AI systems …” brings to mind Elon Musk’s stated goal of using brain implants for “human/AI symbiosis.” (I have more about that in an upcoming post.) Hopefully, Susan Ariel Aaronson and David Broniatowski’s proposed website for policymakers will be able to keep up with what’s happening in the field of AI, including research on the impact of private investments primarily designed for generating profits.

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but in ethics too.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million program in collaboration with five philanthropic partners that seeks to ensure ethical, legal, community and societal considerations are embedded in the lifecycle of technology’s creation and use. The Responsible Design, Development and Deployment of Technologies (ReDDDoT) program aims to help create technologies that promote the public’s wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad impacts on society,” said NSF Director Sethuraman Panchanathan. “As discoveries and innovations are translated to practice, it is essential that we engage and enable diverse communities to participate in this work. NSF and its philanthropic partners share a strong commitment to creating a comprehensive approach for co-design through soliciting community input, incorporating community values and engaging a broad array of academic and professional voices across the lifecycle of technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary, multi-sector teams that examine and demonstrate the principles, methodologies and impacts associated with responsible design, development and deployment of technologies, especially those specified in the “CHIPS and Science Act of 2022.” In addition to NSF, the program is funded and supported by the Ford Foundation, the Patrick J. McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to advance human progress, and the danger unaccountable technology poses to social justice, the ReDDDoT program serves as both a collaboration and a covenant between philanthropy and government to center public interest technology into the future of progress,” said Darren Walker, president of the Ford Foundation. “This $16 million initiative will cultivate expertise from public interest technologists across sectors who are rooted in community and grounded by the belief that innovation, equity and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring together key stakeholders to better inform practices for the design, development, and deployment of technologies.
* Educating and training the science, technology, engineering, and mathematics workforce on approaches to responsible design, development, and deployment of technologies.
* Accelerating pathways to societal and economic benefits while developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and marginalized populations, to participate in all stages of technology development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning Grants, or the creation of Translational Research Coordination Networks, while Phase 2 solicits full project proposals. The initial areas of focus for 2024 include artificial intelligence, biotechnology or natural and anthropogenic disaster prevention or mitigation. Future iterations of the program may consider other key technology focus areas enumerated in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be shaped by ethical, legal, and societal considerations as well as community values – also drives the work of the Patrick J. McGovern Foundation to build a human-centered digital future for all. We’re pleased to support this partnership, committed to advancing the development of AI, biotechnology, and climate technologies that advance equity, sustainability, and justice.” – Vilas Dhar, President, Patrick J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology development is only accelerating. Too often, technological advances are not accompanied by discussion and design that considers negative impacts or unrealized potential. We’re excited to support ReDDDoT as an opportunity to uplift new and often forgotten perspectives that critically examine technology’s impact on civic life, and advance Siegel Family Endowment’s vision of technological change that includes and improves the lives of all people.” – Katy Knight, President and Executive Director of Siegel Family Endowment

Only eight months ago, another big NSF funding project was announced, this time focused on AI and promoting trust. From a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’; here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI. This investment means NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institutes of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology as seen in the British television hit series, Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

Nature’s missing evolutionary law added in new paper by leading scientists and philosophers

An October 22, 2023 commentary by Rae Hodge for Salon.com introduces the new work with a beautiful lede/lead and more,

A recently published scientific article proposes a sweeping new law of nature, approaching the matter with dry, clinical efficiency that still reads like poetry.

“A pervasive wonder of the natural world is the evolution of varied systems, including stars, minerals, atmospheres, and life,” the scientists write in the Proceedings of the National Academy of Sciences. “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not.

To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”

In short, a complex and evolving system — whether that’s a flock of goldfinches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.

And here, any writer should find their breath caught in their throat. Any writer would have to pause and marvel.

It’s a rare thing to hear the voice of science singing toward its twin in the humanities. The scientists seem to be searching in their paper for the right words to describe the way the nested trills of a flautist rise through a vaulted cathedral to coalesce into notes themselves not played by human breath. And how, in the very same way, the oil-slick sheen of a June Bug wing may reveal its unseen spectra only against the brief-blooming dogwood in just the right season of sun.

Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures.

If you have the time, do read this October 22, 2023 commentary as Hodge waxes eloquent.

An October 16, 2023 news item on phys.org announces the work in a more prosaic fashion,

A paper published in the Proceedings of the National Academy of Sciences describes “a missing law of nature,” recognizing for the first time an important norm within the natural world’s workings.

In essence, the new law states that complex natural systems evolve to states of greater patterning, diversity, and complexity. In other words, evolution is not limited to life on Earth, it also occurs in other massively complex systems, from planets and stars to atoms, minerals, and more.

It was authored by a nine-member team — scientists from the Carnegie Institution for Science, the California Institute of Technology (Caltech) and Cornell University, and philosophers from the University of Colorado.

An October 16, 2023 Carnegie Science Earth and Planets Laboratory news release on EurekAlert (there is also a somewhat shorter October 16, 2023 version on the Carnegie Science [Carnegie Institution of Science] website), which originated the news item, provides a lot more detail,

“Macroscopic” laws of nature describe and explain phenomena experienced daily in the natural world. Natural laws related to forces and motion, gravity, electromagnetism, and energy, for example, were described more than 150 years ago. 

The new work presents a modern addition — a macroscopic law recognizing evolution as a common feature of the natural world’s complex systems, which are characterised as follows:

  • They are formed from many different components, such as atoms, molecules, or cells, that can be arranged and rearranged repeatedly
  • They are subject to natural processes that cause countless different arrangements to be formed
  • Only a small fraction of all these configurations survive in a process called “selection for function.”   

Regardless of whether the system is living or nonliving, when a novel configuration works well and function improves, evolution occurs. 

The authors’ “Law of Increasing Functional Information” states that the system will evolve “if many different configurations of the system undergo selection for one or more functions.”
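For readers who want the quantitative version: “functional information” has a formal definition from earlier work by Hazen, Szostak and colleagues, which the new paper builds on. This is a sketch of that definition, not text from the releases quoted here:

```latex
I(E_x) = -\log_2\!\big[F(E_x)\big]
```

where $F(E_x)$ is the fraction of all possible configurations of the system that achieve a degree of function of at least $E_x$. The rarer a configuration capable of a given function, the higher the functional information it carries, so a system whose surviving configurations become harder and harder to find at random is one whose functional information is increasing.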

“An important component of this proposed natural law is the idea of ‘selection for function,’” says Carnegie astrobiologist Dr. Michael L. Wong, first author of the study.

In the case of biology, Darwin equated function primarily with survival—the ability to live long enough to produce fertile offspring. 

The new study expands that perspective, noting that at least three kinds of function occur in nature. 

The most basic function is stability – stable arrangements of atoms or molecules are selected to continue. Also chosen to persist are dynamic systems with ongoing supplies of energy. 

The third and most interesting function is “novelty”—the tendency of evolving systems to explore new configurations that sometimes lead to startling new behaviors or characteristics. 

Life’s evolutionary history is rich with novelties—photosynthesis evolved when single cells learned to harness light energy, multicellular life evolved when cells learned to cooperate, and species evolved thanks to advantageous new behaviors such as swimming, walking, flying, and thinking. 

The same sort of evolution happens in the mineral kingdom. The earliest minerals represent particularly stable arrangements of atoms. Those primordial minerals provided foundations for the next generations of minerals, which participated in life’s origins. The evolution of life and minerals are intertwined, as life uses minerals for shells, teeth, and bones.

Indeed, Earth’s minerals, which began with about 20 at the dawn of our Solar System, now number almost 6,000 known today thanks to ever more complex physical, chemical, and ultimately biological processes over 4.5 billion years. 

In the case of stars, the paper notes that just two major elements – hydrogen and helium – formed the first stars shortly after the big bang. Those earliest stars used hydrogen and helium to make about 20 heavier chemical elements. And the next generation of stars built on that diversity to produce almost 100 more elements.

“Charles Darwin eloquently articulated the way plants and animals evolve by natural selection, with many variations and traits of individuals and many different configurations,” says co-author Robert M. Hazen of Carnegie Science, a leader of the research.

“We contend that Darwinian theory is just a very special, very important case within a far larger natural phenomenon. The notion that selection for function drives evolution applies equally to stars, atoms, minerals, and many other conceptually equivalent situations where many configurations are subjected to selective pressure.”

The co-authors themselves represent a unique multi-disciplinary configuration: three philosophers of science, two astrobiologists, a data scientist, a mineralogist, and a theoretical physicist.

Says Dr. Wong: “In this new paper, we consider evolution in the broadest sense—change over time—which subsumes Darwinian evolution based upon the particulars of ‘descent with modification.’”  

“The universe generates novel combinations of atoms, molecules, cells, etc. Those combinations that are stable and can go on to engender even more novelty will continue to evolve. This is what makes life the most striking example of evolution, but evolution is everywhere.”

Among many implications, the paper offers: 

  1. Understanding of how differing systems possess varying degrees of capacity to continue to evolve. “Potential complexity” or “future complexity” have been proposed as metrics of how much more complex an evolving system might become
  2. Insights into how the rate of evolution of some systems can be influenced artificially. The notion of functional information suggests that the rate of evolution in a system might be increased in at least three ways: (1) by increasing the number and/or diversity of interacting agents, (2) by increasing the number of different configurations of the system; and/or (3) by enhancing the selective pressure on the system (for example, in chemical systems by more frequent cycles of heating/cooling or wetting/drying).
  3. A deeper understanding of generative forces behind the creation and existence of complex phenomena in the universe, and the role of information in describing them
  4. An understanding of life in the context of other complex evolving systems. Life shares certain conceptual equivalencies with other complex evolving systems, but the authors point to a future research direction, asking if there is something distinct about how life processes information on functionality (see also https://royalsocietypublishing.org/doi/10.1098/rsif.2022.0810).
  5. Aiding the search for life elsewhere: if there is a demarcation between life and non-life that has to do with selection for function, can we identify the “rules of life” that allow us to discriminate that biotic dividing line in astrobiological investigations? (See also https://conta.cc/3LwLRYS, “Did Life Exist on Mars? Other Planets? With AI’s Help, We May Know Soon”)
  6. At a time when evolving AI systems are an increasing concern, a predictive law of information that characterizes how both natural and symbolic systems evolve is especially welcome
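To make “selection for function” a little more concrete, here is a minimal toy simulation — entirely my own construction, not from the paper. Configurations are bitstrings, “function” is simply the count of 1s, and configurations that pass a functional threshold persist and seed mutated copies, so the population’s mean function (and the rarity, hence functional information, of its configurations) rises over time:

```python
import math
import random

def functional_information(n_meeting, n_total):
    """Hazen/Szostak-style functional information: I = -log2 of the
    fraction of configurations meeting the functional threshold."""
    return -math.log2(n_meeting / n_total)

def evolve(pop_size=200, length=20, threshold=15, generations=30, seed=1):
    """Toy 'selection for function': bitstring configurations scored by
    their count of 1s; those at or above the threshold survive and seed
    one-bit-mutated copies. Returns the final mean function."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep configurations that perform the 'function' well
        # enough; fall back to the whole population if none qualify.
        survivors = [c for c in pop if sum(c) >= threshold] or pop
        # Reproduction with variation: copy a survivor, flip one bit.
        pop = []
        while len(pop) < pop_size:
            child = list(rng.choice(survivors))
            i = rng.randrange(length)
            child[i] ^= 1
            pop.append(child)
    return sum(sum(c) for c in pop) / pop_size

# A random 20-bit string averages 10 ones; after repeated selection for
# function, the population mean sits well above that baseline.
print(evolve())
```

The same machinery could model the mineral or stellar cases in the release by swapping in a different scoring function; the bitstring “function” here is purely illustrative.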

Laws of nature – motion, gravity, electromagnetism, thermodynamics, etc. – codify the general behavior of various macroscopic natural systems across space and time. 

The “law of increasing functional information” published today complements the 2nd law of thermodynamics, which states that the entropy (disorder) of an isolated system increases over time (and heat always flows from hotter to colder objects).

* * * * *


“This is a superb, bold, broad, and transformational article.  …  The authors are approaching the fundamental issue of the increase in complexity of the evolving universe. The purpose is a search for a ‘missing law’ that is consistent with the known laws.

“At this stage of the development of these ideas, rather like the early concepts in the mid-19th century of coming to understand ‘energy’ and ‘entropy,’ open broad discussion is now essential.”

Stuart Kauffman
Institute for Systems Biology, Seattle WA

“The study of Wong et al. is like a breeze of fresh air blowing over the difficult terrain at the trijunction of astrobiology, systems science and evolutionary theory. It follows in the steps of giants such as Erwin Schrödinger, Ilya Prigogine, Freeman Dyson and James Lovelock. In particular, it was Schrödinger who formulated the perennial puzzle: how can complexity increase — and drastically so! — in living systems, while they remain bound by the Second Law of thermodynamics? In the pile of attempts to resolve this conundrum in the course of the last 80 years, Wong et al. offer perhaps the best shot so far.”

“Their central idea, the formulation of the law of increasing functional information, is simple but subtle: a system will manifest an increase in functional information if its various configurations generated in time are selected for one or more functions. This, the authors claim, is the controversial ‘missing law’ of complexity, and they provide a bunch of excellent examples. From my admittedly quite subjective point of view, the most interesting ones pertain to life in radically different habitats like Titan or to evolutionary trajectories characterized by multiple exaptations of traits resulting in a dramatic increase in complexity. Does the correct answer to Schrödinger’s question lie in this direction? Only time will tell, but both my head and my gut are curiously positive on that one. Finally, another great merit of this study is worth pointing out: in this day and age of rabid Counter-Enlightenment on the loose, as well as relentless attacks on the freedom of thought and speech, we certainly need more unabashedly multidisciplinary and multicultural projects like this one.”

Milan Cirkovic 
Astronomical Observatory of Belgrade, Serbia; The Future of Humanity Institute, Oxford University [University of Oxford]

The natural laws we recognize today cannot yet account for one astounding characteristic of our universe—the propensity of natural systems to “evolve.” As the authors of this study attest, the tendency to increase in complexity and function through time is not specific to biology, but is a fundamental property observed throughout the universe. Wong and colleagues have distilled a set of principles which provide a foundation for cross-disciplinary discourse on evolving systems. In so doing, their work will facilitate the study of self-organization and emergent complexity in the natural world.

Corday Selden
Department of Marine and Coastal Sciences, Rutgers University

The paper “On the roles of function and selection in evolving systems” provides an innovative, compelling, and sound theoretical framework for the evolution of complex systems, encompassing both living and non-living systems. Pivotal in this new law is functional information, which quantitatively captures the possibilities a system has to perform a function. As some functions are indeed crucial for the survival of a living organism, this theory addresses the core of evolution and is open to quantitative assessment. I believe this contribution has also the merit of speaking to different scientific communities that might find a common ground for open and fruitful discussions on complexity and evolution.

Andrea Roli
Assistant Professor, Università di Bologna.

Here’s a link to and a citation for the paper,

On the roles of function and selection in evolving systems by Michael L. Wong, Carol E. Cleland, Daniel Arends Jr., Stuart Bartlett, H. James Cleaves, Heather Demarest, Anirudh Prabhu, Jonathan I. Lunine, and Robert M. Hazen. Proceedings of the National Academy of Sciences (PNAS) 120 (43) e2310223120 DOI: https://doi.org/10.1073/pnas.2310223120 Published: October 16, 2023

This paper is open access.

Metal oxide nanoparticles and negative effects on gut health?

A May 4, 2023 Binghamton University news release by Jillian McCarthy (also on EurekAlert but published on May 9, 2023) announces the research, Note: A link has been removed,

Common food additives known as metal oxide nanoparticles may have negative effects on your gut health, according to new research from Binghamton University, State University of New York and Cornell University. 

Gretchen Mahler, professor of biomedical engineering and interim vice provost and dean of the Graduate School, worked in collaboration with Cornell researchers to study five of these nanoparticles. Their findings were recently published in the journal Antioxidants.

“They’re all actual food additives,” said Mahler. “Titanium dioxide tends to show up as a whitening and brightening agent. Silicon dioxide tends to be added to foods to prevent it from clumping. Iron oxide tends to be added to meats, for example, to keep that red color. And zinc oxide can be used as a preservative because it’s antimicrobial.” [emphases mine]

In order to test these nanoparticles, Mahler and Elad Tako, senior author and associate professor of food science in the College of Agriculture and Life Sciences at Cornell, used the intestinal tract of chickens. A chicken’s intestinal tract is comparable to a human’s; the microbiota that they have and the bacterial components have a lot of overlap with the microbiota that you see in the human digestive system, said Mahler. 

“We’ve been testing a series of nanomaterials here at Binghamton, and we’ve been looking at things like nutrient absorption, enzyme expression and some of the more subtle, functional markers,” said Mahler.

The doses of nanoparticles that were tested reflect what is typically consumed by humans. The nanoparticles were injected into the amniotic sac of broiler chicken eggs, which are specifically bred and raised for their meat. These chickens get larger faster, so the effects of the nanoparticles are more obvious earlier in development. At a certain stage of development, the contents of the amniotic sac flow through the chicken’s intestine.

“When they hatched, we harvested tissue from the small intestine, the microbiota and the liver,” said Mahler. “We looked at gene expression, microbiota composition and the structure of the small intestine.”

The researchers found more negative effects with silicon dioxide and titanium dioxide. They also found that the nanoparticles had affected the functioning of the chickens’ intestinal lining (called the brush border membrane), the balance of bacteria in their intestinal tract and the chickens’ ability to absorb minerals.

The other nanoparticles had more neutral, or even positive, effects. Zinc oxide appeared to support intestinal development or compensatory mechanisms due to intestinal damage. Iron oxide could potentially be used for iron fortification, but with potential alterations in intestinal functionality and health.

Mahler doesn’t want to suggest that these nanoparticles need to be removed from our diets completely. Their research is meant to provide some information, and allow people to have a better understanding of what’s really in the food they consume.

“We’re eating these things, so it’s important to consider what some of the more subtle effects could be,” said Mahler. “We develop these gut models around this problem to try to understand it, and this collaboration, where we have these complementary methods to try to look at the problem, has been successful.”

Here’s a link to and a citation for the paper,

Food-Grade Metal Oxide Nanoparticles Exposure Alters Intestinal Microbial Populations, Brush Border Membrane Functionality and Morphology, In Vivo (Gallus gallus) by Jacquelyn Cheng, Nikolai Kolba, Alba García-Rodríguez, Cláudia N. H. Marques, Gretchen J. Mahler and Elad Tako. Antioxidants 2023, 12(2), 431, DOI: https://doi.org/10.3390/antiox12020431 Published online: 9 February 2023 (This article belongs to the Special Issue Dietary Supplements and Oxidative Stress)

This paper is open access.

SkinKit: smart tattoo provides on-skin computing

The SkinKit wearable sensing interface, developed in the Hybrid Body Lab, can be used for health and wellness, personal safety, as assistive technology and for athletic training, among many applications. Hybrid Body Lab/Provided

A November 3, 2022 Cornell University news release on EurekAlert announces a computer you can attach to your skin (Note: Links have been removed),

Researchers at Cornell University have come up with a reliable, skin-tight computing system that’s easy to attach and detach, and can be used for a variety of purposes – from health monitoring to fashion.

On-skin interfaces – sometimes known as “smart tattoos” – have the potential to outperform the sensing capabilities of current wearable technologies, but combining comfort and durability has proven challenging.

“We’ve been working on this for years,” said Cindy (Hsin-Liu) Kao, assistant professor of human centered design, and the study’s senior author, “and I think we’ve finally figured out a lot of the technical challenges. We wanted to create a modular approach to smart tattoos, to make them as straightforward as building Legos.”

SkinKit – a plug-and-play system that aims to “lower the floor for entry” to on-skin interfaces for those with little or no technical expertise – is the product of countless hours of development, testing and redevelopment, Kao said. Fabrication is done with temporary tattoo paper, silicone textile stabilizer and water, creating a multi-layer thin film structure they call “skin cloth.” The layered material can be cut into desired shapes and fitted with electronics hardware to perform a range of tasks.

“The wearer can easily attach them together and also detach them,” said Pin-Sung Ku, lead author of the paper and Hybrid Body Lab member. “Let’s say that today you want to use one of the sensors for certain purposes, but tomorrow you want it for something different. You can easily just detach them and reuse some of the modules to make a new device in minutes.”

The paper “SkinKit: Construction Kit for On-Skin Interface Prototyping” was presented at UbiComp ’22, the Association for Computing Machinery’s international joint conference on pervasive and ubiquitous computing.

Here’s a SkinKit video provided by Cornell University’s Hybrid Body Lab,

Tom Fleischman’s November 3, 2022 story for the Cornell Chronicle provides more details about SkinKit (Note: Links have been removed),

SkinKit – a plug-and-play system that aims to “lower the floor for entry” to on-skin interfaces, Kao said, for those with little or no technical expertise – is the product of countless hours of development, testing and redevelopment, she said.

Kao’s lab is also very conscious of cultural differences generally, and she thinks it’s important to bring these devices to diverse populations.

“People from different cultures, backgrounds and ethnicities can have very different perceptions toward these devices,” she said. “We felt it’s actually very important to let more people have a voice in saying what they want these smart tattoos to do.”

To test SkinKit, the researchers first recruited nine participants with both STEM and design backgrounds to build and wear the devices. Their input from the 90-minute workshop helped inform further modifications, which the group performed before conducting a larger, two-day study involving 25 participants with both STEM and design backgrounds.

Devices designed by the 25 study participants addressed: health and wellness, including temperature sensors to detect fever due to COVID-19; personal safety, including a device that would help the wearer maintain social distance during the pandemic; notification, including an arm-worn device that a runner could wear that would vibrate when a vehicle was near; and assistive technology, such as a wrist-worn sensor for the blind that would vibrate when the wearer was about to bump into an object.

Kao said members of her lab, including Ku, took part in the 4-H Career Explorations Conference over the summer, and had approximately 10 middle-schoolers from upstate New York build their own SkinKit devices.

“I think it just shows us a lot of potential for STEM [science, technology, engineering, and mathematics] learning, and especially to be able to engage people who maybe originally wouldn’t have interest in STEM,” Kao said. “But by combining it with body art and fashion, I think there’s a lot of potential for it to engage the next generation and broader populations to explore the future of smart tattoos.”

Here’s a citation for the paper,

“SkinKit: Construction Kit for On-Skin Interface Prototyping” by Pin-Sung Ku, Md. Tahmidul Islam Molla, Kunpeng Huang, Priya Kattappurath, Krithik Ranjan, Hsin-Liu Cindy Kao. Proceedings of the ACM [Association for Computing Machinery] on Interactive, Mobile, Wearable and Ubiquitous Technologies Volume 5 Issue 4 Dec 2021 Article No.: 165, pp 1–23 DOI: https://doi.org/10.1145/3494989 Published: 30 December 2021

This paper is behind a paywall.

The Hybrid Body Lab can be found here (the pictures are fascinating). Here’s more from their About page,

The Hybrid Body Lab at Cornell University, founded and directed by Prof. Cindy Hsin-Liu Kao, focuses on the invention of culturally-inspired materials, processes, and tools for crafting technology on the body surface. Designing across scales, we explore how body scale interfaces can enhance our relations with everyday products and both natural and man-made environments. We conduct research at the intersection of Human-Computer Interaction, Wearable & Ubiquitous Computing, Digital Fabrication, Interaction Design, and Fashion & Body Art. We synthesize this knowledge to contribute a culturally-sensitive lens to the future of designs that interface the body and the environment. Our current investigations include:

Wearable Technology & On-Skin Interfaces
We develop novel wearable interfaces and fabrication processes, with a focus on skin-conformable or textile-based form factors. By hybridizing miniaturized robotics, machines, and materials with cultural body decoration practices, we investigate how technology can be situated as a culturally meaningful material for crafting our identities.

Designing Skins Across Scales
‘Many different types of machines that were parts of architecture have become parts of our bodies.’ —Bill Mitchell, Me++

We design “skins” that can be adapted across scales, from the architectural to the body scale. We investigate the interactions of a wearer’s body-borne interface with its surrounding ecology. This includes its interaction with other people, objects, and environments. We are also interested in developing skins that can be deployed across scales — from the body to the architectural scale.

Understanding Social Perceptions Towards On-Body Technologies
Wearable devices have evolved towards intrinsic human augmentation, unlocking the human skin as an interface for seamless interaction. However, the non-traditional form factor of these on-skin interfaces may raise concerns for public wear. These perceptions will influence whether a new form of technology will eventually be accepted or rejected by society. We investigate the cultural and social concerns that need to be considered when generating on-body technologies for inclusive design.

Spiders can outsource hearing to their webs

A March 29, 2022 news item on ScienceDaily highlights research into how spiders hear,

Everyone knows that humans and most other vertebrate species hear using eardrums that turn soundwave pressure into signals for our brains. But what about smaller animals like insects and arthropods? Can they detect sounds? And if so, how?

Distinguished Professor Ron Miles, a Department of Mechanical Engineering faculty member at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science, has been exploring that question for more than three decades, in a quest to revolutionize microphone technology.

A newly published study of orb-weaving spiders — the species featured in the classic children’s book “Charlotte’s Web” — has yielded some extraordinary results: The spiders are using their webs as extended auditory arrays to capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

Binghamton University (formal name: State University of New York at Binghamton) has made this fascinating (to me anyway) video available,

Binghamton University and Cornell University (also in New York state) researchers worked collaboratively on this project. Consequently, there are two news releases and there is some redundancy, but I always find that information repeated in different ways is helpful for learning.

A March 29, 2022 Binghamton University news release (also on EurekAlert) by Chris Kocher gives more detail about the work (Note: Links have been removed),

It is well-known that spiders respond when something vibrates their webs, such as potential prey. In these new experiments, researchers for the first time show that spiders turned, crouched or flattened out in response to sounds in the air.

The study is the latest collaboration between Miles and Ron Hoy, a biology professor from Cornell, and it has implications for designing extremely sensitive bio-inspired microphones for use in hearing aids and cell phones.

Jian Zhou, who earned his PhD in Miles’ lab and is doing postdoctoral research at the Argonne National Laboratory, and Junpeng Lai, a current PhD student in Miles’ lab, are co-first authors. Miles, Hoy and Associate Professor Carol I. Miles from the Harpur College of Arts and Sciences’ Department of Biological Sciences at Binghamton are also authors for this study. Grants from the National Institutes of Health to Ron Miles funded the research.

A single strand of spider silk is so thin and sensitive that it can detect the movement of vibrating air particles that make up a soundwave, which is different from how eardrums work. Ron Miles’ previous research has led to the invention of novel microphone designs that are based on hearing in insects.

“The spider is really a natural demonstration that this is a viable way to sense sound using viscous forces in the air on thin fibers,” he said. “If it works in nature, maybe we should have a closer look at it.”
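To get a feel for how small the motions are that a silk strand must follow, here is a back-of-envelope calculation (mine, not the researchers’) using standard plane-wave acoustics relations. The 68 dB level comes from the experiments described below; the air constants are textbook values, and the 1 kHz tone is an assumed example frequency.

```python
import math

# Plane-wave relations: p = p_ref * 10^(SPL/20), v = p / (rho * c),
# peak displacement d = v / (2*pi*f). Constants are textbook values.
P_REF = 20e-6   # reference sound pressure, Pa
RHO = 1.2       # density of air, kg/m^3
C = 343.0       # speed of sound in air, m/s

def particle_motion(spl_db: float, freq_hz: float):
    """Return (pressure Pa, particle velocity m/s, peak displacement m)."""
    p = P_REF * 10 ** (spl_db / 20)
    v = p / (RHO * C)                 # air particle velocity
    d = v / (2 * math.pi * freq_hz)   # displacement at this frequency
    return p, v, d

p, v, d = particle_motion(68, 1000)  # 68 dB, assumed 1 kHz tone
print(f"pressure {p*1e3:.2f} mPa, velocity {v*1e3:.3f} mm/s, displacement {d*1e9:.0f} nm")
```

At 68 decibels the air particles themselves move only tens of nanometres, which is why a fiber thin enough to be dragged along by viscous air forces is such a sensitive detector.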

Spiders can detect minuscule movements and vibrations through sensory organs on their tarsal claws at the tips of their legs, which they use to grasp their webs. Orb-weaver spiders are known to make large webs, creating a kind of acoustic antennae with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used Binghamton University’s anechoic chamber, a completely soundproof room under the Innovative Technologies Complex. Collecting orb-weavers from windows around campus, they had the spiders spin a web inside a rectangular frame so they could position it where they wanted.

The team began by using pure tone sound 3 meters away at different sound levels to see if the spiders responded or not. Surprisingly, they found spiders can respond to sound levels as low as 68 decibels. For louder sound, they found even more types of behaviors.

They then placed the sound source at a 45-degree angle, to see if the spiders behaved differently. They found that not only are the spiders localizing the sound source, but they can tell the direction of the incoming sound with 100% accuracy.

To better understand the spider-hearing mechanism, the researchers used laser vibrometry and measured over one thousand locations on a natural spider web, with the spider sitting in the center under the sound field. The result showed that the web moves with sound almost at maximum physical efficiency across an ultra-wide frequency range.

“Of course, the real question is, if the web is moving like that, does the spider hear using it?” Miles said. “That’s a hard question to answer.”

Lai added: “There could even be a hidden ear within the spider body that we don’t know about.”

So the team placed a mini-speaker 5 centimeters away from the center of the web where the spider sits, and 2 millimeters away from the web plane — close but not touching the web. This allows the sound to travel to the spider both through air and through the web. The researchers found that the soundwave from the mini-speaker died out significantly as it traveled through the air, but it propagated readily through the web with little attenuation. The sound level was still at around 68 decibels when it reached the spider. The behavior data showed that four out of 12 spiders responded to this web-borne signal.

Those reactions proved that the spiders could hear through the webs, and Lai was thrilled when that happened: “I’ve been working on this research for five years. That’s a long time, and it’s great to see all these efforts will become something that everybody can read.”

The researchers also found that, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies. By using this external structure to hear, the spider may be able to customize it to hear different sorts of sounds.

Future experiments may investigate how spiders make use of the sound they can detect using their web. Additionally, the team would like to test whether other types of web-weaving spiders also use their silk to outsource their hearing.

“It’s reasonable to guess that a similar spider on a similar web would respond in a similar way,” Ron Miles said. “But we can’t draw any conclusions about that, since we tested a certain kind of spider that happens to be pretty common.”

Lai admitted he had no idea he would be working with spiders when he came to Binghamton as a mechanical engineering PhD student.

“I’ve been afraid of spiders all my life, because of their alien looks and hairy legs!” he said with a laugh. “But the more I worked with spiders, the more amazing I found them. I’m really starting to appreciate them.”

A March 29, 2022 Cornell University news release (also on EurekAlert but published March 30, 2022) by Krishna Ramanujan offers a somewhat different perspective on the work (Note: Links have been removed),

Charlotte’s web is made for more than just trapping prey.

A study of orb weaver spiders finds their massive webs also act as auditory arrays that capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

In experiments, the researchers found the spiders turned, crouched or flattened out in response to sounds, behaviors that spiders have been known to exhibit when something vibrates their webs.

The paper, “Outsourced Hearing in an Orb-weaving Spider That Uses its Web as an Auditory Sensor,” published March 29 [2022] in the Proceedings of the National Academy of Sciences, provides the first behavioral evidence that a spider can outsource hearing to its web.

The findings have implications for designing bio-inspired extremely sensitive microphones for use in hearing aids and cell phones.

A single strand of spider silk is so thin and sensitive it can detect the movement of vibrating air particles that make up a sound wave. This is different from how ear drums work, by sensing pressure from sound waves; spider silk detects sound from nanoscale air particles that become excited from sound waves.

“The individual [silk] strands are so thin that they’re essentially wafting with the air itself, jostled around by the local air molecules,” said Ron Hoy, the Merksamer Professor of Biological Science, Emeritus, in the College of Arts and Sciences and one of the paper’s senior authors, along with Ronald Miles, professor of mechanical engineering at Binghamton University.

Spiders can detect minuscule movements and vibrations via sensory organs in their tarsi – claws at the tips of their legs they use to grasp their webs, Hoy said. Orb weaver spiders are known to make large webs, creating a kind of acoustic antennae with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used a special quiet room without vibrations or air flows at Binghamton University. They had an orb-weaver build a web inside a rectangular frame, so they could position it where they wanted. The team began by putting a mini-speaker within millimeters of the web without actually touching it, where sound operates as a mechanical vibration. They found the spider detected the mechanical vibration and moved in response.

They then placed a large speaker 3 meters away on the other side of the room from the frame with the web and spider, beyond the range where mechanical vibration could affect the web. A laser vibrometer was able to show the vibrations of the web from excited air particles.

The team then placed the speaker in different locations, to the right, left and center with respect to the frame. They found that the spider not only detected the sound, it turned in the direction of the speaker when it was moved. Also, it behaved differently based on the volume, by crouching or flattening out.

Future experiments may investigate whether spiders rebuild their webs, sometimes daily, in part to alter their acoustic capabilities, by varying a web’s geometry or where it is anchored. Also, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies, Hoy said.

Additionally, the team would like to test if other types of web-weaving spiders also use their silk to outsource their hearing. “The potential is there,” Hoy said.

Miles’ lab is using tiny fiber strands bio-inspired by spider silk to design highly sensitive microphones that – unlike conventional pressure-based microphones – pick up all frequencies and cancel out background noise, a boon for hearing aids.  

Here’s a link to and a citation for the paper,

Outsourced hearing in an orb-weaving spider that uses its web as an auditory sensor by Jian Zhou, Junpeng Lai, Gil Menda, Jay A. Stafstrom, Carol I. Miles, Ronald R. Hoy, and Ronald N. Miles. Proceedings of the National Academy of Sciences (PNAS) DOI: https://doi.org/10.1073/pnas.2122789119 Published March 29, 2022 | 119 (14) e2122789119

This paper appears to be open access and video/audio files are included (you can hear the sound and watch the spider respond).

The nanoscale precision of pearls

An October 21, 2021 news item on phys.org features a quote about nothingness and symmetry (Note: A link has been removed),

In research that could inform future high-performance nanomaterials, a University of Michigan-led team has uncovered for the first time how mollusks build ultradurable structures with a level of symmetry that outstrips everything else in the natural world, with the exception of individual atoms.

“We humans, with all our access to technology, can’t make something with a nanoscale architecture as intricate as a pearl,” said Robert Hovden, U-M assistant professor of materials science and engineering and an author on the paper. “So we can learn a lot by studying how pearls go from disordered nothingness to this remarkably symmetrical structure.” [emphasis mine]

The analysis was done in collaboration with researchers at the Australian National University, Lawrence Berkeley National Laboratory, Western Norway University [of Applied Sciences] and Cornell University.

a. A Keshi pearl that has been sliced into pieces for study. b. A magnified cross-section of the pearl shows its transition from its disorderly center to thousands of layers of finely matched nacre. c. A magnification of the nacre layers shows their self-correction—when one layer is thicker, the next is thinner to compensate, and vice-versa. d, e: Atomic scale images of the nacre layers. f, g, h, i: Microscopy images detail the transitions between the pearl’s layers. Credit: University of Michigan

An October 21, 2021 University of Michigan news release (also on EurekAlert), which originated the news item, reveals a surprise,

Published in the Proceedings of the National Academy of Sciences [PNAS], the study found that a pearl’s symmetry becomes more and more precise as it builds, answering centuries-old questions about how the disorder at its center becomes a sort of perfection. 

Layers of nacre, the iridescent and extremely durable organic-inorganic composite that also makes up the shells of oysters and other mollusks, build on a shard of aragonite that surrounds an organic center. The layers, which make up more than 90% of a pearl’s volume, become progressively thinner and more closely matched as they build outward from the center.

Perhaps the most surprising finding is that mollusks maintain the symmetry of their pearls by adjusting the thickness of each layer of nacre. If one layer is thicker, the next tends to be thinner, and vice versa. The pearl pictured in the study contains 2,615 finely matched layers of nacre, deposited over 548 days.

“These thin, smooth layers of nacre look a little like bed sheets, with organic matter in between,” Hovden said. “There’s interaction between each layer, and we hypothesize that that interaction is what enables the system to correct as it goes along.”

The team also uncovered details about how the interaction between layers works. A mathematical analysis of the pearl’s layers shows that they follow a phenomenon known as “1/f noise,” where a series of events that seem to be random are connected, with each new event influenced by the one before it. 1/f noise has been shown to govern a wide variety of natural and human-made processes including seismic activity, economic markets, electricity, physics and even classical music.

“When you roll dice, for example, every roll is completely independent and disconnected from every other roll. But 1/f noise is different in that each event is linked,” Hovden said. “We can’t predict it, but we can see a structure in the chaos. And within that structure are complex mechanisms that enable a pearl’s thousands of layers of nacre to coalesce toward order and precision.”
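The difference Hovden describes between dice rolls and linked events can be seen in a deliberately simplified simulation (my sketch, not the paper’s model): independent layers accumulate error like a random walk, while layers that compensate for the running deviation keep the total thickness on target. The layer count of 2,615 comes from the pearl in the study; the target thickness and noise level are arbitrary illustrative units.

```python
import random

random.seed(0)
TARGET = 1.0   # nominal layer thickness (arbitrary units)
N = 2615       # layer count reported for the pearl in the study
NOISE = 0.2    # random deposition noise per layer (arbitrary)

def total_drift(correct: bool) -> float:
    """Deposit N noisy layers; return |total - N*TARGET|, the overall error."""
    total = 0.0
    for i in range(1, N + 1):
        noise = random.gauss(0, NOISE)
        if correct:
            # toy self-correction: each layer offsets the accumulated
            # deviation, so a thicker-than-target layer is followed by
            # a thinner one, and the running total stays near i*TARGET
            deviation = total - (i - 1) * TARGET
            thickness = TARGET - deviation + noise
        else:
            thickness = TARGET + noise   # independent, dice-roll layers
        total += thickness
    return abs(total - N * TARGET)

print("independent layers drift:", round(total_drift(False), 2))
print("self-correcting drift:  ", round(total_drift(True), 2))
```

With no coupling the error grows roughly as the square root of the layer count; with the compensation rule it stays on the order of a single layer’s noise, which is the flavor of “structure in the chaos” the quote gestures at.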

The team found that pearls lack true long-range order—the kind of carefully planned symmetry that keeps the hundreds of layers in brick buildings consistent. Instead, pearls exhibit medium-range order, maintaining symmetry for around 20 layers at a time. This is enough to maintain consistency and durability over the thousands of layers that make up a pearl.

The team gathered their observations by studying Akoya “keshi” pearls, produced by the Pinctada imbricata fucata oyster near the Eastern shoreline of Australia. They selected these particular pearls, which measure around 50 millimeters in diameter, because they form naturally, as opposed to bead-cultured pearls, which have an artificial center. Each pearl was cut with a diamond wire saw into sections measuring three to five millimeters in diameter, then polished and examined under an electron microscope.

Hovden says the study’s findings could help inform next-generation materials with precisely layered nanoscale architecture.

“When we build something like a brick building, we can build in periodicity through careful planning and measuring and templating,” he said. “Mollusks can achieve similar results on the nanoscale by using a different strategy. So we have a lot to learn from them, and that knowledge could help us make stronger, lighter materials in the future.”

Here’s a link to and a citation for the paper,

The mesoscale order of nacreous pearls by Jiseok Gim, Alden Koch, Laura M. Otter, Benjamin H. Savitzky, Sveinung Erland, Lara A. Estroff, Dorrit E. Jacob, and Robert Hovden. PNAS vol. 118, no. 42, e2107477118 DOI: https://doi.org/10.1073/pnas.2107477118 Published in issue October 19, 2021; published online October 18, 2021

This paper appears to be open access.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”
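The waveform-to-spectrogram step Van Horn describes can be sketched in a few lines of NumPy (this is my illustrative reconstruction, not Merlin’s actual pipeline). The synthetic two-tone “call” and the FFT parameters are made-up example values; a real app would feed the resulting 2-D array to an image-classification network.

```python
import numpy as np

SR = 22050                      # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / SR)   # one second of audio
# a chirp-like synthetic call: 2 kHz for the first half, 3 kHz for the second
wave = np.where(t < 0.5, np.sin(2*np.pi*2000*t), np.sin(2*np.pi*3000*t))

def spectrogram(x, n_fft=512, hop=256):
    """Short-time Fourier transform magnitude: rows=frequency, cols=time."""
    window = np.hanning(n_fft)
    frames = [x[i:i+n_fft] * window
              for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (n_fft//2+1, n_frames)

spec = spectrogram(wave)
print("spectrogram shape (freq bins x time frames):", spec.shape)
# the loudest frequency bin shifts between the two halves of the call
first, second = spec[:, :spec.shape[1]//2], spec[:, spec.shape[1]//2:]
print("peak bins:", first.sum(axis=1).argmax(), second.sum(axis=1).argmax())
```

The pitch change shows up as the bright band jumping to a higher row partway across the image, which is exactly the kind of pattern an image classifier can learn to recognize.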

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”
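Weber’s point about eBird telling Merlin which birds are likely present amounts to weighting the classifier’s output by a location prior. Here is a hypothetical illustration of that idea in the Bayesian style; the species names, scores, and checklist frequencies are all invented for the example, and I don’t know how Merlin actually combines the two signals.

```python
# Toy posterior = classifier likelihood x local-frequency prior, normalized.
classifier_scores = {            # pseudo-likelihoods from an audio model
    "Song Sparrow": 0.45,
    "Savannah Sparrow": 0.40,
    "Tropical Kingbird": 0.15,
}
local_frequency = {              # fraction of local checklists with the bird
    "Song Sparrow": 0.60,
    "Savannah Sparrow": 0.30,
    "Tropical Kingbird": 0.001,  # essentially absent at this place and season
}

def rank(scores, prior):
    """Bayes-style reweighting: posterior proportional to likelihood * prior."""
    post = {sp: scores[sp] * prior[sp] for sp in scores}
    z = sum(post.values())
    return sorted(((p / z, sp) for sp, p in post.items()), reverse=True)

for prob, species in rank(classifier_scores, local_frequency):
    print(f"{species}: {prob:.3f}")
```

The out-of-range kingbird gets a decent acoustic score but collapses once the location prior is applied, which is how a huge observation database can rescue an ambiguous recording.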

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Sounds of Central African Landscapes: a Cornell (University) Elephant Listening Project

This September 13, 2021 news item about sound recordings taken in a rainforest (on phys.org) is downright fascinating,

More than a million hours of sound recordings are available from the Elephant Listening Project (ELP) in the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology—a rainforest residing in the cloud.

ELP researchers, in collaboration with the Wildlife Conservation Society, use remote recording units to capture the entire soundscape of a Congolese rainforest. Their targets are vocalizations from endangered African forest elephants, but they also capture tropical parrots shrieking, chimps chattering and rainfall spattering on leaves to the beat of grumbling thunder.

For someone who suffers from acrophobia (fear of heights), this is a disturbing picture (how tall is that tree? is the rope reinforced? who or what is holding him up? where is the photographer perched?),

Frelcia Bambi is a member of the Congolese team that deploys sound recorders in the rainforest and analyzes the data. Photo by Sebastien Assoignons, courtesy of the Wildlife Conservation Society.

A September 13, 2021 Cornell University (NY state, US) news release by Pat Leonard, which originated the news item, provides more details about the sounds themselves and the Elephant Listening Project,

“Scientists can use these soundscapes to monitor biodiversity,” said ELP director Peter Wrege. “You could measure overall sound levels before, during and after logging operations, for example. Or hone in on certain frequencies where insects may vocalize. Sound is increasingly being used as a conservation tool, especially for establishing the presence or absence of a species.”

For the past four years, 50 tree-mounted recording units have been collecting data continuously, covering a region that encompasses old logging sites, recent logging sites and part of the Nouabalé-Ndoki National Park in the Republic of the Congo. The sensors sometimes capture the booming guns of poachers, alerting rangers who then head out to track down the illegal activity.

But everyday nature lovers can tune in rainforest sounds, too.

“We’ve had requests to use some of the files for meditation or for yoga,” Wrege said. “It is very soothing to listen to rainforest sounds—you hear the sounds of insects, birds, frogs, chimps, wind and rain all blended together.”

But, as Wrege and others have learned, big data can also be a big problem. The Sounds of Central African Landscapes recordings would gobble up nearly 100 terabytes of computer space, and ELP takes in another eight terabytes every four months. But now, Amazon Web Services is storing the jungle sounds for free under its Open Data Sponsorship Program, which preserves valuable scientific data for public use.

This makes it possible for Wrege to share the jungle sounds and easier for users to analyze them with Amazon tools so they don’t have to move the massive files or try to download them.

Searching for individual species amid the wealth of data is a bit more daunting. ELP uses computer algorithms to search through the recordings for elephant sounds. Wrege has created a detector for the sounds of gorillas beating their chests. There are software platforms that help users create detectors for specific sounds, including Raven Pro 1.6, created by the Cornell Lab’s bioacoustics engineers. Wrege says the next iteration, Raven 2.0, will make this process even easier.

Wrege is also eyeing future educational uses for the recordings which he says could help train in-country biologists to not only collect the data but do the analyses. This is gradually happening now in the Republic of the Congo—ELP’s team of Congolese researchers does all the analysis for gunshot detection, though the elephant analyses are still done at ELP.

“We could use these recordings for internships and student training in Congo and other countries where we work, such as Gabon,” Wrege said. “We can excite young people about conservation in Central Africa. It would be a huge benefit to everyone living there.”

To listen or download clips from Sounds of the Central African Landscape, go to ELP’s data page on Amazon Web Services. You’ll need to create an account with AWS (choose the free option). Then sign in with your username and password. Click on the “recordings” item in the list you see, then “wav/” on the next page. From there you can click on any item in the list to play or download clips that are each 1.3 GB and 24 hours long.

Scientists looking to use sounds for research and analysis should start here.

Wildlife Conservation Society Forest Elephant Congo [downloaded from https://congo.wcs.org/Wildlife/Forest-Elephant.aspx]

What follows may be a little cynical but I can’t help noticing that this worthwhile and fascinating project will result in more personal and/or professional data for Amazon since you have to sign up even if all you’re doing is reading or listening to a few files that they’ve made available for the general public. In a sense, Amazon gets ‘paid’ when you give up an email address to them. Plus, Amazon gets to look like a good world citizen.

Let’s hope something greater than one company’s reputation as a world citizen comes out of this.