Tag Archives: US Department of Commerce

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but also in ethics.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses
all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million
program in collaboration with five philanthropic partners that seeks to
ensure ethical, legal, community and societal considerations are
embedded in the lifecycle of technology’s creation and use. The
Responsible Design, Development and Deployment of Technologies (ReDDDoT)
program aims to help create technologies that promote the public’s
wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad
impacts on society,” said NSF Director Sethuraman Panchanathan. “As
discoveries and innovations are translated to practice, it is essential
that we engage and enable diverse communities to participate in this
work. NSF and its philanthropic partners share a strong commitment to
creating a comprehensive approach for co-design through soliciting
community input, incorporating community values and engaging a broad
array of academic and professional voices across the lifecycle of
technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary,
multi-sector teams that examine and demonstrate the principles,
methodologies and impacts associated with responsible design,
development and deployment of technologies, especially those specified
in the “CHIPS and Science Act of 2022.” In addition to NSF, the
program is funded and supported by the Ford Foundation, the Patrick J.
McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the
Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to
advance human progress, and the danger unaccountable technology poses to
social justice, the ReDDDoT program serves as both a collaboration and a
covenant between philanthropy and government to center public interest
technology into the future of progress,” said Darren Walker, president
of the Ford Foundation. “This $16 million initiative will cultivate
expertise from public interest technologists across sectors who are
rooted in community and grounded by the belief that innovation, equity
and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:  

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring
together key stakeholders to better inform practices for the design,
development, and deployment of technologies.
* Educating and training the science, technology, engineering, and
mathematics workforce on approaches to responsible design,
development, and deployment of technologies. 
* Accelerating pathways to societal and economic benefits while
developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and
marginalized populations, to participate in all stages of technology
development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning
Grants, or the creation of Translational Research Coordination Networks,
while Phase 2 solicits full project proposals. The initial areas of
focus for 2024 include artificial intelligence, biotechnology or natural
and anthropogenic disaster prevention or mitigation. Future iterations
of the program may consider other key technology focus areas enumerated
in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be
shaped by ethical, legal, and societal considerations as well as
community values – also drives the work of the Patrick J. McGovern
Foundation to build a human-centered digital future for all. We’re
pleased to support this partnership, committed to advancing the
development of AI, biotechnology, and climate technologies that advance
equity, sustainability, and justice.” – Vilas Dhar, President, Patrick
J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology
development is only accelerating. Too often, technological advances are
not accompanied by discussion and design that considers negative impacts
or unrealized potential. We’re excited to support ReDDDoT as an
opportunity to uplift new and often forgotten perspectives that
critically examine technology’s impact on civic life, and advance Siegel
Family Endowment’s vision of technological change that includes and
improves the lives of all people.” – Katy Knight, President and
Executive Director of Siegel Family Endowment

Only eight months ago, another big NSF funding project was announced, this time focused on AI and promoting trust. From a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’. Here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI.  This investment means  NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institutes of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology, as seen in the British television hit series Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

Two-dimensional material stacks into multiple layers to build a memory cell for longer lasting batteries

This research comes from Purdue University (US), and the December announcement seemed particularly timely since battery-powered gifts are popular at Christmas. But since it could be many years before this work is commercialized, you may want to tuck it away for future reference. Also, readers familiar with memristors might see a resemblance to the memory cells mentioned in the following excerpt. From a December 13, 2018 news item on Nanowerk,

The more objects we make “smart,” from watches to entire buildings, the greater the need for these devices to store and retrieve massive amounts of data quickly without consuming too much power.

Millions of new memory cells could be part of a computer chip and provide that speed and energy savings, thanks to the discovery of a previously unobserved functionality in a material called molybdenum ditelluride.

The two-dimensional material stacks into multiple layers to build a memory cell. Researchers at Purdue University engineered this device in collaboration with the National Institute of Standards and Technology (NIST) and Theiss Research Inc.

A December 13, 2018 Purdue University news release by Kayla Wiles, which originated the news item,  describes the work in more detail,

Chip-maker companies have long called for better memory technologies to enable a growing network of smart devices. One of these next-generation possibilities is resistive random access memory, or RRAM for short.

In RRAM, an electrical current is typically driven through a memory cell made up of stacked materials, creating a change in resistance that records data as 0s and 1s in memory. The sequence of 0s and 1s among memory cells identifies pieces of information that a computer reads to perform a function and then store into memory again.

A material would need to be robust enough for storing and retrieving data at least trillions of times, but materials currently used have been too unreliable. So RRAM hasn’t been available yet for widescale use on computer chips.

Molybdenum ditelluride could potentially last through all those cycles.

“We haven’t yet explored system fatigue using this new material, but our hope is that it is both faster and more reliable than other approaches due to the unique switching mechanism we’ve observed,” said Joerg Appenzeller, Purdue University’s Barry M. and Patricia L. Epstein Professor of Electrical and Computer Engineering and the scientific director of nanoelectronics at the Birck Nanotechnology Center.

Molybdenum ditelluride allows a system to switch more quickly between 0 and 1, potentially increasing the rate of storing and retrieving information. This is because when an electric field is applied to the cell, atoms are displaced by a tiny distance, resulting in a state of high resistance, noted as 0, or a state of low resistance, noted as 1, which can occur much faster than switching in conventional RRAM devices.
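To make the storage idea concrete, here is a minimal toy sketch (my own illustration, not code or values from the Purdue/NIST work) of a resistive memory cell: a write switches the cell between a high-resistance state read as 0 and a low-resistance state read as 1, and a read simply compares the cell’s resistance against a threshold. The resistance values and threshold below are arbitrary placeholders.

```python
# Toy model of a resistive (RRAM-style) memory cell: a bit is stored as a
# resistance state and read back by comparing resistance to a threshold.
# The numbers are arbitrary illustrations, not measurements from the MoTe2 work.

HIGH_RESISTANCE = 1_000_000.0  # ohms, high-resistance state -> logical 0
LOW_RESISTANCE = 1_000.0       # ohms, low-resistance state  -> logical 1
READ_THRESHOLD = 50_000.0      # ohms, boundary used when reading the cell


class ResistiveCell:
    def __init__(self):
        self.resistance = HIGH_RESISTANCE  # start in the high-resistance (0) state

    def write(self, bit: int) -> None:
        """'Apply a field': switch the cell into the resistance state encoding the bit."""
        self.resistance = LOW_RESISTANCE if bit else HIGH_RESISTANCE

    def read(self) -> int:
        """Read the stored bit by comparing the cell's resistance to the threshold."""
        return 1 if self.resistance < READ_THRESHOLD else 0


cell = ResistiveCell()
cell.write(1)
print(cell.read())  # 1
cell.write(0)
print(cell.read())  # 0
```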

“Because less power is needed for these resistive states to change, a battery could last longer,” Appenzeller said.

In a computer chip, each memory cell would be located at the intersection of wires, forming a memory array called cross-point RRAM.
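For a rough picture of that cross-point layout, here is a small, hypothetical sketch: one cell sits at every intersection of a row wire and a column wire, so selecting a single row and a single column addresses exactly one cell. Again, this only illustrates the addressing geometry; it is not the actual Purdue or NIST design.

```python
# Hypothetical sketch of a cross-point (crossbar) memory array: one cell at
# every intersection of a row wire and a column wire, addressed by (row, col).

class CrossPointArray:
    def __init__(self, rows: int, cols: int):
        # One stored bit per intersection; 0 = high resistance, 1 = low resistance.
        self.bits = [[0 for _ in range(cols)] for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        """Select one row wire and one column wire, then program that cell."""
        self.bits[row][col] = bit

    def read(self, row: int, col: int) -> int:
        """Sense the cell at the selected intersection."""
        return self.bits[row][col]


array = CrossPointArray(rows=4, cols=4)
array.write(2, 3, 1)     # program the cell where row wire 2 crosses column wire 3
print(array.read(2, 3))  # 1
```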

Appenzeller’s lab wants to explore building a stacked memory cell that also incorporates the other main components of a computer chip: “logic,” which processes data, and “interconnects,” wires that transfer electrical signals, by utilizing a library of novel electronic materials fabricated at NIST.

“Logic and interconnects drain battery too, so the advantage of an entirely two-dimensional architecture is more functionality within a small space and better communication between memory and logic,” Appenzeller said.

Two U.S. patent applications have been filed for this technology through the Purdue Office of Technology Commercialization.

The work received financial support from the Semiconductor Research Corporation through the NEW LIMITS Center (led by Purdue University), NIST, the U.S. Department of Commerce and the Material Genome Initiative.

Here’s a link to and a citation for the paper,

Electric-field induced structural transition in vertical MoTe2- and Mo1–xWxTe2-based resistive memories by Feng Zhang, Huairuo Zhang, Sergiy Krylyuk, Cory A. Milligan, Yuqi Zhu, Dmitry Y. Zemlyanov, Leonid A. Bendersky, Benjamin P. Burton, Albert V. Davydov, & Joerg Appenzeller. Nature Materials volume 18, pages 55–61 (2019). Published: 10 December 2018. DOI: https://doi.org/10.1038/s41563-018-0234-y

This paper is behind a paywall.

US Nanotechnology Initiative for water sustainability

Wednesday, March 23, 2016 was World Water Day and, to coincide with that event, the US National Nanotechnology Initiative (NNI), in collaboration with several other agencies, announced a new ‘signature initiative’. From a March 24, 2016 news item on Nanowerk (Note: A link has been removed),

As a part of the White House Water Summit held yesterday on World Water Day, the Federal agencies participating in the National Nanotechnology Initiative (NNI) announced the launch of a Nanotechnology Signature Initiative (NSI), Water Sustainability through Nanotechnology: Nanoscale Solutions for a Global-Scale Challenge.

A March 23, 2016 NNI news release provides more information about why this initiative is important,

Access to clean water remains one of the world’s most pressing needs. As today’s White House Office of Science and Technology blog post explains, “the small size and exceptional properties of engineered nanomaterials are particularly promising for addressing the key technical challenges related to water quality and quantity.”

“One cannot find an issue more critical to human life and global security than clean, plentiful, and reliable water sources,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office (NNCO). “Through the NSI mechanism, NNI member agencies will have an even greater ability to make meaningful strides toward this initiative’s thrust areas: increasing water availability, improving the efficiency of water delivery and use, and enabling next-generation water monitoring systems.”

A March 23, 2016 US White House blog posting by Lloyd Whitman and Lisa Friedersdorf describes the efforts in more detail (Note: A link has been removed),

The small size and exceptional properties of engineered nanomaterials are particularly promising for addressing the pressing technical challenges related to water quality and quantity. For example, the increased surface area—a cubic centimeter of nanoparticles has a surface area larger than a football field—and reactivity of nanometer-scale particles can be exploited to create catalysts for water purification that do not require rare or precious metals. And composites incorporating nanomaterials such as carbon nanotubes might one day enable stronger, lighter, and more durable piping systems and components. Under this NSI, Federal agencies will coordinate and collaborate to more rapidly develop nanotechnology-enabled solutions in three main thrusts: [thrust 1] increasing water availability; [thrust 2] improving the efficiency of water delivery and use; and [thrust 3] enabling next-generation water monitoring systems.

A technical “white paper” released by the agencies this week highlights key technical challenges for each thrust, identifies key objectives to overcome those challenges, and notes areas of research and development where nanotechnology promises to provide the needed solutions. By shining a spotlight on these areas, the new NSI will increase Federal coordination and collaboration, including with public and private stakeholders, which is vital to making progress in these areas. The additional focus and associated collective efforts will advance stewardship of water resources to support the essential food, energy, security, and environment needs of all stakeholders.

We applaud the commitment of the Federal agencies who will participate in this effort—the Department of Commerce/National Institute of Standards and Technology, Department of Energy, Environmental Protection Agency, National Aeronautics and Space Administration, National Science Foundation, and U.S. Department of Agriculture/National Institute of Food and Agriculture. As made clear at this week’s White House Water Summit, the world’s water systems are under tremendous stress, and new and emerging technologies will play a critical role in ensuring a sustainable water future.
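The football-field comparison in the excerpt above is easy to sanity-check. Assuming the material is broken into solid spherical particles of diameter d, the surface-to-volume ratio is 6/d, so one cubic centimetre of material has a total surface area of 6 × (10⁻⁶ m³)/d. The short calculation below is my own rough check, not something from the White House post or the NNI white paper; it suggests the comparison holds for particles roughly a nanometre across (a soccer pitch is on the order of 7,000 m² and an American football field about 5,300 m²).

```python
# Rough check of the "cubic centimetre of nanoparticles ~ football field" claim,
# assuming the material is divided into solid spherical particles of diameter d.
# For a sphere, surface area / volume = 6 / d, so total area = 6 * V_total / d.

V_TOTAL = 1e-6  # total particle volume: one cubic centimetre, in cubic metres

for d_nm in (1, 10, 100):
    d = d_nm * 1e-9                  # particle diameter in metres
    total_area = 6.0 * V_TOTAL / d   # total surface area in square metres
    print(f"{d_nm:>4} nm particles -> about {total_area:,.0f} m^2 of surface")

# Approximate output:
#    1 nm particles -> about 6,000 m^2 of surface
#   10 nm particles -> about 600 m^2 of surface
#  100 nm particles -> about 60 m^2 of surface
```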

The white paper (12 pp.) is titled: Water Sustainability through Nanotechnology: Nanoscale Solutions for a Global-Scale Challenge and describes the thrusts in more detail.

A March 22, 2016 US White House fact sheet lays out more details including funding,

Click here to learn more about all of the commitments and announcements being made today. They include:

  • Nearly $4 billion in private capital committed to investment in a broad range of water-infrastructure projects nationwide. This includes $1.5 billion from Ultra Capital to finance decentralized and scalable water-management solutions, and $500 million from Sustainable Water to develop water reclamation and reuse systems.
  • More than $1 billion from the private sector over the next decade to conduct research and development into new technologies. This includes $500 million from GE to fuel innovation, expertise, and global capabilities in advanced water, wastewater, and reuse technologies.
  • A Presidential Memorandum and supporting Action Plan on building national capabilities for long-term drought resilience in the United States, including by setting drought resilience policy goals, directing specific drought resilience activities to be completed by the end of the year, and permanently establishing the National Drought Resilience Partnership as an interagency task force responsible for coordinating drought-resilience, response, and recovery efforts.
  • Nearly $35 million this year in Federal grants from the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the National Science Foundation, and the U.S. Department of Agriculture to support cutting-edge water science;
  • The release of a new National Water Model that will dramatically enhance the Nation’s river-forecasting capabilities by delivering forecasts for approximately 2.7 million locations, up from 4,000 locations today (a 700-fold increase in forecast density).

This seems promising and hopefully other countries will follow suit.

2011 Scientific integrity processes: the US and Canada

Given recent scientific misconduct (see the ‘July is science scandal month’ [July 25, 2011] post at The Prodigal Academic blog) and a very slow news month this August, I thought I’d take a look at scientific integrity in the US and in Canada.

First, here’s a little history. On March 9, 2009, US President Barack Obama issued a Presidential Memorandum on Scientific Integrity (excerpted),

Science and the scientific process must inform and guide decisions of my Administration on a wide range of issues, including improvement of public health, protection of the environment, increased efficiency in the use of energy and other resources, mitigation of the threat of climate change, and protection of national security.

The public must be able to trust the science and scientific process informing public policy decisions.  Political officials should not suppress or alter scientific or technological findings and conclusions.  If scientific and technological information is developed and used by the Federal Government, it should ordinarily be made available to the public.  To the extent permitted by law, there should be transparency in the preparation, identification, and use of scientific and technological information in policymaking.  The selection of scientists and technology professionals for positions in the executive branch should be based on their scientific and technological knowledge, credentials, experience, and integrity.

On December 17, 2010, John P. Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, issued his own memorandum requesting compliance with the President’s order (from the Dec. 17, 2010 posting on The White House blog),

Today, in response to the President’s request, I am issuing a Memorandum to the Heads of Departments and Agencies that provides further guidance to Executive Branch leaders as they implement Administration policies on scientific integrity. The new memorandum describes the minimum standards expected as departments and agencies craft scientific integrity rules appropriate for their particular missions and cultures, including a clear prohibition on political interference in scientific processes and expanded assurances of transparency. It requires that department and agency heads report to me on their progress toward completing those rules within 120 days.

Here’s my edited version (I removed fluff, i.e. material along these lines: scientific integrity is of utmost importance …) of the list Holdren provided,

Foundations

  1. Ensure a culture of scientific integrity.
  2. Strengthen the actual and perceived credibility of Government research. Of particular importance are (a) ensuring that selection of candidates for scientific positions in the executive branch is based primarily on their scientific and technological knowledge, credentials, experience, and integrity, (b) ensuring that data and research used to support policy decisions undergo independent peer review by qualified experts where feasible and appropriate, and consistent with law, (c) setting clear standards governing conflicts, and (d) adopting appropriate whistleblower protections.
  3. Facilitate the free flow of scientific and technological information, consistent with privacy and classification standards. … Consistent with the Administration’s Open Government Initiative, agencies should expand and promote access to scientific and technological information by making it available  online in open formats. Where appropriate, this should include data and models underlying regulatory proposals and policy decisions.
  4. Establish principles for conveying scientific and technological information to the public. … Agencies should communicate scientific and technological findings by including a clear explication of underlying assumptions; accurate contextualization of uncertainties; and a description of the probabilities associated with optimistic and pessimistic projections, including best-case and worst-case scenarios where appropriate.

Public communication

  1. In response to media interview requests about the scientific and technological dimensions of their work, agencies will offer articulate and knowledgeable spokespersons who can, in an objective and nonpartisan fashion, describe and explain these dimensions to the media and the American people.
  2. Federal scientists may speak to the media and the public about scientific and technological matters based on their official work, with appropriate coordination with their immediate supervisor and their public affairs office. In no circumstance may public affairs officers ask or direct Federal scientists to alter scientific findings.
  3. Mechanisms are in place to resolve disputes that arise from decisions to proceed or not to proceed  with proposed interviews or other public information-related activities. …

(The sections on Federal Advisory Committees and professional development were less relevant to this posting, so I haven’t included them here.)

It seems to have taken the agencies a little longer than the 120-day deadline that John Holdren gave them, but all (or at least many) of the agencies have complied, according to an August 15, 2011 posting by David J. Hanson on the Chemical & Engineering News (C&EN) website,

OSTP director John P. Holdren issued the call for the policies on May 5 in response to a 2009 Presidential memorandum (C&EN, Jan. 10, page 28). [emphasis mine] The memorandum was a response to concerns about politicization of science during the George W. Bush Administration.

The submitted integrity plans include 14 draft policies and five final policies. The final policies are from the National Aeronautics & Space Administration, the Director of National Intelligences for the intelligence agencies, and the Departments of Commerce, Justice, and Interior.

Draft integrity policies are in hand from the Departments of Agriculture, Defense, Education, Energy, Homeland Security, Health & Human Services, Labor, and Transportation and from the National Oceanic & Atmospheric Administration, National Science Foundation, Environmental Protection Agency, Social Security Administrations, OSTP, and Veterans Administration.

The drafts still under review are from the Department of State, the Agency for International Development, and the National Institute of Standards & Technology.

The dates in this posting don’t match up with what I’ve found but it’s possible that the original deadline was moved to better accommodate the various reporting agencies. In any event, David Bruggeman at his Pasco Phronesis blog has commented on this initiative in a number of posts including this August 10, 2011 posting,

… I’m happy to see something out there at all, given the paltry public response from most of the government. Comments are open until September 6. Regrettably, the EPA [Environmental Protection Agency] policy falls into a trap that is all too common. The support of scientific integrity is all too often narrowly assumed to simply mean that agency (or agency-funded) scientists need to behave, and there will be consequences for demonstrated bad behavior.

But there is a serious problem of interference from non-scientific agency staff that would go beyond reasonable needs for crafting the public message.

David goes on to discuss a lack of clarity in this policy and in the Dept. of the Interior’s policy.

His August 11, 2011 posting notes the OSTP claims that 19 departments/agencies have submitted draft or final policies,

… Not only does the OSTP blog post not include draft or finalized policies submitted to their office, it fails to mention any timeframe for making them publicly available.  Even more concerning, there is no mention of those policies that have been publicly released.  That is, regrettably, consistent with past practice. While the progress report notes that OSTP will create a policy for its own activities, and that OSTP is working with the Office of Management and Budget on a policy for all of the Executive Office of the President, there’s no discussion of a government-wide policy.

In the last one of his recent series, the August 12, 2011 posting focuses on a Dept. of Commerce memo (Note: The US Dept. of Commerce includes the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology),

“This memorandum confirms that DAO 219-1 [a Commerce Department order concerning scientific communications] allows scientists to engage in oral fundamental research communications (based on their official work) with the media and the public without notification or prior approval to their supervisor or to the Office of Public Affairs. [emphasis David Bruggeman] Electronic communications with the media related to fundamental research that are the equivalent of a dialogue are considered to be oral communications; thus, prior approval is not required for a scientist to engage in online discussions or email with the media about fundamental research, subject to restrictions on protected nonpublic information as set forth in 219-1.”

I find the exercise rather interesting, especially in light of Margaret Munro’s July 27, 2011 article, Feds silence scientist over salmon study, for Postmedia,

Top bureaucrats in Ottawa have muzzled a leading fisheries scientist whose discovery could help explain why salmon stocks have been crashing off Canada’s West Coast, according to documents obtained by Postmedia News.

The documents show the Privy Council Office, which supports the Prime Minister’s Office, stopped Kristi Miller from talking about one of the most significant discoveries to come out of a federal fisheries lab in years.

Science, one of the world’s top research journals, published Miller’s findings in January. The journal considered the work so significant it notified “over 7,400” journalists worldwide about Miller’s “Suffering Salmon” study.

The documents show major media outlets were soon lining up to speak with Miller, but the Privy Council Office said no to the interviews.

In a Twitter conversation with me, David Bruggeman did note that the Science paywall also acts as a kind of muzzle.

I was originally going to end the posting with that last paragraph but I made a discovery, quite by accident. Canada’s Tri-Agency Funding Councils opened a consultation with stakeholders on Ethics and Integrity for Institutions, Applicants, and Award Holders on August 15, 2011, which will run until September 30, 2011. (This differs somewhat from the US exercise, which is solely focussed on science as practiced in various government agencies. The equivalent in Canada would be if Stephen Harper requested scientific integrity guidelines from the Ministries of Environment, Natural Resources, Health, Industry, etc.) From the NSERC Ethics and Integrity Guidelines page,

Upcoming Consultation on the Draft Tri-Agency Framework: Responsible Conduct of Research

The Canadian Institutes of Health Research (CIHR), the Social Sciences and Humanities Research Council of Canada (SSHRC), and NSERC (the tri-agencies) continue to work on improving their policy framework for research and scholarly integrity, and financial accountability. From August 15 to September 30, 2011, the three agencies are consulting with a wide range of stakeholders in the research community on the draft consultation document, Tri-Agency Framework: Responsible Conduct of Research.

I found the answers to these two questions in the FAQs particularly interesting,

  • What are some of the new elements in this draft Framework?

The draft Framework introduces new elements, including the following:

A strengthened Tri-Agency Research Integrity Policy
The draft Framework includes a strengthened Tri-Agency Research Integrity Policy that clarifies the responsibilities of the researcher.

‘Umbrella’ approach to RCR
The draft Framework provides an overview of all applicable research policies, including those related to the ethical conduct of research involving humans and financial management, as well as research integrity. It also clarifies the roles and responsibilities of researchers, institutions and Agencies in responding to all types of alleged breaches of Agency policies, for example, misuse of funds, unethical conduct of research involving human participants or plagiarism.

A definition of a policy breach
The draft Framework clarifies what constitutes a breach of an Agency policy.

Disclosure
The draft Framework requires researchers to disclose, at the time of application, whether they have ever been found to have breached any Canadian or other research policies, regardless of the source of funds that supported the research and whether or not the findings originated in Canada or abroad.

The Agencies are currently seeking advice from privacy experts on the scope of the information to be requested.

Institutional Investigations
The Agencies currently specify that institutional investigation committee membership must exclude those in conflict of interest. The draft Framework stipulates also that an investigation committee must include at least one member external to the Institution, and that an Agency may conduct its own review or compliance audit, or require the Institution to conduct an independent review/audit.

Timeliness of investigation
Currently, it is up to institutions to set timelines for investigations. The draft Framework states that inquiry and investigation reports are to be submitted to the relevant Agency within two and seven months, respectively, following receipt of the allegation by the institution.

  • Who is being consulted?

The Agencies have targeted their consultation to individual researchers, post-secondary institutions and other eligible organizations that apply for and receive Agency funding.

As far as I can tell, there is no mention of ethical issues where the government has interfered in the dissemination of scientific information; it seems there is an assumption that almost all ethical misbehaviour is on the part of the individual researcher or is a problem with an institution following policy. There is one section devoted to breaches by institutions (all two paragraphs of it),

5 Breaches of Agency Policies by Institutions

In accordance with the MOU signed by the Agencies and each Institution, the Agencies require that each Institution complies with Agency policies as a condition of eligibility to apply for and administer Agency funds.

The process followed by the Agencies to address an allegation of a breach of an Agency policy by an Institution, and the recourse that the Agencies may exercise, commensurate with the severity of a confirmed breach, are outlined in the MOU.

My criticism of this is similar to the one that David Bruggeman made of the US policies in that the focus is primarily on the individual.