Announcements Follow Six Months of Progress to Leverage AI Responsibly Across the Homeland Security Enterprise and Recent Establishment of AI Safety and Security Board
The Department, in Coordination with CISA and CWMD, Releases New Guidelines to Protect Against AI Risks to Critical Infrastructure; Submits Report on Chemical, Biological, Radiological, and Nuclear Threats
WASHINGTON – Today, the Department of Homeland Security (DHS) marked the 180-day milestone of President Biden’s Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI),” by unveiling new resources to address threats posed by AI: (1) guidelines to mitigate AI risks to critical infrastructure and (2) a report on AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats.
These resources build upon the Department’s broader efforts to protect the nation’s critical infrastructure and help stakeholders leverage AI, including the recent establishment of the Artificial Intelligence Safety and Security Board. This new Board, announced last week, assembles technology and critical infrastructure executives, civil rights leaders, academics, state and local government leaders, and policymakers to advance the responsible development and deployment of AI.
“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” said Secretary of Homeland Security Alejandro N. Mayorkas. “When President Biden tasked DHS as a leader in the safe, secure, and reliable development of AI, our Department accelerated our previous efforts to lead on AI. In the 180 days since the Biden-Harris Administration’s landmark EO on AI, DHS has established a new AI Corps, developed AI pilot programs across the Department, unveiled an AI roadmap detailing DHS’s current use of AI and its plans for the future, and much more. DHS is more committed than ever to advancing the responsible use of AI for homeland security missions and promoting nationwide AI safety and security, building on the unprecedented progress made by this Administration. We will continue embracing AI’s potential while guarding against its harms.”
Guidelines to Mitigate AI Risks to Critical Infrastructure
DHS, in coordination with its Cybersecurity and Infrastructure Security Agency (CISA), released new safety and security guidelines to address cross-sector AI risks impacting the safety and security of U.S. critical infrastructure systems. The guidelines organize their analysis around three overarching categories of system-level risk:
- Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
- Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
- Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.
“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”
To address these risks, DHS outlines a four-part mitigation strategy, building upon the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), that critical infrastructure owners and operators can consider when addressing the AI risks unique to their own contexts:
- Govern: Establish an organizational culture of AI risk management - Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.
- Map: Understand your individual AI use context and risk profile - Establish and understand the foundational context from which AI risks can be evaluated and mitigated.
- Measure: Develop systems to assess, analyze, and track AI risks - Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.
- Manage: Prioritize and act upon AI risks to safety and security - Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.
Countering Chemical, Biological, Radiological, and Nuclear Threats
The Department worked with its Countering Weapons of Mass Destruction Office (CWMD) to analyze the risk of AI being misused to assist in the development or production of CBRN threats, and to recommend steps to mitigate those potential threats to the homeland. This report, developed through extensive collaboration across the United States Government, academia, and industry, furthers long-term objectives around how to ensure the safe, secure, and trustworthy development and use of artificial intelligence, and guides potential interagency follow-on policy and implementation efforts.
“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for CWMD Mary Ellen Callahan. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI. I am incredibly proud of our team at CWMD for this vital work which builds upon the Biden-Harris Administration’s forward-leaning Executive Order.”
A Department-Wide Effort to Address AI Risks and Opportunities
In the 180 days since President Biden issued his landmark EO on AI, Secretary Mayorkas has led a sustained effort to expand DHS’s leadership on AI and made progress on a number of initiatives geared towards protecting critical infrastructure and ensuring the safe implementation of AI technology. Most recently, the Secretary established the Artificial Intelligence Safety and Security Board (AISSB) to advise DHS, the critical infrastructure community, private sector stakeholders, and the broader public on the safe and secure development and deployment of AI in our nation’s critical infrastructure. The Board’s diverse range of leaders will provide recommendations to help critical infrastructure stakeholders more responsibly leverage AI and protect against its dangers.
In March, DHS unveiled a detailed AI roadmap for using AI technologies to deliver meaningful benefits to the American public and advance homeland security while protecting individuals’ privacy, civil rights, and civil liberties. Within the roadmap, the Department announced three innovative pilot projects that deploy AI in specific mission areas, including pilots housed in Homeland Security Investigations (HSI), the Federal Emergency Management Agency (FEMA), and United States Citizenship and Immigration Services (USCIS). CISA completed an operational pilot of AI cybersecurity systems to aid in the detection and remediation of vulnerabilities in critical United States Government software, systems, and networks, pursuant to the EO.
In February, DHS launched the DHS AI Corps, an accelerated hiring initiative to responsibly leverage AI across strategic areas of the homeland security enterprise. The initiative immediately saw a strong response, receiving thousands of applications from AI technology experts looking to further the Department’s AI work.
To read the DHS safety and security guidelines for critical infrastructure owners and operators, please visit: Safety and Security Guidelines for Critical Infrastructure Owners and Operators.
To read the DHS report on Chemical, Biological, Radiological, and Nuclear (CBRN) threats, please visit: FACT SHEET: DHS Advances Efforts to Reduce the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear (CBRN) Threats.
To learn more about how DHS uses AI technologies to protect the homeland, visit Artificial Intelligence at DHS.