DHS plays a critical role in ensuring that Artificial Intelligence (AI) is used safely and securely nationwide. DHS leads responsible innovation and public protection missions at the intersection of AI and homeland security.
On November 14, 2024, DHS released the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). This first-of-its-kind collaboration with industry and civil society recommends new guidance to advance the responsible use of AI in America’s critical infrastructure.
The recommendations were developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators, as well as the civil society and public sector entities that protect and advocate for consumers.
This product is the culmination of considerable dialogue and debate among the Artificial Intelligence Safety and Security Board (the Board), a public-private advisory committee that identified the need for clear guidance on how each layer of the AI supply chain can do its part to ensure that AI is deployed safely and securely in U.S. critical infrastructure. The report complements other DHS work on AI safety, which addresses a wide range of misuse and accident risks.
Read the press release on the Framework.
To protect U.S. networks and critical infrastructure, DHS published initial guidelines on April 29, 2024, addressing cross-sector AI risks to the safety and security of U.S. critical infrastructure systems. The guidelines are organized around three overarching categories of system-level risk:
- Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
- Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
- Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system that lead to malfunctions or other unintended consequences affecting critical infrastructure operations.
To read the DHS safety and security guidelines for critical infrastructure owners and operators, please visit: Safety and Security Guidelines for Critical Infrastructure Owners and Operators.
DHS is partnering with the Department of Defense to plan a pilot program that will develop an AI capability that can fix vulnerabilities in critical U.S. government networks. The pilot program will also develop advanced monitoring of Infrastructure as a Service providers that use AI in critical infrastructure.
DHS is using the Cybersecurity and Infrastructure Security Agency's (CISA) cybersecurity best practices and vulnerability management process to increase the cybersecurity of AI systems. DHS is also helping the Department of State develop agreements related to cross-border threats to U.S. critical infrastructure.
CISA's comprehensive approach to AI includes:
- responsibly using AI to support the mission;
- assuring AI systems;
- protecting critical infrastructure from malicious use of AI;
- collaborating and communicating with the public and interagency and international partners; and
- expanding AI expertise in the DHS workforce.
DHS is working with the United Kingdom (UK) on secure AI. CISA is coordinating with the UK’s National Cyber Security Centre (NCSC) to develop guidance for secure AI. This effort is part of CISA’s Secure by Design initiative, which strives to build security into the design and manufacture of technology.
AI systems can help DHS defend against cyber threats, but AI systems also require protection from cyber threats. DHS emerging technology experts research, test, and deploy technologies to protect against a wide range of AI-based threats, including biological and chemical threats to and from AI systems.
The DHS Countering Weapons of Mass Destruction Office’s (CWMD) work to counter chemical, biological, radiological, nuclear, and explosives threats will inform the creation of new programs and a counter-AI working group. CWMD released its groundbreaking report to the President on AI CBRN risks on April 29, 2024. The DHS Office of Strategy, Policy, and Plans is working with the DHS Science and Technology Directorate and Federally Funded Research and Development Centers to create focused assessments of national security risks and mitigation plans for the adversarial use of AI.
DHS will use its experience in technology evaluation and CISA’s cybersecurity expertise to run real-world tests and monitor high-risk AI systems used in critical infrastructure. The Department will continuously test and evaluate the AI systems in use to ensure they are safe, secure, and effective.
DHS is creating a program to assist AI developers in mitigating AI-related Intellectual Property (IP) risks. The Department will develop guidance and other resources to help private-sector actors mitigate the risks of AI-related IP theft. DHS will also help update the Intellectual Property Enforcement Coordinator’s Joint Strategic Plan on IP Enforcement to address AI-related issues.