DHS plays a critical role in ensuring that Artificial Intelligence (AI) use is safe and secure nationwide. The Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence outlines a coordinated, government-wide approach to AI. DHS leads the responsible innovation and public protection missions at the intersection of AI and homeland security.
The AI Safety and Security Advisory Board (AISSB) includes AI experts from the private sector and government who advise the Secretary and the critical infrastructure community. The AISSB provides information and recommendations for improving security, resilience, and incident response related to the use of AI in critical infrastructure. At launch, the AISSB included more than 20 technology and critical infrastructure executives, civil rights leaders, academics, and policymakers.
AI presents opportunities to improve the operations of critical infrastructure, but it also introduces new risks. The AISSB harnesses AI and infrastructure expertise to assess emerging risks to critical infrastructure from AI and to provide advice and recommendations to mitigate those risks. The establishment of the AISSB was published in the Federal Register on April 29, 2024.
On November 14, 2024, DHS released the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). This first-of-its-kind collaboration with industry and civil society recommends new guidance to advance the responsible use of AI in America’s critical infrastructure.
The recommendations were developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators, as well as the civil society and public sector entities that protect and advocate for consumers.
The Framework is the culmination of considerable dialogue and debate among the members of the AISSB, a public-private advisory committee established by DHS Secretary Alejandro N. Mayorkas. The Secretary identified the need for clear guidance on how each layer of the AI supply chain can do its part to ensure that AI is deployed safely and securely in U.S. critical infrastructure. The report complements other work carried out by the Administration on AI safety, such as the AI Safety Institute’s guidance on managing a wide range of misuse and accident risks.
Read the press release on the Framework.
To protect U.S. networks and critical infrastructure, DHS is adapting and incorporating the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s AI Risk Management Framework, and other appropriate security guidance into safety and security guidance for use by critical infrastructure owners and operators. DHS published initial guidelines to address cross-sector AI risks impacting the safety and security of U.S. critical infrastructure systems on April 29, 2024. The guidelines are organized around three overarching categories of system-level risk:
- Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
- Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
- Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.
To read the DHS safety and security guidelines for critical infrastructure owners and operators, please visit: Safety and Security Guidelines for Critical Infrastructure Owners and Operators.
DHS will partner with the Department of Defense to plan a pilot program that will develop an AI capability to fix vulnerabilities in critical U.S. government networks. The pilot program will also develop advanced monitoring of Infrastructure as a Service providers that use AI in critical infrastructure.
DHS will use the Cybersecurity and Infrastructure Security Agency's (CISA) cybersecurity best practices and vulnerability management process to increase the cybersecurity of AI systems. DHS is also helping the Department of State develop agreements related to cross-border threats to U.S. critical infrastructure.
CISA's comprehensive approach to AI includes:
- responsibly using AI to support the mission;
- assuring AI systems;
- protecting critical infrastructure from malicious use of AI;
- collaborating and communicating with the public, interagency partners, and international partners; and
- expanding AI expertise in the DHS workforce.
DHS is working with the United Kingdom (UK) on secure AI. CISA is coordinating with the UK’s National Cyber Security Centre (NCSC) to develop guidance for secure AI. This effort is part of CISA’s Secure by Design initiative, which strives to build security into the design and manufacture of technology.
AI systems can help DHS defend against cyber threats, but AI systems themselves also require protection from cyber threats. DHS emerging technology experts research, test, and deploy technologies to protect against a wide range of AI-based threats, including biological and chemical threats to AI systems and threats enabled by them.
The DHS Countering Weapons of Mass Destruction Office (CWMD) will apply its work countering chemical, biological, nuclear, radiological, and explosives threats to the creation of new programs and a counter-AI working group. CWMD released its groundbreaking report to the President on AI CBRN risks on April 29, 2024. The DHS Office of Strategy, Policy, and Plans is working with the DHS Science and Technology Directorate and Federally Funded Research and Development Centers to create focused assessments of national security risks and mitigation plans for the adversarial use of AI.
DHS will use its experience in technology evaluation and CISA’s cybersecurity expertise to run real-world tests and monitor high-risk AI systems used in critical infrastructure. The Department will continuously test and evaluate the AI systems in use to ensure they are safe, secure, and effective.
DHS is creating a program to assist AI developers in mitigating AI-related Intellectual Property (IP) risks. The Department will develop guidance and other resources to help private-sector actors mitigate the risks of AI-related IP theft. DHS will also help update the IP Enforcement Coordinator’s Joint Strategic Plan on IP Enforcement to address AI-related issues.
Cultivating talent in AI and other emerging technologies is critical to U.S. global competitiveness. To ensure that the United States can attract and retain this top talent, DHS will streamline the processing of petitions and applications for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies.
DHS is also working to clarify and modernize immigration pathways for such experts, including those for O-1A and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders and noncitizens of exceptional ability; and startup founders using the International Entrepreneur Rule.