
FACT SHEET: DHS Completes First Phase of AI Technology Pilots, Hires New AI Corps Members, Furthers Efforts for Safe and Secure AI Use and Development

Release Date: October 30, 2024

The Department Continues to Lead in the Integration of AI for its Missions While Combatting its Adversarial Use One Year After President Biden’s Landmark Executive Order

WASHINGTON – In the year since President Biden issued his landmark Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Department of Homeland Security (DHS) has taken bold action to responsibly leverage Artificial Intelligence (AI) to advance the homeland security mission. As directed by the President, DHS has deepened its commitment to protecting individuals’ privacy, civil rights, and civil liberties; promoting national AI safety and security; and strengthening AI leadership through innovation and strong partnerships. As AI technology rapidly reshapes our world, DHS will continue to advance international cooperation to enhance global awareness of and response to AI-enabled threats, as well as our ability to harness the technology’s potential.

To learn more about DHS’s work in AI, visit the Artificial Intelligence at DHS webpage.

Successfully Tested the Effectiveness of 3 AI Pilot Programs, While Protecting Civil Rights, Privacy, and Civil Liberties

In March 2024, DHS became the first federal agency to roll out a comprehensive “AI Roadmap” to integrate the technology into a variety of uses. The AI Roadmap announced three Generative AI (GenAI) pilots to test the effectiveness of GenAI solutions and their potential to enhance mission-specific capabilities in a safe, responsible, and effective way. These pilot programs were housed in U.S. Citizenship and Immigration Services (USCIS), Homeland Security Investigations (HSI), and the Federal Emergency Management Agency (FEMA).

By October 2024, DHS had successfully tested these pilot programs while protecting civil rights, privacy, and civil liberties. The Department gained valuable insights into the real-life impact of GenAI tools as well as their limitations. Lessons from these pilots will help guide the development and deployment of other AI tools throughout the Department.

  • USCIS: Improving Training Capacity and Experience - The USCIS pilot introduced a training application that allowed immigration officers to interact with the GenAI tool to practice conducting an interview with a refugee or asylum seeker. The pilot successfully supplemented officers’ training by giving them opportunities to practice eliciting testimony. Officers gave the program positive reviews for its ease of use and for the ability to access it on their own schedules. Based on the success of the pilot, USCIS and DHS are looking at how GenAI can be used in other training scenarios as a supplemental tool to better prepare the next generation of DHS officers. The tool is used only in officer training and is not used for immigration eligibility determinations.
     
  • HSI: Strengthening Investigative Processes - HSI’s pilot used large language models (LLMs) to produce summaries of HSI-approved law enforcement reports, bolstering investigative processes and improving the efficiency and precision of investigative summaries. It also enabled semantic search, a search technology that interprets the meaning of words and phrases, allowing law enforcement officers to search through millions of reports easily. The pilot showed that these were valuable tools for enhancing investigative processes. The HSI pilot, which was developed using an open-source AI model, found that open-source models provided the flexibility necessary to experiment and measure effectiveness. HSI professionals continue to test and optimize the use of open-source models in supporting law enforcement investigations.
     
  • FEMA: Increasing Community Resilience – Communities can help build their resilience to emergencies by developing hazard mitigation plans, but these plans can be challenging and time consuming to produce, particularly for communities that lack sufficient resources. FEMA’s pilot used an LLM to help state and local governments generate draft plans customized to meet their needs and understand risks and mitigation strategies. FEMA learned that increasing user understanding of AI and receiving feedback directly from community users is an important first step in integrating GenAI into any existing process. FEMA is using lessons learned from the pilot to help determine how the technology can best support its mission.

Hired 31 New Experts to the “AI Corps” Who Are Helping Responsibly Leverage AI Across DHS Mission Areas

As part of the Department’s “AI Corps” hiring sprint, DHS has onboarded 31 technology experts since February 15, 2024. This effort remains one of the most significant AI-talent recruitment initiatives of any federal civilian agency. To date, these experts have provided critical technical support and conducted extensive evaluations across multiple priority projects, significantly advancing the understanding and application of AI technologies within DHS.

  • The AI Corps partnered with the DHS Supply Chain Resilience Center (SCRC) to investigate how AI could be used to forecast the impacts of critical supply chain disruptions on public safety and security. This sprint included requirements development, use case mapping, market research, and system demonstrations. The AI Corps guided the SCRC in evaluating the technical landscape and provided recommendations to support mission needs.
     
  • Members of the AI Corps supported the HSI GenAI pilot in creating a first-of-its-kind LLM-powered tool to search and produce summaries of HSI-approved law enforcement reports and information obtained through the standard legal process. The team provided technical expertise to incorporate the latest approaches in advanced AI search and generative summaries. This preliminary work was crucial in demonstrating the potential of the technology.

Collaborated with the AI Board to Provide Guidance on Safe and Secure Development and Deployment of AI Technology in U.S. Critical Infrastructure

At the request of the President, Secretary of Homeland Security Alejandro N. Mayorkas established the Artificial Intelligence Safety and Security Board (the Board) to advise the Secretary, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. Officially launched in April, the Board announced a membership of 23 representatives from a range of sectors, including software and hardware companies, critical infrastructure operators, public officials, the civil rights community, and academia.

Secretary Mayorkas has convened the Board three times since May 2024. The Department, in close consultation with the Board, has been developing guidance to improve AI safety and security across the AI ecosystem. The deployment of safe, secure, and trustworthy AI generates consumer trust and fuels adoption and innovation. AI can substantially improve the services the nation’s critical infrastructure provides if we secure systems against safety and security threats.

Defended Against AI-Enabled Cyber Threats to U.S. Critical Infrastructure

To protect U.S. networks and critical infrastructure, DHS is adapting and incorporating the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and other appropriate guidance into actionable guidelines for use by critical infrastructure owners and operators. DHS and the Cybersecurity and Infrastructure Security Agency (CISA) published “Safety and Security Guidelines for Critical Infrastructure Owners and Operators” in April 2024 to address cross-sector AI risks that impact the safety and security of critical infrastructure systems and their functions. DHS and CISA developed these guidelines in coordination with the Department of Commerce, the Sector Risk Management Agencies (SRMAs), and other critical infrastructure sector regulators, and have continued to develop and publish additional best practices at the intersection of AI and cyber defense. To aid in the detection and remediation of vulnerabilities in critical U.S. Government software, systems, and networks, CISA also completed a pilot for AI-enabled vulnerability detection and provided a report on the pilot to the White House in July 2024.

Provided Technical Expertise to Counter Threats from Adversarial AI

Under the EO, the DHS Countering Weapons of Mass Destruction Office (CWMD), in partnership with the DHS Science and Technology Directorate (S&T), is working to counter chemical, biological, radiological, and nuclear (CBRN) threats enabled by AI systems. DHS delivered a report to the President that examines and provides recommendations on how to better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats. This report, released to the public in June, identifies trends in AI and types of AI models, including foundation models and Biological Design Tools, that might present or intensify biological and chemical threats to the United States. It offers recommendations to mitigate potential threats to national security in the training, deployment, publication, and use of AI models and associated data, and underscores the vital role of safety evaluations and whole-of-community guardrails.

CWMD also developed a strategy to evaluate and improve the effectiveness of synthetic nucleic acid synthesis screening, helping to prevent the misuse of AI for engineering dangerous biological materials. Working closely with the White House Office of Science and Technology Policy, DHS will help advance safety in this important industry. CWMD and S&T will also support the Department of Commerce’s AI Safety Institute in evaluating CBRN risks from AI systems, ensuring DHS’s unique expertise in these areas is part of the AI Safety Institute’s effort to promote AI safety.

--

In accordance with DHS’s Compliance Plan for OMB Memorandum M-24-10, the Department will continue its work to advance AI governance and innovation while managing risks from the use of AI in the Federal Government, particularly those affecting the rights and safety of the public.
