DHS agencies use Artificial Intelligence (AI) to address some of the most important risks our nation faces. DHS harnesses the potential of this technology to continually improve our ability to secure the homeland.
U.S. Customs and Border Protection (CBP) uses AI to help screen cargo at ports of entry, validate identities in the CBP One app, and enhance awareness of threats at the border. AI models are used to automatically identify objects in streaming video and imagery. Real-time alerts are sent to operators when an anomaly is detected, enhancing CBP’s ability to stop drugs and other illegal goods from entering the country. The Science and Technology Directorate supports academic research and development of technologies for enhanced border security through the Center for Accelerating Operational Efficiency.
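The detect-and-alert loop described above can be sketched in a few lines. This is a minimal illustration, not CBP's actual system: the labels, threshold, and `Detection` type are all assumptions standing in for the real model's output.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "vehicle", "person", "unknown_object"
    confidence: float  # model confidence in [0, 1]

ALERT_LABELS = {"unknown_object"}  # labels treated as anomalies (assumption)
ALERT_THRESHOLD = 0.8              # minimum confidence to notify an operator

def triage_frame(detections):
    """Return the detections in one video frame that warrant an operator alert."""
    return [d for d in detections
            if d.label in ALERT_LABELS and d.confidence >= ALERT_THRESHOLD]

# Example: the model's output for a single frame of streaming video
frame = [Detection("vehicle", 0.95), Detection("unknown_object", 0.91)]
alerts = triage_frame(frame)
```

In a deployed pipeline, `triage_frame` would run on every frame and the alert list would be pushed to operators in real time; only the filtering step is shown here.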
The Transportation Security Administration (TSA) uses AI to power its contactless airport security lines. AI models allow passengers who elect to be verified by facial recognition technology to be easily identified. TSA’s baggage-screening technology uses machine learning object detection and image classification to detect prohibited items in carry-on luggage. The Transportation Security Laboratory (TSL) at the Science and Technology Directorate supports TSA in this endeavor. The TSL provides independent and cooperative developmental testing and evaluation for screening technologies.
U.S. Immigration and Customs Enforcement (ICE) uses AI for document analysis, language translation, phone number normalization, and facial recognition in certain investigations. Facial recognition helps ICE's Homeland Security Investigations identify and rescue victims of child sexual exploitation (CSE). The use of facial recognition at ICE has led to arrests of suspected CSE perpetrators and the rescue of victims in previously cold cases.
U.S. Citizenship and Immigration Services (USCIS) uses AI to deliver immigration services to its customers more efficiently. Machine learning models eliminate redundant paperwork by pulling together customer information from disparate systems. USCIS can now quickly access information already provided by customers for a more comprehensive customer service interaction.
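The consolidation idea — reusing information a customer has already provided across systems — can be sketched as a simple record merge. The field names and merge rule below are illustrative assumptions, not USCIS's actual data model.

```python
def consolidate(records):
    """Merge per-system records for one customer into a single view.

    Later records fill gaps but never overwrite fields already present,
    so previously provided information is reused rather than re-requested.
    """
    merged = {}
    for record in records:
        for field, value in record.items():
            if value is not None and field not in merged:
                merged[field] = value
    return merged

# Example: the same customer as seen by two separate systems
systems = [
    {"name": "A. Applicant", "address": None},
    {"name": "A. Applicant", "address": "123 Main St", "phone": "555-0100"},
]
profile = consolidate(systems)
```

A real system would also need entity resolution to decide that two records refer to the same person; this sketch assumes that matching has already happened.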
The Federal Emergency Management Agency (FEMA) uses AI to more efficiently assess the severity and extent of damage to homes, buildings, and other property after a disaster. AI-powered computer vision identifies damaged structures from aerial imagery, and human analysts review the outputs of the AI models to verify the level of damage. FEMA's analysts can accurately process millions of images in a matter of days, meaning thousands of assessments can be completed within a week after a disaster.
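The human-in-the-loop pattern above — model output feeding analyst verification — can be sketched as a routing step. The threshold, damage labels, and the idea of auto-accepting high-confidence predictions are assumptions for illustration; FEMA's actual workflow and thresholds are not public.

```python
def queue_for_review(predictions, auto_accept=0.95):
    """Split model outputs into provisionally accepted assessments and
    those routed to a human analyst for verification.

    Each prediction is (image_id, damage_level, confidence).
    """
    accepted, review = [], []
    for image_id, damage_level, confidence in predictions:
        target = accepted if confidence >= auto_accept else review
        target.append((image_id, damage_level))
    return accepted, review

# Example: model outputs for three aerial images
preds = [("img-001", "destroyed", 0.98),
         ("img-002", "minor", 0.71),
         ("img-003", "major", 0.96)]
accepted, review = queue_for_review(preds)
```

Routing low-confidence predictions to analysts first is one way such a pipeline can process millions of images quickly while keeping humans responsible for the final damage determination.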
The Cybersecurity and Infrastructure Security Agency (CISA) is using AI to improve its ability to identify and report cyber vulnerabilities in our nation’s critical infrastructure like power plants, pipelines, and public transportation. CISA’s Cybersecurity Division uses machine learning and natural language processing models to collect and sort vulnerability data before it is presented to human analysts. CISA’s experts can then more efficiently assess cyber risks that are shared in publications like the Known Exploited Vulnerabilities Catalog and the National Vulnerability Database. The Science and Technology Directorate (S&T) supports CISA in this effort through S&T’s Cyber Analytics and Platform Capabilities project.
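The collect-and-sort step described above can be sketched with a toy scorer. The keyword weights below are a crude stand-in for CISA's actual NLP models, whose features are not public; only the triage shape — score raw reports, then present them to analysts in priority order — is the point.

```python
# Assumed severity keywords; a real NLP model would learn these signals.
SEVERITY_TERMS = {
    "remote code execution": 3,
    "privilege escalation": 2,
    "denial of service": 1,
}

def score(description):
    """Crude keyword-based severity score standing in for an NLP model."""
    text = description.lower()
    return sum(weight for term, weight in SEVERITY_TERMS.items() if term in text)

def triage(reports):
    """Sort raw vulnerability reports so the most severe reach analysts first."""
    return sorted(reports, key=lambda r: score(r["description"]), reverse=True)

# Example: two incoming vulnerability reports
reports = [
    {"id": "CVE-A", "description": "Crafted packet causes denial of service."},
    {"id": "CVE-B", "description": "Flaw allows remote code execution."},
]
ordered = triage(reports)
```

Analysts then assess the ranked results, which feed publications like the Known Exploited Vulnerabilities Catalog.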
Learn more about CISA's roadmap for artificial intelligence.
The Science and Technology Directorate (S&T) uses AI to help the people on the front lines of our homeland security mission, like first responders, to reduce risk and make data-driven decisions. S&T AI research and development efforts support multiple DHS missions:
- screening and detection at ports of entry and facilities that protect critical infrastructure;
- data analysis, imaging, and visualization to identify patterns that may indicate organized criminal activity; and
- predictive analytics and computer vision to detect illegal goods, including fentanyl and weapons.
Learn more about S&T's AI research, development, and innovation.
DHS combines leading cybersecurity methods and proven AI-powered applications to protect networks and critical infrastructure from AI-enhanced attacks.
"The proliferation of accessible artificial intelligence (AI) tools likely will bolster our adversaries’ tactics. Nation-states seeking to undermine trust in our government institutions, social cohesion, and democratic processes are using AI to create more believable mis-, dis-, and mal-information campaigns, while cyber actors use AI to develop new tools and accesses that allow them to compromise more victims and enable larger-scale, faster, efficient, and more evasive cyber attacks."
-Homeland Threat Assessment 2024
Read the full 2024 Homeland Threat Assessment.
Researching Into the Future
DHS is conducting ongoing research to identify new ways AI can advance its homeland security mission and assess how adversaries may use AI against us.
Partnering Across Sectors
DHS is fostering a leading AI community by developing strong relationships with AI experts and organizations across public and private industry. These relationships help the Department proactively address the evolving threats and vulnerabilities presented by the malicious use of AI.
Anticipating Emerging Risks
To keep track of new risks from AI and related technologies, DHS is creating an AI risk register. In addition to creating a clear and organized view of known AI risks to add to the National Cyber Threat Landscape, the register will help DHS prioritize and mitigate AI risks and identify strategies for combating AI threat actors.
Deploying Defensive AI
To defend against the malicious use of AI, DHS is deploying defensive AI:
- Malware Reverse Engineering uses machine learning techniques to disrupt adversaries' malware development lifecycle.
- Cyber Vulnerability Reporting uses automation, machine learning, and natural language processing to dramatically increase the accuracy and relevancy of vulnerability data. With the enhanced data Cyber Vulnerability Reporting provides, human analysts can make informed decisions more efficiently to keep our networks and critical infrastructure safe.
GenAI Tools for the DHS Workforce
DHS recognizes the potential of Generative AI (GenAI) applications to increase efficiency in daily work through the creation of images and text in various forms.
The Department recognizes GenAI tools must have guardrails to ensure they are used responsibly. For this reason, DHS has issued its policy on Use of Commercial Generative AI Tools. The policy requires DHS personnel to complete training on GenAI that shows them how to protect privacy and identify potential civil rights, civil liberties, ethical, and intellectual property issues. The policy also requires DHS personnel to follow rules of behavior when using Commercial GenAI tools.
DHS’s GenAI policy is only the first step in the Department’s effort to harness the potential of Generative AI. The DHS Artificial Intelligence Task Force is exploring how the Department could build its own GenAI tools. These tools would be tailored to DHS’s specific operational needs, meet the highest security and privacy standards, and help our dedicated workforce stay mission focused.
Read more about DHS’s GenAI policy and its other GenAI resources, including training and rules of behavior for the DHS workforce, and a privacy impact assessment.