An Analysis of AI’s Impact on Terrorism

Executive Summary:  

• ISIS released an Artificial Intelligence (AI)-generated misinformation video following a bombing attack on a Russian concert venue, prompting fears about the depth of AI use in terrorist activities. 

• Generative AI has cut production time, automated recruitment and information dissemination, and decreased the need for highly skilled members in terrorist cells. It also holds potential for future incorporation into attack planning and execution through tools such as facial recognition, Unmanned Aerial Vehicle (UAV) systems, Denial-of-Service (DoS) attacks, and malware. 

• Law enforcement agencies should use AI tools to support preventive counterterrorism measures and implement extensive education to counteract current AI tools and their potential incorporation into terrorist arsenals in the future. 

Introduction:  

In March 2024, following an Islamic State of Iraq and Syria (ISIS) attack on a Russian concert venue, videos began circulating as part of the New Harvest media program portraying a reporter who insisted that this was not a terrorist attack, but rather part of the ongoing war between countries fighting Islam and Islamic states. The catch? The reporter was not real: ISIS had used Artificial Intelligence (AI) to generate the video in support of its cause. AI constitutes the largest recent technological development, and significant opportunities and challenges come with its use. AI allows for remarkable automation of daily tasks for governments, public organizations, and individuals, making work more efficient. In the wrong hands, however, these same tools can be exploited for harmful purposes. While such instruments are not entirely new, AI tools have significantly lowered entry barriers for terrorist organizations, reducing the engineering or computational background needed to understand and use the technology. Virtually all AI systems can be used to support terrorist activities. Closed-source systems, such as ChatGPT or DeepAI, allow for prompt-based image or text generation in a limited capacity, and ChatGPT is continuously updated to enhance filtering and prevent harmful content from being misused by malicious actors. Open-source tools, by contrast, can be downloaded, edited, and deployed to fit the specific needs of terrorist cells, without the internal limitations and suspicious-activity reporting set forth by parent companies.     

Terrorist cells have often been early adopters of emerging technologies in an ongoing race with national security organs, and AI is no exception. AI tools allow terrorist cells to decrease the need for highly skilled members; generate recruitment, phishing, and advertising materials more efficiently; and open previously unused recruitment avenues such as AI chatbots. Additionally, generative AI holds significant potential to enhance future terrorist attacks through tools such as Unmanned Aerial Vehicle (UAV) systems, Denial-of-Service (DoS) attacks, and malware, enabling cells to make decisions autonomously, process data in real time, and initiate responses or actions without the need for human intervention. 

The open-source aspect of traditional AI and generative AI creates a unique space for future threats, requiring decision-makers and security officials to set counteractive measures preemptively. With the existing use of AI in terrorist recruitment and information dissemination as well as the future potential for threats and misuse of AI tools in attack planning and deployment, this paper focuses on the primary risks that leaders and security officials need to consider when establishing policies designed to prevent terrorist attacks. 

Current Use: 

Recruitment 

A major threat is the use of AI and the internet to aid terrorist groups in recruiting individuals, and youth, regardless of nationality, tend to be the main targets of terrorist recruitment. Teenagers have widespread access to the internet and social media sites, underdeveloped digital literacy skills, and a higher susceptibility to online messaging than their adult counterparts.  

ISIS recruits online through social media sites such as X and Instagram, gaining trust and disseminating unprecedented amounts of material through posts, both covert and overt. Social media presence is not limited to ISIS; other terrorist cells, such as Al Qaeda and Boko Haram, also use the internet to disseminate propaganda and recruitment materials. AI makes social media messaging more impactful: it supports the creation of graphics that evoke emotions based on ongoing conflicts and points of contention, makes ads more targeted, saves costs, and increases the production of tailored fake news and recruitment materials.  

Beyond social media messaging, user-created chatbots that mimic fringe beliefs and recruit users are gaining traction and interest. The novelty of holding conversations with AI, paired with its implementation in widely used apps such as Snapchat, creates a false sense of trust and safety for youth. An additional challenge is the dissemination of materials in a broader range of languages supported by large language models (LLMs). Previously, materials targeted English, French, or Arabic speakers; LLMs, however, have opened access to generation in less commonly spoken languages, significantly increasing the recruitment pool. Terrorist organizations are no longer restricted by the skills of their current members but rather by the limits of their access to AI-powered tools and the internet.  

Future Threats: 

In addition to existing AI applications in recruitment, a 2024 United Nations Counter-Terrorism Centre report outlines possible future applications of AI in the terrorist arsenal, although it notes that terrorist cells are not yet using those capabilities.  

Attack Planning 

Generative AI tools, such as ChatGPT or Trip Planner AI, allow individuals to input any location, time, and specific requirements to plan a detailed trip. Such tools could easily be repurposed to plan terrorist attacks. Generative AI can outline necessary routes, costs, and the security measures popular tourist destinations are implementing. Previous terrorist attacks required detailed knowledge of the target location, language, and layout, necessitating lengthy stakeouts or members with prior experience of the location. AI planning tools can circumvent those requirements, making attack arrangements accessible to more, and smaller, terrorist cells with fewer resources and members.  

Facial Recognition 

Terrorist cells have shown interest in obtaining facial recognition capabilities. This AI tool has the potential to help terrorists identify individual targets more efficiently and accurately, increasing the likelihood of a successful attack. Additionally, Unmanned Aerial Vehicle (UAV) systems with facial recognition capabilities can be used as attack weapons. Drones equipped with explosives are significantly stealthier than humans, can access far more spaces, can create more damage, and do not reduce a cell's membership because they do not require a suicide bomber. While such UAVs are currently available only for military use, widespread purchasing availability, or access to the tools and programs needed to self-build autonomous facial recognition drones, is not far off given current technological development trends. 

Unmanned Vehicles 

Beyond UAVs, unmanned vehicles further enhance terrorist attacks. In 2016, NATO assessed that ISIS terrorist cells were developing autonomous cars to deliver bombs without the need for suicide bombers. While initial attempts focused on remotely operated cars, eight years later, AI has opened the door to computer-operated vehicles, cutting out human involvement beyond monitoring.  

Denial-of-Service (DoS) Attacks 

A DoS attack occurs when a hostile cyber threat actor prevents authorized users from accessing devices, information systems, or other network resources, typically by overwhelming a target with repeated and plentiful connection requests until it becomes inaccessible. DoS attacks are relatively cheap and do not require significant computing power compared to more advanced cyberattacks. Rather than exploiting a specific vulnerability, the attack overwhelms a system with requests from a network of infected devices (a botnet), making it a favored mode of cyberattack among terrorist cells. Building a botnet and setting up a DoS attack, however, take time and significant hacking knowledge. With machine learning, terrorists can efficiently train models to replicate previously known DoS attacks and tailor them to the attack context. Additionally, AI tools make scanning systems to identify the most susceptible targets easier and faster, leading to the identification of new targets. The use of machine learning and AI scanning tools decreases the need for skilled attackers and allows terrorist cells to deploy DoS attacks more efficiently and against new targets. As such, large federal institutions, which already have counteractive measures for cyberattacks, will not be the only targets, requiring state and local institutions to implement stricter security protocols as well.  
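From the defensive side, the volumetric pattern described above (a flood of requests from compromised devices) is detectable by monitoring per-source request volume over a sliding time window. The sketch below is a minimal, hypothetical illustration of that idea; the `FloodDetector` class, its thresholds, and the sample IP addresses are invented for this example, not drawn from any real security product.

```python
from collections import Counter, deque


class FloodDetector:
    """Flag sources whose request volume within a sliding time window
    exceeds a threshold -- the signature a volumetric DoS flood leaves
    (many requests concentrated among few sources)."""

    def __init__(self, window_seconds=10.0, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.events = deque()  # (timestamp, source_ip), in arrival order

    def record(self, source_ip, timestamp):
        """Log one incoming connection request."""
        self.events.append((timestamp, source_ip))

    def flagged_sources(self, now):
        """Return the set of sources exceeding the threshold in the window."""
        # Discard events older than the window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        counts = Counter(ip for _, ip in self.events)
        return {ip for ip, n in counts.items() if n > self.max_requests}
```

A real intrusion-detection system would add rate-limiting responses and distinguish distributed floods from legitimate traffic spikes, but the core signal, anomalous request volume per window, is the same one AI-assisted scanners and defenders alike operate on.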

Malware 

Malicious software (malware), including viruses, spyware, and ransomware, is a significant security threat to devices. Currently, terrorists use malware to steal sensitive information for attack planning, extract funds, and access security clearance documentation. Despite this variability, malware coding and dissemination are time-consuming and require significant coding knowledge. AI tools can advance and automate malware distribution, making attacks more efficient, adaptive, and difficult to detect. Applications of AI tools in malware dissemination include identifying and analyzing personal data to craft highly customized phishing emails and fake security pop-ups; AI-driven bots on social media tasked with posting infected photos or news report links through social engineering schemes; and scanning networks for compromised websites and software to exploit their vulnerabilities. A test by a Forbes journalist showed that an AI spear-phishing tool tweeted at a rate of 6.75 tweets per minute, compared to a human rate of 1.075 tweets per minute. The success rate was 0.33 for the former and 0.37 for the latter, showing that the only statistically significant difference is the rate of tweeting and pointing to AI's capacity to significantly increase the volume of attacks.  
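The figures above can be made concrete with a quick calculation: since the per-message success rates are statistically indistinguishable, the expected number of successful lures per minute is driven almost entirely by message volume. Using the reported numbers:

```python
# Expected successful lures per minute = message rate x per-message success rate,
# using the rates reported in the Forbes journalist's test.
ai_rate, ai_success = 6.75, 0.33        # AI tool: tweets/min, success rate
human_rate, human_success = 1.075, 0.37  # Human: tweets/min, success rate

ai_yield = ai_rate * ai_success          # ~2.23 successful lures per minute
human_yield = human_rate * human_success  # ~0.40 successful lures per minute
advantage = ai_yield / human_yield        # AI yields roughly 5-6x more lures
```

Even with a slightly lower per-tweet success rate, the AI tool's throughput translates into roughly five to six times as many successful lures per minute, which is the volume effect the comparison points to.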

Mitigation Strategies: 

Policy and education are vital tools to ensure that law enforcement preemptively addresses future threats. Education on phishing techniques is vital to prepare employees for the highly personalized phishing attacks deployed by terrorists. Additionally, education on the responsible and ethical use of AI tools is necessary to guarantee the technology is used to its highest potential.  

Beyond implementing education and policy, using AI tools to counteract terrorism is necessary. Facial recognition is a powerful tool in the arsenal of counterterrorism entities: law enforcement officials can identify perpetrators from video recordings and live streams based on their biometric features. Facial recognition is already heavily implemented in the daily operations of the Department of Homeland Security, but it focuses mainly on monitoring ports of entry and reducing the danger of AI being misused to assist in the development or use of chemical, biological, radiological, and nuclear threats, rather than on terrorist activity online. An additional challenge lies in implementing facial recognition tools at the state level, where funding and staffing constraints could pose a significant barrier. Another mitigation strategy lies in using machine learning and AI to scan social networks and predict patterns of radicalization. Research using machine learning to analyze radicalization patterns identified 19 critical variables out of the 79 included in the “Profiles of Individual Radicalization in the United States” dataset, predicting the emergence of violence with an accuracy of 86.3 percent. Machine learning can significantly reduce time spent analyzing data and allow law enforcement officials to collate more datasets than human analysts could. Similarly, officials can deploy AI tools to scan governmental systems for vulnerabilities and preemptively strengthen protection against basic attacks. Government-deployed AI bots can further interact with suspected terrorist AI bots to scan for recruitment tactics and radicalizing messaging, ensuring that law enforcement agents can focus less on menial checking tasks and concentrate on counterterrorism activities that require human input.
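The variable-selection step behind findings like the 19-of-79 result can be sketched in miniature: rank each coded variable by how well it alone predicts the violence label, and keep those above a threshold. The example below is purely illustrative; it runs on synthetic binary data invented for this sketch, not on the actual PIRUS dataset, and the function name and threshold are assumptions.

```python
import random


def select_predictive_variables(rows, labels, threshold=0.75):
    """Rank binary variables by how accurately each one, taken alone,
    predicts the binary label; keep those at or above the threshold.
    A toy stand-in for the feature selection that narrows a large set
    of coded variables down to a critical predictive subset."""
    n_vars = len(rows[0])
    selected = []
    for j in range(n_vars):
        hits = sum(1 for row, y in zip(rows, labels) if row[j] == y)
        # Allow an inverted predictor (variable anti-correlated with label).
        acc = max(hits, len(rows) - hits) / len(rows)
        if acc >= threshold:
            selected.append((j, acc))
    return sorted(selected, key=lambda item: -item[1])
```

Real analyses use multivariate models (the cited research reports 86.3 percent accuracy predicting violence), but the core idea, separating a handful of informative variables from dozens of uninformative ones, is the same.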

Finally, as Congress begins considering regulations to mitigate the harms of AI, legislators should take care not to add unnecessary barriers to the development of AI systems in the United States. While AI systems can be used in harmful ways, these tools can also mitigate actions by foreign actors. And if leadership in AI development shifts to China, Russia, or another adversary, new AI tools will be developed without considerations for health, safety, and ethical AI use.  

Conclusion:  

AI constitutes a new threat to national security due to its increasing use by terrorist cells. Easily accessible and widely used AI tools have made recruitment and attack planning more efficient, cost-effective, and accessible. An additional pressing issue facing security officials is preparing for the future use of AI in DoS attacks, malware deployment, and terrorist attacks targeting both individuals and nations. Preemptive counterterrorism measures and widespread education on the harms of fringe beliefs targeted at youth are necessary to offset current AI trends and their future inclusion in terrorist arsenals.