Navigating the Intersection of AI and CBRN Threats

By Michele Mignogna & Giovanni Tricco

The world must be prepared for when artificial intelligence and CBRN threats converge, argue Michele Mignogna and Giovanni Tricco.

Artificial intelligence (AI) technologies are poised to shape the future of human existence. Sophisticated approaches such as machine learning (ML) models and large language models (LLMs) can enhance and extend human capabilities while significantly reducing the time and effort needed to achieve specific objectives. In a nutshell, AI can be a determining factor in the success of human society and in fostering economic development. However, given the dual-use nature typical of disruptive technologies, the same AI that can be used for good can be trained for malicious activities.

Dystopian scenarios envisioning extremely sophisticated AI attaining consciousness, taking control, and thereby endangering humanity are a steady presence in public discourse. Nevertheless, these scenarios remain far-fetched at present. AI should absolutely be considered an opportunity to extend human development. However, if combined with specific weapons or skills, it could increase risks to the long-term well-being of society.

Governments have recognized the dual-use nature of AI and are heavily involved in creating laws and standards to regulate the development and application of AI systems. The EU has taken a significant step by adopting the first comprehensive law regulating AI systems, the European AI Act. This risk-based law classifies AI applications according to the danger they pose to society and human well-being, sorting AI systems into the categories of prohibited, high-risk, or limited and minimal risk. Simultaneously, on the other side of the Atlantic, the Biden administration adopted an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. While it lacks the force of legislation, it demonstrates the importance the United States attaches to safeguarding the use of AI.

The dual-use nature of AI raises concerns about potential malicious applications, such as using AI to carry out cyber-attacks or training it for unconventional warfare. While current limitations and ethical considerations make dystopian scenarios improbable, the need for comprehensive regulation is increasingly apparent. The EU’s AI Act and the U.S. Executive Order are significant steps in this direction, emphasizing the importance of ethical, responsible, and secure AI development. As the global community navigates the integration of AI into defense systems, it becomes paramount to address specific examples and scenarios in which AI could be exploited for malicious purposes in areas such as terrorism and CBRN threats. This consideration will play a pivotal role in shaping responsible AI practices and global security efforts.

Could AI ever attain consciousness? © Ideogram, AI Generated Image

Terrorism in the Age of Technology

In recent years, the majority of terrorist attacks in Europe have not relied heavily on advanced technologies. Instead, they have often involved low-tech methods, ranging from larger-scale incidents such as bombings or mass shootings to smaller-scale attacks using easily accessible items like cars, vans, knives, and firearms. Notably, jihadist groups such as Al-Qaeda and ISIS have encouraged their followers to embrace simplicity in their attacks, a sentiment also expressed in the first Al-Qaeda English-language propaganda magazine, “Inspire”.

Despite this trend towards low-tech methods, concerns about terrorists’ use of unconventional weapons have persisted for decades. The prospect of terrorist organizations acquiring or constructing nuclear weapons has long been regarded as a worst-case scenario, although many experts consider this unlikely. Nevertheless, given the rapid advancement of AI technology, this scenario may seem increasingly plausible.

AI and Terrorism

Emerging technologies, particularly AI, possess immense potential to drive significant advancements across various research domains. Yet, in the wrong hands, they also harbor the potential for malicious applications. Terrorism remains a dynamic, constantly evolving threat, and as the use of AI becomes increasingly widespread, the barriers to entry are expected to diminish, reducing the skills and technical know-how required to employ it in nefarious activities.

Consequently, the key questions revolve around when AI will become a tool in the arsenal of terrorism, and what realistic expectations the international community should hold in response, as stated in a joint report by UNICRI and UNCCT.

In particular, the emergence and widespread adoption of advanced deep-learning models like ChatGPT have sparked concerns regarding their potential misuse by terrorists and violent extremists. There is a fear that these sophisticated language models could empower such individuals or groups to amplify their online and real-world operations.

With the capabilities offered by large language models, terrorists could potentially learn, plan, and disseminate their activities with heightened efficiency, precision, and influence compared to previous methods.

Indeed, we can already observe how online recommendation algorithms expose people to content that the AI infers aligns with their personal and political interests, in order to keep them engaged for as long as possible. Where politics is concerned, the role such systems can play in entrenching and reinforcing radical ideologies that could ultimately result in acts of terror is clear.
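The underlying mechanism is straightforward to sketch. The toy Python example below is illustrative only: the scoring formula, weights, and data are our own assumptions and do not describe any real platform’s system. It simply ranks items by predicted engagement, computed as historical engagement boosted by overlap with a user’s inferred interests.

# Toy sketch of engagement-driven content ranking (hypothetical;
# not any real platform's algorithm). Items matching a user's
# inferred interests score higher and surface first.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topics: set[str]        # topics the item covers
    base_engagement: float  # historical click/watch rate, 0..1

def predicted_engagement(item: Item, interests: dict[str, float]) -> float:
    """Historical engagement boosted by overlap with inferred interests."""
    boost = sum(interests.get(topic, 0.0) for topic in item.topics)
    return item.base_engagement * (1.0 + boost)

def rank_feed(items: list[Item], interests: dict[str, float]) -> list[Item]:
    # Sort purely by predicted engagement: nothing in this objective
    # rewards diversity or moderation, which is the crux of the concern.
    return sorted(items, key=lambda i: predicted_engagement(i, interests),
                  reverse=True)

feed = rank_feed(
    [Item("Cooking tips", {"food"}, 0.30),
     Item("Radical manifesto", {"politics", "extremism"}, 0.25),
     Item("Local news", {"news"}, 0.40)],
    interests={"politics": 0.8, "extremism": 0.6},
)
print([item.title for item in feed])  # the ideologically aligned item ranks first

Even this crude objective reproduces the feedback loop described above: whatever the system infers a user cares about is exactly what it serves next, and with each interaction the inference hardens.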

AI and Terrorism, © Ideogram, AI Generated Image

Focus on CBRN

In this context, an increasingly pressing issue within the realm of national security is the proliferation of CBRN weapons. As technology progresses, so too does the potential for these threats to be amplified and exploited. Of particular concern is the integration of AI with such weapons as AI becomes more accessible to non-state actors and individuals alike. The rapid evolution of AI technology often outpaces government regulatory oversight, leaving potential gaps in existing policies and regulations.

In 2023, LLMs garnered significant attention for their capacity to generate text based on user prompts. These models have demonstrated proficiency in tasks such as code generation, text summarization, and structuring unstructured text. Despite persistent doubts about the real-world utility of these tools, concerns have been raised about their potential misuse in facilitating the proliferation of CBRN weapons. In short, a fundamental question arises: could individuals acquire the knowledge necessary to develop such weapons through interactions with an LLM? And if so, how would that knowledge compare to the information accessible through conventional internet research?
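Before turning to the evidence on that question, it is worth making the underlying capability concrete. The minimal Python sketch below shows what “structuring unstructured text” typically looks like with a hosted LLM; it assumes the OpenAI Python SDK (v1 or later) with an API key configured in the environment, and the model name, prompt, and sample report are our own illustrative choices, not recommendations from any source.

# Minimal sketch: turning free text into structured fields with an LLM.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the
# environment; model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report = ("Fire crews responded at 14:02 to a warehouse blaze on Elm Street; "
          "no injuries were reported.")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Extract time, location, event type, and casualties "
                    "from the user's text and return them as JSON."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)  # e.g. {"time": "14:02", ...}

The ease of this kind of interaction is precisely what drives the proliferation concern: no specialized skill is required to pose a question to the model.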

An empirical answer is offered by the RAND Corporation in a study published at the end of January 2024. In an expert exercise, teams of researchers portraying malign non-state actors were given realistic scenarios and instructed to plan a biological attack. Some teams were granted access to both an LLM and the internet, while others were limited to the internet alone. The results suggested that using the current generation of LLMs did not significantly alter the operational risk associated with such an attack.

Nevertheless, given the alarming pace of advancement in this technology, the issue must be taken seriously. Regulatory discourse calling for a significant step forward in addressing the convergence of AI and CBRN threats underscores this urgency. A balance must be struck between comprehensively assessing AI’s potential for misuse in CBRN development and promoting the responsible development and deployment of AI.

Integration of AI and CBRN, © Ideogram, AI Generated Image

Conclusions

Addressing legal gaps concerning disruptive technologies has always been crucial to protecting national security, and, given its rapid advancement, AI is no exception. A comprehensive international treaty banning certain uses of AI, together with international organizations endowed with the mandate to enforce it, would be a good step forward in ensuring a safer and more secure world in the age of AI.

However, navigating this path is complicated for two reasons. First, the current adversarial geopolitical context hardly facilitates cooperation among actors who undoubtedly see AI as an opportunity to gain an advantage over one another. It also remains to be seen how potential shifts in U.S. policy vis-à-vis Europe and its security could impact future trans-Atlantic cooperation on such an issue.

Second, even in the remote event of the adoption of laws or treaties banning certain uses of AI, there remains a critical issue well explained by Kissinger: AI capabilities cannot easily be subjected to verification and supervision, making it challenging to ensure a balance in the capabilities held by any given actor, as was achieved with treaties regulating the number of atomic armaments.

Considering these challenges, it is crucial to find the most responsible path forward to ensure the safe, long-term use of AI in our society, mitigating negative repercussions in terms of terrorism or the widespread use of AI-enabled CBRN weapons. Given the current global environment, fostering collaboration and communication among governments is paramount, especially as AI poses ubiquitous risks with cross-border dimensions.

Responsible AI in the military, © Observer Research Foundation

Michele Mignogna is a Security & Defense Consultant at NCT Consultants. He holds a double master’s degree in EU and International Law with a focus on European security and defense, and was recognized by the French Embassy in Madrid for his thesis on the EU’s defense capabilities after the beginning of the war in Ukraine. At NCT, he collaborates with government bodies to promote inter-agency and multinational cooperation in defense and also engages in roles opposing SWAT/SOF units in CBRNe environments.

Giovanni Tricco is a PhD candidate within the international Joint Doctorate in Law, Science, and Technology at the University of Bologna and the Vrije Universiteit Brussel. His research focuses on the governance of AI in outer space. He is the co-lead of the Research Group on AI and Space Law of the Space Law and Policy Project Group at the Space Generation Advisory Council (SGAC). Before joining academia, he worked in tech policy at the Center for European Policy Analysis (CEPA), a Washington, D.C.-based think tank.
