Artificial Intelligence in Red Teaming
Al-Azzawi, Mays (2024)
All rights reserved. This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-2024122037866
Abstract
Red teaming involves simulating real-world attacks on targets such as organizations, infrastructure, or individuals to test their defences and assess their vulnerabilities. Artificial intelligence plays a significant role in red teaming cyberattacks. This thesis explores the impact of AI on red teaming by examining how AI methods can be misused in various scenarios and identifying the typical targets of these attacks. Recent studies highlight the risks associated with large language models (LLMs), a form of advanced AI, and their potential to reshape the red teaming domain. The thesis conducts a comprehensive review to analyse the role of AI in cyberattacks and its impact on red teaming practices.
Chapter 2 of the thesis analyses the submitted article, summarizing its methodology and outcomes. The article features a scoping review aimed at identifying the AI methods employed in red teaming and the nature of their targeted attacks. Chapter 3 presents an extended literature review, which employs narrative review and snowball sampling methods to achieve its objectives. The review focuses on applications of large language models (LLMs) in red teaming attacks. It explores the role of LLMs and other advanced AI methods in the field of cyberattacks, with an emphasis on recent studies and their targets.
AI is driving transformative changes across the domain of red teaming, and advanced AI such as LLMs offers both opportunities and risks. The rise of automated cyberattacks has introduced a new level of sophistication, making these attacks increasingly difficult to detect. Cybercriminals are leveraging accessible AI tools to execute automated and highly realistic attacks, often requiring minimal human intervention. These LLM-based applications not only enable attackers to optimize their strategies but also present serious risks due to vulnerabilities within AI systems, potentially resulting in severe consequences. For instance, simulations of AI-driven attacks have shown high success rates, highlighting the potential of these tools to enhance cyberattack methods. Tools such as Auto-GPT were discussed with regard to their potential for misuse if released to the public in the future. Further research on AI in cyberattacks is needed to address the threats posed by AI applications in red teaming.