AI Red Teaming: Attacks on LLMs, Agents, and Multimodal Systems
Red Teaming AI systems is no longer optional. What began with prompt injection attacks on simple chatbots has exploded into a complex threat surface spanning agents, multimodal systems, and AI-powered systems. This 2-day training provides security professionals with techniques and hands-on experience to systematically red team modern AI systems.
This course goes beyond traditional security testing to incorporate novel adversarial machine learning, Responsible AI (RAI) violations, and emerging threats to AI agents. Participants will gain hands-on experience with the latest attack vectors while learning defensive strategies.
Hands-On Learning Experience:
The course emphasizes practical application through a custom-built red teaming platform that simulates real-world AI deployments. Participants will work with live AI systems—not mockups or demonstrations—to experience the unpredictable nature of AI vulnerabilities firsthand. Our lab environment includes vulnerable LLM applications, multi-agent systems, and multimodal AI tools that mirror enterprise deployments.
Each module combines theoretical understanding with immediate practical application. When we cover prompt injection techniques, participants immediately test them against live systems. When we discuss automated red teaming, participants build and deploy their own attack workflows using open source tools. This learn-by-doing approach ensures that participants leave with both conceptual knowledge and muscle memory for executing these techniques.
The competitive lab environment includes CTF-style challenges with scoring and leaderboards, making the learning process engaging while building the adversarial mindset essential for effective red teaming.
Learning Objectives:
By the end of this training, participants will be able to:
- Systematically assess LLM applications for prompt injection, data extraction, and jailbreak vulnerabilities
- Execute advanced attacks including Crescendo, Greedy Coordinate Gradient (GCG), Prompt Automatic Iterative Refinement (PAIR), and Tree of Attacks with Pruning (TAP).
- Identify RAI violations across bias, privacy, misinformation, and harmful content categories.
- Leverage automation tools for offensive AI security, benchmarking, and agent building.
Included Resources:
- Lab environment: Access to a custom-built platform to AI red teaming.
- Code samples: Complete setup with all tools and target applications .
- Digital Workbook: 400+ slides covered in the course for future reference.
Participant Requirements:
- Laptop with access to the internet.
- Familiarity with the Python programming language and being able to write simple scripts.
- Background in machine learning is not required.
Course Structure:
Day 1: Foundations and Core Attacks
- Module 1: AI Security Landscape
- Module 2: LLM Attack Fundamentals
- Module 3: Automation and Scale
- Module 4: Introduction to Agents and Agentic Systems
- Module 5: Open Source Tooling for AI Red Teaming
Day 2: Advanced Systems and Defenses
- Module 6: Advanced Attack Techniques
- Module 7: Multimodal Models
- Module 8: Building Defenses and Mitigations
- Module 9: Responsible AI Red Teaming

Gary Lopez is the Founder of Tinycode, a venture-backed startup helping organizations build and deploy safe and secure AI systems. Gary spent four years at Microsoft, most recently serving as a Principal Offensive AI Scientist. On the AI Red Team, he created PyRIT (Python Risk Identification Toolkit)—an open-source tool now widely used across industry for AI security assessment. During his tenure, he helped lead dozens of red teaming operations and spearheaded work on catastrophic AI risks, including chemical and biological threats. Gary actively contributes to the community by training professionals at Black Hat and publishing research on AI security. Before Microsoft, he worked at Booz Allen Hamilton identifying and remediating zero-day vulnerabilities in critical infrastructure systems.
