AI SecureOps: Attacking & Defending AI Applications & Agents

  • Dates: 2 days, Between May 11 and 13 2026 (TBD)
  • Difficulty: Beginner
  • Session Format: On-Site
  • Language: English

Description

Outcome

By the end of this training, you will be able to:

  • Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.

  • Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.

  • Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.

  • Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.

  • Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.

  • Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.

  • Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.

  • Implement an incident response and risk management plan for enterprises developing or using GenAI services.

Detailed Outline

Introduction

  • Introduction to LLM and AI.
  • Terminologies and architecture.
  • Transformers, Attention & their security implications(hallucinations, jailbreaks etc).
  • Agents, multi-agents and multi-modal models.

Elements of AI Security (1 lab)

  • Understanding AI vulnerabilities with case studies on AI security breaches.
  • OWASP LLM Top 10 and MITRE mapping of attacks on AI supply chain.
  • Threat modeling of AI Applications.

Adversarial LLM Attacks and Defenses (6 labs)

  • Direct and indirect prompt injection attacks and their subtypes.
  • Advanced prompt injections through obfuscation and cross-model injections.
  • Breaking system prompts and their trust criteria.
  • Indirect prompt injections through external input sources.

Responsible AI & Jailbreaking (6 labs)

  • Jailbreaking public LLMs covering adversarial AI, offensive security, and CBRN use-cases.
  • Responsible AI frameworks and benchmarks.
  • Model alignment, system prompt optimization, and defense.
  • Building Enterprise-grade LLM Defenses (2 labs)
  • Deploying LLM security scanner, adding custom rules, prompt block-lists, and guardrails.
  • Writing custom detection logic, trustworthiness checks, and filters.
  • Building security log monitoring and alerting for models using open-source tools.
  • LLM security benchmarking and continuous reporting.

Red & Blue Teaming of Enterprise AI applications(4 labs)

  • Business control flow testing for risky responses & misaligned behavior of applications.
  • Using Colab notebooks for automation of API calls and reporting
  • Vector database and model-weight tracing for root-cause investigation.
  • Rainbow teaming through a 3-way LLM implementation: target, attacker, and judge with self-improving attack prompts.

Attacking & Defending Agentic Systems (5 labs)

  • Attacking LLM agents for task manipulation, risky behavior and PII disclosure in RAG.
  • Injection attacks on AI agents for code and command execution.
  • Compromising backend infrastructure by abusing over-permissioning and tool usage in agentic systems.
  • Multi-agent attacks causing privilege too calls, goal manipulation & chained escalations.

Building AI SecOps Process

  • Summarizing the learnings into a SecOps workflow.
  • Monitoring trustworthiness, safety and security of enterprise AI applications.
  • Implementing NIST AI Risk Management Framework (RMF) for security monitoring.

Why should people attend your course?

  • Practical, hands-on labs, simulating real attacks on AI Applications and Implementing Defense controls on applications to measure the effectiveness of controls.
  • Focus on technical discussion, attendee engagement through open-ended questions, brainstorming, and security policy/controls related discussions.
  • Continued learning experience since the shared labs are always online with a shared channel of discussion over a dedicated Discord server.

Top 3 Takeaways

  • Participants will gain expertise in identifying and countering advanced AI-based adversarial attacks and implementing their countermeasures.
  • Skills to build and deploy comprehensive LLM defenses, including custom guardrails and security scanners, ensuring robust protection for both public and private AI services.
  • Knowledge in utilizing and deploying cutting-edge AI tools and models for security purposes, including RAG for custom LLM agent training and securing the AI supply chain.

Who Should Take This Course

  • Security professionals seeking to update their skills for the AI era.
  • Red & Blue team members.
  • AI Developers & Engineers interested in the security aspects of AI and LLM models.
  • AI Safety professionals and analysts working on regulations, controls, and policies related to AI.
  • Product Managers & Founders looking to strengthen their PoVs and models with security best practices.

What should students bring

  • API key for OpenAI.
  • Google Colab account.
  • Complete the pre-training setup before the first day.

What will students be provided with

  • One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
  • "AI SecureOps" Metal coin for CTF players.
  • Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
  • PDF versions of slides that will be used during the training.
  • Access to Slack channel for continued engagement, support, and development.
  • Access to Github account for accessing custom-built source codes and tools.
  • Access to HuggingFace models, datasets, and transformers.

Key Learning Objectives

  • Participants will gain expertise in identifying and countering advanced AI-based adversarial attacks and implementing their countermeasures.
  • Skills to build and deploy comprehensive LLM defenses, including custom guardrails and security scanners, ensuring robust protection for both public and private AI services.
  • Knowledge in utilizing and deploying cutting-edge AI tools and models for security purposes, including RAG for custom LLM agent training and securing the AI supply chain.

Who Should Attend?

  • Security professionals seeking to update their skills for the AI era.
  • Red & Blue team members.
  • AI Developers & Engineers interested in the security aspects of AI and LLM models.
  • AI Safety professionals and analysts working on regulations, controls, and policies related to AI.
  • Product Managers & Founders looking to strengthen their PoVs and models with security best practices.

Prerequisite Knowledge

  • Familiarity with AI and machine learning concepts is beneficial but not required.
  • Ability to run Python codes and notebooks.
  • Familiarity with common GenAI applications like OpenAI.

Hardware Requirements

  • API key for OpenAI. - Google Colab account. - Complete the pre-training setup before the first day.

Bio

Abhinav Singh ,

Abhinav Singh is a seasoned cybersecurity leader, researcher, and author with over 15 years of experience across global technology companies, startups, and financial institutions. He is the author of the widely acclaimed Metasploit Penetration Testing Cookbook (three editions) and Instant Wireshark Starter. Abhinav’s contributions span patents, open-source tools, and numerous publications in leading security and privacy portals. He actively advises startups and serves on editorial and review boards for premier industry and academic events such as RSA, NeurIPS, CSA, ISSA, and OWASP, helping shape the future of cybersecurity research and practice. A frequent speaker and trainer at international conferences including Black Hat, RSAC, and DEFCON, Abhinav is known for his ability to translate complex security concepts into practical, real-world strategies. His expertise spans AI, cloud, data, and enterprise security, with a strong focus on how emerging technologies are redefining both attack and defense.

Return to training sessions