AI SecureOps: Attacking & Defending GenAI Applications and Services

  • Dates: May 10 and 11 2025
  • Difficulté: Moyen
  • Format: En personne
  • Langue: Anglais

Description

By 2026, Gartner, Inc. predicts that over 80% of enterprises will engage with GenAI models, up from less than 5% in 2023. This rapid adoption presents a new challenge for security professionals. To bring you up to speed from intermediate to advanced level, this training provides essential GenAI and LLM security skills through an immersive CTF-styled framework. Delve into sophisticated techniques for mitigating LLM threats, engineering robust defense mechanisms, and operationalizing LLM agents, preparing them to address the complex security challenges posed by the rapid expansion of GenAI technologies. You will be provided with access to a live playground with custom built AI applications replicating real-world attack scenarios covering use-cases defined under the OWASP LLM top 10 framework and mapped with stages defined in MITRE ATLAS. This dense training will navigate you through areas like the red and blue team strategies, create robust LLM defenses, incident response in LLM attacks, implement a Responsible AI(RAI) program and enforce ethical AI standards across enterprise services, with the focus on improving the entire GenAI supply chain. This training will also cover the completely new segment of Responsible AI(RAI), ethics and trustworthiness in GenAI services. Unlike traditional cybersecurity verticals, these unique challenges such as bias detection, managing risky behaviors, and implementing mechanisms for tracking information are going to be the key challenges for enterprise security teams.

By the end of this training, you will be able to...
  • Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, SQL injection, insecure agent designs, and remote code execution for infrastructure takeover.
  • Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
  • Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.
  • Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
  • Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
  • Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
  • Utilize open-source tools like HuggingFace, OpenAI, NeMo, Streamlit, and Garak to build custom GenAI tools and enhance your GenAI development skills.
  • Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
  • Implement an incident response and risk management plan for enterprises developing or using GenAI services.
Why should people attend your course?
  • Practical, hands-on labs, simulating real attacks on AI Applications and Implementing Defense controls on applications to measure the effectiveness of controls.
  • Focus on technical discussion, attendee engagement through open ended questions, brainstorming and security policy/controls related discussions.
  • Continued learning experience since the shared labs are always online with a shared channel of discussion over a dedicated slack channel.

The CTF labs utilizes GenAI in various ways and attendees will get a feel of how to build their own test cases, automations and LLM validators. For example, the CTFs utilize auto evaluation, where the results of jailbreaks and prompt injections are automatically evaluated using a judge LLM. The CTF uses slack to respond to an LLM that controls the workload on the CTF platform.

Top 3 takeaways students will learn
  • Participants will gain expertise in identifying and countering advanced AI based adversarial attacks and implementing their countermeasures.
  • Skills to build and deploy comprehensive LLM defenses, including custom guardrails and security scanners, ensuring robust protection for both public and private AI services.
  • Knowledge in utilizing and deploying cutting-edge AI tools and models for security purposes, including RAG for custom LLM agent training and securing the AI supply chain.
Who Should Take This Course
  • Security professionals seeking to update their skills for the AI era.
  • Red & Blue team members.
  • AI Developers & Engineers interested in the security aspects of AI and LLM models.
  • AI Safety professionals and analysts working on regulations, controls and policies related to AI.
  • Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
Student Requirements
  • Familiarity with AI and machine learning concepts is beneficial but not required.
  • Ability to run python codes and notebooks.
  • Familiarity with common GenAI applications like OpenAI.
What should students bring
  • API key for OpenAI.
  • Google Colab account.
  • Complete the pre-training setup before the first day.
What will students be provided with
  • One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
  • "AI SecureOps" Metal coin for CTF players.
  • Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
  • PDF versions of slides that will be used during the training.
  • Access to Slack channel for continued engagement, support and development.
  • Access to Github account for accessing custom-built source codes and tools.
  • Access to HuggingFace models, datasets and transformers.

Objectifs clés d'apprentissage

  • Participants will gain expertise in identifying and countering advanced AI based adversarial attacks and implementing their countermeasures.
  • Skills to build and deploy comprehensive LLM defenses, including custom guardrails and security scanners, ensuring robust protection for both public and private AI services.
  • Knowledge in utilizing and deploying cutting-edge AI tools and models for security purposes, including RAG for custom LLM agent training and securing the AI supply chain.

À qui s'adresse cette formation ?

  • Security professionals seeking to update their skills for the AI era.
  • Red & Blue team members.
  • AI Developers & Engineers interested in the security aspects of AI and LLM models.
  • AI Safety professionals and analysts working on regulations, controls and policies related to AI.
  • Product Managers & Founders looking to strengthen their PoVs and models with security best practices.

Connaissances prérequises

  • Familiarity with AI and machine learning concepts is beneficial but not required.
  • Ability to run python codes and notebooks.
  • Familiarity with common GenAI applications like OpenAI.

Exigences matérielles

Regular laptop with access to Internet, OpenAI and Google Colab. No hardware GPU required. Free GPU access through Google Colab.

Bio

Abhinav Singh ,

Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.

Return to training sessions