Practical AI Security - Go Beyond Theory: Build, Break, and Defend

  • Dates: May 11, 12 and 13 2026
  • Difficulté: Moyen
  • Format: En personne
  • Langue: Anglais

Description

Before you can secure or break AI apps, you need to understand how it’s built. Below is our Build, Break, Defend approach.

Build - Full Stack Security Agent

Our AI Security Training is a hands-on training focused on learning AI security from first principles and with an engineering mindset. We heavily focus on building a fundamental understanding of how real-world GenAI applications are built, based on our experience working with AI-native engineering teams.

We will use hands-on labs to interact with LLM APIs, then going deep into embeddings, VectorDBs, RAG, Agentic systems, MCPs, Langsmith etc and essential tooling around them—all with real-world examples and labs.

Once we understand do labs around these concepts, we will actually go ahead and build our own threat model agent.


Break - Offensive technique, tooling and threat model AI Stack

We then dive into the offensive component with real-world apps in our labs. Some examples of the labs we cover in the course include:

  • Diving into classic Prompt Injection and Indirect Prompt Injection attacks using our Email Assistant bot we've built
  • Sensitive Information Disclosure issues
  • MCP attacks — We will build MCP Servers (Local/Remote), SSE vs stdio, and then go into MCP attacks using custom MCP servers we built
  • Attacks in Agentic architecture
  • Model Backdoors — Real-world backdoor example from Hugging Face; learn how adversaries embed hidden behavior into AI models
  • Threat Model AI Application workflows and how to think about the application layer when combined with LLMs

Defend - Practical tools and techniques

We will then go over practical techniques—covering both tools and architecture-level thinking on how to secure AI applications:

  • Practical defense techniques using our labs
    • inline LLM guardrails
    • MCP Gateways for observability and detection
  • Go over each attack we demonstrated and fix them at App layer or make architecture changes
  • Agentic Security Architecture
  • We will look at AI Security Tooling and how you can implement them in SDLC - this includes during code generation and implementing tooling to detect bugs at scale

By the end, you'll know how production-grade GenAI apps are built, how to assess them for risks, and how to provide actionable recommendations.

Objectifs clés d'apprentissage

  1. Build a deep understanding of the full modern AI stack, including LLMs, Agents, MCPs, embeddings, vector stores, and orchestration layers

  2. Develop a clear mental model for assessing AI-enabled systems from an offensive security perspective, learning how to map attack surfaces, identify weak points, and think like an adversary.

  3. Get hands-on with the practical tooling used by both attackers and defenders,

À qui s'adresse cette formation ?

Penetration testers, SOC analysts, software developers, security engineers

Connaissances prérequises

  • Familiarity with the Python programming language and being able to write simple scripts.
  • Background in machine learning is not required.

Exigences matérielles

  • Laptop: Personal laptop recommended (with admin privileges) - Memory: 16 GB RAM or higher - Storage: Minimum 10 GB free space

Bio

Harish Ramadoss ,

Harish Ramadoss has several years of expertise in Product Security, Red Teaming, and Security Research.

Previously, he was a Principal at Trustwave Spiderlabs, where he led their Application Security efforts. He joined Rippling as a founding member of the Security Engineering team and leads their AI Security and Appsec efforts.

Harish built DejaVu, an open-source deception platform. He has presented at Black Hat, DEFCON, HITB, and other conferences globally.

Return to training sessions