The purpose of the Red Team Training is to understand the underlying concept of red teaming. The training will cover payloads generation, lateral movement techniques, initial foothold and internal reconnaissance. The training is aiming to provide a deep understanding of all the previously described aspects of a red team. Click here for Training Syllabus
Charles F. Hamilton (Mr.Un1k0d3r) , CYPFER
Charles Hamilton is a Red Teamer with over ten years of experience delivering offensive testing services for various government clients and commercial sectors. In recent years, Charles has specialized in covert Red Team operations targeting complex and highly secured environments. These operations have enabled him to refine his skills in stealthily navigating client networks without detection.
Since 2014, he has been the founder and operator of the RingZer0 Team website, a platform dedicated to teaching hacking fundamentals. The RingZer0 community currently boasts over 50,000 members worldwide. Charles is also a prolific toolsmith and trainer who has delivered this training more than 20 times, both online and onsite. He is a speaker in the InfoSec industry, known under the handle Mr.Un1k0d3r.
Dive deep into cutting edge techniques that bypass or neuter modern endpoint defenses. Learn how these solutions work to mitigate their utility and hide deep within code on the endpoint. The days of downloading that binary from the internet and pointing it at a remote machine are over. Today’s defenses oftentimes call for multiple bypasses within a single piece of code. Click here for Training Syllabus
As cloud innovation gives birth to new technologies and new threats, now is the time to modernize your cloud security skills and bring them up to the industry standard. Join this hands-on, 4-day course to push your cloud hacking and vulnerability remediation skills to the next level and widen your career prospects. Get your hands dirty with our popular virtual labs and learn from experienced, practicing penetration testers with a legacy of training at Black Hat. Click here for Training Syllabus
Do you feel pretty good about your Web Application Security testing methodology, but think you might be able to get more out of your tools? Years of experience providing instruction on the process of conducting Web Application Security assessments has made it clear. Even the most experienced testers lack a complete understanding of everything that is available in the industry's #1 Web Application Security testing tool: PortSwigger's Burp Suite Pro. It's time to fix that with Practical Burp Advanced Tactics (PBAT). Click here for Training Syllabus
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
Command & Control (C2) is the backbone of modern offensive operations - and one of the most reliable detection opportunities for blue teams.
This hands-on workshop provides a unified view of C2 fundamentals for both offensive and defensive practitioners. Using the open-source Mythic framework, participants will deploy agents, handle callbacks, execute tasking with a focus on opsec, and design real detection logic based on their own generated telemetry.
The session will also cover basic C2 infrastructure design including redirectors and domain fronting, an overview of Mythic agent feature sets, and a high-level comparative analysis of major C2 frameworks used in industry today. Students should leave armed with practical introductory experience operating and detecting C2 activity across multiple platforms.
Logan MacLaren Staff Offensive Security Engineer, Huntress
Logan is the lead Offensive Security engineer at Huntress where he is responsible for planning and executing red team operations as well as bolstering incident response capability through purple team exercises. He has been a long time enthusiast in the security space, building a career spanning big data analytics, bug bounty, and offensive security.
Outside of his day job, Logan can often be found building and participating in CTF challenges, bug hunting in open source software, or learning new skills at conferences across the continent. He has had the honour of speaking at several DEFCON villages, NorthSec conferences, as well as multiple BSides and OWASP Ottawa events.
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
Type: Intermediate–Advanced Focus: Adversary emulation, detection engineering, IR workflows Style: Fast, offensive-defensive, “learn by attacking and defending”
Cloud platforms like Amazon Web Services (AWS) are foundational to many critical infrastructures and enterprise applications, making them prime targets for attackers. In this session, we will not only explore the most relevant attack vectors cybercriminals use to compromise AWS infrastructures but will also simulate these attacks using known threat actor techniques in an adversary emulation context. From initial access to hardcore persistence, this talk will provide a comprehensive look at how attackers operate in AWS environments.
We will take a technical journey through the tactics, techniques, and procedures (TTPs) employed by attackers at every stage of the threat lifecycle, aligned with the MITRE ATT&CK framework. We’ll start by reviewing common methods of initial access, such as exploiting exposed credentials or vulnerabilities in services like IAM, Lambda, and EC2. From there, we’ll detail how attackers escalate privileges, move laterally, and evade detection from tools like CloudTrail.
The session will conclude with an in-depth look at advanced persistence techniques in AWS, including the manipulation of IAM policies, backdooring Lambda functions or Docker containers, and tampering with logs. Along the way, we’ll demonstrate how security teams can implement defensive and detection strategies to mitigate these risks. By leveraging AWS-native services and third-party tools, attendees will learn how to enhance their incident response capabilities.
This hands-on workshop will give attendees practical, technical insights into AWS security, adversary behavior, and how to better defend against sophisticated, persistent attacks. A full hands-on experience, this presentation ensures deep technical immersion.
Requirements: Participants should have the following ready before the training: AWS CLI installed Terraform installed GitHub account for cloning lab repos Knowledge of AWS Security Fundamentals
An email with detailed setup instructions will be sent beforehand. Provided Material: Github Repository with the solution to the workshops
Final Notes This training is designed for security engineers, SOC analysts, incident responders, and anyone who wants to truly understand AWS security through hands-on work. By the end of the session, you’ll have a deep understanding on how real attack and defense techniques work in AWS, being able to understand the hardening requirements, replicate attacks, generate detection use cases, and execute forensic techniques.
Santiago Abastante CTO, Solidariy Labs
Former Police Officer from Argentina, now a Cloud Incident Responder and Security Engineer with over 10 years of IT experience. A Digital Nomad and international speaker, I've presented on Cloud Security and Incident Response at Ekoparty, FIRST, Virus Bulletin (three times), Hack.Lu, and various BSides events worldwide. I hold a Bachelor's degree in Information Security and an MBA (Master in Business Administration).
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
AI agents represent a fundamental shift for security practitioners. They can automate tedious workflows, act as a co-pilot while you build custom tooling that was previously out of reach, and - when integrated into a well-designed system - serve as an intelligent analyst alongside you.
This workshop shows you all three. You'll learn to direct AI agents effectively, then apply those skills to customize and use a complete threat hunting system that combines deterministic processing with AI-assisted analysis.
What You'll Build A working threat hunting pipeline:
The deterministic layer does the heavy lifting. The agent provides contextual analysis on what surfaces. You make the final call.
What You'll Learn Beyond the system itself, you'll learn the practices that make agent collaboration effective: - Structuring projects so agents understand your environment, optimize outputs, and retain "memory" - Integrating systems that ensure you not only become effective at delivering results, but ensure you continue learning while working with agents ("anti-brainrot systems") - Context management + intuition - learn how to optimize your interaction with agents - Learn how to extend agent capabilities, when MCPs are the right call, when they are not - Agentic coding best practices - staying on top of what's being built, not outsourcing your thinking - Building reusable skills for repeatable security workflows - Hooks and guardrails for safe, automated agent operation
Who Should Attend Threat hunters, detection engineers, SOC analysts, and security practitioners who want to integrate AI agents into their workflow - whether for building tools, automating analysis, or hunting threats.
Requirements - Laptop with terminal access - Model access - I will be using Claude Code, but the course is agnostic - you can use any model to provide inference.
Faan Rossouw Researcher/Instructor, Active Countermeasures + AntiSyphon Training
Faan Rossouw is a security researcher at Active Countermeasures and instructs at Antisyphon Training, where he teaches courses on threat hunting and offensive security tooling. He's currently building AionSec.ai - courses designed to help security practitioners leverage AI agents in their work. Originally from South Africa, Faan is now based in Val-David, Quebec.
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
In this hands on workshop students will get a chance to investigate an unknown PCB and follow a process of reverse engineering its function and assembly. No prior hardware experience is needed. Bring your own laptop with windows and USB-A ports.
Mr. Gardiner is an independent consultant at Yellow Flag Security, Inc. presently working to secure commercial transportation at the NMFTA and connected transportation with TMNA. With more than ten years of professional experience in embedded systems design and a lifetime of hacking experience, Gardiner has a deep knowledge of the low-level functions of operating systems and the hardware with which they interface. Prior to YFS Inc., Mr. Gardiner held security assurance and reversing roles at a global corporation, as well as worked in embedded software and systems engineering roles at several organizations. He holds a M.Sc. Eng. in Applied Math & Stats from Queen’s University. He is a DEF CON Hardware Hacking Village (DC HHV) and Car Hacking Village (CHV) volunteer. He is GIAC GPEN and GICSP certified and a GIAC advisory board member, he is also chair of the SAE TEVEES18A1 Cybersecurity Assurance Testing TF (drafting J3061-2), contributor to several ATA TMC task forces, ISO WG11 committees, and a voting member of the SAE Vehicle Electronic Systems Security Committee. Mr. Gardiner has delivered workshops and presentations at several world cybersecurity events including the Cybertruck Challenge, GENIVI security sessions, Hack in Paris, HackFest and DEF CON main stage.
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
Ransomware negotiation is often framed as a simple decision... to pay or not to pay that is the question... But in practice, it is a structured coercive exchange conducted under a lot of pressure, incomplete information, and deliberate psychological manipulation and lies.
The Ransomware Negotiation Lab is a three-hour, hands-on workshop designed to simulate the mechanics of modern cyber extortion. Participants will work through a realistic ransomware scenario built around a fully developed Data Leak Site aka a DLS, stage data disclosures, and negotiation transcripts modeled on observed threat actor behaviour and data.
Rather than reviewing theory alone, attendees will actively analyze leak site posts to evaluate the credibility of proof packs. identify attacker leverage points, and conduct guided negotiations exercises in small groups. The lab will also look at timed scenarios to add simulated pressure on escalating ransom pressure, media inquires, partial data releases, and secondary extortion threats will require participants to adapt their strategy in real-time.
Tammy Harper Senior Threat Intelligence Researcher, Flare
Tammy Harper is a Senior Threat Intelligence Researcher at Flare focused on ransomware groups, extortion strategy, and leak site operations. Her work analyzes how threat actors construct leverage and weaponize uncertainty during negotiations. She speaks regularly on the operational and psychological mechanics of modern cybercrime.
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
There’s no shortage of acronyms being invented every week in the realm of security engineering. Instead of wading through these buzzwords that might not even be around by the end of the year, we’ll dig into the principles that we think make for a good security program. We’ll then apply these principles with practical hands-on exercises where we’ll use free and open source security tools to build continuous security automation and alerting similar to ones we’ve built when starting new security programs.
Mark El-Khoury , Astarte Security
Mark started as an offensive security consultant and pentester, then moved to the defensive side, leading cybersecurity in various industries, including: Gaming, fintech, and biometrics. Mark is a conference speaker, holds security certifications, and taught at a bootcamp. Mark is now Director of Security Engineering at Movable Ink.
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
In the rush to adopt modern cloud architectures, organizations often prioritize velocity over security, leaving critical gaps in their infrastructure. This workshop bridges the gap between offensive exploitation and defensive engineering, using a real-world scenario deployed on Google Cloud Platform (GCP).
Participants will be given access to a "production-grade" environment managed with InfraStream, a manifest-driven infrastructure platform. Inside this environment lies a set of microservices written in Go, which appear functional but contain a critical flaw: a Server-Side Template Injection (SSTI) vulnerability. However, the infrastructure is hardened: The server runs in a scratch-based container with some very restrictive network rules that prevents both bind and reverse shell from being effective.
The workshop is divided into two phases:
The Red Team Phase: Attendees will get their hands dirty analyzing the Go application code and crafting payloads to exploit the SSTI vulnerability. The goal? Get a fully interactive shell on the underlying container and attempt to pivot through the default GCP network to compromise adjacent services. While the initial vulnerability is pretty simple to exploit, the real challenge here lies in leveraging it through the hardening, which will involve hooking the server's code and advanced shellcoding to implement a backdoor. The Blue Team Phase: Once the compromise is confirmed, we will switch gears to remediation. We will modify InfraStream's manifests to apply practical defense-in-depth strategies. Participants will learn how to implement hardened docker runtime deployments, enforce strict network policies, and enable mTLS within the service mesh—effectively restricting the impacts of the RCE and limiting lateral movement. We will also fix the root cause that allowed the process hooking step to take place.
By the end of this session, attendees will understand the mechanics of Go template injection, advanced techniques to leverage vulnerabilites in hardened infrastructure and how to leverage infrastructure-as-code to enforce security baselines that make even vulnerable applications resilient to attack.
Ashley Manraj Chief Technology Officer, Pvotal Technologies Inc.
I’ve built my career at the intersection of security and speed. Today, as AI agents write our code, that intersection has become the most critical frontier in technology. The challenge is no longer creation, but control: how do we secure and maintain the autonomous systems built for us?
Through our work in secure digital transformation at Pvotal, we realized the answer wasn't just better tools, but a new foundation. We needed a control plane designed for this new era. This was the genesis of Infrastream.
Think of it as the factory floor for modern development. Developers and AI agents declare their "intent," and Infrastream's executors work to build and maintain that intent as a secure, compliant, and observable reality. Our mission is to make security an invisible, scalable, and simple-by-design layer, so teams can finally move at the speed of innovation without one off compromise.
Philippe Dugre(zer0x64) DevSecOps Engineer, Pvotal Technologies, NorthSec
Professional cryptography and assembly aficionado™ I've been in the field of offensive security testing for about 10 years. During that time, I worked primary on cryptography architectures and implementations for end2end password management, application penetration testing and modern cloud/IaC platform security engineering. I've been a challenge designer at Northsec since 2020. Most returning participants knows me for always using Rust and Webassembly in my challenges along with always coming up with over-the-top and outlandish reversing, pwning and cryptographic attack scenario. That, or they just know me as the emulator guy.
Talks will be streamed on YouTube and Twitch for free.
Private key leaks represent a critical security vulnerability, with over 600,000 leaked keys on GitHub in 2024, yet their real-world impact remains largely unknown due to the challenge of linking these mathematical objects to their operational usage. We present the first systematic analysis mapping leaked private keys to active certificates, combining GitGuardian's dataset of 945,560 unique leaked private keys with Google's historical Certificate Transparency databases. Our methodology successfully mapped 42,690 private keys to 139,767 certificates, revealing the impact of private keys leaked on GitHub and DockerHub. Using custom online and offline validation, we identified 2,622 valid certificates, enabling website impersonation and MITM attacks. Our analysis reveals systematic failures in certificate revocation practices, with only 80 certificates revoked via CRL/OCSP and just 3 properly marked as key-compromised. Finally, we successfully attributed certificates to 600 organizations across critical industries, though many could not be mapped to identifiable owners. With 20% of valid certificates having been exposed for over two years, our large-scale responsible disclosure campaign sent thousands of emails and revealed significant challenges in reaching certificate owners.
Gaetan Cybersecurity Researcher, GitGuardian
Gaetan is a security researcher with a decade of experience uncovering software vulnerabilities. After establishing himself in offensive security in 2015, he transitioned to security research in 2022, bringing his hands-on expertise in application security. His track record includes uncovering significant vulnerabilities in enterprise-grade systems like Cisco Nexus and Apache HTTPD. Gaetan loves sharing his knowledge through blog posts, speaking at conferences, or hands-on security training sessions at universities and private organizations.
Talks will be streamed on YouTube and Twitch for free.
Red teaming and penetration testing are core practices of the cyber security audit landscape. Both of these practices rely on the ability to execute offensive software tools that are normally detected as malicious by antivirus software. To achieve the execution of these tools on systems where antivirus software are installed, operators rely on several techniques to evade detection. In practice, detection evasion is, too often, ill-informed guesswork. A better methodology for evasion would allow for more efficient, and therefore more affordable campaigns thus contributing to more cyberresilient organisations.
This presentation will discuss some of my ongoing Ph.D. research into methodologies for deducing information about detection capabilities present in antivirus software solutions. I propose a black-box approach based on software probes, mutations and the logical implications of their detection to identify antivirus capabilities. Correct identification of these capabilities would allow evasion techniques to be applied intently and minimally, reducing chances of unexpected detections and decreasing time spent on evading antivirus software.
Talks will be streamed on YouTube and Twitch for free.
This talk covers a big Security Operation Center (SOC)’s journey through maturing our detection engineering practice by implementing detection as code (DaC) principles.
What we will cover: 1. Our starting point (where a lot of SOCs are): no DaC, manually modifying rules in a SIEM; 2. What is DaC and why it’s a game-changer for detection engineers; 3. Why we chose Sigma as the backbone of our DaC practice; 4. Our gradual transition to DaC 5. A real case study of how Sigma + DaC made changing SIEM so much easier.
Intended audience: people who create or manage detection rules in a SOC, people who want to increase the quality and stability of the rules you maintain and people who are interested in how DevOps principles can be applied to security operations.
Émilio works at a large Canadian organization doing software development, detection engineering and incident response. He's a co-organizer of MontréHack (a monthly cybersecurity workshop) and NorthSec's VP CTF.
Outside the cybersecurity world, he's passionate about urbanism and the economics of housing. He will gladly explain how exclusionary zoning and parking mandates are the reasons you can't buy a home to anyone who dare ask.
Talks will be streamed on YouTube and Twitch for free.
Security Operation Centers (SOCs) are used by companies to defend themselves against cyber-attacks. These SOCs monitor logs collected from the enterprise network such as process activity, authentication events and netflow, to identify attacks or compromises. These security teams must navigate numerous alerts generated from a wide range of security controls using both rules and Machine Learning (ML) to identify malicious activity. This is even more so the case in large-scale SOCs, or for companies offering Managed Detection and Response (MDR).
This talk showcases a multi-step approach used in a modern large-scale managed SOC that manages thousands of enterprise networks, demonstrating how it can successfully identify a real infostealer attack through multiple layers of filtering and processing. Through a two-week period containing 9.7 trillion event logs, the presented approach combines alert deduplication, individual rule-based and ML based detectors, alert suppression, and a supervised ML based alert prioritization model to dramatically reduce the noise, so that security analysts can pinpoint the infostealer activity.
François Labrèche Principal Data Scientist, Sophos
François Labrèche is a Principal Data Scientist at Sophos, who focuses on applying machine learning approaches to research problems related to security alerts and vulnerabilities. He focuses on using machine learning to improve the prioritization of alerts and vulnerabilities, in the context of XDR and vulnerability management. He has a Ph.D. from École Polytechnique de Montréal, and has published research papers on the topics of threat research, spam detection, malware analysis and machine learning applied to cybersecurity. He has presented at ACSAC, CAMLIS, NorthSec, BSides Montreal, University College London and École Polytechnique de Montréal, and has published papers in conferences such as the ACM CCS and eCrime.
Talks will be streamed on YouTube and Twitch for free.
Windows shortcut (.LNK) files have remained a popular attack vector over several decades, yet their underlying format is still largely archaic and remains the "gift that keeps on giving" by presenting new opportunities for abuse, even in 2026.
If you believe minor bypasses like adding spaces to an LNK's target (CVE-2025-9491) are the limit of LNK exploitation, this session will change your mind.
We will show previously undocumented LNK techniques that actually allow for more deceptive payload delivery/command execution. We will look at why these new techniques 'work', compare them to existing LNK tricks, and discuss the implications for defenders.
The research methodology behind these new findings, which involved black-box testing of Microsoft's LNK implementation, will be discussed during this session; demonstrating how adopting the "hacker's mindset" helped uncover these LNK tricks.
Next to this, this session will introduce an open-source tool designed to assist security professionals, red teams, and researchers in generating and experimenting with advanced LNK payloads. This tool aims to enhance the ability to simulate and defend against shortcut-based attacks, thereby improving Windows endpoint security.
Wietze has been hacking around with computers for years. Originally from the Netherlands, he currently works as a Lead Threat Detection & Response Engineer in London. As a cyber security enthusiast and threat researcher, he has presented his findings on topics including attacker emulation, PowerShell obfuscation, DLL Hijacking and command-line shenanigans at a variety of security conferences. By sharing his research, publishing related tools and his involvement in the open-source projects such as LOLBAS, HijackLibs and ArgFuscator, he aims to give back to the community he learnt so much from.
Talks will be streamed on YouTube and Twitch for free.
Internet shutdowns are often described as a single action — “turning the Internet off.” In practice, they are the result of carefully orchestrated, multi-layered technical controls applied across national infrastructure. Building on my previous talk at BSides, which introduced the fundamental mechanisms of Internet censorship and shutdowns, this session presents a deeper and more comprehensive technical analysis of the 2026 Internet blackout in Iran.
This talk treats large-scale censorship not as a political phenomenon, but as a network engineering and security operation. We examine who has the technical authority to execute shutdowns, how different censorship techniques are layered and coordinated, and when specific tactics are selectively deployed to maximize impact while maintaining internal network functionality.
The analysis spans multiple layers of the stack. At the routing level, we examine BGP route withdrawals, path manipulation, and international transit isolation. At the access and transport layers, we analyze ISP-level service suppression, mobile network data blackouts, and traffic throttling. At the protocol and application layers, we explore deep packet inspection (DPI), protocol fingerprinting, encrypted traffic degradation, and selective blocking of VPNs, QUIC, and TLS-based services.
Special attention is given to the role of national intranet architectures, which allow domestic services to remain reachable while international connectivity collapses, creating the illusion of partial availability. The session also addresses the technical limits of alternative access methods, including satellite Internet, and why such technologies are not a universal solution under state-scale controls.
Using timelines, traffic behavior, and protocol-level indicators, the talk demonstrates that modern Internet shutdowns are graduated, adaptive, and measurable rather than binary events. Attendees will learn how these techniques manifest on the wire, how they can be detected from inside and outside the affected region, and why many common circumvention strategies fail under coordinated, nation-state enforcement.
This presentation is intended for security professionals, network engineers, and researchers interested in Internet resilience, censorship measurement, and large-scale network interference, offering a technically grounded continuation of prior research and real-world observations.
Reza Sharifi Executive Consultant - Cybersecurity Specialist, CGI Deutschland
I’m a cybersecurity professional with a background in network security and internet infrastructure research. My focus is on the intersection of technology and civil liberties, particularly how network-layer protocols are used—and misused—by state actors to control access to information.
Talks will be streamed on YouTube and Twitch for free.
GitHub gives attackers something they love: a place where identity, automation, and production changes meet. Once they’re in, the path from “read access” to “shipping malicious code” can be disturbingly short.
In this talk, we walk through realistic attack paths into GitHub organizations, starting with initial access techniques like device-code phishing and the abuse of trusted GitHub Apps (including the GitHub CLI). From there, we explore how different credential types enable access long-lived Personal Access Tokens that often persist on developer machines, and short-lived automation credentials like GITHUB_TOKEN that can still leak through logs, artifacts, or misconfigured workflows and then be leveraged to move laterally or expand privileges.
We highlight tactics we’ve developed and researched post-initial access: how you can abuse sensitive workflows, exploit approval and review dynamics, and find paths around policy guardrails like “protected” pipelines and code-signing rulesets. We’ll also discuss tradeoffs attackers make to reduce forensic visibility and delay detection in environments where GitHub’s native telemetry is limited.
We close with practical defender takeaways: detection strategies and response playbooks focused on the signals that matter and how to improve monitoring coverage in the places GitHub is hardest to observe.
Attendees will leave with a shared framework that’s useful on both sides of the table. Defenders will get a checklist for reducing risk across identities, tokens, integrations, and Actions workflows plus concrete ideas for building higher-signal detection and response in places where visibility is lacking. Red teams will gain a realistic map of where GitHub controls tend to break down in practice, along with a set of hypotheses to test during assessments that go beyond “find a secret in a repo.” The goal is to walk out with sharper intuition for how small weaknesses chain into meaningful impact, and practical ways to either validate that risk (red teams) or eliminate it (blue teams) without grinding delivery to a halt.
Andrew Buchanan Senior Red Team Operator, Figment
Andrew is a Senior Red Team Operator at Figment, the world’s leading independent staking infrastructure provider. With over six years of Red Team experience, Andrew brings deep expertise across offensive security, adversary simulation, and real-world attack execution.
Prior to joining Figment, Andrew held cybersecurity roles at one of Canada’s largest financial institutions, conducting advanced red team engagements and security assessments across highly complex enterprise environments.
At Figment, Andrew plans and executes red team operations, penetration tests, and targeted security assessments with a focus on initial access, execution, cloud attack surfaces, and social engineering. As an initial access and social engineering specialist, he has designed and delivered numerous successful campaigns that closely mirror real-world threat actors. Andrew’s work ensures Figment continuously tests and strengthens its defences, so that Figment's institutional customers can trust they're using the most secure staking product on the market.
Max Courchesne-Mackie Security Architect and Red Team Lead, Figment
Max Courchesne-Mackie is a cybersecurity professional with over a decade of experience spanning defense, red teaming, and blockchain security. Max currently serves as a Security Architect at Figment, the leading independent staking infrastructure provider globally. He began his career in the defense industry focused on offensive security, a discipline that remains his core passion and informs his pragmatic approach to risk. Today, Max designs and reviews secure systems for the blockchain industry - an environment facing relentless, rapidly evolving threats. He partners with engineering and product teams to harden architectures, pressure-test assumptions, and translate attacker tradecraft into practical controls. Max's recent work centers on threat modeling for decentralized systems, secure key and wallet management, and building detection and response mechanisms that assume breach.
Connor Laidlaw Senior Application Security Engineer, Figment
Connor is a Senior Application Security Engineer at Figment, the world's leading independent staking infrastructure provider. His career spans a diverse range of security domains, including low-level vulnerability research, offensive security for ticket scalping operations, and engineering defenses to protect applications from abuse.
At Figment, Connor serves as the security subject matter expert for all customer-facing applications. He proactively identifies security concerns at every stage of the software development lifecycle and partners with engineering teams to architect robust solutions. Connor is also spearheading an initiative to integrate AI into Figment's security program, including the development of highly specialized offensive security agents powered by deep contextual awareness of Figment's environment—ensuring that Figment's institutional customers can trust they're using the most secure staking product on the market.
Talks will be streamed on YouTube and Twitch for free.
Security researchers push the boundaries of what’s possible. (Nation-state) threat actors push the boundaries of what’s exploitable. In many cases, threat actors adopt public research for their operations, but there are also many examples where threat actors use novel techniques to compromise cloud environments before researchers publish their findings.
In this talk, a cloud security researcher and a threat intelligence analyst team up to explore how cutting-edge cloud attack research is rapidly weaponized by espionage threat groups. We’ll walk through real-world examples where newly published techniques – intended to educate defenders – were adopted and operationalized by nation-state actors targeting cloud environments. The focus of the talk will be on Entra ID and Microsoft 365 attacks, exploring the technical mechanics behind the tools and techniques, why threat actors are interested in utilizing them, and real-world examples of research adoption. Examples of techniques covered include device code phishing, authorization code phishing (ConsentFix), and the adoption of open source security tools.
This session highlights how attack paths that may seem highly theoretical at first glance can pose a significant and immediate threat to organizations operating in the cloud. What starts as a proof-of-concept in a blog can quickly become a part of a threat actor’s playbook.
Dirk-jan Mollema Security Researcher, Outsider Security
Dirk-jan Mollema is a security researcher focusing on Active Directory and Microsoft Entra (Azure AD) security. In 2022 he started his own company, Outsider Security, where he performs penetration tests and reviews of enterprise networks and cloud environments. He blogs at dirkjanm.io, where he publishes his research, and shares updates on the many open source security tools he has written over the years. He presented previously at TROOPERS, DEF CON, Black Hat and BlueHat, is a current Microsoft MVP and has been awarded as one of Microsoft’s Most Valuable Researchers multiple times.
In March of 2025, the Model Evaluation & Threat Research (METR) group introduced AI task time horizons as a method for measuring the length of tasks that models can autonomously complete coherently. They demonstrated rapid growth in capabilities across frontier systems: effectively showing a doubling every ~7 months. While this framework has primarily been applied to general software and knowledge work, its implications for adversarial domains remain largely unexplored.
In this talk, I present work I've done with Sean Peters and Jack Payne, extending METR’s methodology to offensive cybersecurity workflows, alongside a complementary human baseline study to ground and interpret model performance.
Motivated by the desire to better understand offensive model capabilities, we assembled realistic multi-step offensive task sequences by leveraging a suite of industry standard benchmarks. Both human participants and frontier models were evaluated across increasing task lengths to quantify sustained autonomy, coherence, and failure modes.
Initial results indicate that AI task horizons in offensive cyber are already meaningful and extending rapidly. In several domains, models can chain complex tool-driven actions resembling early-stage intrusion playbooks rather than isolated exploitation steps. The human study provides critical context, highlighting where models approach or diverge from human performance as task length increases.
The talk will cover the experimental design, empirical findings, and key limitations, emphasizing how horizon-based evaluation combined with human grounding surfaces trends that may not be observable by standalone, static benchmarks.
Finally, this work is positioned as exploratory research. It raises questions about whether similar horizon trends appear in defensive workflows: how could we measure defensive task horizons, and what methods would allow meaningful comparisons to offensive performance? If the trend does not replicate in defense, what interventions, tooling, or policy changes could help close the gap? This framing invites further investigation and provides a roadmap for research and practitioner engagement in understanding and mitigating offense–defense asymmetries under AI automation.
Jeremy Miller Sr. Manager, Cybersecurity Strategy & Research, OffSec
Jeremy Miller is an offensive security leader and educator, currently focused on how AI automation is reshaping adversarial capability. He spent over a decade at Offensive Security in technical and leadership roles across content development, training, and workforce development programs, bridging hands-on offensive methodology with pedagogy and strategy.
His current research, in collaboration with Sean Peters and Jack Payne, applies the METR AI task time horizon framework to realistic offensive cyber workflows, grounded by complementary human studies to measure autonomy scaling in adversarial domains.
Jeremy’s interests center on offense–defense asymmetry, empirical evaluation of autonomous systems, and translating AI security and safety research into practical implications for decision makers.
When you're protecting a billion-user platform, attackers don't wait. Scraper bots, fake account farms, and credential stuffing campaigns operate at machine speed and your defenses need to be faster. This talk dissects the architecture of sub-millisecond anti-abuse detection systems that must make security decisions without adding perceptible latency to legitimate users.
We'll examine real-world defensive infrastructure using Venice (LinkedIn's derived data platform) as a case study, revealing the architectural patterns, trade-offs, and failure modes of ultra-low latency security systems. You'll learn how embedded data stores enable <1ms threat intelligence lookups, how precomputed reputation scores defend against distributed attacks, and critically—where the weaknesses lie.
Manu Jose Senior Manager, LinkedIn
With over 20 years of software engineering experience, I am a technically oriented, high-energy, and empathetic leader who is passionate about building scalable, reliable, and innovative solutions for machine data. I am currently a Sr. Manager at LinkedIn, where I lead the Venice Project, a cutting-edge initiative that leverages online deep learning to improve user experience and engagement on the platform. I am driven by the mission of creating economic opportunity for every member of the global workforce, and I value collaboration, diversity, and continuous learning in my team.
Traditional defensive measures alone are proving insufficient against determined adversaries. This talk introduces a systematic approach to implementing effective deception solutions by using BloodHound's OpenGraph framework to map and deploy deceptive attack paths across AD and third-party enterprise technologies.
This talk moves beyond basic honeypots and canary tokens, demonstrating how to build discoverable deceptions that actually entice attackers. We'll explore how understanding existing attack paths in your environment is crucial to creating believable deceptions that adversaries will naturally encounter and attempt to exploit.
Key Topics Covered:
- Attack Path-Driven Deception Design: Using attack path analysis to identify optimal deception placement points and create realistic adversary scenarios
- OpenGraph for Deception Mapping: Extending beyond Active Directory to model deceptive attack paths across Git repositories, configuration management systems, and cloud services
- Practical Implementation Examples: Live demonstrations including AD CS deception using Certiception, repo-based deceptions with GitHound, and infrastructure deceptions through AnsibleHound and SCCMHound
Joshua Prager Managing Consultant, SpecterOps Inc.
Josh Prager has over 13 years’ experience focusing on DoD red team infrastructure, cyber threat emulation, and threat hunting. As a former threat hunter in the Federal industry, he provided various cyber threat emulation and threat hunting assessments throughout DoD environments. As a principal consultant at SpecterOps, he guides clients in developing the maturity of their detection and response programs, building their detection engineering capabilities, and ensuring detective and preventive coverage of offensive techniques.
We can't trust the images and videos we see online anymore. Recent generative AI improvements support the creation and modification of convincing digital media in quasi real time. We live in an era where these fakes are routinely shared online to influence public opinion, even by elected officials themselves!
Fortunately, technologies exist to embed cryptographic signatures and watermarks in these digital assets, proving their origin. The C2PA specification is being adopted by many technology providers, camera manufacturers, and news media organizations. Major deployments have started in 2025 and will accelerate in 2026.
In high-risk contexts (conflict zones, protests, corruption reporting) creators might be reluctant to share certified images and videos for fear of retribution. Is there a way to reconcile the need for authenticated assets and the privacy of their creators? The answer is yes!
In this talk, we'll explore cryptographic options to provide privacy to those who capture and share digital assets, enabling anonymous yet verifiable content. We'll present an open-source prototype that augments the C2PA specification by using blind signatures and zero-knowledge proofs to hide the signer's identity. These technologies offer the best of both worlds: enabling the public, reporters, and whistleblowers to share sensitive authentic digital media with strong privacy protections, which would increase trust in our content ecosystems.
Christian Paquin Principal Research Software Engineer, Microsoft Research
Christian is a security specialist in the Microsoft Research Cryptography team with a mission to bridge the gap between academic research and real-world systems. With 25 years of experience, Christian has been involved in many industry-wide initiatives such as the development of privacy enhancing identity technologies (such as anonymous credentials), the ongoing post-quantum cryptographic migration, and the Coalition for Content Provenance and Authenticity (C2PA) to fight online disinformation. Christian shares some of his work results on his blog.
When you think of hacking browsers, you perhaps think of V8 heap exploitation, deep-dive fuzzing, crazy sandbox escapes, and so on. But what if I told you that you can still find vulnerabilities in major browsers that don’t require any technical knowledge? Bugs you can even run into by accident!
In this talk, I’ll take you through my journey of how I “accidentally“ found a vulnerability in Google Chrome. And how that led me to find 2 more vulnerabilities in Chrome as well as 2 vulnerabilities in Mozilla Firefox and many more bugs in other products.
So if you’re keen to find out how I could, with minimal user-interaction, steal your private GitHub repositories, then this talk is for you!
Robbe Van Roey Offensive Security Lead, Toreon
Hi! I’m Robbe Van Roey 👋
I’m a hacker. I like breaking stuff. I’m a penetration tester at Toreon, I’ve worked for a bug bounty company, and I’ve found 35+ CVEs. I love hacking web apps, mobile applications, AI systems, and Active Directory. I’m also a teacher. I teach developers about secure coding, I teach beginners about Red Teaming for Hack The Box and I’ve created a bunch of YouTube videos on my channel.
In the online realm, you may know me as PinkDraconian. Come up to me and say hi!
My life motto is “Hacking you so you don’t get hacked“ and I’d like to show you part of that ideology during my talk. See you there!
Have you ever wondered how to run code inside a different process? Or, for that matter, why you would WANT to run code in another process?
I originally entered the security world writing cheats for Windows games - Starcraft, Warcraft II, and similar late-90s games. The tools are functionally lost to the ages, but the techniques I used have served me for years: not only can you use process injection to cheat at video games, adding, changing, bypassing, or even calling code in a foreign process can also help with fuzzing, reverse engineering, malware detection, and so much more!
But for a technique so commonly used, there isn't really a "standard" way to do it, especially on Linux!
One day, I read a blog discussing how hard it was to do on Linux. I thought, "that can't be right, it's easy on Windows!" and set out to prove them wrong. Days later, I had accidentally written a debugger and learned way, way too much about the ptrace API and /proc filesystem!
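As a taste of the /proc side of that rabbit hole, here is a minimal sketch of parsing the `/proc/<pid>/maps` format to enumerate a process's memory regions, typically the first step before attaching with `ptrace` and writing into a chosen region. A hardcoded maps-style line (the address range and path are invented) is used so the snippet runs anywhere; on Linux you would open `/proc/<pid>/maps` directly.

```python
# Parse the /proc/<pid>/maps line format:
#   start-end perms offset dev inode [path]
# Finding an r-xp (readable, executable) region of a loaded library is a
# common precursor to ptrace(PTRACE_ATTACH) + PTRACE_POKETEXT injection.
def parse_maps(maps_text: str) -> list[dict]:
    regions = []
    for line in maps_text.splitlines():
        fields = line.split(maxsplit=5)
        start, end = (int(x, 16) for x in fields[0].split("-"))
        regions.append({
            "start": start,
            "end": end,
            "perms": fields[1],                      # e.g. "r-xp"
            "path": fields[5] if len(fields) > 5 else "",
        })
    return regions

# Invented sample line in the real maps format.
sample = "7f2c4e000000-7f2c4e1c5000 r-xp 00000000 08:01 131 /usr/lib/libc.so.6"
[region] = parse_maps(sample)
print(hex(region["start"]), region["perms"], region["path"])
```

On a live system, `parse_maps(open(f"/proc/{pid}/maps").read())` gives the same structure for a real target process.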
In this talk, I'll demonstrate the tooling I built and why it might be more useful than you might think to do this yourself!
Ron Bowes Principal Security Researcher, GreyNoise Intelligence
Ron Bowes is a Principal Security Researcher on the GreyNoise Labs team, which tracks and investigates unusual--typically malicious--internet traffic. His primary role is to understand and track the big vulnerabilities of the day/week/month/year; often, that means parsing vague vendor advisories, diff'ing patches, reconstructing attacks from log files, and--most complex of all--installing and configuring enterprise software. When he's not at work, he runs the BSides San Francisco Capture the Flag contest, is a founder of The Long Con conference in Winnipeg, takes improv classes, and continues his project to finish every game in his Steam library.
In 1865, Jules Verne sent men to the Moon from Florida. In 1969, Apollo 11 lifted off from Cape Canaveral. In 1984, William Gibson described cyberspace as a "consensual hallucination". Forty years later, we live in it. Science fiction is not prediction: it is a laboratory of ideas where the future is prototyped before it exists.
This talk offers a journey between imagination and innovation, between the pages of yesterday's novels and today's laboratories.
First, we will revisit some famous anticipations: the touch tablets of Star Trek, the earpieces of Fahrenheit 451, the video surveillance of 1984, and the self-driving cars of Total Recall.
Then we will dive into innovations that are less publicized but more disruptive. What can we draw from science fiction to guess what the near future holds for us, given today's breakthroughs in generative AI and quantum computing?
Finally, we will explore an even more radical territory: biomolecular computing. Researchers are now working on computing systems that feed on sugar and light. Data storage in molecules, biological computation, living interfaces: we are at the dawn of a revolution whose scale few people grasp. Here again, what do the great imaginations tell us about these emerging subjects?
To conclude, we will combine these building blocks to imagine possible scenarios. Some already exist in science fiction literature. Others remain to be written. I hope you will leave wanting to identify the futures we desire and those we want to avoid.
Because thinking about the future is not an intellectual luxury. It is a responsibility. As technologists, researchers, hackers, and citizens, we have the power to steer the trajectory. Yesterday's science fiction is today's science. Today's science fiction will be our children's world.
The future is not predicted. It is chosen.
Xavier Facélina Executive Vice President, SECLAB
Xavier Facélina is the co-founder of SECLAB, a French company specializing in the cybersecurity of critical infrastructure. Self-taught, he left school before his baccalauréat to learn computing on his own and has never stopped since. Over 20 years, he has supported operators of vital importance in the energy, defense, and industrial sectors. He still owns a working Minitel. He prefers questions to answers and believes that the best way to predict the future is to invent it.
This presentation will focus on AnsibleHound, a collector that adds Ansible AWX and Ansible Tower attack paths to BloodHound. Additionally, we will conduct a thorough exploration of Ansible exploitation and abuse through attack path management. This will enable both attackers and defenders to identify hybrid attack paths.
Our presentation will provide you with three key takeaways:
Charl-Alexandre Le Brun Senior Penetration Tester, Desjardins
Charl-Alexandre is a dedicated member of the information security community. With several years of experience as a penetration tester, he is driven by a strong passion for developing innovative tools and techniques that advance the field and contribute to the broader community.
Simon Lachkar Offensive Team Lead, Desjardins Group
Simon leads the full-scope penetration testing team at Desjardins Group, one of Canada's largest financial institutions. Previously, he worked as a technical team leader and penetration tester in Canada and France. Simon has recently been involved in developing the AnsibleHound project.
For years, we wrote the defensive manuals. We built the "Living Off The Pipeline" (LOTP) inventory and released poutine to help you find the vulns. We even spoke at NorthSec about the theoretical risks of Build Pipeline compromise.
We have bad news: The Threat Actors were "in the room" taking notes.
In early 2025, we found the "smoking gun." A Threat Actor on BreachForums laid out the full attack plan for a 0-day compromise of a major Open Source project, giving a direct shout-out to our poutine scanner and LOTP research as the source. Our defensive work has become their offensive playbook.
In this talk, we stop playing defense.
Introducing SmokedMeat: The "Metasploit for CI/CD."
Our research team has a saying: 2025's Build Pipelines look like the average 2005 PHP Web App in terms of secure coding. They are wide open to "pwn requests" and command injections that lead to secrets exfiltration or privilege escalation via overprivileged tokens. SmokedMeat is the first Open Source Red Team framework designed to commoditize these compromises, demonstrating exactly what happens when a Threat Actor turns your infrastructure against you.
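To show what that "PHP-era" bug class looks like in miniature, here is a hedged Python simulation (the PR title and the build step are invented) of the command injection that occurs when a CI system textually expands an untrusted event field into a shell step:

```python
import subprocess

# Attacker-controlled event field, e.g. a pull request title. The closing
# quote lets it break out of the echo argument and run its own command.
pr_title = 'fix build"; echo INJECTED; true "'

# Naive templating of the untrusted field straight into a shell command,
# the same pattern as interpolating an event expression into a CI run step.
step = f'echo "Building: {pr_title}"'

out = subprocess.run(["sh", "-c", step], capture_output=True, text=True)
print(out.stdout)  # the smuggled command ran: output contains INJECTED
```

Real pipelines hit this when workflow expressions are interpolated directly into `run:` blocks; the standard fix is passing untrusted event fields through environment variables so the shell never re-parses them.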
We will demonstrate a full exploitation chain: pivoting from unprivileged anonymous access on public repositories to private repository access and intellectual property theft, the "gone in 60 seconds" jump from a workflow runner directly to permanent Cloud Admin, and the ability to escape ephemeral job contexts to implant permanent backdoors on your build infrastructure.
The era of "awareness" is over. This talk is a live demonstration of why your current CI/CD security strategy is already obsolete.
François Proulx VP of Security Research, BoostSecurity.io
François Proulx is the VP of Security Research at BoostSecurity.io and the co-creator of the poutine Open Source CI/CD scanner. He co-founded the "Living Off The Pipeline" (LOTP) project to describe the abuse of build tools for lateral movement. After spending years teaching defenders how to secure their workflows, he is now demonstrating how attackers are dismantling them.
This talk will expand on concepts explored in my NSEC 2025 talk "Stolen Laptops: A brief overview of modern physical access attacks".
We will deep-dive into the subject of Direct Memory Access attacks against modern Windows operating systems, exploring together some of the primary countermeasures employed to protect computers from physical attackers.
Notably, we will discuss the implementation and interaction of various defensive technology at the physical, firmware, and operating system layers.
This includes things like UEFI security, hardware whitelisting, firmware DMA protection and virtualization features (VT-d, VT-x, AMD-Vi), and their interaction with critical OS layer protection mechanisms including Virtualization-Based Security (VBS) and Kernel DMA Protection. We will discuss techniques used by attackers to neutralize or bypass these mechanisms to enable a DMA attack against Windows 11.
The talk culminates with an in-depth presentation of a novel tool I developed called DMAReaper. The tool allows attackers with physical access to disable Kernel DMA Protection via a pre-boot DMA attack, even when a system has all modern protection mechanisms enforced.
We will discuss the research that supported the tool's creation and the precise operations performed against system RAM to locate and destroy the DMAR ACPI table required for Kernel DMA Protection to function. This talk includes multiple video demonstrations of the tool being used to compromise a modern workstation running Windows 11.
Pierre-Nicolas Allard-Coutu Senior Penetration Tester, Bell Canada
Pierre-Nicolas Allard-Coutu is a senior penetration tester and offensive security R&D lead at Bell Canada's Security Testing and Incident Response team (STIRT). He is a seasoned red team operator with many years of experience specialized in the development of malware payloads and payload delivery systems. More recently, he has spearheaded the creation of physical penetration test methodologies including novel exploitation techniques aimed at compromising UEFI pre-boot environments and enabling Direct Memory Access vectors against modern laptops. He is currently the top public contributor to the Quebec Government Cyber Defense Center's vulnerability disclosure program, and part of the HackFest Challenge design team. The type of person who could never resist placing "><script>alert(1);<!-- in his bio.
Whether it is China, Russia, North Korea, Iran, or Israel, Canada remains, year after year, a significant target of the cyber conflict shaking the global digital space. Between cyberespionage campaigns, online influence operations, and attempted cybersabotage, cyberspace has never held as prominent a place on the geopolitical stage as it has in recent years.
Cataloguing geopolitically motivated cyber incidents affecting Canada since 2010, the Observatory of Multidimensional Conflicts of the Chaire Raoul-Dandurand has published an annual report since 2021, taking stock of these incidents and answering the most notable questions on the subject: What are the most frequent types of cyber incidents? What are the known targets? Which hacker groups have targeted Canada? And, above all, where do these attacks come from?
This talk will preview the findings of the 2026 report, presenting the major trends of 2025, what is new, and the new challenges encountered.
First, we will examine the most represented incident type since the repository's creation: cyberespionage, and its particularities over the past year.
Second, we will analyze a phenomenon that, while already well established in previous years, truly boomed this year: information manipulation campaigns, including, for the first time, an analysis of a sexual deepfake used within such a campaign.
Third and finally, we will discuss a trend that seems to have been intensifying in recent years but is publicly documented on Canadian soil for the first time: cybersabotage.
Philippe Marchand Researcher, Chaire Raoul-Dandurand
Philippe Marchand is a researcher and coordinator at the Observatory of Multidimensional Conflicts of the Chaire Raoul-Dandurand in Strategic and Diplomatic Studies. A political scientist, he specializes in the use of cell phones by civilian populations in conflicts, as well as in the geopolitical dimension of cyberattacks against states.
As organizations scale, traditional security review models don’t. Centralized security teams become bottlenecks, threat modeling remains expert-only, and DevOps teams ship designs without structured security insight—creating compounding security debt.
This talk shares how a security team at Ubisoft transformed threat modeling from a niche exercise into an everyday DevSecOps practice now spreading across multiple software development teams. We’ll walk through the real transformation journey: engaging leadership to recognize the limits of centralized security, designing a shift-left strategy centered on practitioner ownership, and embedding threat modeling from theory into sustained practice.
Beyond mechanics, this session explores the human side of scale: driving adoption without mandate fatigue, selling the "what's in it for me?", and enabling managers and teams to own security outcomes. You’ll leave with practical lessons, adoption patterns that worked (and failed), and a realistic roadmap for scaling threat modeling in large software organizations—without scaling your security team.
Kristine Barbará Director, Security Engagement & Awareness, Ubisoft Entertainment
Kristine Barbará is a security transformation leader at Ubisoft, focused on making security part of how software and games are built—not an afterthought. She has led global programs spanning security culture and behavior change at scale, blending change management and community enablement. Known for turning complex risk into actionable practice, Kristine helps teams adopt fundamental security practices across global teams.
Serverless architectures continue to evolve, and so does their attack surface. Azure Function Apps have undergone a significant architectural transformation with the introduction of the Flex Consumption plan, identity-based service connections, private networking, OpenAI integrations, and hybrid hosting models. While these features expand functionality and scalability, they also introduce new and often overlooked security misconfigurations.
Azure Functions remain a powerful serverless compute platform capable of interacting with a wide range of cloud and on-premises services. However, recent platform enhancements have created novel abuse primitives that can be leveraged by attackers for persistence, lateral movement, and stealthy post-exploitation operations.
This talk explores modern techniques for gaining access to Azure Function App source code and configuration data across contemporary deployment models, including Flex Consumption and container-backed hosting. We demonstrate how identity-based service connections, managed identities, and Key Vault or App Configuration references can be abused to access downstream cloud resources without relying on traditional secrets. We also present new approaches for deploying stealthy backdoors across multiple runtimes, including .NET isolated, Python, Node.js, and Java. Additionally, we examine authenticated Function App misconfigurations that allow unintended user access and execution.
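As one concrete example of the secretless access pattern described above, here is a sketch of the documented App Service / Functions managed-identity flow: the runtime exposes `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables, and any code executing inside the app can exchange them for access tokens to downstream resources such as Key Vault, with no stored credential to steal. The fallback values below are placeholders for illustration, and the request is only constructed here, never sent.

```python
import os
import urllib.request
from urllib.parse import urlencode

def build_token_request(resource: str) -> urllib.request.Request:
    """Build (but do not send) a managed-identity token request.

    Inside a real Function App, IDENTITY_ENDPOINT and IDENTITY_HEADER are set
    by the platform; the fallbacks here are placeholders for the sketch.
    """
    endpoint = os.environ.get("IDENTITY_ENDPOINT",
                              "http://localhost/msi/token")  # placeholder
    header = os.environ.get("IDENTITY_HEADER", "placeholder-value")
    query = urlencode({"resource": resource, "api-version": "2019-08-01"})
    # The X-IDENTITY-HEADER value proves the caller runs inside the app.
    return urllib.request.Request(f"{endpoint}?{query}",
                                  headers={"X-Identity-Header": header})

req = build_token_request("https://vault.azure.net")
print(req.full_url)
```

This is why code execution inside a Function App is so valuable to an attacker: the token request is indistinguishable from the app's own legitimate traffic.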
We further analyze advanced networking scenarios enabled by recent platform features such as VNet-integrated serverless functions and private triggers and show how they can be exploited to pivot between cloud and internal environments. The talk also highlights how Azure Function Apps can be repurposed as resilient command-and-control redirectors or staging infrastructure, blending seamlessly into legitimate serverless traffic and cloud telemetry.
Through updated real-world penetration testing case studies, we demonstrate modern escalation paths originating from Function Apps that lead to privileged Azure control and hybrid identity compromise. By uncovering these feature-driven abuse cases and providing actionable detection and hardening guidance, this research equips both defenders and cloud pentesters to secure the next generation of Azure serverless deployments.
Chirag Savla, White Knight Labs
Chirag Savla is a cybersecurity professional with 10+ years of experience. His areas of interest include penetration testing, red teaming, Azure and Active Directory security, and post-exploitation research. For fun, he enjoys creating open-source tools and exploring new attack methodologies in his leisure time. Chirag has worked extensively on Azure and Active Directory attacks and defense, and on bypassing detection mechanisms. He is the author of multiple open-source tools such as Process Injection, Callidus, and others. He has presented at many conferences and local meetups and has delivered training at international conferences such as Black Hat, BSides Milano, Wild West Hackin’ Fest, HackSpaceCon, VulnCon, and NorthSec.
Talks will be streamed on YouTube and Twitch for free.
What happens when you give an AI agent a Kali box, point it at an enterprise network, and tell it to get domain admin? And what happens when another AI agent is running the SOC on the other side?
APTL is an open-source, Docker-based purple team lab that brings up an isolated enterprise environment (Active Directory, databases, web apps, file servers, email), a full OSS SOC stack (Wazuh SIEM, Suricata IDS, MISP, TheHive, Shuffle SOAR), and an MCP server layer that gives AI agents programmatic control over both sides.
One command, everything up. Tell the agents to go. AI agents attacking and defending autonomously.
This talk is a live demo. We'll spin up APTL, launch an AI red team agent against TechVault Solutions (our fictional target company), and watch it perform autonomous reconnaissance, identify attack paths, chain exploits, and attempt lateral movement, while the blue side detects, triages, and responds in real time. All telemetry is captured: SIEM alerts, IDS events, case management, SOAR playbook executions, and full MCP traces. We will talk through success and failure modes, and laugh at some of the epic fails.
APTL is MIT-licensed, available on GitHub, and runs on commodity hardware using consumer-grade AI services. That's the point: this is what autonomous cyber offense and defense looks like with tools anyone can download today. Participants can pull the repo and play after the talk.
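The "programmatic control" layer mentioned above uses MCP (Model Context Protocol), which is built on JSON-RPC 2.0. A minimal sketch of the kind of tool-call message an agent sends to drive a lab host; the tool name `run_command` and its arguments are hypothetical illustrations, not APTL's actual API:

```python
# Sketch: the JSON-RPC 2.0 `tools/call` request shape MCP clients use to
# invoke a server-side tool. The tool name and arguments here are invented
# for illustration; APTL's real tool surface may differ.
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "run_command",
                    {"host": "kali", "command": "nmap -sV 10.0.0.0/24"})
```

Because both the red and blue agents speak the same protocol, every action on either side leaves a structured trace, which is what makes the full MCP capture described in the demo possible.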
Brad Edwards, Domain Consultant, Security Operations Transformation, Palo Alto Networks
Brad Edwards is a Domain Consultant at Palo Alto Networks, specializing in security operations. He has 15 years of law enforcement experience as an RCMP constable, including digital forensics and economic crime. After leaving the RCMP, Brad worked as an enterprise software developer, then led the British Columbia Lottery Corporation’s Security Operations program. He researches autonomous cybersecurity operations, focusing on street-level threats most likely to impact organizations.
Talks will be streamed on YouTube and Twitch for free.
Is ADINT (advertising-based intelligence) the new trend for Computer Network Exploitation (CNE) initial access and commercial surveillance solutions? ADINT refers to the exploitation of online advertising processes to collect, correlate, and operationalize large-scale data for intelligence gathering. By weaponizing the advertising Real-Time Bidding (RTB) process, this technique turns an omnipresent commercial ecosystem into a dual-use surveillance tool.
While initially leveraged for granular geolocation and real-time geofencing through mobile advertising identifiers and metadata correlation, the stakes of ADINT have escalated significantly. It now also serves as an initial access vector for commercial spyware solutions, reshaping the economics of the commercial surveillance vendor (CSV) market as traditional zero-click vulnerabilities become increasingly scarce and costly.
This presentation provides a comprehensive overview of the current ADINT landscape and operational use cases. It will outline the evolution of ADINT and propose a categorization into three operational tiers: Passive ADINT, characterized by the passive collection and correlation of RTB bidstream data; Active ADINT, which employs on-demand micro-targeting and geofencing for real-time target validation; and Offensive ADINT, where the ad delivery mechanism itself is repurposed as a zero-click intrusion vector for initial access.
Based on documented cases, the presentation will also examine how commercial surveillance vendors weaponize ADINT by exploiting the structural opacity of the AdTech industry. By rebranding intrusive monitoring as legitimate analytics, these firms leverage regulatory arbitrage to circumvent dual-use export controls, highlighting the urgent need for stakeholders to adapt their defensive and policy responses.
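To make the Passive ADINT tier concrete: RTB bid requests already carry the fields a collector needs to correlate. A minimal sketch, assuming OpenRTB 2.x field names (`device.ifa`, `device.geo`); the sample request itself is fabricated:

```python
# Sketch: the identifiers a passive ADINT collector pulls from an OpenRTB 2.x
# bid request observed in the bidstream. Field names follow the OpenRTB spec;
# the sample values are invented.
import json

SAMPLE_BID_REQUEST = json.dumps({
    "id": "req-1",
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile advertising ID
        "geo": {"lat": 45.5019, "lon": -73.5674, "type": 2},  # type 2 = IP-derived
        "ua": "Mozilla/5.0 (Linux; Android 14)",
    },
})

def extract_adint_fields(bid_request_json: str) -> dict:
    """Pull the persistent device identifier and location from a bid request."""
    device = json.loads(bid_request_json).get("device", {})
    geo = device.get("geo", {})
    return {"ifa": device.get("ifa"),
            "lat": geo.get("lat"),
            "lon": geo.get("lon")}

fields = extract_adint_fields(SAMPLE_BID_REQUEST)
```

Correlating the stable `ifa` across many such requests over time is what turns routine ad-exchange traffic into a location history, without the collector ever winning a bid.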
Talks will be streamed on YouTube and Twitch for free.
5G networks are being opened up at every layer and attackers are paying attention. On the radio interface, we assess what operators actually deploy: is encryption enabled? Is integrity protection enforced on signaling and user plane? Are null ciphers still accepted? How well is the network isolated from external access? These fundamentals still fail more often than you'd think.
The 5G core runs on cloud-native, REST-based architectures where a single misconfigured network function can expose subscriber data or provide persistence into critical infrastructure. We demonstrate this live using our open-source 5GC API Pentest Burp Suite extension, which automates NF discovery, IMSI enumeration, credential extraction, and API fuzzing against a 5G core. OpenRAN disaggregates the radio access network into open interfaces between the O-RU, O-DU, O-CU, and the RIC, creating attack surfaces that didn't exist in monolithic base stations. And now CAMARA, the industry initiative exposing network capabilities through standardized APIs, gives third parties access to device location, SIM swap, and number verification, with security models still maturing.
This talk walks through real assessments and attacks at each layer, from verifying radio protections to exploiting core APIs, and examines how some endpoints could enable surveillance and fraud.
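NF discovery, the first step the extension automates, typically means querying the NRF, the 5G core's service registry, over its standardized discovery API (3GPP TS 29.510). A minimal sketch of that request; the `api_root` address is a hypothetical lab value, and an NRF that answers it unauthenticated is exactly the kind of misconfiguration described above:

```python
# Sketch: building the Nnrf_NFDiscovery search request used to enumerate
# network functions registered with an exposed NRF. Path and query parameter
# names follow 3GPP TS 29.510; the apiRoot below is a made-up lab address.
import urllib.parse

def nrf_discovery_url(api_root: str, target_nf: str, requester_nf: str) -> str:
    """Build the NRF discovery URL for a given target NF type."""
    query = urllib.parse.urlencode({
        "target-nf-type": target_nf,
        "requester-nf-type": requester_nf,
    })
    return f"{api_root}/nnrf-disc/v1/nf-instances?{query}"

url = nrf_discovery_url("http://10.0.0.10:8000", "UDM", "AMF")
# A successful response lists matching NF instances (addresses, offered
# services), the starting point for enumerating subscriber-facing APIs.
```

If this query succeeds without mutual TLS or OAuth2 enforcement, every downstream network function becomes reachable by name.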
Sébastien Dudek is the founder of Penthertz, a French company specializing in wireless and hardware security. With over 15 years of experience in telecommunications security, he has published research on 5G security, Open RAN, baseband fuzzing, mobile network interception, and power-line communication vulnerabilities. He is the creator of RF Swift, an open-source SDR toolkit, as well as V2G Injector/HomeplugPWN, 5GC API Pentest, and LoRa Craft, among other security tools.
His clients include major defense, (aero)space, and automotive companies, and his work spans 2G through 5G security, OT/IoT device security, and critical infrastructure protection.