This talk explores the intersection of philosophy, ethics, security, and AI. As AI systems like LLMs become increasingly ubiquitous in our lives, security practitioners are shifting from testing security to testing for safety - a fundamentally normative issue. The transition to this new paradigm can be an uneasy one for professionals accustomed to the comfort of (relatively) objective processes. I argue that despite some initial discomfort, penetration testers and red teamers - with our rich history of social awareness and ethically motivated action - are well-positioned to tackle AI safety and responsibility challenges. We can do this by reframing what we already do so well in other contexts: grounding technical rigour in a robust foundation of humility, curiosity, compassion, and epistemological self-awareness.
Jeremy Miller, Sr. Manager, Cybersecurity Strategy & Research, OffSec
Jeremy Miller is an offensive security leader and educator, currently focused on how AI automation is reshaping adversarial capability. He spent over a decade at Offensive Security in technical and leadership roles across content development, training, and workforce development programs, bridging hands-on offensive methodology with pedagogy and strategy.
His current research, in collaboration with Sean Peters and Jack Payne, applies the METR AI task time horizon framework to realistic offensive cyber workflows, grounded in complementary human studies that measure autonomy scaling in adversarial domains.
Jeremy’s interests center on offense–defense asymmetry, empirical evaluation of autonomous systems, and translating AI security and safety research into practical implications for decision makers.