An AI Pentester's Reflections On Risk
About This Session
Despite AI's complex and pervasive growth across products and services, most organizations find it difficult to tangibly define AI risk, let alone mitigate and manage it. Yet industry continues its ever-pressing push toward deeper and more powerful integration of AI technology, accelerated further by the expansion into agentic software and design patterns.
Drawing from extensive cross-sector engagements, NCC Group's AI/ML security practice lead will analyze the most significant risk vectors we've seen recur across AI implementations and the real, impactful vulnerabilities that have emerged from this computing paradigm. This talk outlines:
* How AI impacts Confidentiality, Integrity, and Availability of critical assets within organizations
* Why organizations find it difficult to apply traditional security models to AI systems
* The impact of agentic AI on system security
* How we can apply security fundamentals to AI
* What lessons we can draw from previous paradigm shifts
Attendees will walk away with a clear understanding of AI security's "state of play," including tangible AI risks along with their requisite remediation mechanisms. They'll leave equipped to lead and direct secure AI deployments using state-of-the-art defensive practices adopted by AI-mature organizations at the forefront of modern AI security.
Speaker

David Brauchler III
Technical Director - NCC Group
David Brauchler III is a Technical Director at NCC Group in Dallas, Texas. He is an adjunct professor for the Cyber Security graduate program at Southern Methodist University, and he holds a master's degree in Security Engineering and the Offensive Security Certified Professional (OSCP) certification.
David Brauchler published Analyzing AI Application Threat Models on NCC Group's research blog, introducing the Models-as-Threat-Actors (MATA) methodology to the AI security industry and providing a new trust-flow-centric approach to evaluating risk in AI/ML-integrated environments. He has also released several new threat vector categories, AI/ML security controls and reference architectures, and recommendations for maximizing the effectiveness of AI penetration tests.