The AI Risk Summit will drive the conversation forward with consequential dialogue and real-world examples that skip past the hype and provide meaningful guidance on risk management in the age of artificial intelligence. Register today to confirm your spot. Tickets also include access to the 2025 CISO Forum Summer Summit sessions.
Tuesday, August 19, 2025
- David Campbell AI Security Lead - Scale AI
Adversarial AI Risk: Your Next Incident Won’t Be an 0Day
Most AI failures won’t come from novel exploits. They’ll come from assumptions no one tested. This talk breaks down the real threats already happening and shows why red teaming is the best way to catch what others miss. From nation-state actors to prompt-based jailbreak kits, you’ll learn how adversaries think and how to get ahead of them. If your model is in production, it’s already in scope.
- Austin Bosarge Chief Corporate Officer - QuSecure
Preparing for the Quantum Threat: A CISO’s Roadmap
Quantum computing is no longer theoretical — it’s a looming disruptor of today’s cryptographic standards. This session equips CISOs with a clear understanding of the quantum threat landscape, timelines to watch, and practical steps to start building quantum-resilient security strategies now
- Millie Huang Staff Data Scientist - Salesforce
When AI Agents Go Rogue: Unmasking Risky Enterprise AI Behavior with Unsupervised Learning
As enterprises rapidly adopt AI agents (e.g., Salesforce's Agentforce), a critical risk emerges: misconfigured or compromised agents performing anomalous, potentially harmful, data operations. This presentation unveils an original, practical methodology for detecting such threats using unsupervised machine learning.
Drawing from a real-world Proof-of-Concept, we demonstrate how behavioral profiling—analyzing features engineered from system logs like data access patterns, query syntax (SOQL keyword analysis), and IP usage, along with signals from the content moderation mechanisms embedded within the LLM guardrails such prompt injection detection and toxicity scoring—can distinguish risky agent actions. We explore the creation of 30+ behavioral features and the application of KMeans clustering to identify agents exhibiting statistically significant deviations, serving as an early warning for misuse or overpermissive configurations. We will share insights into observed differences between AI agent and human user profiles, and challenges like crucial data gaps that impact comprehensive monitoring.
This session offers a vendor-neutral, technical deep-dive into a novel approach for safeguarding enterprise AI deployments.
Learning Objectives for Attendees:
1. Understand the novel security risks posed by misconfigured/overpermissive enterprise AI agents.
2. Learn a practical methodology for behavioral profiling of AI agents using unsupervised ML and log data.
3. Identify key data features, feature engineering techniques (e.g., for SOQL analysis), and common data challenges (log gaps, attribution) in AI agent monitoring.
4. Gain actionable insights to develop proactive detection strategies for anomalous AI agent activity and protect sensitive data.
- Wendy Nather Senior Research Initiatives Director - 1Password
Using Incident Response Practice For Stealth Risk Analysis
As a CISO, it's hard to get the attention of executive leadership amongst all the competing business issues. One big problem is that even if they agree on the potential impact of an incident, they won't agree on the probability of it happening, so your plans get lost in the shuffle. In this session we'll talk about one way to get them to take the risks as seriously as you do: tabletop exercises. Put on your social engineering hats, and prepare for the kind of fun that usually only the red team gets to have.
- David Campbell AI Security Lead - Scale AI
AI Red Teaming Room
{Open House Format - Come by anytime between 10:00AM - 1:00PM}
Step into the AI Red Teaming Room and join experts from Scale AI for an interactive, hands-on experience where you’ll get to play the role of an adversary. In this session, you won’t just learn about AI vulnerabilities — you’ll exploit them. Engage directly in guided exercises designed to expose weaknesses in language models and other AI systems. Try your hand at crafting adversarial prompts to manipulate model behavior, bypass safeguards, and trigger unintended outputs.
Whether you're a security professional, AI researcher, policy expert, or just curious about how AI can go wrong, this is your chance to explore the limits of today's AI systems in a safe, controlled environment. Alongside the red-teaming challenges, you'll learn how these same systems can be defended, evaluated, and improved.
No prior experience with red teaming required — just bring your curiosity. Take 15–20 minutes to stop by, test your skills, and walk away with a deeper understanding of both the power and the fragility of modern AI.
- Harald Ujc CTO - Invenci
From Misfire to Mastery: AI Discovery as Strategic Risk
Join this session presented by Harald Ujc (CTO, Invenci) to learn how small and mid-sized businesses (SMBs) face major risks with AI adoption — not because of the technology itself, but due to poor discovery and problem definition.
- Andrew Carney Program Manager, DARPA AI Cyber Challenge - DARPA
Patching Critical Infrastructure: Lessons from DARPA’s AI Cyber Challenge
DARPA and ARPA-H are on a mission to advance AI-driven cybersecurity and usher in a future where we can patch vulnerabilities before they can be exploited. AI Cyber Challenge Program Manager Andrew Carney will discuss lessons learned from competition and how the program is driving the innovation of responsible AI systems designed to address some of our most important digital issues today: the security of critical infrastructure and software supply chains.
- Trip Hillman Partner, Cybersecurity Consulting - Weaver
Implementing AI Safeguards for Cyber Strategy and Compliance: Insights from OWASP and NIST Framework
In an era where AI technologies are integral to cybersecurity strategies, ensuring robust safeguards and compliance is paramount. This presentation will delve into practical approaches for integrating AI safeguards into cyber strategy, leveraging the OWASP Security & Governance Checklist v1.0 and the NIST AI Risk Management Framework (RMF). Attendees will gain insights on aligning AI initiatives with established security and governance standards, enhancing risk management, and maintaining regulatory compliance. Additionally, participants will be provided with an AI Risk Placemat, outlining key risks and safeguard steps as a checklist to tailor for their environments. Real-world examples and actionable recommendations will be shared to help organizations fortify their AI systems against emerging threats and ensure ethical and secure AI deployment.
- Jason Kramer Senior Software Engineering Researcher - ObjectSecurity
Augmenting AI Security: External Strategies for Threat Mitigation
As AI systems are increasingly deployed in production, securing them requires more than just protecting the models themselves. This session focuses on practical external strategies for safeguarding AI and its surrounding infrastructure. Topics include using resource controls to prevent denial-of-service attacks, applying rate limiting and input validation to secure APIs, detecting anomalies through audit logging and output filtering, and protecting data integrity with version control, automated backups, tokenization, and redaction.
By focusing on the systems and workflows around AI models, including containers, endpoints, and data pipelines, this session offers a broader and more realistic view of where threats emerge and how to stop them. Attendees will leave with actionable insights to improve the security of AI deployments and defend against risks like prompt injection, data leakage, and insecure configurations.
- Malcolm Harkins Chief Security and Trust Officer - HiddenLayer
Economic Impact of Securing AI
As artificial intelligence (AI) becomes an increasingly integral part of global infrastructure, commerce, defense, and daily life, the imperative to secure these systems is no longer a technical concern alone—it is an economic necessity. This keynote explores the intersection of cybersecurity and AI through the lens of economic strategy, risk modeling, and incentive alignment, presenting a holistic framework for understanding and addressing the financial realities of securing AI.
Securing AI systems involves unique challenges: data poisoning, model inversion, adversarial attacks, and algorithmic manipulation. Unlike traditional software, AI models can be subverted not just through code exploits, but through the very data and feedback loops that drive their behavior. The existing investments we have for traditional Cyber Security do protect models - they provide only indirect protection at best. This misunderstanding of the existing controls can create a misalignment of not only spending but incentives among developers, users, regulators, and attackers, which traditional security economics already struggles to address.
In this talk, I will examine how context is key and cash is king. We will examine the existing controls in use in organizations and how those controls do not protect AI models from attacks. We will discuss financial materiality and material risk to help refine how to think about the incentives for companies to invest in AI security when the threats are diffuse and the benefits of prevention may be difficult to quantify. I will walk through how to construct an economic impact analysis for securing AI including how to evaluate various control options on total costs as well as sufficiency of control. I will also share recent trends in cyber security insurance and the lack of coverage benefits for AI models.
Drawing from my real-world experience as a former finance leader in addition to my years running security, this keynote offers a strategic view of AI security as an investment problem, a game-theoretic challenge, and a policy frontier which we all need to navigate to protect the promise of AI and avoid the perils that could occur. Attendees will leave with a deeper understanding of how economic impact and total cost models can inform the design of secure AI systems and influence corporate decision-making
By treating security not as just a cost but as a critical enabler of AI growth, we can move toward a future where AI systems are not only powerful, but are secure, resilient, and trustworthy.
- David Haddad Associate Director, Technology Risk Management - Ernst & Young
Leading A Successful Generative AI Journey: A CIO’s Guide
The potential of generative AI (GenAI) to enhance profitability and productivity is widely recognized. However, skepticism regarding ROI necessitates a strategic approach for chief information officers (CIOs) to effectively leverage GenAI. Working to exploit GenAI opportunities and establish robust GenAI programs, CIOs face challenges including data and infrastructure readiness, cyber risks and regulatory compliance. This presentation explores practical implications, such as the importance of implementing strong cybersecurity measures to protect data, as well as navigating emerging AI regulations that could result in financial penalties and operational disruptions. EY technology, strategy and transactions, and risk management professionals outline strategic approaches for CIOs, emphasizing the identification of GenAI opportunities and defining leadership archetypes based on organizational maturity levels. Various governance strategies and the role of centers of excellence are also discussed. Sourcing strategies highlight the importance of investing in core GenAI capabilities and partnering with external providers. Guidance for managing ROI through consistent measurement across development stages aims to drive strategic alignment with business objectives. The EY team concludes by outlining key steps for a successful GenAI journey. This presentation is a resource for CIOs, technology practitioners and risk management specialists aiming to navigate the complexities of GenAI adoption and risks while driving valuable outcomes.
- Barnaby Simkin Director, Trustworthy AI - NVIDIA
Trustworthy AI Element Out of Context
Foundation models are increasingly deployed in embodied AI systems, such as vision-language-action (VLA) models for humanoid robots, but ensuring their trustworthy performance outside their original development context remains challenging. Current model cards, short documents describing a model’s intended use and performance, often provide only static, high-level metrics that lack the specificity needed for safe reuse in new operational scenarios. In this paper, we propose an evaluation framework that embeds an AI model’s Operational Design Domain (ODD) into its model card and testing regimen. Our approach draws on the automotive concept of a Safety Element out of Context (SEooC), treating pre-trained AI components as modular safety elements developed on assumptions. We introduce novel techniques to characterize and validate model behavior across ODD dimensions: using pre-trained models as feature embedders to generate pseudo-labels for test data segmentation, and applying perturbation methods to stress-test robustness in rare or extreme conditions. By coupling thorough ODD-aligned evaluations with clear documentation of assumptions and results, developers can systematically assess whether an AI model (e.g. a VLA policy in a robot) can be trusted in a new context or if additional training and safeguards are required. This paradigm for trustworthy AI out-of-context facilitates the safe reuse of advanced AI models in embodied systems, accelerating innovation in robotics while managing safety, and ethical risks.
- Celina Stewart Director of Cyber Risk Management - Neuvik
Strong Arming and Appealing to Human-like Fallibility: How Attackers Manipulate AI Tooling
Many organizations have rapidly adopted Generative Artificial Intelligence (GenAI) tooling, using it to enhance productivity, facilitate customer interactions, and boost sales. However, most companies – even those with strong cybersecurity programs and AI governance – lack awareness of the ways GenAI tooling can be manipulated by malicious actors to bypass controls and reveal confidential data.
Using technical case examples, this talk highlights techniques attackers use to manipulate GenAI tools such as chatbots into revealing sensitive information. These include appeals to GenAI’s human-like desire to “get along” and “help” and its propensity to become “distracted” or “intimidated” if competing or forceful requests occur. Then, this talk will then showcase how these techniques are used to supercharge common intrusion tactics such as prompt injection, command injection and privilege escalation during the initial access and exploitation phase of an adversary’s attack path.
Attendees will take away a clear understanding of common methods used by adversaries to manipulate GenAI tools and bypass existing controls, as well as concrete guidance on how to incorporate these techniques into their own penetration testing programs to preemptively identify weaknesses.
- Eric Skinner VP of Market Strategy - Trend Micro
Modern Threats, Smarter Defenses: A Case-Based Look at Proactive Security in the AI Era
Inspired by case reports from Trend’s incident response team, we’ll explore a typical recent attack chain in detail, showing the latest efforts to stay under the radar of detection technologies. But proactive strategies are evolving rapidly, and we’ll replay the attack timeline together to see how exposure management makes all the attacker steps slower and more challenging, if not impossible. We’ll wrap up by reviewing some of the latest AI-specific enterprise risks, and the relevant proactive defense strategies.
- Oliver Friedrichs Co-founder and CEO - Pangea
AIDR? Why AI Demands its Own Detection & Response Strategy
AI is increasingly embedded in all aspects of compute with the real potential for agents, not humans, to soon become the majority users of software. This paradigm shift requires visibility, detection and security control measures comparable to those implemented for other attack surface layers such as networks and endpoints. This session will explore new threats introduced by AI using real-world attack data and present strategies for achieving visibility, detection and control footholds across all AI transit points.
- Mahesh Babu CMO - Kodem Security
Adversarial Intelligence: Production AI Systems Through the Eyes of the Attacker
This presentation explores Adversarial Intelligence - an approach that views the security of AI applications from an attacker’s perspective. Drawing from vulnerability research experience at the NSO Group and building Pegasus, the speaker will highlight how overlooked low and medium vulnerabilities can be combined to execute successful attacks. By examining attack chains and application runtime behavior, attendees will see how gaps often missed by traditional methods are exposed. Attendees will learn about effective tools and techniques for detecting and mitigating these threats, especially in cloud-native and distributed systems. Designed for security practitioners and academics, this session provides a deeper understanding of defending against emerging attack patterns specific to AI applications by adopting their mindset.
- Paul Starrett Founder - Starrett Consulting
Adversarial Machine Learning and AI Forensics
Artificial intelligence is now central to enterprise innovation, risk reduction, and profitability—making legal, regulatory, and risk preparedness a top priority. This presentation explores the AI lifecycle from inception to deployment, highlighting how implementations can be compromised through inadvertence, internal misuse, or external threats. We’ll examine systemic risks across the AI ecosystem and outline practical mitigation strategies. The session concludes with an overview of AI forensics—what to collect, how to do so defensibly, and its role in investigations, litigation, and audits.
Description
Artificial intelligence has become the new norm for enterprise competitive advantage, decreased risk and improved profit. Accordingly, we must be prepared for regulatory, legal and risk as a top priority.
In this presentation, Paul will cover the AI ecosystem, from inception, to development and then to deployment.
From this, we will examine ways in which artificial intelligence implementations can be compromised either through inadvertence or malfeasance. Artificial intelligence risks span the entirety of an ecosystem involving an interdisciplinary synergy that must be examined holistically.
This approach involves first understanding the ways in which AI implementations can be compromised by inadvertence, internal attacks, external threats. We will examine known risks as well as mitigation strategies to reduce risk across the AI-technology spectrum.
We will then review AI forensics which touches on what information should be gathered and how to do so in a forensically sound and defensible manner. This is most relevant as factual support for investigations, discovery in litigation and in audits.
Key Takeaways for Risk Professionals
AI Is a Risk Vector: AI systems introduce unique risks—legal, operational, ethical—that must be integrated into enterprise risk frameworks.
End-to-End Exposure: Risks can arise at any stage—design, development, or deployment—and require continuous, interdisciplinary oversight.
Compromise Is Multidimensional: AI can be undermined through inadvertent design flaws, insider misuse, or external attacks; vigilance must extend beyond traditional cyber controls.
Holistic Risk Mitigation: Effective controls include technical safeguards, governance policies, cross-functional coordination, and continuous monitoring.
AI Forensics Matters: In the event of an incident, knowing what data to preserve and how to collect it forensically is crucial for audits, investigations, and litigation.
Prepare for Regulatory Scrutiny: Emerging global regulations demand documentation, explainability, and defensible processes—risk teams must lead in ensuring compliance.
- Tamir Ishay Sharbat AI Security Researcher, CTO Office - Zenity
The Art of Prompt Injection and Making Your AI Turn on You
Promptware and prompt injections have been making waves across the cybersecurity world in the last year. Allowing hackers to hijack AI applications of any kind (autonomous agents included) for their own malicious purposes, they open the door to high-impact attacks leading to data corruption, data exfiltration, account takeover and even persistent C&C.
But crafting effective prompt injections is an art. And today, we’ll reveal its best kept secrets.
Together we’ll go through the principles of building effective and devastatingly impactful prompt injection attacks, effective against the world’s most secure systems. We’ll demonstrate access-to-impact exploits in the most prominent AI systems out there, including: ChatGPT, Gemini, Copilot, Einstein and their custom agentic platforms. Penetrating through prompt shields as if they were butter, and revealing every clever technique along the way.
We’ll see how tricking AI into playing games leads to system prompt leakage, and how we can use it to craft even better injections. We’ll understand why training LLMs for political correctness might actually make them more vulnerable. Why special characters are your best friend, if you just know where to place them. How you can present new rules that hijack AI applications without even having direct access to them. Ultimately instilling the ability to look at AI applications from a hacker’s perspective, developing the intuition for how to attack each one for the highest impact.
Finally, after dismantling every layer of prompt protection out there, we’ll discuss going beyond prompt shielding, and explore defense-in-depth for AI applications. Suggesting a new way into how we can truly start managing this threat in the real world.
- Ash Ahuja, CISM VP & Executive Partner, Security & Risk Management - Gartner
- Tim Silverline CISO - Rocket Lawyer
- Jarell Mikell Executive Director - Power Systems & Gas Cybersecurity - Southern Company
CISO Perspectives: Navigating the Security Landscape in 2025 [Panel]
In a world where cyber risk is business risk, today's Chief Information Security Officers are not just defenders of data—they are strategic partners driving organizational resilience. Join a high-impact panel discussion featuring several of the industry’s leading CISOs, moderated by Gartner's Ash Ahuja. This candid conversation will explore how security leaders are balancing innovation with risk management, influencing board-level decision-making, and navigating complex threat environments in 2025.
- Jason Ross Product Security Principal - Salesforce
Breaking the Black Box
Traditional security testing is neat and binary: find the bug, exploit the system, check the box. But when your target is a generative AI model that improvises, adapts (and sometimes lies with confidence) things get weird, fast.
This talk dives into the messy, fascinating world of AI red teaming, where success isn’t just about getting in, it’s about provoking behavior, exposing hidden biases, slipping past safety guardrails, and seeing what breaks when the rules bend.
We'll unpack why AI security demands more than traditional exploits, why your tools now need to think, and how testing has evolved from black-and-white checks to full-spectrum investigation.
If you’ve ever wondered how to secure a system that won’t stop changing (or how to test something that can talk back) this talk's for you!
- Alison Cossette CEO/Founder - ClariTrace
AI Is Making the Decisions—Where’s the Control Layer?
Last year’s conversations revealed a growing consensus: AI is no longer a tool—it’s a decision-maker. But while AI has moved upstream into business-critical workflows, our governance models remain stuck downstream—focused on models, not decisions. The result is a widening control gap that compliance frameworks, monitoring tools, and audit logs are no longer equipped to close.
This talk introduces the case for a true control layer for AI—a missing piece of enterprise architecture designed to enforce alignment, trace decision logic, and prevent high-impact failures before they happen.
Key insights include:
Why treating AI like software or data infrastructure misses the mark—and opens the door to systemic failure.
Where enterprise AI governance is failing today: misattributed influence, missing audit trails, and post-hoc compliance.
What’s required to govern decisions, not just models—across predictive, generative, and agentic AI systems.
How forward-leaning CISOs and CAIOs are redefining AI risk as a systems problem, not a tooling gap.
This session is designed for senior security, compliance, and AI leaders navigating the next wave of enterprise risk. It delivers a new mental model for governing AI systems—not just to satisfy regulators, but to protect the business.
- Daniel Ross Head of AI Compliance Strategy - Dynamo AI
From Assumptions to Assurance: Calibrating AI with Institutional Truth
Generative AI has made a number of recent 'up the hill' technical advances, from training time compute to recent advances on inference time, but that hasn't made risk management and compliance executives any more comfortable in deploying large scale AI to consumers. Central to this issue is the ability to apply an organizations or regions definition / perspective / ground truth to the management of the AI, so that its reasoning, safety, and security guardrails align to individual expectations. For example, your definition of ‘safety’ most definitely is not mine, nor others. And with regulators reminding organizations that AI must still comply with existing laws and regulations, the next advancement will be focused on ‘intelligent AI’, AI that can comprehend nuanced requirements, specific to each organization’s ground truth, in a defensible manner. In this talk, we will have a fun and interactive fireside chat discussion on the types of AI risk management controls that allow for a tailored ground truth that risk, legal, compliance, and AI leaders should be looking out for, including the types of evidence and skillsets needed to effectively oversee them.
Learning Objectives:
Awareness of critical AI controls throughout the AI lifecycle that support ground truth identification
Insights into AI risk management function of the future
Interactive engagement, clearly understanding that ground truth is not one size fits all
- Alex Bazhaniuk CTO - Eclypsium
Beneath the Prompt: The Hidden Risks Powering GenAI
As LLMs power more applications across industries, firmware and hardware security is now mission-critical. The attack surface has shifted downward, making AI infrastructure itself the new battleground. Securing GenAI involves both:
- Traditional cybersecurity controls (monitoring, patching, access controls)
- AI-specific governance frameworks (model integrity, supply chain verification)
The message is clear: securing the model is not enough—you must secure the machine it runs on. This talk will highlight the vulnerabilities in the infrastructure powering large language models (LLMs) and generative AI systems. It will focus on the hardware, firmware, and cloud components that support AI, revealing how these foundational layers are increasingly targeted by sophisticated attacks.
- Vishnupriya S Devarajulu Software Engineer - American Express
AI and It's Impact on Data Privacy and Technology
In this session, we will explore the critical role of safeguarding data privacy in the development and deployment of AI-driven software applications. With AI systems increasingly handling sensitive personal information, it is essential to understand the privacy challenges these technologies present. We will discuss how to implement privacy-preserving techniques, including differential privacy, data anonymization, and secure data storage, to protect user information. Through real-world examples and case studies, attendees will gain insights into the practical steps required to balance the innovative capabilities of AI with the necessary safeguards to ensure user trust and regulatory compliance. This session is ideal for anyone working on AI applications who wants to understand how to better safeguard data and respect privacy.
- James Sayles Chief AI Officer and Director of Global GRC - Halliburton
AI Under Fire: Securing Trust, Strategy, and Sovereignty in the Age of Intelligent Threats
As AI reshapes global industries and defense strategies, it also introduces unprecedented risks—deepfakes, adversarial manipulation, IP theft, and geopolitical destabilization. This session dives into how adversaries exploit AI for misinformation, brand attacks, and national security disruption—and what leaders can do to defend against it. Beyond the threats, we’ll explore how to plan an AI integration roadmap that protects intellectual property, embeds cybersecurity by design, and enhances enterprise risk management.
Drawing from real-world defense case studies and high-stakes risk mitigation strategies, this session equips security and business leaders with a battle-tested blueprint for AI resilience. We’ll also tackle the regulatory crossroads: How can we balance innovation with public interest, and what role does international collaboration play in securing responsible AI advancement?
Join Dr. JK. Sayles for a high-impact discussion on building sovereign, secure, and strategic AI ecosystems—where risk is managed, innovation is unleashed, and trust is earned by design.
- Charit Upadhyay Senior Site Reliability Engineer - Oracle
Can You Trust Your AI SOC Analyst? Testing the Limits of LLMs in Security Operations
LLMs are showing up in SOC tools, from log triage to incident summaries. But can we trust their outputs in critical workflows? This session explores the promises and pitfalls of using LLMs in security operations. We’ll evaluate real-world use cases like auto-generating detections, summarizing incidents, and helping with reverse engineering tasks. Through examples and benchmarks, we’ll explore where LLMs shine, where they hallucinate, and how to build secure, auditable pipelines around them. Attendees will leave with a framework to evaluate AI tools in the SOC, and a clear sense of when to automate, when to supervise, and when to just say no.
- Richard Bird Chief Security Officer - Singulr AI
Is AI Ready for Us?
In an era where artificial intelligence is rapidly redefining the boundaries of possibility, an uncomfortable question looms large: What if humans aren’t good enough for AI?
This provocative session, led by Richard Bird—renowned cybersecurity thought leader, identity security evangelist, and pragmatic disruptor known for his observations on data and citizen privacy, as well as API and AI security —will explore this unsettling idea through a lens of hard truths and candid insight.
In this session, we will examine how organizations are racing to integrate AI agents into environments where data classification, protection, identity security and governance have long been neglected. We will shine a light on the paradox of trusting AI with data that we have historically failed to safeguard. Moreover, the talk will delve into the troubling trend of AI startups failing to protect the most vulnerable—our children and marginalized communities—in their quest for innovation at any cost.
- Patrick Walsh CEO - IronCore Labs
Smart Tech, Dumb Moves: AI Adoption Without Guardrails
Generative AI is being rapidly adopted across industries and by consumers alike; but this surge in adoption is outpacing the development of effective security measures. Many of the threats facing AI systems are still poorly understood, and while a few protections exist, much of the ecosystem remains immature and vulnerable, filled with products that are ripe for breach.
This talk explores concrete examples of threats, real-world attacks, and systemic risks that every security professional should understand. It also provides guidance on how to critically evaluate vendors introducing AI features, helping you identify red flags and spot when security precautions are being underattended.
CISOs need to create clear guidelines for the use, adoption, and development of AI that minimize the risks while allowing their organization to benefit from the technology. This talk will shine a light on the threats to private data in AI and tactics to manage them.
- Jason Kramer Senior Software Engineering Researcher - ObjectSecurity
- Ulrich Lang CEO - ObjectSecurity LLC
Opening the Black Box: Trust and Transparency with AIBOMs
The open-source AI ecosystem is expanding rapidly, with pre-trained models, fine-tuned variants, and custom adapters widely available for download and deployment. But this ease of access comes with significant risk. Models may be poisoned, backdoored, trained on copyrighted data, or inherit vulnerabilities from upstream sources, often without sufficient documentation. As these models are reused and redistributed, organizations can unknowingly introduce technical and legal threats into their systems.
This session introduces the Artificial Intelligence Bill of Materials (AIBOM), a governance tool designed to bring visibility and accountability to the AI supply chain. Modeled after traditional software SBOMs, AIBOMs capture critical metadata such as model provenance, fine-tuning history, licensing, and known risks. We will explore how AIBOMs help developers and security teams better assess open-source models, avoid downstream vulnerabilities, and promote safer reuse.
- Josephine Liu Chief Commissioner, Public Policy Committee - Asia-Pacific Artificial Intelligence Association (AAIA)
Digital Sovereignty or Digital Fragmentation? Risks and Remedies in Global AI Governance
As artificial intelligence systems increasingly underpin economic infrastructure, public services, and geopolitical decision-making, the debate over digital sovereignty has become a defining regulatory challenge of our time. Governments around the world are asserting greater control over data, algorithms, and platforms—often in the name of national security, economic competitiveness, or ethical accountability. Yet this growing trend also risks producing digital fragmentation: a fractured global landscape in which incompatible regulatory regimes stifle cross-border innovation, inhibit scientific collaboration, and create blind spots in AI risk oversight.
This session will explore the tension between national digital sovereignty and the need for international regulatory coordination, drawing on case studies from the United States, European Union, and China. We will assess how divergent models of AI governance are shaping the contours of global AI development. The talk will examine where convergence is possible, where it is unlikely, and what strategies could mitigate fragmentation without compromising legitimate sovereign interests.
In a moment when technological acceleration outpaces policymaking, this session offers a pragmatic roadmap to align innovation incentives with public interest—without allowing global AI governance to splinter into irreconcilable silos.
Wednesday, August 20, 2025
- Sundar Chandrasekaran Principal Product Manager - Alexa AI Trust & Safety - Amazon
Taming Rogue Agents: Real-Time Mitigation Strategies for Multi-Agent and Multimodal AI at Scale
As Large Language Models (LLMs) become the foundation for multi-agent and multimodal systems, spanning voice assistants, chat interfaces, generative media tools, and autonomous agents, the risk of rogue or misaligned behavior is accelerating. Failures now make headlines, from chatbots producing unsafe or biased responses, to voice systems offering incorrect or harmful advice, to autonomous agents executing unintended actions.
This talk will present an industry level framework for detecting, containing, and preventing rogue AI behavior in real time. Drawing on cross-modal examples, spanning conversational AI, multimodal reasoning agents, and enterprise AI deployments. We will explore:
• Common failure modes in multi-agent architectures (e.g., policy overreach, unsafe delegation, compounding errors).
• Real-time mitigation playbooks, including autonomous hotfix pipelines, selective guardrails, and trust signal monitoring.
• Scalable safety strategies that balance rapid containment with minimal disruption to end users.
Attendees will leave with actionable guidance for operationalizing rogue AI mitigation, grounded in lessons from large scale, high traffic AI deployments across the industry.
Learning Objectives:
1. Identify systemic risks and rogue behaviors in multi-agent and multimodal AI.
2. Apply scalable, real-time guardrail and hotfix strategies across different AI modalities.
3. Design governance and escalation processes that prevent isolated failures from becoming systemic incidents.
- Blake Gilson Operational Technology Cyber Security and Risk Manager - ExxonMobil
Building AI into Industrial Environments: Practical Strategies for Secure and Scalable Deployment
AI presents tremendous opportunities for industrial organizations to improve efficiency, reduce downtime, and enhance decision-making. However, deploying AI in operational technology OT environments where uptime, safety, and security are critical, is uniquely challenging. This session will offer a practical strategies for successfully integrating AI into complex, legacy-rich industrial systems. Drawing on real-world experience from critical infrastructure and energy sectors, the session will outline key steps to assess AI readiness, system protection, and integration. Attendees will gain actionable guidance and design on mitigating cyber risks, AI pitfalls, and aligning AI efforts with broader enterprise security and compliance goals. This will focus on a Crawl, Walk, Run model of maturity,
We will explore common pitfalls, such as assuming IT patterns can directly apply to OT, and how to avoid them. The talk will also address the organizational challenges of AI adoption, including bridging IT/OT silos and building the right cross-functional teams. Whether you're starting your AI journey or scaling existing initiatives, this session will equip you with strategic and technical insights to safely and effectively deploy AI in industrial settings.
Learning Objectives:
Understand the differences in priorities between OT and IT systems
Define key technical and cultural prerequisites for AI in OT environments
Safeguard AI pipelines from cyber threats and data quality issues
Apply architecture patterns for edge, cloud, and hybrid AI deployment
Foster collaboration across IT, OT, and data science teams
- Ben Goodman Founder & CEO - CyRisk Inc.
AI and Risk Transfer: The Cyber Insurance Perspective
The insurance industry does not have a reputation for leading technical innovation, but cyber insurance is one line that has been forced to keep pace. This session addresses the intersection of AI and cyber risk from the cyber insurance perspective. Risk management executives, CISOs, Chief Privacy Officers, government officials, and policymakers will gain a understanding of the role AI risk transfer plays for organizations in the AI ecosystem. Attendees will learn what is required for cyber insurance to really work effectively as a risk transfer vehicle for the parties involved. Executives at companies developing or using AI will also come away with insight into how to determine the optimal cyber insurance coverage.
The cyber insurance industry has played an increasingly important role for technology innovators and their customers. As AI becomes a ubiquitous feature of the digital world, integrated into all levels of business operations, it introduces new cybersecurity challenges and adds new dimensions to the traditional cyberattack surface. Cyber insurance provides a useful risk transfer solution that addresses these evolving threats because AI development and deployment create new avenues for cyberattacks, including software supply chain risks from reliance on third-party AI components and increased data exposure due to the vast datasets AI processes. In addition, purveyors of “AI-powered” solutions face the same privacy liability, professional and product liability risks as any other software company, not to mention AI’s legal and regulatory landmines. There is a broad spectrum of AI risks and they are shared among the stakeholders including innovators, their customers, their vendors and their insurers.
The session will provide insight into the perspective of cyber insurance companies and their underwriters. Underwriters will soon be consumers of AI risk assessments at scale, as they are tasked with understanding and evaluating these new risk vectors. They must quickly assess an organization's AI security posture specifically concerning its AI systems.
While traditional cyber insurance policies may offer some baseline coverage, the unique nature of AI risks necessitates a closer examination of policy terms and potential coverage gaps. Insurers are beginning to recognize the need for more explicit coverage for AI-specific incidents. Potential AI-related risks that cyber insurance policies may cover or are evolving to cover include:
• AI Model compromise and/or failure
• Data breaches involving training data or the AI models themselves
• Business interruption resulting from cyberattacks against AI infrastructure, AI-driven processes or their supply chains.
• Ransomware attacks against, or facilitated by AI systems.
This session will also provide risk managers with insights into how the cyber insurance market is adapting to the age of AI, helping organizations understand the most relevant coverage available.
- Wendy Nather Senior Research Initiatives Director - 1Password
Secret Agent, Ma’am: New Rules For AI Access Management
How many identities should an AI agent be allowed to have? And how does authentication work when the agent is representing other identities? In this session we’ll talk about different risk-based approaches to a new breed of account, its entitlements, and the looming trap door we call delegation.
- Lauren Wallace Chief Legal Officer - RadarFirst
- Edna Conway CEO - EMC Advisors, LLC
AI Classification Without Chaos: Getting Ahead of the EU AI Act
As the EU AI Act begins its phased enforcement, organizations across industries are under pressure to accurately classify their AI systems and map associated legal obligations. But for many, this process is anything but simple—manual efforts are inconsistent, scattered across teams, and lack the audit-ready rigor regulators demand.
In this session, we’ll walk through a practical approach to operationalizing AI risk classification and controls mapping using automation and governance best practices. You’ll learn how to move beyond one-off classification efforts and toward a sustainable, repeatable process that scales with your organization and stays ahead of regulatory change.
- Kevin Kiley President - Airia
AI Risks Are Exploding: What You Need to Know Now to Prepare
This talk explores the urgent, rapidly evolving challenges of securing AI in the enterprise. We’ll examine risks from model manipulation, data leakage, adversarial attacks, and compliance gaps, highlighting why traditional security tools and policies fall short. Attendees will gain real-world insights into emerging strategies for protecting AI-driven systems at scale.
- Saloni Garg Senior Software Engineer - Wayfair
How We Audit ML Systems for Risk, Drift, and Misuse
As machine learning systems become deeply embedded in products, it’s not just accuracy that matters-- it’s accountability. This talk covers our internal approach to proactively identifying risks in ML workflows, from unintentional bias to model drift and even potential misuse. I’ll walk through how we adapted standard DevSecOps patterns (like monitoring, alerting, versioning) to the ML stack, and how we created a lightweight review system for ethical red flags.
- Max Leepson Senior Manager, Global Safety & Security - Salesforce
Same Data, Different Outcomes: How Prompt Variability Exposes Hidden AI Risks
Enterprises often invest significant effort into securing datasets, configuring responsible AI agents, and implementing strict guardrails. But what happens when different users — armed with the same inputs — prompt the system in entirely different ways? What if those prompts lead to radically divergent outputs, even within a tightly controlled environment?
In this session, we’ll explore a recent hands-on workshop designed to surface this very issue. During the session, over business and technical stakeholders interacted with the same generative AI agent, using the same underlying data and capabilities. The only variable? Their prompts. The results exposed a critical but often overlooked risk: prompt variability can lead to unpredictability, misalignment, or even reputational risk — all without any change to the model or its data.
We’ll present this as a case study in how to intentionally design exercises that make the risks of prompt-driven divergence visible to non-technical stakeholders. Attendees will gain practical insight into the limits of AI guardrails and the real-world complexity of human-agent interaction in enterprise settings.
- Beth George Partner, Co-head of Strategic Risk Management Practice - Freshfields
Seeing Risk: Legal and Privacy Pitfalls of Multimodal and Computer Vision AI vs Text-Based LLMs
As enterprises embrace multimodal AI and computer vision models, the legal and privacy risks multiply-often in ways that text-only large language models (LLMs) do not present. This session will examine the unique privacy and regulatory challenges introduced by AI systems that process images, video, audio and other non-textual data alongside text. We will explore how multimodal models not only expand the attack surface for adversarial threats, but also create new vectors for privacy violations, regulatory non-compliance, and legal liability.
● Increased Data Exposure: Multimodal models may process personal and sensitive data, including images, biometric identifiers, and contextual metadata. This aggregation heightens the risk of unauthorized data exposure, both during model training and inference, and introduces new obligations under privacy and security regulations.
● Informed Consent: The collection and use of visual and multimodal data can occur without explicit user consent or clear communication about secondary uses, raising significant compliance and ethical concerns. For example, training computer vision models on publicly available images without consent-as seen in high-profile facial recognition cases-has led to regulatory scrutiny and lawsuits
● Privacy Harms By Inference: Multimodal AI may infer sensitive personal attributes (such as health status or location) from seemingly innocuous images or sensor data. This risk is amplified by the richness and granularity of multimodal datasets.
● Adversarial Attacks and Data Leakage: Visual prompt injection and adversarial image attacks can bypass safety filters, leading to the generation or exposure of harmful or illegal content-sometimes at rates far exceeding those of text-only models. These attacks may also enable malicious actors to extract or reconstruct sensitive information from model outputs.
● Compliance and Transparency Challenges: The "black box" nature of advanced multimodal models makes it difficult for organizations to explain how personal data is processed, complicating compliance with privacy laws that require transparency, accountability, and the right to explanation for automated decisions.
Learning Objectives:
● Identify the specific privacy and legal risks unique to multimodal and computer vision AI compared to text-only LLMs
● Understand regulatory obligations around multimodal data collection, storage, and processing
● Develop strategies for obtaining informed consent, minimizing data exposure, and ensuring transparency in multimodal AI systems
● Assess technical and governance controls to mitigate privacy risks and support legal compliance
This session is vital for legal, compliance, and security professionals navigating the evolving landscape of multimodal AI, ensuring that innovation does not come at the expense of privacy and regulatory integrity.
- Sanjnah Ananda Kumar Product Manager - Salesforce
Emerging Threats from Accessible AI Image Generation
The rapid advancements in AI image generation have made creating realistic fake images accessible to virtually anyone, fundamentally altering our relationship with visual information. This session, kicking off with an eye-opening "AI or Reality?" game, will expose the emerging threats presented by this democratization of visual creation. We will delve into the risks associated with the increasing accessibility of AI image creation, exploring how these powerful tools are being exploited for privacy violations, financial fraud, and the widespread dissemination of misinformation. We will examine real-world examples of AI-generated forgeries, from fake insurance claims and fraudulent receipts to synthetic identities used to circumvent verification systems and viral hoaxes that erode public trust. The session will also cover practical techniques for identifying potentially manipulated or AI-generated images and actionable strategies individuals and organizations can adopt to protect digital identities and combat the spread of visual deception in this new era. Furthermore, we will discuss how enterprise organizations should consider developing mechanisms to detect fake images, including leveraging detection algorithms, watermarking, and content provenance initiatives. Finally, we will touch upon the broader emerging technological solutions and policy initiatives being developed to address these critical challenges.
Learning Objectives:
1. Understand the capabilities of modern AI image generation models and the resulting difficulty in distinguishing between AI-generated and real images.
2. Understand how the increased accessibility and sophistication of AI image generation tools contribute to emerging security and privacy risks.
3. Identify real-world examples of how AI-generated images are being used for malicious purposes, including financial fraud, identity theft, and misinformation campaigns.
4. Learn practical techniques and indicators to help detect AI-generated or manipulated images.
5. Explore actionable strategies for individuals and organizations to protect personal images and information and enhance digital self-defense against AI-powered deception.
6. Understand how enterprise organizations can develop and implement mechanisms for detecting fake images, such as using detection algorithms, watermarking, and content provenance standards.
7. Gain insight into the technological and regulatory landscape evolving to combat AI image misuse, including detection algorithms, watermarking, and content provenance, and policy frameworks.
- Shawn Marriott CTO - Canary Trap Inc.
Vibe Coding: Uncovering the Hidden Risks of Typosquatting and Supply Chain Attacks
AI-assisted coding is democratizing software development, empowering anyone to build applications at unprecedented speed. But this "vibe coding" trend—rapid prototyping by individuals without formal training—also creates new security challenges. From typosquatting attacks to dependency hijacking, attackers are targeting these environments, exploiting developer overconfidence and expanding the attack surface. This session will equip security leaders with insights into these emerging risks and practical steps to secure their software supply chains in an AI-driven world.
- Aderonke Akinbola Technical Program Manger - Google
Beyond the Breach: Analyzing AI System Failures, Safeguarding Data, and Addressing Ethical Risks
This session will provide a comprehensive look at the multifaceted risks inherent in AI systems, moving from external threats to internal failures and profound ethical challenges. We will delve into safeguarding AI systems against cyber threats and hacking, exploring strategies for preventing data breaches and information theft that target the sensitive data powering these models. The session will also analyze common causes of AI system failures, illustrating these points through real-world case studies that reveal unexpected vulnerabilities and consequences. Furthermore, we will navigate the critical ethical debates surrounding AI, addressing crucial issues like privacy violations, algorithmic bias, the risks in critical decision-making processes, and the ethical implications when AI systems are maliciously manipulated or fail. Attendees will gain a holistic understanding of the AI risk landscape, practical risk mitigation strategies, and insights into the ethical considerations essential for responsible and secure AI deployment in anticipation of evolving compliance and regulatory demands.
- David Brauchler III Technical Director - NCC Group
An AI Pentester's Reflections On Risk
Despite AI's complex and pervasive growth across products and services, most organizations find difficult to tangibly define AI risk, let alone mitigation and management. Yet industry continues an ever-pressing push toward deeper and more powerful integration of AI technology, which is only accelerated by the reach into agentic software and design patterns.
Drawing from extensive cross-sector engagements, NCC Group's AI/ML security practice lead will analyze the most significant risk vectors we've observed reoccur in AI implementations and the real, impactful vulnerabilities that have emerged from this computing paradigm. This talk outlines:
* How AI impacts Confidentiality, Integrity, and Availability of critical assets within organizations
* Why organizations find it difficult to apply traditional security models to AI systems
* The impact of agentic AI on system security
* How we can apply security fundamentals to AI
* What lessons we can draw from previous paradigm shifts
Attendees will walk away with a clear understanding of AI security's "state of play," including tangible AI risks along with their requisite remediation mechanisms. They'll leave equipped to lead and direct secure AI deployments using state-of-the-art defensive practices adopted by AI-mature organizations at the forefront of modern AI security.
- Vaishnavi Gudur Senior Software Engineer - Microsoft
Ethical AI Practices: Balancing Innovation with Responsibility
As AI transforms industries, organizations face a critical challenge: harnessing its potential while ensuring ethical integrity. This talk explores actionable strategies to embed responsibility into AI innovation, drawing from real-world implementations at scale, including Microsoft Teams’ infrastructure serving 145M+ users.
Challenges & Solutions:
AI systems risk perpetuating biases, lacking transparency, or causing unintended harm. Without ethical safeguards, organizations face reputational damage and regulatory penalties. To address this, we examine frameworks like Microsoft’s Responsible AI Standard, integrating fairness, privacy, and accountability into the development lifecycle. Tools such as Fairlearn detect biases during training, while SHAP provides model interpretability, ensuring compliance with regulations like the EU AI Act.
Real-World Impact:
Case studies reveal how bias audits reduced disparities in user feedback analysis by 40%, and transparent data handling aligned with GDPR. Post-deployment, Azure Machine Learning’s monitoring tools flagged 25% fewer ethical incidents through real-time anomaly detection.
Key Takeaways:
Proactive Bias Mitigation: Audit datasets and models pre-deployment using fairness indicators.
Explainability for Trust: Simplify AI logic with techniques like LIME for stakeholder buy-in.
Governance & Accountability: Create cross-functional ethics boards to oversee AI lifecycle risks.
For technical and non-technical leaders alike, this session equips teams to transform ethical practices into innovation drivers—ensuring AI advances progress without compromising societal trust.
- Chris Brown New Cyber Executive - CISO & Executive Coach
- Andy Caspersen Former CISO at Gap & Charles Schwab - ECM Security
- Richard Bird Chief Security Officer - Singulr AI
Redefining the CISO: Aligning Security Leadership Beyond the Breach [Panel]
The role of the CISO is evolving—fast. In this panel discussion, we’ll challenge traditional assumptions about what it means to be a security leader in today’s business landscape. From driving measurable business value to cultivating the next generation of cybersecurity leadership, this session brings together seasoned CISOs and emerging voices to explore how the security function can become more integrated, strategic, and future-ready. Join us for a candid conversation about expanding influence, enabling innovation, and shaping what comes next.
- Tsvi Korren Field CTO - Aqua Security
Deploying AI On-prem? Now Secure It!
AI is no longer experimental, it’s operational. Enterprises are deploying AI models into production applications where they interact with sensitive data, call backend systems, and make real-world decisions. The use cases might be new, but the risks are familiar. Privilege escalation, supply chain compromise, data exfiltration, and unauthorized execution now flow through a different path: the prompt.
In this session, we’ll walk through how on-prem and private AI deployments actually work, from user input to inference to tool execution. We’ll dissect the modern AI stack, illustrate where risks accumulate, and show how those risks resemble what we've long dealt with in containerized applications.
Key topics covered:
• How AI workloads show up in enterprise applications
• What a production AI transaction looks like under the hood
• Where traditional controls (SAST, DAST, firewalling) fail
• How AI risks like prompt injections can lead to familiar attack paths
• The security capabilities required to safeguard the use of AI, from container to model
We’ll focus on practical architecture and operational controls for AI workloads, especially when they are built from open-source code, exposed to user input, run in containers, and make privileged decisions. We will explore how AI deployments deserve the same baseline protections we already apply to modern applications, plus AI-specific extensions.
- Charit Upadhyay Senior Site Reliability Engineer - Oracle
When AI Agents Go Rogue - Securing Autonomous AI Systems Before They Act
Autonomous AI agents are no longer theoretical. They’re building workflows, calling APIs, writing code, and making decisions at scale. But with that power comes risk, new, emergent, and often unpredictable. As agent frameworks like AutoGPT, LangGraph, CrewAI, and custom orchestrators gain adoption, organizations must ask: What happens when your AI doesn’t just hallucinate but acts?
In this talk, Advait Patel, cloud security engineer and contributor to the Cloud Security Alliance’s AI Control Matrix, will unpack the risks associated with AI agents acting autonomously in production environments. Through real-world examples and red-team simulations, we’ll explore how agentic systems can be manipulated, coerced, or simply misaligned in ways that lead to security incidents, privacy violations, and cascading system failures.
Topics we’ll cover:
- How AI agents make decisions and where control is lost
- Prompt injection + tool usage = real-world lateral movement
- Over-permissive action spaces: API abuse, identity leaks, and shadow access paths
- Why traditional threat modeling fails for agentic workflows
- Techniques to sandbox, constrain, and monitor AI agents (function routers, policy-as-code, response filters)
- Logging and observability for “invisible” agent behavior
The attendees will walk away with:
- A framework to assess agentic AI security posture in your environment
- Examples of attack chains involving AI agents, cloud APIs, and dynamic plugin execution
- Architectural patterns to deploy secure-by-design agent frameworks in enterprise settings
- Recommendations for SOC teams on how to detect and respond to rogue agent behavior
This session is designed for CISOs, security architects, red teams, and AI product engineers who are exploring or deploying autonomous AI systems. If your LLM can act, it can be exploited, and this talk will show you how to defend against that future.
- Sharon Augustus Lead Product Security Engineer - Salesforce
- Jason Ross Product Security Principal - Salesforce
Prompt Defense "A Multi-Layered Approach"
Large Language Models (LLMs) are reshaping how we build applications—but with great power comes great vulnerability. Prompt injection attacks exploit the very thing that makes LLMs so useful: their ability to follow natural language instructions. The result? Malicious prompts that can hijack model behavior, often in subtle and dangerous ways.
While prompt injection is now widely recognized, the defenses being deployed across the industry often fall short. Why? Because what works in one context—one model, one use case—can completely fail in another. In this talk, we’ll go beyond just classifying attack types to focus on what really matters: how to build prompt defenses that actually work.
We’ll dig into practical, layered defense strategies—like prompt hardening, input/output validation, and system prompt design—while highlighting why secure prompting must be tailored to your model architecture, application flow, and risk surface. From SLMs to multi-modal inputs, we’ll show how “one prompt to rule them all” just doesn’t exist.
You’ll also get an overview of emerging tools for stress-testing and validating your prompt security, helping you move from reactive patching to proactive defense. If you're building with LLMs, it's time to think beyond generic guardrails and start securing prompts like it actually matters—because it does.
- Jyotheeswara Reddy Gottam Sr Software Engineer - Walmart Global Technology
The Triple Threat: How AI Technologies Reduce Testing Costs While Improving Quality Metrics
This presentation explores the transformative integration of three cutting-edge AI technologies in software quality assurance that collectively reduce testing costs across enterprise implementations. By combining generative AI for test script creation, machine learning-based predictive defect analytics, and self-healing automation frameworks, we establish a continuous quality feedback loop that dramatically improves testing efficiency. Our longitudinal study across healthcare, fintech, and e-commerce implementations reveals that generative AI significantly reduces test creation time while increasing test coverage. Predictive analytics successfully identifies high-risk code modules before deployment, allowing targeted testing that prevents potential critical defects from reaching production. Most impressively, self-healing frameworks substantially decreases test maintenance overhead, virtually eliminating false positives from UI changes and saving considerable engineering hours quarterly. This presentation provides both theoretical frameworks and practical implementation guidelines drawn from real-world deployments affecting millions of users. We'll examine the architectural integration patterns that proved most successful, discuss the ethical AI governance frameworks we established, and share our toolchain integration approaches that maintain reliable testing even in high-velocity deployment environments. Attendees will gain actionable insights into establishing AI-enhanced quality assurance practices that simultaneously improve quality metrics while dramatically reducing resource requirements.
- Amitabh Kumar CoFounder - Contrails
Deepfake Detection: Safeguarding Trust in the Age of Synthetic Media
In today's digital landscape, the proliferation of AI-generated synthetic media presents an unprecedented challenge to online trust. Deepfakes—hyper-realistic forgeries that can manipulate faces, voices, and entire identities—have evolved from technological curiosities into serious threats affecting commerce, politics, and personal security across digital platforms worldwide.
The statistics are alarming: deepfake incidents have surged by 900% since 2022, with malicious actors leveraging increasingly accessible generation tools to create convincing fake celebrity endorsements, fraudulent marketplace listings, and targeted disinformation campaigns. Research shows that 73% of consumers abandon platforms they perceive as unable to address synthetic content threats, creating both immediate financial impacts and long-term reputational damage.
At Contrails, we have been at the forefront of deepfake detection, working collaboratively with leading fact-checking organizations and providing critical protection for Fortune 500 C-suite executives increasingly targeted by synthetic media attacks. Our pioneering work has established benchmarks for the industry in both detection accuracy and implementation strategies across varied digital environments.
This comprehensive session explores the multifaceted approach required to combat deepfake proliferation through cutting-edge detection technologies and strategic implementation. We'll examine multimodal AI analysis systems that achieve 95%+ accuracy by simultaneously evaluating visual inconsistencies, audio anomalies, and metadata fingerprints that human observers might miss. Leading solutions like Microsoft Video Authenticator and Intel's FakeCatcher demonstrate how platforms can deploy real-time screening at scale while minimizing false positives.
Beyond technical solutions, we'll dissect emerging trends reshaping the detection landscape, including the ongoing "AI vs. AI arms race" where detection systems and generation capabilities continuously evolve in response to each other. Decentralized approaches are democratizing access to protection, with open-source APIs and federated learning frameworks enabling even smaller platforms to implement robust defenses without prohibitive investments.
The regulatory environment adds further urgency, as the EU's Digital Services Act, evolving GDPR applications, and proposed U.S. legislation increasingly mandate synthetic media transparency and verification capabilities. Through case studies like eBay's 60% reduction in fraudulent listings using combined AI detection and crowdsourced verification, we'll illustrate practical implementation strategies balancing technological and human elements.
- Celina Stewart Director of Cyber Risk Management - Neuvik
Framework Failings: Addressing the Lack of Responsible Deployment Guidance in Existing AI Frameworks
Most AI governance frameworks provide extensive guidance on ethical AI development, aiming to ensure that companies building AI do so responsibly. However, most organizations – public and private – are not developing AI models. Instead, they purchase commercial AI tools to enhance some component of their workflow. This begs a critical question: how can organizations be expected to responsibly and securely deploy AI when this topic isn’t emphasized in any of the major AI frameworks?
This talk argues that they can’t – and that lack of emphasis on responsible deployment is an extremely concerning risk in today’s technology environment. First, this talk will showcase the shortcomings of existing frameworks, including the NIST AI Risk Management Framework (NIST AI 100-1, 600-1), OECD AI Principles, and the UNESCO AI Ethics Recommendations. We’ll then unpack cybersecurity, data privacy, and productivity use cases to showcase how a lack of responsible deployment leads to tangible business risk. Finally, we’ll discuss concrete components of responsible AI deployment that should be incorporated not only into AI governance practices by companies buying AI tools, but into the AI governance frameworks themselves.
Attendees will leave with a deeper understanding of the limitations in existing AI governance frameworks, how a lack of responsible “deployment” guidance leads to risk, and practical considerations to use in lieu of existing framework guidance for AI deployment.
- Vasudha Hegde Senior Privacy Program Manager - DoorDash
Future-Proofing AI/ML Compliance Through Strong Data Privacy Foundations
In this session, I will explore how strong privacy practices today are the foundation for future-proof AI systems—enabling organizations to move faster, scale globally, and stay ahead of emerging regulations. As AI governance frameworks take shape worldwide, privacy is no longer just a compliance checkbox; it’s a critical enabler of trust, responsible innovation, and operational resilience.
Drawing on real-world case studies and scenario-based insights (including breakout exercises), I’ll illustrate how teams that embed privacy early in the AI lifecycle are better positioned to adapt to regulatory change, avoid costly rework, and build systems that are transparent, accountable, and ethical by design. Attendees will leave with practical takeaways on how to operationalize privacy as a strategic asset in AI development.
- Oliver Szimmetat Director of Security and Compliance - Taxbit
Understanding and Mitigating Risks Introduced by LLM Agents
This presentation delves into the cyber security risks posed by Large Language Model (LLM) based agents.
It will introduce a structured approach for threat modeling these agents and their frameworks, highlighting the various vulnerability classes they may introduce.
Attendees will gain insights into common threats such as data leakage, adversarial attacks, and unauthorized access.
Furthermore, the presentation will discuss effective security measures to mitigate these risks, ensuring that organizations can leverage the power of LLM agents while maintaining robust cyber security defenses.
- Ashok Prakash Senior Principal Engineer - Oracle
Scaling AI Infrastructure: Navigating Risks in Distributed Systems
As organizations increasingly integrate AI into their operations, the scalability of AI infrastructure becomes paramount. However, scaling introduces a spectrum of risks, from data inconsistencies and model drift to system failures and security vulnerabilities. Drawing from my experience leading AI infrastructure projects at Fortune 50 companies and major cloud providers, this session will delve into the challenges and solutions associated with scaling AI systems.
Key discussion points will include:
Designing resilient distributed systems that mitigate common failure points.
Implementing robust monitoring and observability to detect and address anomalies proactively.
Ensuring data integrity and consistency across diverse pipelines.
Balancing scalability with compliance, especially in regulated industries like healthcare.
Fostering cross-functional collaboration to align technical solutions with organizational risk management strategies.
Attendees will gain actionable insights into building scalable AI infrastructures that are not only efficient but also resilient against potential risks.
- Wesley Ramirez Senior Principal Model Governance - Discover Financial Services
From Idea to Reality: Bringing AI/GenAI Risk Management to Life in Finance
While the world of AI/GenAI heats up, it is essential to prepare your bank organization to lead with innovation and a risk management mindset. Join this engaging session to learn how you can plan for an AI/GenAI Risk Management upgrade, from ideation with key stakeholders to managing the new risks for your bank. During this session, you will hear from Wesley Ramirez, who led the roll out of the AI/GenAI Risk Management framework for Discover Financial Services, sharing pro-tips and techniques for managing AI/GenAI use cases, including organizational considerations, enterprise risks, and ongoing monitoring. If you want to prepare your organization for success and responsible AI, you won't want to miss this!
- Naman Goyal Machine Learning Engineer - Google Deepmind
The Ascendancy and Challenges of Agentic Large Language Models
The development of Large Language Models (LLMs) has shifted from passive text generators to proactive, goal-oriented "agentic LLMs," capable of planning, utilizing tools, interacting with environments, and maintaining memory. This talk provides a critical review of this rapidly evolving field, particularly focusing on innovations from late 2023 through 2025. We will explore the core architectural pillars enabling this transition, including hierarchical planning, advanced long-term memory solutions like Mem0 , and sophisticated tool integration. Prominent operational frameworks such as ReAct and Plan-and-Execute will be examined alongside emerging multi-agent systems (MAS). This talk will critically analyze fundamental limitations like "planning hallucination" , the "tyranny of the prior" where pre-training biases override contextual information, and difficulties in robust generalization and adaptation. We will also discuss the evolving landscape of evaluation methodologies, moving beyond traditional metrics to capability-based assessments and benchmarks like BFCL v3 for tool use and LoCoMo for long-term memory.
Furthermore, the presentation will address the critical ethical imperatives and safety protocols necessitated by increasingly autonomous agents. This includes discussing risks like alignment faking, multi-agent security threats , and the need for frameworks such as the Relative Danger Coefficient (RDC).
Finally, we will explore pioneering frontiers, including advanced multi-agent systems, embodied agency for physical world interaction, and the pursuit of continual and meta-learning for adaptive agents. The talk will conclude by synthesizing the current state, emphasizing that overcoming core limitations in reasoning, contextual grounding, and evaluation is crucial for realizing robust, adaptable, and aligned agentic intelligence.
- Joey Melo AI Redteaming Specialist - Pangea
What a $10,000 Challenge and 300K+ Prompt Injection Attempts Taught Us About Attacking AI
Over the course of 4 weeks in March 2025 we ran a $10,000 Prompt Injection Challenge where contestants competed to bypass 3 Virtual Escape Rooms consisting of 11 levels. Like golf, winners were scored on the lowest number of tokens used to bypass a level. Levels increased in difficulty and were protected by increasingly sophisticated guardrails. The challenge attracted thousands of participants.
We collected a broad set of Prompt Injection attacks that allowed us to build a comprehensive Taxonomy on Prompt Injection with well over 100 methods. We believe that this is the most comprehensive collection of Prompt Injection methods to date. The challenge winner, Joey Melo, who is also now Pangea's AI Red Teaming Specialist, will walk you through the data and learnings and show you how he beat the challenge, covering:
- Data-driven insights into how attackers manipulate generative AI systems.
- A comprehensive Taxonomy of Prompt Injection methods built on this data.
- Leading approaches to detecting and preventing Prompt Injection.
- Suraj Jayakumar AI Scientist - Block, Inc.
Evaluating and Monitoring LLM-Powered Applications at Scale
Large Language Models (LLMs) have transformed how businesses automate complex workflows.
At Block Inc., we've integrated LLMs deeply into our operational fabric, automating critical risk operations tasks with significant business impact.
However, deploying LLMs into production is just the beginning—continuously evaluating their effectiveness and maintaining visibility into their performance presents significant challenges.
This talk provides a deep dive into practical frameworks and methodologies for evaluating, monitoring, and improving LLM-based applications at scale.
We'll explore:
Techniques for robust prompt engineering: How do we effectively design, test, and iterate on prompts to ensure maximum impact?
Evaluation frameworks: Leveraging LLMs themselves as "judges" to measure the quality and effectiveness of applications.
Continuous performance monitoring: Strategies to track LLM effectiveness over time, identify performance drift, and proactively address degradation.
Observability and user impact: Using telemetry data, session replay, click tracking, and A/B testing to measure real-world usage and value.
- Peter Ableda Director of Product Management - Cloudera
Building a Secure Foundation: Essential Components of a Private AI Enterprise Stack
Enterprises are eager to leverage AI, but the inherent privacy risks of relying on public AI services, particularly for regulated industries, remain a significant barrier. This session will explore the core principles of Private AI – including the strategic use of open-source, private deployments, and controlled data access – and how they form the foundation for building a secure enterprise AI stack in-house. We will delve into the critical components necessary to maintain control over sensitive data and AI workloads, such as the selection of both commercial and open-source AI models that can be hosted within your firewalls, secure infrastructure options (private clouds, on-premise data centers), essential infrastructure like orchestration and high-performance inference engines, model registries for reproducibility and governance, and the vital role of an AI gateway for security, auditability, and access control. Attendees will gain a comprehensive understanding of how these interconnected components, guided by the principles of Private AI, work together to effectively mitigate risks, ensure compliance, and establish a trustworthy and flexible foundation for enterprise AI adoption.
Learning Objectives:
- Understand the core principles of Private AI.
- Ley challenges and risks associated with using public AI services for enterprise data and workloads.
- Components required to build a secure and open private AI stack for production deployment.
- Key considerations when designing a private AI platform.