Tuesday, August 19, 2025

AI Risk Summit Morning Keynote

Join us for this keynote as we kick off the 2025 AI Risk Summit!

AI Risk Summit Track 1 (Salon I)
Tue 9:00 AM - 9:45 AM

CISO Forum Tuesday Morning Keynote

CISO Forum Track (Salon III)
Tue 9:00 AM - 9:45 AM

When AI Agents Go Rogue: Unmasking Risky Enterprise AI Behavior with Unsupervised Learning

As enterprises rapidly adopt AI agents (e.g., Salesforce's Agentforce), a critical risk emerges: misconfigured or compromised agents performing anomalous, potentially harmful, data operations. This presentation unveils an original, practical methodology for detecting such threats using unsupervised machine learning.

Drawing from a real-world Proof-of-Concept, we demonstrate how behavioral profiling—analyzing features engineered from system logs like data access patterns, query syntax (SOQL keyword analysis), and IP usage, along with signals from the content moderation mechanisms embedded within the LLM guardrails such prompt injection detection and toxicity scoring—can distinguish risky agent actions. We explore the creation of 30+ behavioral features and the application of KMeans clustering to identify agents exhibiting statistically significant deviations, serving as an early warning for misuse or overpermissive configurations. We will share insights into observed differences between AI agent and human user profiles, and challenges like crucial data gaps that impact comprehensive monitoring.

This session offers a vendor-neutral, technical deep-dive into a novel approach for safeguarding enterprise AI deployments.

Learning Objectives for Attendees:

1. Understand the novel security risks posed by misconfigured/overpermissive enterprise AI agents.
2. Learn a practical methodology for behavioral profiling of AI agents using unsupervised ML and log data.
3. Identify key data features, feature engineering techniques (e.g., for SOQL analysis), and common data challenges (log gaps, attribution) in AI agent monitoring.
4. Gain actionable insights to develop proactive detection strategies for anomalous AI agent activity and protect sensitive data.

AI Risk Summit Track 2 (Salon II)
Tue 9:45 AM - 10:15 AM

Using Incident Response Practice For Stealth Risk Analysis

As a CISO, it's hard to get the attention of executive leadership amongst all the competing business issues. One big problem is that even if they agree on the potential impact of an incident, they won't agree on the probability of it happening, so your plans get lost in the shuffle. In this session we'll talk about one way to get them to take the risks as seriously as you do: tabletop exercises. Put on your social engineering hats, and prepare for the kind of fun that usually only the red team gets to have.

CISO Forum Track (Salon III)
Tue 9:45 AM - 10:15 AM

From Misfire to Mastery: AI Discovery as Strategic Risk

Join this session presented by Harald Ujc (CTO, Invenci) to learn how small and mid-sized businesses (SMBs) face major risks with AI adoption — not because of the technology itself, but due to poor discovery and problem definition.

AI Risk Summit Track 1 (Salon I)
Tue 10:15 AM - 10:45 AM

Patching Critical Infrastructure: Lessons from DARPA’s AI Cyber Challenge

DARPA and ARPA-H are on a mission to advance AI-driven cybersecurity and usher in a future where we can patch vulnerabilities before they can be exploited. AI Cyber Challenge Program Manager Andrew Carney will discuss lessons learned from competition and how the program is driving the innovation of responsible AI systems designed to address some of our most important digital issues today: the security of critical infrastructure and software supply chains.

AI Risk Summit Track 2 (Salon II)
Tue 10:15 AM - 10:45 AM

Adversarial AI Risk: Your Next Incident Won’t Be an 0Day

Most AI failures won’t come from novel exploits. They’ll come from assumptions no one tested. This talk breaks down the real threats already happening and shows why red teaming is the best way to catch what others miss. From nation-state actors to prompt-based jailbreak kits, you’ll learn how adversaries think and how to get ahead of them. If your model is in production, it’s already in scope.

AI Risk Summit Track 1 (Salon I)
Tue 11:00 AM - 11:30 AM

Economic Impact of Securing AI

As artificial intelligence (AI) becomes an increasingly integral part of global infrastructure, commerce, defense, and daily life, the imperative to secure these systems is no longer a technical concern alone—it is an economic necessity. This keynote explores the intersection of cybersecurity and AI through the lens of economic strategy, risk modeling, and incentive alignment, presenting a holistic framework for understanding and addressing the financial realities of securing AI.

Securing AI systems involves unique challenges: data poisoning, model inversion, adversarial attacks, and algorithmic manipulation. Unlike traditional software, AI models can be subverted not just through code exploits, but through the very data and feedback loops that drive their behavior. The existing investments we have for traditional Cyber Security do protect models - they provide only indirect protection at best. This misunderstanding of the existing controls can create a misalignment of not only spending but incentives among developers, users, regulators, and attackers, which traditional security economics already struggles to address.

In this talk, I will examine how context is key and cash is king. We will examine the existing controls in use in organizations and how those controls do not protect AI models from attacks. We will discuss financial materiality and material risk to help refine how to think about the incentives for companies to invest in AI security when the threats are diffuse and the benefits of prevention may be difficult to quantify. I will walk through how to construct an economic impact analysis for securing AI including how to evaluate various control options on total costs as well as sufficiency of control. I will also share recent trends in cyber security insurance and the lack of coverage benefits for AI models.

Drawing from my real-world experience as a former finance leader in addition to my years running security, this keynote offers a strategic view of AI security as an investment problem, a game-theoretic challenge, and a policy frontier which we all need to navigate to protect the promise of AI and avoid the perils that could occur. Attendees will leave with a deeper understanding of how economic impact and total cost models can inform the design of secure AI systems and influence corporate decision-making

By treating security not as just a cost but as a critical enabler of AI growth, we can move toward a future where AI systems are not only powerful, but are secure, resilient, and trustworthy.

AI Risk Summit Track 2 (Salon II)
Tue 11:00 AM - 11:30 AM
  • David Haddad Assistant Director - Technology Risk Management - Ernst & Young

Leading A Successful Generative AI Journey: A CIO’s Guide

The potential of generative AI (GenAI) to enhance profitability and productivity is widely recognized. However, skepticism regarding ROI necessitates a strategic approach for chief information officers (CIOs) to effectively leverage GenAI. Working to exploit GenAI opportunities and establish robust GenAI programs, CIOs face challenges including data and infrastructure readiness, cyber risks and regulatory compliance. This presentation explores practical implications, such as the importance of implementing strong cybersecurity measures to protect data, as well as navigating emerging AI regulations that could result in financial penalties and operational disruptions. EY technology, strategy and transactions, and risk management professionals outline strategic approaches for CIOs, emphasizing the identification of GenAI opportunities and defining leadership archetypes based on organizational maturity levels. Various governance strategies and the role of centers of excellence are also discussed. Sourcing strategies highlight the importance of investing in core GenAI capabilities and partnering with external providers. Guidance for managing ROI through consistent measurement across development stages aims to drive strategic alignment with business objectives. The EY team concludes by outlining key steps for a successful GenAI journey. This presentation is a resource for CIOs, technology practitioners and risk management specialists aiming to navigate the complexities of GenAI adoption and risks while driving valuable outcomes.

CISO Forum Track (Salon III)
Tue 11:00 AM - 11:30 AM

Emerging AI Attack Vectors: The Rising Threat of MCP-Enabled Attacks on Agentic AI

As generative AI systems become increasingly embedded within enterprise applications and critical infrastructure, attackers are rapidly evolving new methods to subvert their behavior. This session provides a deep dive into the emerging landscape of prompt injection vulnerabilities, with a particular focus on Model Context Protocol (MCP)—a rapidly growing surface for sophisticated, indirect exploitation.

We begin by mapping out the new attack vectors that go beyond traditional prompt manipulation, including:
• Advanced indirect injections through RAG systems and stored memory,
• Multimodal injections leveraging audio, images, and steganography,
• And most critically, MCP-based attacks that exploit tool descriptions, agent planning logic, and retrieval-agent deception techniques.

Attendees will gain insight into how MCP tool poisoning, RADE attacks, and “rug pull” strategies exploit the trust models embedded within AI agents, allowing attackers to hijack LLM behavior without direct interaction.

We also explore the “distraction effect”—a novel mechanism where attackers manipulate internal attention weights within transformer architectures—and the “policy puppetry attack”, which uses leetspeak, roleplay, and structured inputs to bypass safety filters across models.

The session closes by reflecting on the fundamental architectural challenges in mitigating these threats, examining why traditional input/output filtering, prompt engineering, and adversarial training may fall short.

This is a must-attend session for AI researchers, security professionals, and developers who want to stay ahead of evolving threats and understand how protocols like MCP are transforming both AI capabilities and their associated risks.

Focus Track (Salon IV)
Tue 11:00 AM - 11:30 AM

Measuring the Propagation of Bias in Foundation Model-Based AI Systems

As general-purpose foundation models become integral to a wide array of downstream AI applications, concerns over their embedded biases and the extent to which these biases propagate into fine-tuned models and real-world systems are increasingly critical. This presentation provides a systematic framework for measuring bias transmission from pretraining in foundation models to deployment in task-specific applications.

AI Risk Summit Track 1 (Salon I)
Tue 11:30 AM - 12:00 PM

Strong Arming and Appealing to Human-like Fallibility: How Attackers Manipulate AI Tooling

Many organizations have rapidly adopted Generative Artificial Intelligence (GenAI) tooling, using it to enhance productivity, facilitate customer interactions, and boost sales. However, most companies – even those with strong cybersecurity programs and AI governance – lack awareness of the ways GenAI tooling can be manipulated by malicious actors to bypass controls and reveal confidential data.

Using technical case examples, this talk highlights techniques attackers use to manipulate GenAI tools such as chatbots into revealing sensitive information. These include appeals to GenAI’s human-like desire to “get along” and “help” and its propensity to become “distracted” or “intimidated” if competing or forceful requests occur. Then, this talk will then showcase how these techniques are used to supercharge common intrusion tactics such as prompt injection, command injection and privilege escalation during the initial access and exploitation phase of an adversary’s attack path.

Attendees will take away a clear understanding of common methods used by adversaries to manipulate GenAI tools and bypass existing controls, as well as concrete guidance on how to incorporate these techniques into their own penetration testing programs to preemptively identify weaknesses.

AI Risk Summit Track 2 (Salon II)
Tue 11:30 AM - 12:00 PM

Modern Threats, Smarter Defenses: A Case-Based Look at Proactive Security in the AI Era

Inspired by case reports from Trend’s incident response team, we’ll explore a typical recent attack chain in detail, showing the latest efforts to stay under the radar of detection technologies. But proactive strategies are evolving rapidly, and we’ll replay the attack timeline together to see how exposure management makes all the attacker steps slower and more challenging, if not impossible. We’ll wrap up by reviewing some of the latest AI-specific enterprise risks, and the relevant proactive defense strategies.

CISO Forum Track (Salon III)
Tue 11:30 AM - 12:00 PM

AIDR? Why AI Demands its Own Detection & Response Strategy

AI is increasingly embedded in all aspects of compute with the real potential for agents, not humans, to soon become the majority users of software. This paradigm shift requires visibility, detection and security control measures comparable to those implemented for other attack surface layers such as networks and endpoints. This session will explore new threats introduced by AI using real-world attack data and present strategies for achieving visibility, detection and control footholds across all AI transit points.

AI Risk Summit Track 1 (Salon I)
Tue 12:00 PM - 12:30 PM

Adversarial Intelligence: Production AI Systems Through the Eyes of the Attacker

This presentation explores Adversarial Intelligence - an approach that views the security of AI applications from an attacker’s perspective. Drawing from vulnerability research experience at the NSO Group and building Pegasus, the speaker will highlight how overlooked low and medium vulnerabilities can be combined to execute successful attacks. By examining attack chains and application runtime behavior, attendees will see how gaps often missed by traditional methods are exposed. Attendees will learn about effective tools and techniques for detecting and mitigating these threats, especially in cloud-native and distributed systems. Designed for security practitioners and academics, this session provides a deeper understanding of defending against emerging attack patterns specific to AI applications by adopting their mindset.

AI Risk Summit Track 2 (Salon II)
Tue 12:00 PM - 12:30 PM

The Art of Prompt Injection and Making Your AI Turn on You

Promptware and prompt injections have been making waves across the cybersecurity world in the last year. Allowing hackers to hijack AI applications of any kind (autonomous agents included) for their own malicious purposes, they open the door to high-impact attacks leading to data corruption, data exfiltration, account takeover and even persistent C&C.
But crafting effective prompt injections is an art. And today, we’ll reveal its best kept secrets.

Together we’ll go through the principles of building effective and devastatingly impactful prompt injection attacks, effective against the world’s most secure systems. We’ll demonstrate access-to-impact exploits in the most prominent AI systems out there, including: ChatGPT, Gemini, Copilot, Einstein and their custom agentic platforms. Penetrating through prompt shields as if they were butter, and revealing every clever technique along the way.

We’ll see how tricking AI into playing games leads to system prompt leakage, and how we can use it to craft even better injections. We’ll understand why training LLMs for political correctness might actually make them more vulnerable. Why special characters are your best friend, if you just know where to place them. How you can present new rules that hijack AI applications without even having direct access to them. Ultimately instilling the ability to look at AI applications from a hacker’s perspective, developing the intuition for how to attack each one for the highest impact.

Finally, after dismantling every layer of prompt protection out there, we’ll discuss going beyond prompt shielding, and explore defense-in-depth for AI applications. Suggesting a new way into how we can truly start managing this threat in the real world.

AI Risk Summit Track 1 (Salon I)
Tue 1:30 PM - 2:00 PM

AI Risk Summit Panel

AI Risk Summit Track 2 (Salon II)
Tue 1:30 PM - 2:15 PM

CISO Perspectives: Navigating the Security Landscape in 2025

In a world where cyber risk is business risk, today's Chief Information Security Officers are not just defenders of data—they are strategic partners driving organizational resilience. Join a high-impact panel discussion featuring several of the industry’s leading CISOs, moderated by Gartner's Ash Ahuja. This candid conversation will explore how security leaders are balancing innovation with risk management, influencing board-level decision-making, and navigating complex threat environments in 2025.

CISO Forum Track (Salon III)
Tue 1:30 PM - 2:15 PM

Breaking the Black Box

Traditional security testing is neat and binary: find the bug, exploit the system, check the box. But when your target is a generative AI model that improvises, adapts (and sometimes lies with confidence) things get weird, fast.

This talk dives into the messy, fascinating world of AI red teaming, where success isn’t just about getting in, it’s about provoking behavior, exposing hidden biases, slipping past safety guardrails, and seeing what breaks when the rules bend.

We'll unpack why AI security demands more than traditional exploits, why your tools now need to think, and how testing has evolved from black-and-white checks to full-spectrum investigation.

If you’ve ever wondered how to secure a system that won’t stop changing (or how to test something that can talk back) this talk's for you!

AI Risk Summit Track 1 (Salon I)
Tue 2:15 PM - 2:45 PM

AI Is Making the Decisions—Where’s the Control Layer?

AI Is Making the Decisions—Where’s the Control Layer?”
Why Traditional Risk Models Are Breaking in the Age of Autonomous Systems

AI Risk Summit Track 2 (Salon II)
Tue 2:15 PM - 2:45 PM

From Assumptions to Assurance: Calibrating AI with Institutional Truth

Generative AI has made a number of recent 'up the hill' technical advances, from training time compute to recent advances on inference time, but that hasn't made risk management and compliance executives any more comfortable in deploying large scale AI to consumers. Central to this issue is the ability to apply an organizations or regions definition / perspective / ground truth to the management of the AI, so that its reasoning, safety, and security guardrails align to individual expectations. For example, your definition of ‘safety’ most definitely is not mine, nor others. And with regulators reminding organizations that AI must still comply with existing laws and regulations, the next advancement will be focused on ‘intelligent AI’, AI that can comprehend nuanced requirements, specific to each organization’s ground truth, in a defensible manner. In this talk, we will have a fun and interactive fireside chat discussion on the types of AI risk management controls that allow for a tailored ground truth that risk, legal, compliance, and AI leaders should be looking out for, including the types of evidence and skillsets needed to effectively oversee them.

Learning Objectives:
 Awareness of critical AI controls throughout the AI lifecycle that support ground truth identification
 Insights into AI risk management function of the future
 Interactive engagement, clearly understanding that ground truth is not one size fits all

Focus Track (Salon IV)
Tue 2:15 PM - 2:45 PM

Beneath the Prompt: The Hidden Risks Powering GenAI

As LLMs power more applications across industries, firmware and hardware security is now mission-critical. The attack surface has shifted downward, making AI infrastructure itself the new battleground. Securing GenAI involves both:

- Traditional cybersecurity controls (monitoring, patching, access controls)
- AI-specific governance frameworks (model integrity, supply chain verification)

The message is clear: securing the model is not enough—you must secure the machine it runs on. This talk will highlight the vulnerabilities in the infrastructure powering large language models (LLMs) and generative AI systems. It will focus on the hardware, firmware, and cloud components that support AI, revealing how these foundational layers are increasingly targeted by sophisticated attacks.

AI Risk Summit Track 1 (Salon I)
Tue 2:45 PM - 3:15 PM

AI and It's Impact on Data Privacy and Technology

In this session, we will explore the critical role of safeguarding data privacy in the development and deployment of AI-driven software applications. With AI systems increasingly handling sensitive personal information, it is essential to understand the privacy challenges these technologies present. We will discuss how to implement privacy-preserving techniques, including differential privacy, data anonymization, and secure data storage, to protect user information. Through real-world examples and case studies, attendees will gain insights into the practical steps required to balance the innovative capabilities of AI with the necessary safeguards to ensure user trust and regulatory compliance. This session is ideal for anyone working on AI applications who wants to understand how to better safeguard data and respect privacy.

AI Risk Summit Track 2 (Salon II)
Tue 2:45 PM - 3:15 PM
  • James Sayles Chief AI Officer and Director of Global GRC - Halliburton

AI Under Fire: Securing Trust, Strategy, and Sovereignty in the Age of Intelligent Threats

As AI reshapes global industries and defense strategies, it also introduces unprecedented risks—deepfakes, adversarial manipulation, IP theft, and geopolitical destabilization. This session dives into how adversaries exploit AI for misinformation, brand attacks, and national security disruption—and what leaders can do to defend against it. Beyond the threats, we’ll explore how to plan an AI integration roadmap that protects intellectual property, embeds cybersecurity by design, and enhances enterprise risk management.

Drawing from real-world defense case studies and high-stakes risk mitigation strategies, this session equips security and business leaders with a battle-tested blueprint for AI resilience. We’ll also tackle the regulatory crossroads: How can we balance innovation with public interest, and what role does international collaboration play in securing responsible AI advancement?

Join Dr. JK. Sayles for a high-impact discussion on building sovereign, secure, and strategic AI ecosystems—where risk is managed, innovation is unleashed, and trust is earned by design.

CISO Forum Track (Salon III)
Tue 2:45 PM - 3:15 PM

Deepfake Detection: Safeguarding Trust in the Age of Synthetic Media

In today's digital landscape, the proliferation of AI-generated synthetic media presents an unprecedented challenge to online trust. Deepfakes—hyper-realistic forgeries that can manipulate faces, voices, and entire identities—have evolved from technological curiosities into serious threats affecting commerce, politics, and personal security across digital platforms worldwide.

The statistics are alarming: deepfake incidents have surged by 900% since 2022, with malicious actors leveraging increasingly accessible generation tools to create convincing fake celebrity endorsements, fraudulent marketplace listings, and targeted disinformation campaigns. Research shows that 73% of consumers abandon platforms they perceive as unable to address synthetic content threats, creating both immediate financial impacts and long-term reputational damage.

At Contrails, we have been at the forefront of deepfake detection, working collaboratively with leading fact-checking organizations and providing critical protection for Fortune 500 C-suite executives increasingly targeted by synthetic media attacks. Our pioneering work has established benchmarks for the industry in both detection accuracy and implementation strategies across varied digital environments.

This comprehensive session explores the multifaceted approach required to combat deepfake proliferation through cutting-edge detection technologies and strategic implementation. We'll examine multimodal AI analysis systems that achieve 95%+ accuracy by simultaneously evaluating visual inconsistencies, audio anomalies, and metadata fingerprints that human observers might miss. Leading solutions like Microsoft Video Authenticator and Intel's FakeCatcher demonstrate how platforms can deploy real-time screening at scale while minimizing false positives.

Beyond technical solutions, we'll dissect emerging trends reshaping the detection landscape, including the ongoing "AI vs. AI arms race" where detection systems and generation capabilities continuously evolve in response to each other. Decentralized approaches are democratizing access to protection, with open-source APIs and federated learning frameworks enabling even smaller platforms to implement robust defenses without prohibitive investments.

The regulatory environment adds further urgency, as the EU's Digital Services Act, evolving GDPR applications, and proposed U.S. legislation increasingly mandate synthetic media transparency and verification capabilities. Through case studies like eBay's 60% reduction in fraudulent listings using combined AI detection and crowdsourced verification, we'll illustrate practical implementation strategies balancing technological and human elements.

AI Risk Summit Track 1 (Salon I)
Tue 3:30 PM - 4:00 PM

Can You Trust Your AI SOC Analyst? Testing the Limits of LLMs in Security Operations

LLMs are showing up in SOC tools, from log triage to incident summaries. But can we trust their outputs in critical workflows? This session explores the promises and pitfalls of using LLMs in security operations. We’ll evaluate real-world use cases like auto-generating detections, summarizing incidents, and helping with reverse engineering tasks. Through examples and benchmarks, we’ll explore where LLMs shine, where they hallucinate, and how to build secure, auditable pipelines around them. Attendees will leave with a framework to evaluate AI tools in the SOC, and a clear sense of when to automate, when to supervise, and when to just say no.

AI Risk Summit Track 2 (Salon II)
Tue 3:30 PM - 4:00 PM

Adversarial Machine Learning and AI Forensics

Artificial intelligence is now central to enterprise innovation, risk reduction, and profitability—making legal, regulatory, and risk preparedness a top priority. This presentation explores the AI lifecycle from inception to deployment, highlighting how implementations can be compromised through inadvertence, internal misuse, or external threats. We’ll examine systemic risks across the AI ecosystem and outline practical mitigation strategies. The session concludes with an overview of AI forensics—what to collect, how to do so defensibly, and its role in investigations, litigation, and audits.

Description

Artificial intelligence has become the new norm for enterprise competitive advantage, decreased risk and improved profit. Accordingly, we must be prepared for regulatory, legal and risk as a top priority.

In this presentation, Paul will cover the AI ecosystem, from inception, to development and then to deployment.

From this, we will examine ways in which artificial intelligence implementations can be compromised either through inadvertence or malfeasance. Artificial intelligence risks span the entirety of an ecosystem involving an interdisciplinary synergy that must be examined holistically.

This approach involves first understanding the ways in which AI implementations can be compromised by inadvertence, internal attacks, external threats. We will examine known risks as well as mitigation strategies to reduce risk across the AI-technology spectrum.

We will then review AI forensics which touches on what information should be gathered and how to do so in a forensically sound and defensible manner. This is most relevant as factual support for investigations, discovery in litigation and in audits.

Key Takeaways for Risk Professionals

AI Is a Risk Vector: AI systems introduce unique risks—legal, operational, ethical—that must be integrated into enterprise risk frameworks.

End-to-End Exposure: Risks can arise at any stage—design, development, or deployment—and require continuous, interdisciplinary oversight.

Compromise Is Multidimensional: AI can be undermined through inadvertent design flaws, insider misuse, or external attacks; vigilance must extend beyond traditional cyber controls.

Holistic Risk Mitigation: Effective controls include technical safeguards, governance policies, cross-functional coordination, and continuous monitoring.

AI Forensics Matters: In the event of an incident, knowing what data to preserve and how to collect it forensically is crucial for audits, investigations, and litigation.

Prepare for Regulatory Scrutiny: Emerging global regulations demand documentation, explainability, and defensible processes—risk teams must lead in ensuring compliance.

Focus Track (Salon IV)
Tue 3:30 PM - 4:00 PM

Wednesday, August 20, 2025

Secret Agent, Ma’am: New Rules For AI Access Management

How many identities should an AI agent be allowed to have? And how does authentication work when the agent is representing other identities? In this session we’ll talk about different risk-based approaches to a new breed of account, its entitlements, and the looming trap door we call delegation.

AI Risk Summit Track 1 (Salon I)
Wed 9:45 AM - 10:15 AM
  • Blake Gilson Operational Technology Cyber Security and Risk Manager - ExxonMobil

Building AI into Industrial Environments: Practical Strategies for Secure and Scalable Deployment

AI presents tremendous opportunities for industrial organizations to improve efficiency, reduce downtime, and enhance decision-making. However, deploying AI in operational technology OT environments where uptime, safety, and security are critical, is uniquely challenging. This session will offer a practical strategies for successfully integrating AI into complex, legacy-rich industrial systems. Drawing on real-world experience from critical infrastructure and energy sectors, the session will outline key steps to assess AI readiness, system protection, and integration. Attendees will gain actionable guidance and design on mitigating cyber risks, AI pitfalls, and aligning AI efforts with broader enterprise security and compliance goals. This will focus on a Crawl, Walk, Run model of maturity,

We will explore common pitfalls, such as assuming IT patterns can directly apply to OT, and how to avoid them. The talk will also address the organizational challenges of AI adoption, including bridging IT/OT silos and building the right cross-functional teams. Whether you're starting your AI journey or scaling existing initiatives, this session will equip you with strategic and technical insights to safely and effectively deploy AI in industrial settings.

Learning Objectives:
Understand the differences in priorities between OT and IT systems
Define key technical and cultural prerequisites for AI in OT environments
Safeguard AI pipelines from cyber threats and data quality issues
Apply architecture patterns for edge, cloud, and hybrid AI deployment
Foster collaboration across IT, OT, and data science teams

AI Risk Summit Track 2 (Salon II)
Wed 9:45 AM - 10:15 AM

How We Audit ML Systems for Risk, Drift, and Misuse

As machine learning systems become deeply embedded in products, it’s not just accuracy that matters-- it’s accountability. This talk covers our internal approach to proactively identifying risks in ML workflows, from unintentional bias to model drift and even potential misuse. I’ll walk through how we adapted standard DevSecOps patterns (like monitoring, alerting, versioning) to the ML stack, and how we created a lightweight review system for ethical red flags.

AI Risk Summit Track 1 (Salon I)
Wed 10:15 AM - 10:45 AM
  • Max Leepson Senior Manager, Global Safety & Security - Salesforce

Same Data, Different Outcomes: How Prompt Variability Exposes Hidden AI Risks

Enterprises often invest significant effort into securing datasets, configuring responsible AI agents, and implementing strict guardrails. But what happens when different users — armed with the same inputs — prompt the system in entirely different ways? What if those prompts lead to radically divergent outputs, even within a tightly controlled environment?

In this session, we’ll explore a recent hands-on workshop designed to surface this very issue. During the session, over business and technical stakeholders interacted with the same generative AI agent, using the same underlying data and capabilities. The only variable? Their prompts. The results exposed a critical but often overlooked risk: prompt variability can lead to unpredictability, misalignment, or even reputational risk — all without any change to the model or its data.

We’ll present this as a case study in how to intentionally design exercises that make the risks of prompt-driven divergence visible to non-technical stakeholders. Attendees will gain practical insight into the limits of AI guardrails and the real-world complexity of human-agent interaction in enterprise settings.

AI Risk Summit Track 2 (Salon II)
Wed 10:15 AM - 10:45 AM
  • Beth George Partner, Co-head of Strategic Risk Management Practice - Freshfields

Seeing Risk: Legal and Privacy Pitfalls of Multimodal and Computer Vision AI vs Text-Based LLMs

As enterprises embrace multimodal AI and computer vision models, the legal and privacy risks multiply-often in ways that text-only large language models (LLMs) do not present. This session will examine the unique privacy and regulatory challenges introduced by AI systems that process images, video, audio and other non-textual data alongside text. We will explore how multimodal models not only expand the attack surface for adversarial threats, but also create new vectors for privacy violations, regulatory non-compliance, and legal liability.
● Increased Data Exposure: Multimodal models may process personal and sensitive data, including images, biometric identifiers, and contextual metadata. This aggregation heightens the risk of unauthorized data exposure, both during model training and inference, and introduces new obligations under privacy and security regulations.
● Informed Consent: The collection and use of visual and multimodal data can occur without explicit user consent or clear communication about secondary uses, raising significant compliance and ethical concerns. For example, training computer vision models on publicly available images without consent-as seen in high-profile facial recognition cases-has led to regulatory scrutiny and lawsuits
● Privacy Harms By Inference: Multimodal AI may infer sensitive personal attributes (such as health status or location) from seemingly innocuous images or sensor data. This risk is amplified by the richness and granularity of multimodal datasets.
● Adversarial Attacks and Data Leakage: Visual prompt injection and adversarial image attacks can bypass safety filters, leading to the generation or exposure of harmful or illegal content-sometimes at rates far exceeding those of text-only models. These attacks may also enable malicious actors to extract or reconstruct sensitive information from model outputs.
● Compliance and Transparency Challenges: The "black box" nature of advanced multimodal models makes it difficult for organizations to explain how personal data is processed, complicating compliance with privacy laws that require transparency, accountability, and the right to explanation for automated decisions.

Learning Objectives:
● Identify the specific privacy and legal risks unique to multimodal and computer vision AI compared to text-only LLMs
● Understand regulatory obligations around multimodal data collection, storage, and processing
● Develop strategies for obtaining informed consent, minimizing data exposure, and ensuring transparency in multimodal AI systems
● Assess technical and governance controls to mitigate privacy risks and support legal compliance
This session is vital for legal, compliance, and security professionals navigating the evolving landscape of multimodal AI, ensuring that innovation does not come at the expense of privacy and regulatory integrity.

CISO Forum Track (Salon III)
Wed 10:15 AM - 10:45 AM

AI and Risk Transfer: The Cyber Insurance Perspective

The insurance industry does not have a reputation for leading technical innovation, but cyber insurance is one line that has been forced to keep pace. This session addresses the intersection of artificial intelligence (AI) and cyber risk from the cyber insurance perspective. Risk management executives, CISOs, Chief Privacy Officers, government officials, and policymakers will gain a understanding of the role AI risk transfer plays for organizations in the AI ecosystem. Attendees will learn what is required for cyber insurance to really work effectively as a risk transfer vehicle for the parties involved. Executives at companies developing or using AI will also come away with insight into how to determine the optimal cyber insurance coverage.
The cyber insurance industry has played an increasingly important role for technology innovators and their customers. As AI becomes a ubiquitous feature of the digital world, integrated into all levels of business operations, it introduces new cybersecurity challenges and adds new dimensions to the traditional cyberattack surface. Cyber insurance provides a useful risk transfer solution that addresses these evolving threats because AI development and deployment create new avenues for cyberattacks, including software supply chain risks from reliance on third-party AI components and increased data exposure due to the vast datasets AI processes. In addition, purveyors of “AI-powered” solutions face the same privacy liability, professional and product liability risks as any other software company, not to mention AI’s legal and regulatory landmines. There is a broad spectrum of AI risks and they are shared among the stakeholders including innovators, their customers, their vendors and their insurers.
The session will provide insight into the perspective of cyber insurance companies and their underwriters. Underwriters will soon be consumers of AI risk assessments at scale, as they are tasked with understanding and evaluating these new risk vectors. They must quickly assess an organization's AI security posture specifically concerning its AI systems.
While traditional cyber insurance policies may offer some baseline coverage, the unique nature of AI risks necessitates a closer examination of policy terms and potential coverage gaps. Insurers are beginning to recognize the need for more explicit coverage for AI-specific incidents. Potential AI-related risks that cyber insurance policies may cover or are evolving to cover include:
• AI Model compromise and/or failure
• Data breaches involving training data or the AI models themselves
• Business interruption resulting from cyberattacks against AI infrastructure, AI-driven processes or their supply chains.
• Ransomware attacks against, or facilitated by AI systems.
This session will also provide risk managers with insights into how the cyber insurance market is adapting to the age of AI, helping organizations understand the most relevant coverage available.

Focus Track (Salon IV)
Wed 10:15 AM - 10:45 AM

Emerging Threats from Accessible AI Image Generation

The rapid advancements in AI image generation have made creating realistic fake images accessible to virtually anyone, fundamentally altering our relationship with visual information. This session, kicking off with an eye-opening "AI or Reality?" game, will expose the emerging threats presented by this democratization of visual creation. We will delve into the risks associated with the increasing accessibility of AI image creation, exploring how these powerful tools are being exploited for privacy violations, financial fraud, and the widespread dissemination of misinformation. We will examine real-world examples of AI-generated forgeries, from fake insurance claims and fraudulent receipts to synthetic identities used to circumvent verification systems and viral hoaxes that erode public trust. The session will also cover practical techniques for identifying potentially manipulated or AI-generated images and actionable strategies individuals and organizations can adopt to protect digital identities and combat the spread of visual deception in this new era. Furthermore, we will discuss how enterprise organizations should consider developing mechanisms to detect fake images, including leveraging detection algorithms, watermarking, and content provenance initiatives. Finally, we will touch upon the broader emerging technological solutions and policy initiatives being developed to address these critical challenges.

Learning Objectives:

1. Understand the capabilities of modern AI image generation models and the resulting difficulty in distinguishing between AI-generated and real images.
2. Understand how the increased accessibility and sophistication of AI image generation tools contribute to emerging security and privacy risks.
3. Identify real-world examples of how AI-generated images are being used for malicious purposes, including financial fraud, identity theft, and misinformation campaigns.
4. Learn practical techniques and indicators to help detect AI-generated or manipulated images.
5. Explore actionable strategies for individuals and organizations to protect personal images and information and enhance digital self-defense against AI-powered deception.
6. Understand how enterprise organizations can develop and implement mechanisms for detecting fake images, such as using detection algorithms, watermarking, and content provenance standards.
7. Gain insight into the technological and regulatory landscape evolving to combat AI image misuse, including detection algorithms, watermarking, and content provenance, and policy frameworks.

AI Risk Summit Track 1 (Salon I)
Wed 11:00 AM - 11:30 AM

Vibe Coding: Uncovering the Hidden Risks of Typosquatting and Supply Chain Attacks

AI-assisted coding is democratizing software development, empowering anyone to build applications at unprecedented speed. But this "vibe coding" trend—rapid prototyping by individuals without formal training—also creates new security challenges. From typosquatting attacks to dependency hijacking, attackers are targeting these environments, exploiting developer overconfidence and expanding the attack surface. This session will equip security leaders with insights into these emerging risks and practical steps to secure their software supply chains in an AI-driven world.

AI Risk Summit Track 2 (Salon II)
Wed 11:00 AM - 11:30 AM

Beyond the Breach: Analyzing AI System Failures, Safeguarding Data, and Addressing Ethical Risks

This session will provide a comprehensive look at the multifaceted risks inherent in AI systems, moving from external threats to internal failures and profound ethical challenges. We will delve into safeguarding AI systems against cyber threats and hacking, exploring strategies for preventing data breaches and information theft that target the sensitive data powering these models. The session will also analyze common causes of AI system failures, illustrating these points through real-world case studies that reveal unexpected vulnerabilities and consequences. Furthermore, we will navigate the critical ethical debates surrounding AI, addressing crucial issues like privacy violations, algorithmic bias, the risks in critical decision-making processes, and the ethical implications when AI systems are maliciously manipulated or fail. Attendees will gain a holistic understanding of the AI risk landscape, practical risk mitigation strategies, and insights into the ethical considerations essential for responsible and secure AI deployment in anticipation of evolving compliance and regulatory demands.

CISO Forum Track (Salon III)
Wed 11:00 AM - 11:30 AM

An AI Pentester's Reflections On Risk

Despite AI's complex and pervasive growth across products and services, most organizations find difficult to tangibly define AI risk, let alone mitigation and management. Yet industry continues an ever-pressing push toward deeper and more powerful integration of AI technology, which is only accelerated by the reach into agentic software and design patterns.

Drawing from extensive cross-sector engagements, NCC Group's AI/ML security practice lead will analyze the most significant risk vectors we've observed reoccur in AI implementations and the real, impactful vulnerabilities that have emerged from this computing paradigm. This talk outlines:

* How AI impacts Confidentiality, Integrity, and Availability of critical assets within organizations
* Why organizations find it difficult to apply traditional security models to AI systems
* The impact of agentic AI on system security
* How we can apply security fundamentals to AI
* What lessons we can draw from previous paradigm shifts

Attendees will walk away with a clear understanding of AI security's "state of play," including tangible AI risks along with their requisite remediation mechanisms. They'll leave equipped to lead and direct secure AI deployments using state-of-the-art defensive practices adopted by AI-mature organizations at the forefront of modern AI security.

AI Risk Summit Track 1 (Salon I)
Wed 11:30 AM - 12:00 PM

Ethical AI Practices: Balancing Innovation with Responsibility

As AI transforms industries, organizations face a critical challenge: harnessing its potential while ensuring ethical integrity. This talk explores actionable strategies to embed responsibility into AI innovation, drawing from real-world implementations at scale, including Microsoft Teams’ infrastructure serving 145M+ users.

Challenges & Solutions:
AI systems risk perpetuating biases, lacking transparency, or causing unintended harm. Without ethical safeguards, organizations face reputational damage and regulatory penalties. To address this, we examine frameworks like Microsoft’s Responsible AI Standard, integrating fairness, privacy, and accountability into the development lifecycle. Tools such as Fairlearn detect biases during training, while SHAP provides model interpretability, ensuring compliance with regulations like the EU AI Act.

Real-World Impact:
Case studies reveal how bias audits reduced disparities in user feedback analysis by 40%, and transparent data handling aligned with GDPR. Post-deployment, Azure Machine Learning’s monitoring tools flagged 25% fewer ethical incidents through real-time anomaly detection.

Key Takeaways:

Proactive Bias Mitigation: Audit datasets and models pre-deployment using fairness indicators.

Explainability for Trust: Simplify AI logic with techniques like LIME for stakeholder buy-in.

Governance & Accountability: Create cross-functional ethics boards to oversee AI lifecycle risks.

For technical and non-technical leaders alike, this session equips teams to transform ethical practices into innovation drivers—ensuring AI advances progress without compromising societal trust.

AI Risk Summit Track 2 (Salon II)
Wed 11:30 AM - 12:00 PM

Redefining the CISO: Aligning Security Leadership Beyond the Breach [PANEL]

The role of the CISO is evolving—fast. In this panel discussion, we’ll challenge traditional assumptions about what it means to be a security leader in today’s business landscape. From driving measurable business value to cultivating the next generation of cybersecurity leadership, this session brings together seasoned CISOs and emerging voices to explore how the security function can become more integrated, strategic, and future-ready. Join us for a candid conversation about expanding influence, enabling innovation, and shaping what comes next.

CISO Forum Track (Salon III)
Wed 11:35 AM - 12:30 PM

When AI Agents Go Rogue - Securing Autonomous AI Systems Before They Act

Autonomous AI agents are no longer theoretical. They’re building workflows, calling APIs, writing code, and making decisions at scale. But with that power comes risk, new, emergent, and often unpredictable. As agent frameworks like AutoGPT, LangGraph, CrewAI, and custom orchestrators gain adoption, organizations must ask: What happens when your AI doesn’t just hallucinate but acts?

In this talk, Advait Patel, cloud security engineer and contributor to the Cloud Security Alliance’s AI Control Matrix, will unpack the risks associated with AI agents acting autonomously in production environments. Through real-world examples and red-team simulations, we’ll explore how agentic systems can be manipulated, coerced, or simply misaligned in ways that lead to security incidents, privacy violations, and cascading system failures.

Topics we’ll cover:
- How AI agents make decisions and where control is lost
- Prompt injection + tool usage = real-world lateral movement
- Over-permissive action spaces: API abuse, identity leaks, and shadow access paths
- Why traditional threat modeling fails for agentic workflows
- Techniques to sandbox, constrain, and monitor AI agents (function routers, policy-as-code, response filters)
- Logging and observability for “invisible” agent behavior

The attendees will walk away with:
- A framework to assess agentic AI security posture in your environment
- Examples of attack chains involving AI agents, cloud APIs, and dynamic plugin execution
- Architectural patterns to deploy secure-by-design agent frameworks in enterprise settings
- Recommendations for SOC teams on how to detect and respond to rogue agent behavior

This session is designed for CISOs, security architects, red teams, and AI product engineers who are exploring or deploying autonomous AI systems. If your LLM can act, it can be exploited, and this talk will show you how to defend against that future.

AI Risk Summit Track 2 (Salon II)
Wed 12:00 PM - 12:30 PM

The Triple Threat: How AI Technologies Reduce Testing Costs While Improving Quality Metrics

This presentation explores the transformative integration of three cutting-edge AI technologies in software quality assurance that collectively reduce testing costs across enterprise implementations. By combining generative AI for test script creation, machine learning-based predictive defect analytics, and self-healing automation frameworks, we establish a continuous quality feedback loop that dramatically improves testing efficiency. Our longitudinal study across healthcare, fintech, and e-commerce implementations reveals that generative AI significantly reduces test creation time while increasing test coverage. Predictive analytics successfully identifies high-risk code modules before deployment, allowing targeted testing that prevents potential critical defects from reaching production. Most impressively, self-healing frameworks substantially decreases test maintenance overhead, virtually eliminating false positives from UI changes and saving considerable engineering hours quarterly. This presentation provides both theoretical frameworks and practical implementation guidelines drawn from real-world deployments affecting millions of users. We'll examine the architectural integration patterns that proved most successful, discuss the ethical AI governance frameworks we established, and share our toolchain integration approaches that maintain reliable testing even in high-velocity deployment environments. Attendees will gain actionable insights into establishing AI-enhanced quality assurance practices that simultaneously improve quality metrics while dramatically reducing resource requirements.

Focus Track (Salon IV)
Wed 12:00 PM - 12:30 PM

Prompt Defense "A Multi-Layered Approach"

Large Language Models (LLMs) are reshaping how we build applications—but with great power comes great vulnerability. Prompt injection attacks exploit the very thing that makes LLMs so useful: their ability to follow natural language instructions. The result? Malicious prompts that can hijack model behavior, often in subtle and dangerous ways.
While prompt injection is now widely recognized, the defenses being deployed across the industry often fall short. Why? Because what works in one context—one model, one use case—can completely fail in another. In this talk, we’ll go beyond just classifying attack types to focus on what really matters: how to build prompt defenses that actually work.
We’ll dig into practical, layered defense strategies—like prompt hardening, input/output validation, and system prompt design—while highlighting why secure prompting must be tailored to your model architecture, application flow, and risk surface. From SLMs to multi-modal inputs, we’ll show how “one prompt to rule them all” just doesn’t exist.
You’ll also get an overview of emerging tools for stress-testing and validating your prompt security, helping you move from reactive patching to proactive defense. If you're building with LLMs, it's time to think beyond generic guardrails and start securing prompts like it actually matters—because it does.

AI Risk Summit Track 1 (Salon I)
Wed 1:30 PM - 2:15 PM

Preparing for the Quantum Threat: A CISO’s Roadmap

Quantum computing is no longer theoretical — it’s a looming disruptor of today’s cryptographic standards. This session equips CISOs with a clear understanding of the quantum threat landscape, timelines to watch, and practical steps to start building quantum-resilient security strategies now

CISO Forum Track (Salon III)
Wed 1:30 PM - 2:15 PM

Securing Pharmaceutical AI: Managing Risk in API-Driven Healthcare Infrastructure

In today's rapidly evolving pharmaceutical landscape, organizations implementing AI-enhanced API architectures face critical security challenges alongside remarkable operational benefits. While system availability has increased from 67% to 99.9%, this integration introduces new risk vectors requiring sophisticated governance frameworks. This presentation explores the strategic transformation of pharmaceutical data systems and the accompanying risk management strategies essential for responsible AI deployment.
Analysis of implementation data from leading biotech organizations reveals that companies adopting AI-integrated microservices have reduced deployment cycles from weeks to hours while achieving 90% service independence - but this acceleration demands robust risk assessment protocols. Our research demonstrates how structured API governance frameworks have reduced security incident response times by 65% while enhancing compliance management efficiency.
Through examination of real-world case studies, including major pharmaceutical manufacturers' AI integration implementations, this session illuminates how data-driven optimization approaches have improved not only production performance but established comprehensive guardrails for ensuring AI system safety. The presentation addresses critical security considerations in pharmaceutical API implementation, demonstrating how organizations implementing risk-aware security protocols have achieved operational excellence while maintaining patient safety and regulatory compliance.
For pharmaceutical executives and technology leaders navigating AI adoption, this session provides actionable strategies for leveraging API-driven architectures while implementing essential risk mitigation measures. Attendees will gain practical insights into developing future-ready frameworks that balance technological advancement with robust AI safety protocols in an industry where the stakes of AI deployment couldn't be higher.RetryClaude can make mistakes. Please double-check responses.

Focus Track (Salon IV)
Wed 1:30 PM - 2:15 PM
  • Aashu Singh Senior Staff Software Engineer - Meta Platforms Inc

Beyond Precision: Building Trust and Safety in AI-Powered Content Recommendation

The future of AI-powered content recommendation demands more than just technical precision—it requires building systems that users can trust. I will share frameworks and methodologies for evaluating recommendation quality beyond traditional metrics, incorporating dimensions of transparency, accountability, and user agency. Through case studies from the industry, attendees will gain insights into effectively navigating trade-offs between innovation and responsible deployment, establishing appropriate human oversight mechanisms, and designing recommendation systems that not only understand content but respect the complex human values and preferences they serve. This session offers practical guidance for organizations seeking to harness the power of multimodal LLMs while maintaining robust ethical standards in their recommendation practices.

AI Risk Summit Track 1 (Salon I)
Wed 2:15 PM - 2:45 PM

Framework Failings: Addressing the Lack of Responsible Deployment Guidance in Existing AI Frameworks

Most AI governance frameworks provide extensive guidance on ethical AI development, aiming to ensure that companies building AI do so responsibly. However, most organizations – public and private – are not developing AI models. Instead, they purchase commercial AI tools to enhance some component of their workflow. This begs a critical question: how can organizations be expected to responsibly and securely deploy AI when this topic isn’t emphasized in any of the major AI frameworks?

This talk argues that they can’t – and that lack of emphasis on responsible deployment is an extremely concerning risk in today’s technology environment. First, this talk will showcase the shortcomings of existing frameworks, including the NIST AI Risk Management Framework (NIST AI 100-1, 600-1), OECD AI Principles, and the UNESCO AI Ethics Recommendations. We’ll then unpack cybersecurity, data privacy, and productivity use cases to showcase how a lack of responsible deployment leads to tangible business risk. Finally, we’ll discuss concrete components of responsible AI deployment that should be incorporated not only into AI governance practices by companies buying AI tools, but into the AI governance frameworks themselves.

Attendees will leave with a deeper understanding of the limitations in existing AI governance frameworks, how a lack of responsible “deployment” guidance leads to risk, and practical considerations to use in lieu of existing framework guidance for AI deployment.

AI Risk Summit Track 2 (Salon II)
Wed 2:15 PM - 2:45 PM

Future-Proofing AI/ML Compliance Through Strong Data Privacy Foundations

In this session, I will explore how strong privacy practices today are the foundation for future-proof AI systems—enabling organizations to move faster, scale globally, and stay ahead of emerging regulations. As AI governance frameworks take shape worldwide, privacy is no longer just a compliance checkbox; it’s a critical enabler of trust, responsible innovation, and operational resilience.
Drawing on real-world case studies and scenario-based insights (including breakout exercises), I’ll illustrate how teams that embed privacy early in the AI lifecycle are better positioned to adapt to regulatory change, avoid costly rework, and build systems that are transparent, accountable, and ethical by design. Attendees will leave with practical takeaways on how to operationalize privacy as a strategic asset in AI development.

CISO Forum Track (Salon III)
Wed 2:15 PM - 2:45 PM

Understanding and Mitigating Risks Introduced by LLM Agents

This presentation delves into the cyber security risks posed by Large Language Model (LLM) based agents.

It will introduce a structured approach for threat modeling these agents and their frameworks, highlighting the various vulnerability classes they may introduce.

Attendees will gain insights into common threats such as data leakage, adversarial attacks, and unauthorized access.

Furthermore, the presentation will discuss effective security measures to mitigate these risks, ensuring that organizations can leverage the power of LLM agents while maintaining robust cyber security defenses.

AI Risk Summit Track 1 (Salon I)
Wed 2:45 PM - 3:15 PM

Scaling AI Infrastructure: Navigating Risks in Distributed Systems

As organizations increasingly integrate AI into their operations, the scalability of AI infrastructure becomes paramount. However, scaling introduces a spectrum of risks, from data inconsistencies and model drift to system failures and security vulnerabilities. Drawing from my experience leading AI infrastructure projects at Fortune 50 companies and major cloud providers, this session will delve into the challenges and solutions associated with scaling AI systems.​

Key discussion points will include:

Designing resilient distributed systems that mitigate common failure points.
Implementing robust monitoring and observability to detect and address anomalies proactively.
Ensuring data integrity and consistency across diverse pipelines.
Balancing scalability with compliance, especially in regulated industries like healthcare.
Fostering cross-functional collaboration to align technical solutions with organizational risk management strategies.

Attendees will gain actionable insights into building scalable AI infrastructures that are not only efficient but also resilient against potential risks.

AI Risk Summit Track 2 (Salon II)
Wed 2:45 PM - 3:15 PM

The Ascendancy and Challenges of Agentic Large Language Models

The development of Large Language Models (LLMs) has shifted from passive text generators to proactive, goal-oriented "agentic LLMs," capable of planning, utilizing tools, interacting with environments, and maintaining memory. This talk provides a critical review of this rapidly evolving field, particularly focusing on innovations from late 2023 through 2025. We will explore the core architectural pillars enabling this transition, including hierarchical planning, advanced long-term memory solutions like Mem0 , and sophisticated tool integration. Prominent operational frameworks such as ReAct and Plan-and-Execute will be examined alongside emerging multi-agent systems (MAS). This talk will critically analyze fundamental limitations like "planning hallucination" , the "tyranny of the prior" where pre-training biases override contextual information, and difficulties in robust generalization and adaptation. We will also discuss the evolving landscape of evaluation methodologies, moving beyond traditional metrics to capability-based assessments and benchmarks like BFCL v3 for tool use and LoCoMo for long-term memory.

Furthermore, the presentation will address the critical ethical imperatives and safety protocols necessitated by increasingly autonomous agents. This includes discussing risks like alignment faking, multi-agent security threats , and the need for frameworks such as the Relative Danger Coefficient (RDC).

Finally, we will explore pioneering frontiers, including advanced multi-agent systems, embodied agency for physical world interaction, and the pursuit of continual and meta-learning for adaptive agents. The talk will conclude by synthesizing the current state, emphasizing that overcoming core limitations in reasoning, contextual grounding, and evaluation is crucial for realizing robust, adaptable, and aligned agentic intelligence.

AI Risk Summit Track 1 (Salon I)
Wed 3:30 PM - 4:00 PM

What a $10,000 Challenge and 300K+ Prompt Injection Attempts Taught Us About Attacking AI

Over the course of 4 weeks in March 2025 we ran a $10,000 Prompt Injection Challenge where contestants competed to bypass 3 Virtual Escape Rooms consisting of 11 levels. Like golf, winners were scored on the lowest number of tokens used to bypass a level. Levels increased in difficulty and were protected by increasingly sophisticated guardrails. The challenge attracted thousands of participants.

We collected a broad set of Prompt Injection attacks that allowed us to build a comprehensive Taxonomy on Prompt Injection with well over 100 methods. We believe that this is the most comprehensive collection of Prompt Injection methods to date.
Oliver Friedrichs, founder and CEO of Pangea, will guide you through the analysis.

Oliver will uncover:
- Data-driven insights into how attackers manipulate generative AI systems.
- A comprehensive Taxonomy of Prompt Injection methods built on this data.
- Leading approaches to detecting and preventing Prompt Injection.

AI Risk Summit Track 2 (Salon II)
Wed 3:30 PM - 4:00 PM
  • Wesley Ramirez Senior Principal Model Governance - Discover Financial Services

From Idea to Reality: Bringing AI/GenAI Risk Management to Life in Finance

While the world of AI/GenAI heats up, it is essential to prepare your bank organization to lead with innovation and a risk management mindset. Join this engaging session to learn how you can plan for an AI/GenAI Risk Management upgrade, from ideation with key stakeholders to managing the new risks for your bank. During this session, you will hear from Wesley Ramirez, who led the roll out of the AI/GenAI Risk Management framework for Discover Financial Services, sharing pro-tips and techniques for managing AI/GenAI use cases, including organizational considerations, enterprise risks, and ongoing monitoring. If you want to prepare your organization for success and responsible AI, you won't want to miss this!

CISO Forum Track (Salon III)
Wed 3:30 PM - 4:00 PM

Evaluating and Monitoring LLM-Powered Applications at Scale

Large Language Models (LLMs) have transformed how businesses automate complex workflows.
At Block Inc., we've integrated LLMs deeply into our operational fabric, automating critical risk operations tasks with significant business impact.

However, deploying LLMs into production is just the beginning—continuously evaluating their effectiveness and maintaining visibility into their performance presents significant challenges.

This talk provides a deep dive into practical frameworks and methodologies for evaluating, monitoring, and improving LLM-based applications at scale.
We'll explore:

Techniques for robust prompt engineering: How do we effectively design, test, and iterate on prompts to ensure maximum impact?

Evaluation frameworks: Leveraging LLMs themselves as "judges" to measure the quality and effectiveness of applications.

Continuous performance monitoring: Strategies to track LLM effectiveness over time, identify performance drift, and proactively address degradation.

Observability and user impact: Using telemetry data, session replay, click tracking, and A/B testing to measure real-world usage and value.

AI Risk Summit Track 1 (Salon I)
Wed 4:00 PM - 4:30 PM

Securing the AI Frontier: Managing Emerging Risks in the Era of Widespread AI Adoption

As AI adoption accelerates across industries—with 86% of executives anticipating mainstream implementation by 2025 according to PwC—organizations face an increasingly complex risk landscape. This session examines the critical security challenges emerging at the intersection of rapid AI deployment and organizational risk management.
The proliferation of AI systems introduces novel vulnerabilities spanning data protection, model security, and operational resilience. With AI increasingly embedded in critical infrastructure and decision-making processes, security failures can cascade into significant financial losses, reputational damage, and regulatory penalties. This presentation explores how sophisticated threat actors are already exploiting AI vulnerabilities and developing AI-enhanced attack vectors including advanced deepfakes and autonomous threat campaigns.
Drawing from real-world case studies, this talk outlines a comprehensive framework for AI risk management—from secure model development through deployment to continuous monitoring—that balances innovation with robust security controls. Special attention is given to emerging regulatory requirements and industry-specific compliance challenges under frameworks like GDPR, HIPAA, and anticipated AI-specific legislation.
Attendees will gain actionable insights for implementing zero-trust architectures for AI systems, establishing effective governance models, and developing cross-functional approaches to AI risk that align technical, legal, and ethical considerations. The session concludes with strategic recommendations for organizations to build secure, ethical, and resilient AI capabilities that maintain stakeholder trust while delivering transformative business value.

AI Risk Summit Track 2 (Salon II)
Wed 4:00 PM - 4:30 PM

Building a Secure Foundation: Essential Components of a Private AI Enterprise Stack

Enterprises are eager to leverage AI, but the inherent privacy risks of relying on public AI services, particularly for regulated industries, remain a significant barrier. This session will explore the core principles of Private AI – including the strategic use of open-source, private deployments, and controlled data access – and how they form the foundation for building a secure enterprise AI stack in-house. We will delve into the critical components necessary to maintain control over sensitive data and AI workloads, such as the selection of both commercial and open-source AI models that can be hosted within your firewalls, secure infrastructure options (private clouds, on-premise data centers), essential infrastructure like orchestration and high-performance inference engines, model registries for reproducibility and governance, and the vital role of an AI gateway for security, auditability, and access control. Attendees will gain a comprehensive understanding of how these interconnected components, guided by the principles of Private AI, work together to effectively mitigate risks, ensure compliance, and establish a trustworthy and flexible foundation for enterprise AI adoption.

Learning Objectives:
- Understand the core principles of Private AI.
- Ley challenges and risks associated with using public AI services for enterprise data and workloads.
- Components required to build a secure and open private AI stack for production deployment.
- Key considerations when designing a private AI platform.

AI Risk Summit Track 1 (Salon I)
Wed 4:30 PM - 5:00 PM
  • Josephine Liu Chief Commissioner, Public Policy Committee - Asia-Pacific Artificial Intelligence Association (AAIA)

Digital Sovereignty or Digital Fragmentation? Risks and Remedies in Global AI Governance

As artificial intelligence systems increasingly underpin economic infrastructure, public services, and geopolitical decision-making, the debate over digital sovereignty has become a defining regulatory challenge of our time. Governments around the world are asserting greater control over data, algorithms, and platforms—often in the name of national security, economic competitiveness, or ethical accountability. Yet this growing trend also risks producing digital fragmentation: a fractured global landscape in which incompatible regulatory regimes stifle cross-border innovation, inhibit scientific collaboration, and create blind spots in AI risk oversight.

This session will explore the tension between national digital sovereignty and the need for international regulatory coordination, drawing on case studies from the United States, European Union, and China. We will assess how divergent models of AI governance are shaping the contours of global AI development. The talk will examine where convergence is possible, where it is unlikely, and what strategies could mitigate fragmentation without compromising legitimate sovereign interests.

In a moment when technological acceleration outpaces policymaking, this session offers a pragmatic roadmap to align innovation incentives with public interest—without allowing global AI governance to splinter into irreconcilable silos.

AI Risk Summit Track 2 (Salon II)
Wed 4:30 PM - 5:00 PM