When AI Agents Go Rogue: Unmasking Risky Enterprise AI Behavior with Unsupervised Learning
About This Session
As enterprises rapidly adopt AI agents (e.g., Salesforce's Agentforce), a critical risk emerges: misconfigured or compromised agents performing anomalous, potentially harmful data operations. This presentation unveils an original, practical methodology for detecting such threats using unsupervised machine learning.
Drawing from a real-world Proof-of-Concept, we demonstrate how behavioral profiling—analyzing features engineered from system logs, such as data access patterns, query syntax (SOQL keyword analysis), and IP usage, along with signals from the content moderation mechanisms embedded in LLM guardrails, such as prompt injection detection and toxicity scoring—can distinguish risky agent actions. We explore the creation of 30+ behavioral features and the application of KMeans clustering to identify agents exhibiting statistically significant deviations, serving as an early warning for misuse or overpermissive configurations. We will share insights into observed differences between AI agent and human user profiles, and challenges like crucial data gaps that impact comprehensive monitoring.
This session offers a vendor-neutral, technical deep-dive into a novel approach for safeguarding enterprise AI deployments.
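To make the approach concrete, the pipeline above—engineering behavioral features from logged queries, clustering agents, and scoring deviation from cluster centroids—could be sketched roughly as follows. The keyword list, feature names, and two-cluster setup here are illustrative assumptions for a minimal example, not the PoC's actual 30+ feature configuration:

```python
import math
import random
from collections import Counter

# Hypothetical SOQL keywords to track; the real feature set is far richer
# (data access patterns, IP usage, guardrail signals, etc.).
SOQL_KEYWORDS = ["SELECT", "WHERE", "LIMIT", "ALL ROWS", "FIELDS(ALL)"]

def soql_features(queries):
    """Per-query rate of each keyword across an agent's logged SOQL queries."""
    counts = Counter()
    for q in queries:
        upper = q.upper()
        for kw in SOQL_KEYWORDS:
            counts[kw] += upper.count(kw)
    n = max(len(queries), 1)  # normalize so agents with different volumes compare
    return [counts[kw] / n for kw in SOQL_KEYWORDS]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k=2, iters=25, seed=0):
    """Minimal KMeans: returns (centroids, cluster label per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda c: dist(p, centroids[c]))
                  for p in points]
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids, labels

def deviation_scores(points, centroids, labels):
    """Distance to the assigned centroid; large values flag anomalous agents."""
    return [dist(p, centroids[lab]) for p, lab in zip(points, labels)]
```

In practice an agent issuing broad, unfiltered extraction queries (e.g., heavy `FIELDS(ALL)` or `ALL ROWS` usage) would land far from the cluster of typical agents, and that distance serves as the early-warning signal the session describes.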
Learning Objectives for Attendees:
1. Understand the novel security risks posed by misconfigured/overpermissive enterprise AI agents.
2. Learn a practical methodology for behavioral profiling of AI agents using unsupervised ML and log data.
3. Identify key data features, feature engineering techniques (e.g., for SOQL analysis), and common data challenges (log gaps, attribution) in AI agent monitoring.
4. Gain actionable insights to develop proactive detection strategies for anomalous AI agent activity and protect sensitive data.
Speaker

Millie Huang
Staff Data Scientist - Salesforce
Millie Huang is a Staff Data Scientist at Salesforce, at the forefront of applying machine learning to critical cybersecurity challenges. She specializes in innovative AI-driven solutions and advanced detection models for anomalous behaviors, enhancing enterprise security.
Millie holds a Master's from MIT's Operations Research Center and a Bachelor's in Mathematics and Economics from Wellesley College. Prior to Salesforce, she honed her deep data science expertise across a spectrum of business domains—from demand forecasting to causal inference to product analytics—spanning consulting, consumer tech, and retail. Millie's blend of deep academic knowledge and practical experience developing ML solutions at scale makes her a key voice at the intersection of AI and security.