Augmenting AI Security: External Strategies for Threat Mitigation
About This Session
In today's rapidly evolving AI landscape, securing models is a critical concern for organizations. While most efforts focus on model-specific security, this session highlights often-overlooked external methods for safeguarding AI models and their data, methods that are essential for preventing breaches and ensuring system integrity. Drawing on insights from our Air Force Phase II SBIR on AI security and trust, we will share real-world applications and best practices.
This session will cover key strategies for securing AI systems, including setting resource quotas and monitoring to prevent DoS attacks, applying rate limiting and input validation to block API abuse, and using postprocessing to catch anomalies. Attendees will learn how to implement data provenance and version control to protect data integrity, along with real-time audit logging, output moderation, and automated backups for continuous protection. Finally, we'll discuss tokenization and redaction to safeguard sensitive data and ensure privacy.
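As a rough preview of the rate limiting and input validation discussed above, the sketch below shows one minimal way to gate requests before they reach a model endpoint. The limits and names (handle_request, call_model, REQUESTS_PER_MINUTE) are illustrative assumptions, not the specific implementation presented in the session.

```python
import time
from collections import defaultdict

# Illustrative limits; real values depend on the deployment.
MAX_PROMPT_CHARS = 4_000       # input validation: cap prompt size
REQUESTS_PER_MINUTE = 30       # rate limiting: per-client request budget

_recent = defaultdict(list)    # client_id -> timestamps of recent requests

def allow_request(client_id: str) -> bool:
    """Sliding-window rate limiter: reject clients over their budget."""
    now = time.monotonic()
    window = [t for t in _recent[client_id] if now - t < 60]
    if len(window) >= REQUESTS_PER_MINUTE:
        _recent[client_id] = window
        return False
    window.append(now)
    _recent[client_id] = window
    return True

def validate_prompt(prompt: str) -> str:
    """Basic input validation before the prompt ever reaches the model."""
    if not prompt or len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt missing or too long")
    # Drop control characters that are never legitimate in user text.
    return "".join(ch for ch in prompt if ch.isprintable() or ch in "\n\t")

def call_model(prompt: str) -> str:
    """Placeholder for the real inference call."""
    return f"(model output for {len(prompt)} characters of input)"

def handle_request(client_id: str, prompt: str) -> str:
    if not allow_request(client_id):
        raise RuntimeError("rate limit exceeded")   # e.g. map to HTTP 429
    return call_model(validate_prompt(prompt))
```

The same budget idea extends naturally to the token limits mentioned later: count tokens per window instead of requests.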
This session stands out by focusing on critical external security measures around AI models, such as securing Docker containers, safeguarding API endpoints, and implementing preprocessing and postprocessing workflows, all essential for robust AI protection. Through detailed walkthroughs of hypothetical scenarios and step-by-step guides, participants will gain practical takeaways, including a comprehensive checklist of security practices.
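To make the postprocessing idea concrete, here is a minimal sketch of an output-moderation step that flags anomalously long responses and redacts strings that look like secrets or contact details before they leave the system. The patterns and names (REDACTION_PATTERNS, MAX_OUTPUT_CHARS) are assumptions for illustration, not an exhaustive filter.

```python
import re

# Illustrative patterns; a production filter would be far more complete.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]

MAX_OUTPUT_CHARS = 8_000  # crude anomaly check: unusually long output

def postprocess(model_output: str) -> str:
    """Postprocessing: catch anomalies and redact sensitive-looking strings."""
    if len(model_output) > MAX_OUTPUT_CHARS:
        # Anomalously long responses are truncated and can be logged for review.
        model_output = model_output[:MAX_OUTPUT_CHARS] + " [truncated]"
    for pattern, replacement in REDACTION_PATTERNS:
        model_output = pattern.sub(replacement, model_output)
    return model_output

if __name__ == "__main__":
    print(postprocess("Contact me at alice@example.com, api_key=abc123"))
```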
This session will showcase real-world case studies, such as unsecured vector databases and AI vulnerabilities like prompt injection and data leakage, to demonstrate the risks of weak external security. Examples include exposed databases and flaws in AI tools like Flowise (CVE-2024-9148) that allowed unauthorized access. We'll examine how organizations mitigated these threats through stronger authentication, security audits, and input validation to protect their AI systems.
By the end of the session, attendees will have actionable steps to implement resource quotas, token limits, and rate limiting to protect AI systems, set up audit logs and real-time monitoring to detect intrusions, use data provenance and tamper detection for data integrity, and deploy automated backups and failover mechanisms to address potential security breaches. Participants will leave equipped with the knowledge and tools to enhance AI security, safeguard sensitive data, and respond effectively to emerging threats.
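As one concrete example of the data provenance and tamper detection mentioned above, the following sketch records dataset file hashes in a manifest and later reports any file whose hash has changed. The manifest path and function names are assumptions for illustration; the session covers this pattern alongside version control and audit logging.

```python
import hashlib
import json
import time
from pathlib import Path

MANIFEST = Path("dataset_manifest.json")  # illustrative location

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_provenance(files) -> None:
    """Write a manifest of file hashes; commit it alongside the data."""
    manifest = {
        str(p): {"sha256": sha256_of(p), "recorded_at": time.time()}
        for p in files
    }
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_integrity() -> list:
    """Tamper detection: list files whose current hash no longer matches."""
    manifest = json.loads(MANIFEST.read_text())
    return [p for p, meta in manifest.items()
            if sha256_of(Path(p)) != meta["sha256"]]

if __name__ == "__main__":
    record_provenance(Path(".").glob("*.csv"))   # record hashes for local CSV files
    print("tampered files:", verify_integrity())
```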
Speaker

Jason Kramer
Senior Software Engineering Researcher - ObjectSecurity
Jason is dedicated to advancing the state of the art in secure and robust AI. He holds a bachelor's degree in computer science from San Diego State University and focuses on the trust, security, privacy, bias, and robustness of AI/ML models. Jason has led the development of a commercial solution for the detection and repair of vulnerabilities in deep learning systems and is a co-author of multiple patents related to the cybersecurity of systems including AI/ML, embedded devices, and supply chains. His passion for improving the field drives him to push the boundaries of what is possible and make a meaningful impact in AI and cybersecurity.