Emerging Threats from Accessible AI Image Generation

Wednesday, August 20, 2025
11:00 AM - 11:30 AM
AI Risk Summit Track 1 (Salon I)

About This Session

The rapid advance of AI image generation has made creating realistic fake images accessible to virtually anyone, fundamentally altering our relationship with visual information. Kicking off with an eye-opening "AI or Reality?" game, this session exposes the emerging threats posed by this democratization of visual creation. We will examine how these powerful tools are being exploited for privacy violations, financial fraud, and the widespread dissemination of misinformation, drawing on real-world examples of AI-generated forgeries: fake insurance claims, fraudulent receipts, synthetic identities used to circumvent verification systems, and viral hoaxes that erode public trust. The session also covers practical techniques for identifying manipulated or AI-generated images, along with actionable strategies individuals and organizations can adopt to protect digital identities and combat visual deception. Finally, we will discuss how enterprises can build mechanisms to detect fake images, including detection algorithms, watermarking, and content provenance initiatives, and survey the broader technological solutions and policy efforts being developed to address these challenges.

Learning Objectives:

1. Understand the capabilities of modern AI image generation models and the resulting difficulty in distinguishing between AI-generated and real images.
2. Understand how the increased accessibility and sophistication of AI image generation tools contribute to emerging security and privacy risks.
3. Identify real-world examples of how AI-generated images are being used for malicious purposes, including financial fraud, identity theft, and misinformation campaigns.
4. Learn practical techniques and indicators to help detect AI-generated or manipulated images.
5. Explore actionable strategies for individuals and organizations to protect personal images and information and enhance digital self-defense against AI-powered deception.
6. Understand how enterprise organizations can develop and implement mechanisms for detecting fake images, such as using detection algorithms, watermarking, and content provenance standards.
7. Gain insight into the evolving technological and regulatory landscape for combating AI image misuse, including detection algorithms, watermarking, content provenance, and policy frameworks.

Speaker

Sanjnah Ananda Kumar

Product Manager - Salesforce

Sanjnah Ananda Kumar is a Product Manager for Salesforce Data Security and Key Management services, designing mission-critical APIs that protect cryptographic material in cloud environments. With an MS in Information Security and Technology from Carnegie Mellon University and research experience at CyLab, she blends usable privacy and security with practical product strategy. Her research focuses on understanding people's attitudes toward privacy on social media. Outside of work, she builds open-source resources that advance privacy and security for everyone.