Seeing Risk: Legal and Privacy Pitfalls of Multimodal and Computer Vision AI vs Text-Based LLMs

Wednesday, August 20, 2025
10:15 AM - 10:45 AM
CISO Forum Track (Salon III)

About This Session

As enterprises embrace multimodal AI and computer vision models, the legal and privacy risks multiply, often in ways that text-only large language models (LLMs) do not present. This session will examine the unique privacy and regulatory challenges introduced by AI systems that process images, video, audio, and other non-textual data alongside text. We will explore how multimodal models not only expand the attack surface for adversarial threats, but also create new vectors for privacy violations, regulatory non-compliance, and legal liability.
● Increased Data Exposure: Multimodal models may process personal and sensitive data, including images, biometric identifiers, and contextual metadata. This aggregation heightens the risk of unauthorized data exposure, both during model training and inference, and introduces new obligations under privacy and security regulations (a metadata-stripping sketch follows this list).
● Informed Consent: The collection and use of visual and multimodal data can occur without explicit user consent or clear communication about secondary uses, raising significant compliance and ethical concerns. For example, training computer vision models on publicly available images without consent, as seen in high-profile facial recognition cases, has led to regulatory scrutiny and lawsuits.
● Privacy Harms by Inference: Multimodal AI may infer sensitive personal attributes (such as health status or location) from seemingly innocuous images or sensor data. This risk is amplified by the richness and granularity of multimodal datasets.
● Adversarial Attacks and Data Leakage: Visual prompt injection and adversarial image attacks can bypass safety filters, leading to the generation or exposure of harmful or illegal content, sometimes at rates far exceeding those of text-only models. These attacks may also enable malicious actors to extract or reconstruct sensitive information from model outputs (a simple input-screening sketch also follows this list).
● Compliance and Transparency Challenges: The "black box" nature of advanced multimodal models makes it difficult for organizations to explain how personal data is processed, complicating compliance with privacy laws that require transparency, accountability, and the right to explanation for automated decisions.
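
To ground the data-minimization point, here is a minimal illustrative sketch, not a vetted control: it uses the Pillow imaging library to re-encode an image with pixel data only, dropping EXIF metadata (including GPS tags) before the file reaches a multimodal model. The file paths are hypothetical placeholders.

# Minimal sketch (assumption: Pillow is installed via `pip install Pillow`).
# Re-encoding keeps only pixel data, so EXIF blocks such as GPS coordinates
# and device identifiers are not propagated into the model pipeline.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # fresh image, no EXIF attached
        clean.putdata(list(img.getdata()))     # copy raw pixels only
        clean.save(dst_path)

strip_metadata("user_upload.jpg", "user_upload_clean.jpg")  # hypothetical paths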
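
Likewise, one coarse screen against visual prompt injection is to run OCR on inbound images and quarantine any whose embedded text resembles instructions. The sketch below assumes the pytesseract wrapper and a locally installed Tesseract engine; the phrase list is an illustrative placeholder, not a complete detection policy, and determined attackers can evade OCR-based checks.

# Minimal sketch (assumptions: pytesseract and Pillow installed, plus a local
# Tesseract OCR engine). Flags images whose embedded text looks like an
# injected instruction before they are passed to a multimodal model.
from PIL import Image
import pytesseract

SUSPECT_PHRASES = ("ignore previous instructions", "system prompt", "disregard")  # placeholder list

def looks_like_injection(image_path: str) -> bool:
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

if looks_like_injection("user_upload_clean.jpg"):  # hypothetical path
    print("Quarantining image for human review before inference.")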

Learning Objectives:
● Identify the specific privacy and legal risks unique to multimodal and computer vision AI compared to text-only LLMs
● Understand regulatory obligations around multimodal data collection, storage, and processing
● Develop strategies for obtaining informed consent, minimizing data exposure, and ensuring transparency in multimodal AI systems
● Assess technical and governance controls to mitigate privacy risks and support legal compliance
This session is vital for legal, compliance, and security professionals navigating the evolving landscape of multimodal AI, ensuring that innovation does not come at the expense of privacy and regulatory integrity.

Speaker

Beth George

Partner, Co-head of Strategic Risk Management Practice - Freshfields

Beth George leads the strategic risk management practice at Freshfields. With a background in national security and technology, Beth regularly advises boards of both private and public companies on risk management and governance, including governance related to artificial intelligence, data practices and cybersecurity, content management, and geopolitical events.

Beth has worked at senior levels across the U.S. federal government, most recently serving as the Acting General Counsel of the U.S. Department of Defense (DoD) in 2021. Previously, Beth served in various roles for the National Security Division of the U.S. Department of Justice (DOJ), the U.S. Senate Select Committee on Intelligence, and the White House as Associate Counsel in the Office of the White House Counsel.