From Assumptions to Assurance: Calibrating AI with Institutional Truth
About This Session
Generative AI has made a series of recent technical advances, from training-time compute to inference-time improvements, but that hasn't made risk management and compliance executives any more comfortable deploying large-scale AI to consumers. Central to this issue is the ability to apply an organization's or region's own definition, perspective, and ground truth to the management of the AI, so that its reasoning, safety, and security guardrails align with each stakeholder's expectations. For example, your definition of 'safety' is most definitely not mine, nor anyone else's. And with regulators reminding organizations that AI must still comply with existing laws and regulations, the next advancement will be 'intelligent AI': AI that can comprehend nuanced requirements, specific to each organization's ground truth, in a defensible manner. In this talk, we will have a fun and interactive fireside chat on the AI risk management controls that allow for a tailored ground truth, which risk, legal, compliance, and AI leaders should be looking out for, including the types of evidence and skillsets needed to effectively oversee them.
Learning Objectives:
Awareness of critical AI controls throughout the AI lifecycle that support ground truth identification
Insights into AI risk management function of the future
Interactive engagement, making clear that ground truth is not one-size-fits-all
Speaker

Daniel Ross
Head of AI Compliance Strategy - Dynamo AI
Dan Ross, Head of AI Compliance Strategy at Dynamo AI, focuses on aligning AI, policy, risk management, and business applications. Dan regularly engages global policymakers on AI risk management and compliance, and oversees a number of Dynamo's most consequential AI security and compliance deployments. Prior to Dynamo AI, Dan spent close to a decade at Promontory Financial Group, a premier risk and regulatory advisory firm, focused on data and technology risk, where he advised global financial institutions and governments. He has also held technology strategy and management consulting leadership positions at Deutsche Bank, Bank of America Merrill Lynch, and Accenture. Dan studied Economics at Vanderbilt University and lives with his wife and Lagotto Romagnolo in New York.