Skip to content
Virtual Cybersecurity Summit: Implications of AI

AI Trust and Safety: A Wicked Problem


Course
Upgrade subscription below

Yan Bellerose breaks down the "wicked" complexity of AI trust and safety, and presents a Discover, Secure and Manage framework for building transparent, resilient LLM deployments.

Trust in artificial intelligence is not a feature to be added at the end - it is a foundational design challenge that touches technology, ethics, psychology and social impact simultaneously. In this session, Yan Bellerose, cloud security architect at Google, frames AI trust and safety as a "wicked problem": one with no single solution and where fixes in one area can create new risks elsewhere. Drawing on Google Cloud's Vertex AI and AI Protection offerings, Bellrose walks through a practical, life cycle-based framework - Discover, Secure and Manage - that moves organizations from reactive security postures to proactive ones.

The session will also explore:

  • The six factors - competence, data integrity, benevolence, security, privacy and transparency - that determine whether users and organizations can genuinely trust an AI system;
  • How Google Cloud's Model Armor solution intercepts and evaluates prompts and responses at inference time to detect prompt injection, jailbreaks, PII exposure and harmful content before they reach the model or the user;
  • Why securing AI requires a holistic, end-to-end life cycle approach - from inventory and attack path simulation to real-time threat intelligence.
 

 

Here is the course outline:

AI Trust and Safety: A Wicked Problem

Completion

The following certificates are awarded when the course is completed:

CPE Credit Certificate

Floating Button