Joern Schneeweisz of GitLab reveals how prompt injection exploits in-band signaling, why AI agent tooling magnifies security failures when accessing sensitive data, and defense strategies including context visibility.
Large language models rapidly integrate into critical workflows, yet their deployment repeats decades-old security failures. Prompt injection mirrors classic in-band signaling vulnerabilities - code and data remain indistinguishable within context windows, enabling attackers to hide malicious instructions in HTML, images and documents that users cannot see but models interpret as commands.
When AI agents gain tool access to email, calendars and databases, prompt injection transforms from demonstration attacks into data exfiltration and privilege escalation. Hidden prompts in white-on-white text, downscaled images and PDF metadata bypass human review while exploiting model trust assumptions, creating systemic risk as organizations outsource decisions to hallucinating systems.
In this session, led by Joern Schneeweisz of GitLab, you will learn:
- How prompt injection exploits in-band signaling to blend malicious instructions with legitimate context;
- Why AI agent tooling magnifies application security failures when models access sensitive data without sandboxing;
- Human-in-the-loop limitations and defense strategies including context visibility and controlled tool permissions.
Here is the course outline:
The LLM Security Crisis: Prompt Injections, AI Agents and Application Flaws |
Completion
The following certificates are awarded when the course is completed:
![]() |
CPE Credit Certificate |
