Amit Dhawan of Network Intelligence examines integrating agentic AI and LLMs into cybersecurity operations, focusing on threat classification, data protection controls and privacy-by-design implementation in AI systems.
Integrating agentic artificial intelligence and large language models into cybersecurity and privacy operations offers significant potential to advance threat detection, incident response and data protection.
This session will examine how agentic AI can enhance key cybersecurity functions, including security operations, vulnerability management, threat hunting and secure software development. In parallel, it will address critical issues related to responsible AI use, privacy management and risks such as AI model hallucinations. You will learn to classify threats associated with data handling, AI training processes, and model inputs and outputs. You will also explore practical strategies and best practices for successfully deploying agentic AI technologies within robust cybersecurity and privacy frameworks.
The session will cover:
- Adopting threat classification to improve AI model resilience during training and inference phases;
- Essential controls required to secure data acquisition and storage for AI-driven systems;
- Implementing privacy by design in AI and machine learning processes;
- Implementing anonymization, pseudonymization and differential privacy methods for responsible AI (see the sketch after this list);
- Real-time detection of vulnerabilities and insecure coding patterns.
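To give a concrete flavor of the pseudonymization and differential privacy techniques named above, the following is a minimal, illustrative sketch in Python. It is not part of the course material: the key, record fields, bounds and epsilon value are all assumptions chosen for the example, and a production deployment would manage keys in a secrets store and use a vetted DP library.

```python
import hashlib
import hmac
import math
import random

# Hypothetical key for keyed pseudonymization; in a real deployment this
# would come from a secrets manager, never from source code.
PSEUDONYM_KEY = b"example-secret-key"


def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym with HMAC-SHA256,
    so records can be linked without storing the raw identifier."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_mean(values: list[float], lower: float, upper: float, epsilon: float) -> float:
    """Release an epsilon-differentially-private mean using the Laplace mechanism.

    Values are clipped to [lower, upper], so the sensitivity of the mean is
    (upper - lower) / n; Laplace noise with scale sensitivity / epsilon is
    added to the clipped mean before release.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    # Toy records: (user identifier, session length in minutes) -- illustrative data only.
    records = [("alice@example.com", 34.0), ("bob@example.com", 51.5), ("carol@example.com", 12.0)]
    pseudonymized = [(pseudonymize(uid), minutes) for uid, minutes in records]
    print("pseudonymized ids:", [p[:12] + "..." for p, _ in pseudonymized])
    print("DP mean session length:",
          round(dp_mean([m for _, m in pseudonymized], lower=0.0, upper=120.0, epsilon=1.0), 2))
```

The design choice here is keyed (HMAC) pseudonymization rather than plain hashing, so identifiers cannot be re-derived by dictionary attack without the key, combined with clipping before noise addition so the sensitivity of the released statistic is bounded and the epsilon guarantee holds.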
Here is the course outline:
Leveraging Agentic AI to Enhance Cybersecurity and Privacy Operations