Practical AI Security: Attacks, Defenses, and Applications is a hands-on course covering three pillars: attacking AI systems, defending them, and applying AI as a tool in security work. It is designed for penetration testers, red teamers, and security engineers who need to evaluate or protect AI-powered applications, and for practitioners who want to leverage AI to do their security work more effectively.
The course covers:

- understanding transformer architecture and LLM attack surfaces
- exploiting prompt injection (direct, indirect, and multi-turn)
- attacking MCP (Model Context Protocol) servers
- agent hijacking and tool poisoning
- supply chain risks in AI registries
- securing RAG pipelines and vector databases
- implementing Google's Secure AI Framework (SAIF) and the OWASP LLM Top 10
- building AI gateways and input/output guardrails
- using AI automation tools like Fabric for security tasks
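To give a flavor of the guardrails topic, here is a minimal sketch of a pattern-based input filter of the kind an AI gateway might apply before forwarding text to a model. The pattern list and function name are illustrative assumptions for this sketch, not material from the course; real guardrails typically combine such filters with classifier models and output-side checks.

```python
import re

# Illustrative (assumed) patterns for common direct prompt-injection phrasing.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your|the) system prompt",
]

def screen_input(user_text: str) -> bool:
    """Return True if the text looks safe, False if it matches a
    known prompt-injection pattern and should be blocked or flagged."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A gateway would call `screen_input` on each inbound message and route flagged inputs to logging or rejection rather than to the model.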
Students work through the modules in sequence, progressing from LLM fundamentals to complete offensive and defensive implementations. The course is relevant to any security professional whose scope now includes LLM-powered applications, autonomous AI agents, or AI-assisted security tooling.