Self-paced Course

Practical AI Security: Attacks, Defenses, and Applications

Practical AI Security: Attacks, Defenses, and Applications takes you from the foundations of machine learning to advanced security practices involving Generative AI and Large Language Models. Through hands-on labs, you'll train models, build LLM-powered applications, and execute real-world red team attack scenarios. You'll master both the offensive and defensive sides of AI security, from creating Automated Pentesting Agents and exploiting Prompt Injection vulnerabilities to implementing MCP Server attacks, conducting AI threat modeling, performing secure code reviews, deploying AI Gateways for protection, and designing robust guardrails. Along the way, you'll work with industry-standard tools including Hugging Face, LangChain, OpenAI APIs, Anthropic's Claude and MCP, and learn enterprise frameworks such as Google's SAIF (Secure AI Framework). By the end, you'll walk away with practical exploit code, reusable defensive tools, comprehensive security checklists, and the expertise to both attack and defend AI systems in production environments, making you a complete AI security professional capable of operating across the entire security lifecycle.

  • Level

    Beginner / Intermediate

  • Video

    15.5 hours - 115 videos

  • CERTIFICATION EXAM

    Included

A path to CAISR (Certified AI Security Researcher) certification

Key Objectives

  • Understand the core concepts distinguishing AI, Machine Learning, and LLMs, including supervised vs unsupervised learning, neural networks, Generative AI, diffusion models, and the complete ML model training lifecycle from data preprocessing to deployment.
  • Master the fundamentals of Large Language Models, including Transformer architecture, tokenization mechanisms (BPE), context windows, embeddings, and the differences between foundational and fine-tuned models like GPT vs BERT architectures.
  • Become proficient in Prompt Engineering techniques including system vs user prompts, prompt templates, leaked system prompts analysis, and controlling model output via sampling parameters (Temperature, Top-k, Top-p) for security-focused workflows like threat modeling assistants.
  • Learn to use essential AI development tools including Hugging Face Transformers, LangChain (with memory and tool integration), LlamaIndex (multi-file processing), OpenWebUI for local LLM deployment, vector databases like FAISS for RAG implementations, and fine-tuning workflows.
  • Build and deploy production-ready AI applications, including custom RAG (Retrieval-Augmented Generation) systems with vector storage, conversational agents with short and long-term memory, AI-powered security tools with proper rule-based and advanced guardrails, and FastAPI-based scanners.
  • Master Model Context Protocol (MCP) servers for integrating AI with security tools to understand MCP vs traditional connectors, build custom MCP servers, and leverage them for reverse engineering, Mobile malware analysis, and automated penetration testing workflows. Configure MCP with Cursor and Claude for enhanced AI-assisted security research.
  • Develop Offensive AI capabilities, including building autonomous AI agents and workflows for vulnerability scanning, CVE finding, reconnaissance, IAM policy analysis, threat intelligence gathering, and exploit development assistance using frameworks like LangChain.
  • Execute advanced attacks against AI systems, including Prompt Injection variants (direct, indirect, multimodal attacks on CV screeners, meeting summarizers, image analyzers), jailbreaking techniques, data exfiltration through prompt manipulation, and exploiting MCP server vulnerabilities (Confused Deputy attacks, information disclosure, bruteforcing, arbitrary file read/write).
  • Implement Defensive AI strategies, including securing AI-powered applications against prompt injection, analyzing vulnerabilities in "vibe-coded" AI-generated applications, securing MCP servers with proper authentication and authorization, and applying pre-launch security checklists for AI-assisted apps.
  • Deploy and configure AI Gateways to secure production LLM applications and learn to migrate existing apps behind AI Gateways, implement multi-layered guardrails for input/output validation, configure rate limiting policies, and leverage analytics and comprehensive logging for monitoring, compliance, and cost optimization.
  • Master AI-powered Threat Modeling using STRIDE methodology and understand the engineering logic of systematic threat modeling, leverage LLMs to identify threats across Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege categories, and develop practical mitigations with AI assistance.
  • Apply AI to enhance Security Operations and Reverse Engineering workflows using Fabric AI for knowledge mining, log parsing, email header analysis, threat intelligence processing, video knowledge extraction, breaking language barriers in security research, and integrating AI into tools like Ghidra and JADX for automated malware analysis.
  • Understand and implement enterprise AI security frameworks, including comprehensive coverage of Google's Secure AI Framework (SAIF) with all 14 security risks (data poisoning, unauthorized training data, model tampering, prompt injection, model evasion, sensitive data disclosure, etc.).
  • Debug, intercept, and secure MCP implementations using MCP Inspector for debugging, Burp Suite for traffic interception and modification, and apply the comprehensive MCP Server Security Cheatsheet for identifying and remediating common vulnerabilities in both custom and third-party MCP servers.
  • Secure AI supply chains by pinning dependencies, verifying model signatures, understanding format risks, and detecting tampering or backdoors.
  • Earn the Certified AI Security Researcher (CAISR) certification by demonstrating mastery across all course modules from foundational AI/ML concepts through advanced offensive and defensive AI security techniques in real-world scenarios.

Who Should Attend?

This course is ideal for anyone interested in learning about the application of AI in cybersecurity.

Prerequisites

To successfully participate in this course, attendees should possess the following:
  • Working knowledge of cybersecurity and pentesting fundamentals
  • Basic understanding of Artificial Intelligence and Machine Learning fundamentals
  • Understanding of principles of data science and learning algorithms
  • Understanding of fundamental programming concepts and looping structures in at least one higher-level language used in machine learning (eg: Python, or similar)

Duration

  • 365 days of access after purchase

Technical Requirements

  • Laptop with 8+ GB RAM and 40 GB hard disk space
  • Administrative access on the system

Syllabus

START LEARNING

Practical AI Security: Attacks, Defenses, and Applications

On-demand
  • Immediate access to materials
  • Lecture recordings and self-assessments
  • 365 days of access
  • Certification of course completion
  • Dedicated email support

Enroll a group

Get in touch for pricing
Includes everything from the individual rate, plus:
  • Special group pricing
  • Available add-ons to Oversee and track individual student progress for large groups

Enterprise

Get in touch for pricing
Includes everything from the group rate, with the ability to manage multiple seats and track student progress across all courses. Contact us with your preferred courses and number of students for a customized quote.

Unlock Job Opportunities

Gain the in-demand skills to pursue career opportunities such as:

AI Security Engineer*

A Tech Giant Company

$136,000 – $212,800 a year

Required Qualifications

- Bachelor’s degree in Computer Science or a related field
- 2+ years of combined experience in areas such as threat modeling, secure coding practices, identity and access management, authentication, cryptography, or network security
- Familiarity with GenAI technologies and related security risks, along with mitigation strategies such as penetration testing and exploit development (or equivalent expertise)

Offensive AI Security Tester*

IT Services Provider Firm

$114,400 - $124,800 a year

Required Qualifications

- 5+ years of relevant professional experience
- Practical expertise in adversarial testing of GenAI systems (e.g., jailbreaks, prompt injections, input–output evaluations, data exfiltration) and delivering actionable mitigation steps

- Solid understanding of ML/GenAI concepts (LLMs, embeddings, diffusion models) and adversarial ML techniques (such as model extraction, data poisoning, and prompt manipulation)

Adversarial Prompt Expert*

IT Services Provider Firm

Up to $80 per hour, part time

Required Qualifications

- Extensive hands-on experience with LLMs, both open- and closed-source, and comfort experimenting across different platforms
- Strong background in prompt engineering and jailbreak techniques, including evasion strategies and innovative approaches to bypassing model safeguards
- Adversarial and security-oriented mindset, with additional value placed on red teaming or offensive security experience

*This is a compiled job description based on actual postings from LinkedIn and Indeed.

Created by

8kSec Academy

Our instructors are experts with over a decade of hands-on experience in mobile security, IoT exploitation, and vulnerability assessment. They've delivered numerous private trainings to high-profile clients and shared their knowledge at renowned conferences like BlackHat, Def Con, POC, TyphoonCon, Brucon, Hack in Paris, Phdays, Appsec USA, and more.

With thousands of students having completed our courses, our instructors continually refine their content based on real-world feedback. Whether through live sessions or our new on-demand courses, we ensure the same high-quality learning experience is accessible to professionals worldwide.