AI Security Deep Dive (TTAI2800)

Overview

AI and machine learning systems introduce unprecedented security challenges that traditional cybersecurity practices cannot adequately address. AI Security Deep Dive delivers the specialized knowledge and hands-on experience needed to secure AI/ML systems against sophisticated attacks, protect sensitive training data, and implement robust defenses for AI-integrated applications. This intensive course is designed for programmers building AI-enabled applications, security analysts responsible for protecting AI systems, cybersecurity professionals expanding into AI security, and technical managers overseeing AI implementation projects.

Hands-On Format:

  • Days 1 and 2 feature interactive labs delivered via Jupyter notebooks, allowing participants to experiment directly with code, attacks, and defenses in a guided environment.
  • Day 3 focuses on real-world integration, exposing local models via a Flask API and integrating with a Large Language Model (LLM) using the Hugging Face Inference API (free tier, requires registration).

  • Integration labs offer multiple language options: Python/Flask, Java/Spring, ASP.Net, and Node.js, so participants can choose the stack most relevant to their work.
  • All labs and exercises are designed to be accessible with minimal setup, and detailed instructions are provided for each environment.
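The Day 3 integration pattern can be sketched roughly as follows. This is a minimal illustration only: the model here is a stand-in keyword scorer rather than a real trained model, and the endpoint name and payload shape are assumptions for demonstration, not the course's lab code.

```python
# Minimal sketch: exposing a "local model" via a Flask API.
# score_text() is a hypothetical stand-in for a real model's inference call.
from flask import Flask, jsonify, request

app = Flask(__name__)

SUSPICIOUS_WORDS = {"attack", "exploit", "breach"}

def score_text(text: str) -> float:
    """Stand-in for local model inference: fraction of suspicious words."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SUSPICIOUS_WORDS)
    return hits / len(words)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(silent=True) or {}
    text = payload.get("text", "")
    # Basic input validation before the text ever reaches the model.
    if not isinstance(text, str) or len(text) > 10_000:
        return jsonify({"error": "invalid input"}), 400
    return jsonify({"score": score_text(text)})
```

Note the validation step before inference: even in a toy endpoint, type and length checks on the request body are the first line of defense for an AI-backed API.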

Throughout three intensive days, you will master the fundamentals of machine learning from a security perspective, identify and exploit vulnerabilities in AI systems through hands-on exercises, and implement practical defenses against data poisoning, adversarial attacks, and privacy breaches. You will gain critical experience securing traditional applications that integrate AI models, including LLM-powered features, and learn to validate inputs and outputs to prevent prompt injection and other AI-specific attacks. The course combines essential AI/ML concepts with real-world security scenarios, ensuring you understand both the technical foundations and practical implementation challenges.
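The kind of input/output validation described above can be sketched as below. The patterns, length limit, and redaction rule are illustrative assumptions, not a complete prompt-injection defense; real deployments layer several such checks.

```python
import re

# Hypothetical denylist of phrases commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"system prompt", re.I),
    re.compile(r"you are now", re.I),
]

def screen_user_input(text: str, max_len: int = 2000) -> tuple[bool, str]:
    """Return (allowed, reason) for a candidate prompt to an LLM feature."""
    if len(text) > max_len:
        return False, "input too long"
    for pat in INJECTION_PATTERNS:
        if pat.search(text):
            return False, f"matched suspicious pattern: {pat.pattern}"
    return True, "ok"

def screen_model_output(text: str) -> str:
    """Redact anything resembling a leaked API key before returning output."""
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[REDACTED]", text)
```

Screening both directions matters: input checks catch injection attempts, while output checks catch the model leaking secrets it should never emit.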

With a 50 percent hands-on approach, this course provides extensive practical exercises where you will simulate adversarial attacks, implement data poisoning defenses, conduct membership inference attacks, secure API integrations with AI models, and build comprehensive security strategies for AI-powered applications. Whether you are developing AI systems, securing existing implementations, or preparing for the next wave of AI-driven threats, you will leave with the expertise to protect machine learning applications, implement security-first AI development practices, and respond effectively to emerging AI security challenges.
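One of the exercises mentioned above, membership inference, can be sketched in its simplest form: guess that an example was in the training set when the model is unusually confident on it. The confidence values and threshold below are synthetic assumptions; real attacks typically calibrate the threshold with shadow models.

```python
# Confidence-threshold membership inference (simplest variant).
# tau is an assumed threshold; in practice it is calibrated, not hand-picked.
def membership_guess(confidence: float, tau: float = 0.9) -> bool:
    """Guess 'member' when the model is unusually confident on an example."""
    return confidence >= tau

def attack_accuracy(member_confs, nonmember_confs, tau=0.9):
    """Fraction of correct member/non-member guesses over both groups."""
    correct = sum(membership_guess(c, tau) for c in member_confs)
    correct += sum(not membership_guess(c, tau) for c in nonmember_confs)
    return correct / (len(member_confs) + len(nonmember_confs))
```

An attack accuracy meaningfully above 0.5 signals that the model's confidence leaks information about its training data, which is exactly the privacy breach the course's defenses target.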

Prerequisites

To ensure a smooth learning experience and maximize the benefits of attending this course, you should have the following prerequisite skills:

  • Read code and understand basic programming concepts. The course provides hands-on opportunities using interactive Python and, optionally, other platforms. Successful students will need to set up a basic development environment, read and follow program logic, and make minor modifications to code.
  • Awareness of traditional cybersecurity issues. The successful student will have some prior knowledge of security issues in an IT environment.
  • Basic understanding of web applications. Students should have some experience with and exposure to basic HTTP-based web technology.
  • Familiarity with data handling and basic statistical concepts. Understanding of data formats, databases, and basic data analysis principles.
  • Experience with software development lifecycle and security practices. Knowledge of testing, deployment, and security integration in development processes.


  • Price: $2,795.00
  • Duration: 3 days
  • Delivery Methods: Virtual

Date        Time                     Price
02/02/2026  09:00 AM - 05:00 PM CT   $2,795.00
06/22/2026  09:00 AM - 05:00 PM CT   $2,795.00