Fine-Tuning Large Language Models

Course Overview

You will develop the skills to gather, clean, and organize data for fine-tuning pre-trained LLMs and Generative AI models. Through a combination of lectures and hands-on labs, you will use Python to fine-tune open-source Transformer models, gain practical experience with LLM frameworks, learn essential training techniques, and explore advanced topics such as quantization. During the hands-on labs, you will have access to a GPU-accelerated server for practical experience with industry-standard tools and frameworks.

Objectives

  • Clean and Curate Data for AI Fine-Tuning
  • Establish guidelines for obtaining raw data
  • Go from Drowning in Data to Clean Data
  • Fine-Tune AI Models with PyTorch
  • Understand AI architecture: Transformer model
  • Describe tokenization and word embeddings
  • Install and use LLM frameworks to work with open models such as Llama 3
  • Perform LoRA and QLoRA Fine-Tuning
  • Explore model quantization and fine-tuning
  • Deploy and Maximize AI Model Performance
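To illustrate the "LoRA and QLoRA Fine-Tuning" objective above, here is a minimal sketch of the core LoRA idea: instead of updating a full weight matrix, LoRA trains two small low-rank matrices whose product forms the update. The matrix sizes below are hypothetical, chosen only to show the parameter savings; actual values depend on the model and LoRA configuration.

```python
# LoRA sketch (pure Python, hypothetical sizes): rather than training a
# full d_out x d_in update to a weight matrix W, LoRA learns B (d_out x r)
# and A (r x d_in) with rank r << d, so the effective weight is W + B @ A.

d_in, d_out, r = 4096, 4096, 8  # example hidden size and a small LoRA rank

full_update_params = d_out * d_in       # trainable params without LoRA
lora_params = d_out * r + r * d_in      # trainable params with LoRA

print(f"full fine-tune: {full_update_params:,} trainable params per layer")
print(f"LoRA (r={r}):    {lora_params:,} trainable params per layer")
print(f"reduction:      {full_update_params / lora_params:.0f}x")
```

With these example sizes, LoRA trains 65,536 parameters per layer instead of roughly 16.8 million, a 256x reduction; QLoRA pushes memory savings further by also quantizing the frozen base weights.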

Audience

  • Developers
  • Data Acquisition Specialists
  • Architects
  • Project Managers

Prerequisites

  • Python – PCEP Certification or Equivalent Experience
  • Familiarity with Linux

Course Details

  • Price: $2,495.00
  • Duration: 3 days
  • Delivery Method: Virtual

Schedule

  Date          Time                      Price
  03/16/2026    09:00 AM - 05:00 PM CT    $2,495.00
  05/20/2026    09:00 AM - 05:00 PM CT    $2,495.00