This 3-day instructor-led training provides a deep dive into data engineering practices and solutions on Amazon Web Services (AWS). Participants learn how to design, build, optimize, and secure data engineering solutions using AWS services. Topics range from foundational concepts to hands-on implementation of data lakes, data warehouses, and both batch and streaming data pipelines. The course equips data professionals with the skills to architect and manage modern data solutions at scale.
Prerequisites:
We recommend that attendees of this course have the following:
- Familiarity with basic machine learning concepts, such as supervised and unsupervised learning, regression, classification, and clustering algorithms.
- Working knowledge of the Python programming language and common data science libraries such as NumPy, Pandas, and Scikit-learn.
- Basic understanding of cloud computing concepts and familiarity with the AWS platform.
- Familiarity with SQL and relational databases is recommended but not mandatory.
- Experience with version control systems like Git is beneficial but not required.
Course Objectives:
In this course, you will learn to do the following:
- Understand the foundational roles and key concepts of data engineering, including data personas, data discovery, and relevant AWS services.
- Identify and explain the AWS tools and services central to data engineering, including orchestration, security, monitoring, continuous integration and delivery (CI/CD), infrastructure as code (IaC), networking, and cost optimization.
- Design and implement a data lake solution on AWS, including storage, data ingestion, transformation, and serving data for consumption.
- Optimize and secure a data lake solution by implementing open table formats, security measures, and troubleshooting common issues.
- Design and set up a data warehouse using Amazon Redshift Serverless, understanding its architecture, data ingestion, processing, and serving capabilities.
- Apply performance optimization techniques to data warehouses in Amazon Redshift, including monitoring, data optimization, query optimization, and orchestration.
- Manage security and access control for data warehouses in Amazon Redshift, understanding authentication, data security, auditing, and compliance.
- Design effective batch data pipelines using appropriate AWS services for processing and transforming data.
- Implement comprehensive strategies for batch data pipelines, covering data processing, transformation, integration, cataloging, and serving data for consumption.
- Optimize, orchestrate, and secure batch data pipelines, demonstrating advanced skills in data processing automation and security.
- Architect streaming data pipelines, understanding various use cases, ingestion, storage, processing, and analysis using AWS services.
- Optimize and secure streaming data solutions, including compliance considerations and access control.
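As a preview of the batch-pipeline objectives above (ingest, transform, serve), the core pattern can be sketched locally in plain Python. This is a hypothetical, AWS-free stand-in for the kind of clean-and-partition work an AWS Glue job performs; the records, field names, and partition layout are all illustrative:

```python
from collections import defaultdict

# Hypothetical raw events, standing in for objects landed in an S3 "raw" zone.
raw_events = [
    {"ts": "2024-05-01T10:15:00", "user": "a", "amount": "19.99"},
    {"ts": "2024-05-01T11:02:00", "user": "b", "amount": "5.00"},
    {"ts": "2024-05-02T09:30:00", "user": "a", "amount": "12.50"},
]

def transform(event):
    """Clean one record: cast types and derive a partition key."""
    return {
        "date": event["ts"][:10],          # partition key source (YYYY-MM-DD)
        "user": event["user"],
        "amount": float(event["amount"]),  # cast string to numeric
    }

def run_batch(events):
    """Group transformed records by partition, mimicking a curated, date-partitioned zone."""
    partitions = defaultdict(list)
    for record in map(transform, events):
        partitions[f"dt={record['date']}"].append(record)
    return dict(partitions)

curated = run_batch(raw_events)
for key, rows in sorted(curated.items()):
    print(key, len(rows))
```

In a real pipeline on AWS, the partition keys would become `dt=...` prefixes in S3 and the transformed records would be registered in a catalog for query engines to consume; the course covers those services in depth.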
Who Should Attend:
This course is designed for professionals who want to design, build, optimize, and secure data engineering solutions on AWS, including:
- Data engineers
- Solutions architects
- DevOps engineers
- IT professionals
- Data analysts looking to expand into data engineering
Course Content:
Module 1: Data Engineering Roles and Key Concepts
Module 2: AWS Data Engineering Tools and Services
Module 3: Designing and Implementing Data Lakes
Module 4: Optimizing and Securing Data Lake Solutions
Module 5: Data Warehouse Architecture and Design Principles
Module 6: Performance Optimization Techniques for Data Warehouses
Module 7: Security and Access Control for Data Warehouses
Module 8: Designing Batch Data Pipelines
Module 9: Implementing Strategies for Batch Data Pipelines
Module 10: Optimizing, Orchestrating, and Securing Batch Data Pipelines
Module 11: Streaming Data Architecture Patterns
Module 12: Optimizing and Securing Streaming Solutions
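As a local taste of the streaming patterns in Modules 11 and 12, a tumbling-window aggregation — the core operation behind stream processors such as Amazon Managed Service for Apache Flink — can be sketched in plain Python. The event stream, window size, and event types below are illustrative, not part of any AWS API:

```python
from collections import Counter

WINDOW_SECONDS = 60  # tumbling window size; illustrative choice

# Hypothetical stream of (epoch_seconds, event_type) pairs,
# standing in for records read from a Kinesis shard.
stream = [
    (0, "click"), (10, "view"), (59, "click"),
    (61, "click"), (90, "view"), (125, "click"),
]

def tumbling_window_counts(events, window=WINDOW_SECONDS):
    """Count events per non-overlapping time window, keyed by the window's start time."""
    counts = Counter()
    for ts, _etype in events:
        counts[(ts // window) * window] += 1  # align timestamp to window boundary
    return dict(counts)

print(tumbling_window_counts(stream))  # → {0: 3, 60: 2, 120: 1}
```

Unlike this sketch, production streaming systems must also handle late-arriving and out-of-order events, which is part of what the optimization and security modules address.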