Course Topics / Agenda
Please note that this list of topics is based on our standard course offering, evolved from typical industry uses and trends. We will work with you to tune this course and its level of coverage to target the skills you need most. The course agenda, topics, and labs are subject to adjustment during live delivery in response to student skill level, interests, and participation.
1. What Are AI and Machine Learning
• Is machine learning difficult?
• What is artificial intelligence
• Difference between AI and machine learning
• Machine learning examples
2. Types of Machine Learning
• Three different types of machine learning: supervised, unsupervised, and reinforcement learning
• Difference between labeled and unlabeled data
• The difference between regression and classification, and how they are used
3. Linear Regression
• Fitting a line through a set of data points
• Coding the linear regression algorithm in Python (a short sketch follows this list)
• Using Turi Create to build a linear regression model to predict housing prices in a real dataset
• What is polynomial regression
• Fitting a more complex curve to nonlinear data
• Examples of linear regression
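To preview the hands-on portion, here is a minimal sketch of linear regression fit by gradient descent in plain Python/NumPy. The housing figures are made up for illustration; the actual lab works with a real dataset and may use Turi Create instead.

    import numpy as np

    # Hypothetical toy data: house sizes (square feet) and prices (thousands).
    sizes = np.array([80, 100, 120, 150, 200], dtype=float)
    prices = np.array([160, 195, 245, 290, 405], dtype=float)

    # Normalize the feature so gradient descent behaves well.
    x = (sizes - sizes.mean()) / sizes.std()
    y = prices

    slope, intercept = 0.0, 0.0
    learning_rate = 0.01

    for _ in range(1000):
        predictions = slope * x + intercept
        error = predictions - y
        # Gradient of the mean squared error with respect to slope and intercept.
        slope -= learning_rate * (2 / len(x)) * np.dot(error, x)
        intercept -= learning_rate * (2 / len(x)) * error.sum()

    print(f"fitted line: price ~ {slope:.1f} * (size in std units) + {intercept:.1f}")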
4. Optimizing the Training Process
• What is underfitting and overfitting
• Solutions for avoiding overfitting
• Testing, the model complexity graph, and regularization (regularization is illustrated in the sketch after this list)
• Calculating the complexity of the model
• Picking the best model in terms of performance and complexity
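A minimal sketch of overfitting versus regularization, using Scikit-Learn on synthetic data; the polynomial degree, models, and alpha value are illustrative choices, not the lab's exact setup.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Hypothetical noisy data generated from a simple curve.
    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
    y = 0.5 * X.ravel() ** 2 + rng.normal(scale=1.0, size=40)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    # A high-degree polynomial with no regularization tends to overfit;
    # the same polynomial with L2 (ridge) regularization generalizes better.
    for name, model in [
        ("degree-15, no regularization", make_pipeline(PolynomialFeatures(15), LinearRegression())),
        ("degree-15, ridge (alpha=1.0)", make_pipeline(PolynomialFeatures(15), Ridge(alpha=1.0))),
    ]:
        model.fit(X_train, y_train)
        train_err = mean_squared_error(y_train, model.predict(X_train))
        test_err = mean_squared_error(y_test, model.predict(X_test))
        print(f"{name}: train MSE {train_err:.2f}, test MSE {test_err:.2f}")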
5. The Perceptron Algorithm
• What is classification
• Sentiment analysis
• How to draw a line that separates points of two colors
• What is a perceptron
• Coding the perceptron algorithm in Python and Turi Create (a plain-Python sketch follows this list)
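A minimal sketch of the perceptron trick in plain Python/NumPy, on a made-up linearly separable dataset; the lab's dataset and Turi Create workflow may differ.

    import numpy as np

    # Hypothetical linearly separable points: two features each, labels 0 or 1.
    X = np.array([[1.0, 1.0], [2.0, 1.5], [0.5, 2.0],        # label 1
                  [-1.0, -0.5], [-2.0, -1.0], [-0.5, -2.0]])  # label 0
    y = np.array([1, 1, 1, 0, 0, 0])

    weights = np.zeros(2)
    bias = 0.0
    learning_rate = 0.1

    # Perceptron trick: for each misclassified point, nudge the line toward it.
    for epoch in range(100):
        for features, label in zip(X, y):
            prediction = 1 if np.dot(weights, features) + bias >= 0 else 0
            update = learning_rate * (label - prediction)
            weights += update * features
            bias += update

    print("weights:", weights, "bias:", bias)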
6. Logistic Classifiers
• Hard assignments and soft assignments
• The sigmoid function
• Discrete perceptrons vs. continuous perceptrons
• Logistic regression algorithm for classifying data
• Coding the logistic regression algorithm in Python (see the sketch after this list)
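A minimal sketch of logistic regression trained with gradient descent on the log-loss, using made-up one-dimensional data.

    import numpy as np

    def sigmoid(z):
        # Squashes any score into a probability between 0 and 1.
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical toy data: one feature, binary labels.
    X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
    y = np.array([0, 0, 0, 1, 1, 1])

    weights = np.zeros(1)
    bias = 0.0
    learning_rate = 0.5

    # Gradient descent on the log-loss (cross-entropy).
    for _ in range(2000):
        probabilities = sigmoid(X @ weights + bias)
        error = probabilities - y
        weights -= learning_rate * (X.T @ error) / len(y)
        bias -= learning_rate * error.mean()

    print("P(label=1) at x=2.0:", sigmoid(np.array([2.0]) @ weights + bias))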
7. Measuring Classification Models
• Types of errors a model can make
• The confusion matrix
• What are accuracy, recall, precision, F-score, sensitivity, and specificity (computed in the sketch after this list)
• What is the ROC curve
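A minimal sketch of the metrics above, computed with Scikit-Learn on hypothetical predictions.

    from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                                 precision_score, recall_score)

    # Hypothetical true labels and model predictions (1 = positive class).
    y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

    # The confusion matrix lays out true negatives, false positives,
    # false negatives, and true positives.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("TN, FP, FN, TP:", tn, fp, fn, tp)

    print("accuracy:            ", accuracy_score(y_true, y_pred))
    print("precision:           ", precision_score(y_true, y_pred))
    print("recall (sensitivity):", recall_score(y_true, y_pred))
    print("F1 score:            ", f1_score(y_true, y_pred))
    print("specificity:         ", tn / (tn + fp))  # recall of the negative class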
8. The Naive Bayes Model
• What is Bayes theorem
• Dependent and independent events
• The prior and posterior probabilities
• Calculating conditional probabilities
• Using the naive Bayes model
• Coding the naive Bayes algorithm in Python (a short sketch follows this list)
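A minimal sketch of a naive Bayes spam filter, using Scikit-Learn's CountVectorizer and MultinomialNB on a handful of made-up messages; the lab may instead build the algorithm from scratch.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical tiny spam dataset: texts and labels (1 = spam, 0 = not spam).
    texts = [
        "win a free prize now", "cheap meds free shipping", "claim your free money",
        "meeting at noon tomorrow", "please review the attached report", "lunch with the team today",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # Word counts feed a multinomial naive Bayes classifier, which applies
    # Bayes' theorem with the "naive" assumption that words are independent.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    print(model.predict(["free prize meeting"]))          # likely flagged as spam
    print(model.predict_proba(["please send the report"]))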
9. Decision Trees
• What is a decision tree
• Using decision trees for classification and regression
• Building an app-recommendation system using users’ information
• Accuracy, Gini index, and entropy
• Using Scikit-Learn to train a decision tree (see the sketch after this list)
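A minimal sketch of training and inspecting a decision tree with Scikit-Learn, on a made-up age-to-app toy dataset standing in for the app-recommendation example.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical app-recommendation data: [age], recommended app
    # (0 = kids game, 1 = social network, 2 = finance app).
    ages = [[7], [9], [12], [16], [18], [24], [35], [45], [52], [60]]
    apps = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]

    tree = DecisionTreeClassifier(criterion="gini", max_depth=2)
    tree.fit(ages, apps)

    # The learned splits are human-readable: the tree asks questions about age.
    print(export_text(tree, feature_names=["age"]))
    print(tree.predict([[15], [40]]))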
10. Neural Networks
• What is a neural network
• Architecture of a neural network: nodes, layers, depth, and activation functions
• Training neural networks
• Potential problems in training neural networks
• Techniques to improve neural network training
• Using neural networks as regression models (see the sketch after this list)
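A minimal sketch of a neural network used as a regression model, via Scikit-Learn's MLPRegressor on synthetic data; the lab's architecture and framework may differ.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical nonlinear data: y = sin(x) plus noise.
    rng = np.random.default_rng(1)
    X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
    y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

    # A small fully connected network: two hidden layers with ReLU activations.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="relu",
                       max_iter=5000, random_state=0)
    net.fit(X, y)

    print("prediction at x = pi/2:", net.predict([[np.pi / 2]]))  # should be near 1.0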
11. Responsible AI: Navigating the Grey Areas
• Understanding Ethical Implications in AI
• Grasping the moral complexities in recommendation systems
• Bias and Fairness in Recommenders
• Dissecting potential biases in AI-driven recommendations
12. Introduction to Generative AI
• Understanding Generative AI
• How Generative AI fits into the broader AI and Machine Learning landscape
• Differences between generative and discriminative models
• Introduction to Generative Adversarial Networks (GANs)
• Understanding the concept of latent space in generative models
• Basic structure and components of GANs: generator and discriminator (sketched after this list)
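A minimal, structure-only sketch of a GAN's two components, written with Keras (assumed available for illustration; it is not part of the core course toolchain). It defines a generator and a discriminator and runs one fake sample through both, with no training loop.

    import tensorflow as tf

    LATENT_DIM = 16  # size of the random "latent space" vector the generator samples from

    # Generator: maps a latent vector to a fake data point (here, 28x28 = 784 values).
    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(784, activation="sigmoid"),
    ])

    # Discriminator: maps a data point to a single probability of being real.
    discriminator = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # One fake sample: draw a latent vector, run it through the generator,
    # then ask the discriminator how "real" it looks.
    z = tf.random.normal((1, LATENT_DIM))
    fake = generator(z)
    print("discriminator's guess that the fake is real:", float(discriminator(fake)[0, 0]))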
13. OPTIONAL: Applications of Generative AI in Business
• Improving customer experience: Using generative AI for personalized content creation, such as emails, ads, and product descriptions
• Product development: Using GANs for generating new ideas for products, fashion designs, and more
• Data augmentation: How generative models can create additional training data for other machine learning models, improving their performance
• Content creation: Using AI for generating realistic images, music, text, and more
• Risk management: Using generative AI to simulate different business scenarios and outcomes
• Healthcare: Generating synthetic medical data for research while preserving patient privacy
Bonus Content / Time Permitting
14. Bonus: Support vector machines and the kernel method
• What a support vector machine is
• Which of the linear classifiers for a dataset has the best boundary
• Using the kernel method to build nonlinear classifiers
• Coding support vector machines and the kernel method in Scikit-Learn (see the sketch after this list)
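A minimal sketch comparing a linear SVM with an RBF-kernel SVM in Scikit-Learn, on a synthetic dataset where only the kernelized model can succeed.

    from sklearn.datasets import make_circles
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Hypothetical nonlinear dataset: one class forms a ring around the other.
    X, y = make_circles(n_samples=300, factor=0.4, noise=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A linear SVM cannot separate the rings; the RBF kernel maps the data
    # into a space where a separating boundary exists.
    for kernel in ("linear", "rbf"):
        clf = SVC(kernel=kernel, C=1.0)
        clf.fit(X_train, y_train)
        print(kernel, "accuracy:", clf.score(X_test, y_test))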
15. Bonus: Ensemble learning
• What ensemble learning is
• Using bagging to combine classifiers
• Using boosting to combine classifiers
• Ensemble methods: random forests, AdaBoost, gradient boosting, and XGBoost (several are compared in the sketch after this list)
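A minimal sketch comparing a bagging-style ensemble (random forest) with two boosting-style ensembles in Scikit-Learn; XGBoost follows the same pattern but is a separate third-party library, so it is omitted here.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # Bagging-style (random forest) vs. boosting-style (AdaBoost, gradient boosting)
    # ensembles, compared with 5-fold cross-validation.
    models = {
        "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
        "gradient boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")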
16. Bonus: Real-World Example: Data Engineering and ML
• Cleaning up and preprocessing data to make it readable by our model
• Using Scikit-Learn to train and evaluate several models
• Using grid search to select good hyperparameters for our model
• Using k-fold cross-validation so the same data serves for both training and validation (the sketch after this list combines preprocessing, grid search, and cross-validation)
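A minimal sketch tying the pieces together: a preprocessing pipeline (imputation, scaling, one-hot encoding), a grid search over hyperparameters, and k-fold cross-validation, all on a tiny made-up dataset.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Hypothetical raw data: a numeric column with missing values and a categorical column.
    data = pd.DataFrame({
        "age": [22, 35, None, 58, 41, None, 30, 47],
        "city": ["NY", "SF", "NY", "LA", "SF", "LA", "NY", "SF"],
        "bought": [0, 1, 0, 1, 1, 0, 0, 1],
    })
    X, y = data[["age", "city"]], data["bought"]

    # Cleaning/preprocessing: impute and scale numbers, one-hot encode categories.
    preprocess = ColumnTransformer([
        ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), ["age"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
    ])
    pipeline = Pipeline([("prep", preprocess), ("model", LogisticRegression())])

    # Grid search over hyperparameters, scored with k-fold cross-validation
    # (k=4 here only because the toy dataset is tiny).
    search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=4)
    search.fit(X, y)
    print("best params:", search.best_params_, "CV accuracy:", round(search.best_score_, 3))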