8 hours of instruction
Explore the fundamental tools and techniques for building highly effective recommender systems, as well as how to deploy GPU-accelerated solutions for real-time recommendations.
OBJECTIVES
- Build a content-based recommender system using the open-source cuDF library and Apache Arrow
- Construct a collaborative filtering recommender system using alternating least squares (ALS) and CuPy
- Design a wide and deep neural network using TensorFlow 2 to create a hybrid recommender system
- Optimize performance for both training and inference using large, sparse datasets
- Deploy a recommender model as a high-performance web service
PREREQUISITES
None
SYLLABUS & TOPICS COVERED
- Introduction
- Meet the instructor
- Create an account
- Matrix-Based Recommender Systems
- Read sparse data into GPU memory using CuPy
- Perform ALS efficiently with NumPy broadcasting rules
- Build a content-based filter with cuDF
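The ALS step above can be sketched in plain NumPy; CuPy exposes the same array API, so the sketch ports to the GPU essentially by swapping the import. This is a toy version that factorizes a small, fully observed ratings matrix — real workshops work with sparse, partially observed data, and all names and shapes here are illustrative assumptions, not the workshop's actual code.

```python
import numpy as np

def als_step(R, V, lam=0.1):
    """Regularized least-squares update for one factor matrix.

    Holding V fixed, every row of U solves the same normal equations
    (V^T V + lam*I) u = V^T r, so all rows are updated in a single solve.
    """
    k = V.shape[1]
    A = V.T @ V + lam * np.eye(k)      # shared (k, k) Gram matrix
    B = R @ V                          # (n_rows, k) right-hand sides
    return np.linalg.solve(A, B.T).T   # (n_rows, k) updated factors

rng = np.random.default_rng(42)
R = rng.random((6, 5))                 # toy fully observed ratings matrix
U = rng.random((6, 3))                 # user factors, rank 3
V = rng.random((5, 3))                 # item factors, rank 3

before = np.linalg.norm(R - U @ V.T)   # reconstruction error at init
for _ in range(20):                    # alternate: update users, then items
    U = als_step(R, V)
    V = als_step(R.T, U)
after = np.linalg.norm(R - U @ V.T)    # error shrinks as factors converge
```

Each alternating solve is a closed-form least-squares update, which is why ALS parallelizes so well on GPUs: every user (or item) row is independent given the other factor matrix.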
- Training Wide And Deep Recommenders
- Build a deep network using Keras
- Build a wide and deep network using TensorFlow feature columns
- Efficiently ingest training data with tf.data
- Case study 1: See real-world examples of recommender system model architectures
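To make the wide-and-deep architecture concrete, here is a framework-agnostic NumPy sketch of the forward pass; the workshop itself builds this with Keras and TensorFlow feature columns, and the feature names, layer sizes, and weights below are placeholder assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wide_and_deep(x_wide, x_deep, w_wide, hidden, w_out, b_out):
    """Forward pass of a wide-and-deep model (illustrative shapes).

    Wide: a linear model over sparse crossed features (memorization).
    Deep: a small ReLU MLP over dense/embedded features (generalization).
    The two logits are summed before the final sigmoid.
    """
    wide_logit = x_wide @ w_wide                  # (batch,)
    h = x_deep
    for W, b in hidden:                           # deep MLP tower
        h = np.maximum(h @ W + b, 0.0)            # ReLU
    deep_logit = h @ w_out                        # (batch,)
    return sigmoid(wide_logit + deep_logit + b_out)

rng = np.random.default_rng(0)
batch, n_cross, n_dense = 4, 10, 8
x_wide = (rng.random((batch, n_cross)) < 0.2).astype(float)  # sparse crosses
x_deep = rng.standard_normal((batch, n_dense))               # dense/embedded
w_wide = rng.standard_normal(n_cross) * 0.1
hidden = [(rng.standard_normal((n_dense, 16)) * 0.1, np.zeros(16)),
          (rng.standard_normal((16, 8)) * 0.1, np.zeros(8))]
w_out = rng.standard_normal(8) * 0.1
p = wide_and_deep(x_wide, x_deep, w_wide, hidden, w_out, b_out=0.0)
```

The key design choice is that the wide and deep branches share one output: their logits are added, so the linear part can memorize specific feature crosses while the MLP generalizes to unseen combinations.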
- Challenges Of Deploying Recommender Systems To Production
- Acquire a trained model configuration for deployment
- Build a container for deployment
- Deploy the trained model using NVIDIA Triton Inference Server
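As a rough illustration of the deployment step: Triton Inference Server loads models from a model-repository directory. A minimal layout and config for a TensorFlow SavedModel might look like the following — the model name, tensor names, shapes, and dtypes here are placeholders, not the workshop's actual configuration.

```
model_repository/
└── recommender/
    ├── config.pbtxt
    └── 1/
        └── model.savedmodel/   # exported TensorFlow SavedModel

# config.pbtxt
name: "recommender"
platform: "tensorflow_savedmodel"
max_batch_size: 64
input [
  { name: "user_features", data_type: TYPE_FP32, dims: [ 16 ] }
]
output [
  { name: "score", data_type: TYPE_FP32, dims: [ 1 ] }
]
```

The server is then started with `tritonserver --model-repository=<path>` and exposes HTTP and gRPC inference endpoints, which is what makes the model consumable as a high-performance web service.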
- Final Review
- Review key learnings and answer questions
- Complete the assessment and earn a certificate
- Complete the workshop survey
- Learn how to set up your own AI application development environment
SOFTWARE REQUIREMENTS
Each participant will be provided with dedicated access to a fully configured, GPU-accelerated workstation in the cloud.