Description:
The rapid evolution of artificial intelligence (AI) and machine learning (ML) is transforming industries worldwide. From personalized customer experiences to real-time fraud detection, organizations are leveraging ML to drive innovation and gain a competitive edge. Snowflake, with its powerful data platform, has emerged as a leader in enabling end-to-end ML pipelines, empowering data professionals to harness the full potential of their data.
This course equips students with the skills to navigate this transformative landscape, combining the scalability of Snowflake with advanced ML capabilities through Snowpark and Cortex. Over three days, students will learn how to build ML workflows that are seamless, scalable, and optimized for modern data engineering and analytics. With Snowflake's recent advancements in Large Language Model (LLM) integration and in-database ML, participants will gain firsthand experience with cutting-edge technologies shaping the future of AI. Whether you're a data scientist, ML engineer, or analytics professional, this course will help you stay ahead of industry trends and enable you to deploy powerful, real-time ML models in Snowflake's unified platform.
Duration: 3 days
Course Code: BDT396
Learning Objectives:
After this course, students will be able to:
- Understand Snowflake’s architecture for machine learning pipelines and integration with external datasets
- Develop ML pipelines using Snowpark for scalable data preparation and modeling
- Utilize Cortex for advanced ML pipeline development, including model training and deployment
- Implement and execute ML functions and LLM functions in Cortex for real-world use cases
- Optimize end-to-end ML workflows using Snowflake’s capabilities
Prerequisites:
- Familiarity with a programming language, especially Python
- Basic experience with Snowflake and SQL
- Prior knowledge of machine learning is useful but not required
Audience:
This course is designed for Software Developers, Data Scientists, Software Architects, Quality Assurance Engineers, and Data Analysts.
Course Outline:
Module 1: Introduction to Generative AI
- 1 What is Generative AI?
- Definition and Overview of Generative Models
- Difference between Generative and Discriminative Models
- Applications of Generative AI
- 2 Evolution of Generative AI
- Historical Context and Key Milestones
- Development of GANs, VAEs, and Transformer-based models
- Generative AI in the Context of Deep Learning
- 3 Overview of Machine Learning Models
- Supervised, Unsupervised, and Reinforcement Learning
- Key Concepts in ML that are Related to Generative AI
Module 2: Key Techniques in Generative AI
- 1 Generative Adversarial Networks (GANs)
- Architecture of GANs: Generator vs. Discriminator
- Training Process and Challenges in GANs
- Types of GANs: DCGAN, CycleGAN, StyleGAN, and more
- Applications of GANs in Image Generation, Style Transfer, and Super Resolution
- 2 Variational Autoencoders (VAEs)
- The VAE Architecture: Encoder, Decoder, and Latent Space
- Understanding Variational Inference and the ELBO (see the formulas after this module's outline)
- Applications in Image Generation and Data Reconstruction
- 3 Other Generative Models
- Autoregressive Models (e.g., PixelCNN, WaveNet)
- Flow-based Models (e.g., RealNVP, Glow)
- Diffusion Models and their Recent Popularity
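For reference alongside the GAN and VAE topics above, the two core training objectives can be written compactly. These are the standard textbook formulations and are independent of any particular framework used in the course labs.

```latex
% Standard GAN minimax objective (generator G vs. discriminator D):
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

% VAE evidence lower bound (ELBO), maximized during training:
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)
```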
Module 3: Deep Dive into GANs
- 1 Advanced GANs Architecture
- Conditional GANs (cGANs)
- Wasserstein GANs (WGANs) and WGAN-GP
- Progressive Growing of GANs
- Self-Attention GANs (SAGAN)
- 2 Training GANs
- Challenges in GAN Training: Mode Collapse, Non-Convergence
- Techniques to Improve GAN Stability
- Evaluating GAN Performance: Inception Score and FID (defined after this module's outline)
- 3 Applications of GANs
- Image Generation and Synthesis
- Style Transfer and Deepfakes
- Data Augmentation for Training ML Models
- Text-to-Image Generation
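For the evaluation topic above, the two metrics named in this module have compact standard definitions. The notation below (Inception-feature means and covariances, and the class posterior p(y|x)) follows the original papers rather than any course-specific material.

```latex
% Fréchet Inception Distance between real (r) and generated (g) samples,
% using means \mu and covariances \Sigma of Inception-network activations:
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \mathrm{Tr}\!\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big)

% Inception Score over generated samples x with class posterior p(y \mid x):
\mathrm{IS} = \exp\!\Big( \mathbb{E}_{x \sim p_g}\,
  D_{\mathrm{KL}}\big(p(y \mid x)\,\|\,p(y)\big) \Big)
```

Lower FID and higher IS indicate generated samples that are, respectively, statistically closer to real data and both confident and diverse.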
Module 4: Variational Autoencoders (VAEs)
- 1 Understanding Latent Variables in VAEs
- The Role of Latent Variables in Data Generation
- Reconstructing Data via VAEs
- ELBO and its Role in Training VAEs (see the code sketch after this module's outline)
- 2 Advanced Topics in VAEs
- Conditional VAEs (CVAE)
- VAE for Generating Complex Data (Images, Text, etc.)
- Disentangled VAEs and their Applications
- 3 Applications of VAEs
- Image and Text Generation
- Anomaly Detection
- Latent Variable Modeling in Healthcare and Finance
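To make the reparameterization trick and the ELBO's role in training concrete (see the ELBO item above), here is a minimal PyTorch sketch. The layer sizes, single hidden layer, and Bernoulli reconstruction likelihood are illustrative assumptions, not the course's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE sketch: a single hidden layer over flattened inputs."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def elbo_loss(x, x_logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage on a random batch of flattened 28x28 "images" in [0, 1]:
model = TinyVAE()
x = torch.rand(32, 784)
x_logits, mu, logvar = model(x)
loss = elbo_loss(x, x_logits, mu, logvar)
```

Minimizing elbo_loss is equivalent to maximizing the ELBO, with the KL term regularizing the latent space toward the standard normal prior.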
Module 5: Transformers and Attention Mechanisms in Generative AI
- 1 The Transformer Architecture
- Attention Mechanism and Self-Attention (see the sketch after this module's outline)
- Transformer-based Models: BERT, GPT, T5, and more
- 2 Generative Transformers
- GPT Series: Architecture and Training of Large Language Models
- Text Generation with Transformer Models
- Fine-Tuning Transformers for Specific Generative Tasks
- 3 Applications of Transformer-based Generative Models
- Text Generation, Summarization, Translation
- Music Composition and Code Generation
- Multimodal Generative Models (e.g., CLIP, DALL·E)
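As a minimal illustration of the self-attention item above, the sketch below implements single-head scaled dot-product attention in NumPy. The shapes and toy inputs are assumptions for clarity, and the example omits the multi-head projections, masking, and positional encodings used in real transformer models.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K: arrays of shape (seq_len, d_k); V: array of shape (seq_len, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # weighted sum of value vectors

# Toy usage: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```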
Module 6: Practical Implementation of Generative AI
- 1 Tools and Frameworks for Generative AI
- Popular Libraries: TensorFlow, PyTorch, Keras
- Specialized Libraries for GANs and VAEs (e.g., PyTorch-GAN, TensorFlow-GAN)
- Cloud-based tools for large-scale generative model training (e.g., Google Colab, AWS, GCP)
- 2 Building and Training GANs
- Step-by-step guide to building a GAN for Image Generation (sketched after this module's outline)
- Data Preprocessing and Augmentation Techniques
- Training GANs and Fine-Tuning Hyperparameters
- 3 Building and Training VAEs
- Step-by-step guide to building a VAE for Data Reconstruction
- Hyperparameter Tuning in VAEs
- Applications of VAEs in Healthcare and Biomedicine
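To accompany the GAN-building lab topic above, here is a heavily simplified PyTorch sketch of one alternating training step. The tiny MLP generator and discriminator, flattened 28x28 inputs, and optimizer settings are illustrative assumptions rather than the actual lab code.

```python
import torch
import torch.nn as nn

z_dim, x_dim = 64, 784  # assumed latent size and flattened image size

# Deliberately tiny MLP generator and discriminator, for illustration only.
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_training_step(real_batch):
    """One alternating update: discriminator first, then generator."""
    b = real_batch.size(0)
    real_labels, fake_labels = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator update: push D(real) -> 1 and D(G(z)) -> 0.
    z = torch.randn(b, z_dim)
    fake = G(z).detach()                      # detach so G is not updated here
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: push D(G(z)) -> 1 (non-saturating generator loss).
    z = torch.randn(b, z_dim)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with a random batch of "images" scaled to [-1, 1]:
d_loss, g_loss = gan_training_step(torch.rand(32, x_dim) * 2 - 1)
```

The generator uses the non-saturating loss (training G so that D labels its samples as real), the common remedy for vanishing generator gradients discussed in Module 3.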
Module 7: Ethical and Practical Challenges in Generative AI
- 1 Ethical Considerations in Generative AI
- Impact of Generative AI on Society
- Bias and Fairness Issues in Generative Models
- Addressing Deepfakes, Misinformation, and Fake Media
- 2 Responsible Use of Generative AI
- Ensuring Transparency and Accountability
- Guardrails and Regulatory Frameworks
- Privacy and Security Concerns in AI Models
- 3 Addressing the Environmental Impact of Generative AI
- Computational Resources and Energy Consumption
- Techniques for Improving Efficiency in Training Large Models
Module 8: Applications of Generative AI
- 1 Generative AI in Image and Video Synthesis
- Applications in Film, Animation, and Art Generation
- GANs in Face and Style Transfer
- Video Synthesis and Deepfake Detection
- 2 Natural Language Generation
- Text-to-Image Models like DALL·E
- Story Generation and Content Creation with GPT-based Models
- Chatbots and Conversational AI using Generative Models
- 3 Music and Audio Generation
- Generative Models for Music Composition (e.g., OpenAI Jukebox)
- Text-to-Speech and Voice Synthesis
- Speech-to-Text with Generative Models
- 4 Healthcare and Scientific Applications
- Data Synthesis in Medical Research
- Drug Discovery and Molecular Generation using GANs
- Predictive Modeling and Data Augmentation
Training material provided: Yes (Digital format)