What Will You Learn?
- Gain foundational knowledge, practical skills, and a functional understanding of how generative AI works
- Dive into the latest research on generative AI to understand how companies are creating value with cutting-edge technology
- Learn from expert AWS AI practitioners who actively build and deploy AI in business use cases today
About This Course
Provider: Coursera
Format: Online
Duration: Approximately 16 hours to complete
Target Audience: Intermediate
Learning Objectives: By completing this free course, you'll be able to gain a deep understanding of generative AI and describe the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection to performance evaluation and deployment
Course Prerequisites: Basics of machine learning
Assessment and Certification: Earn a certificate from the provider upon completion
Instructor: Amazon Web Services, DeepLearning.AI
Key Topics: Generative AI, Large Language Models (LLMs), Machine Learning, Python Programming
Topics Covered:
- Course Introduction
- Generative AI & LLMs
- LLM use cases and tasks
- Text generation before transformers
- Transformers architecture
- Generating text with transformers
- Prompting and prompt engineering
- Generative configuration (see the brief code sketch after this list)
- Generative AI project lifecycle
- Introduction to AWS labs
- Pre-training large language models
- Computational challenges of training LLMs
- Optional video: Efficient multi-GPU compute strategies
- Scaling laws and compute-optimal models
- Instruction fine-tuning
- Fine-tuning on a single task
- Multi-task instruction fine-tuning
- Model evaluation
- Benchmarks
- Parameter efficient fine-tuning (PEFT)
- PEFT techniques 1: LoRA
- PEFT techniques 2: Soft prompts
- Aligning models with human values
- Reinforcement learning from human feedback (RLHF)
- RLHF: Obtaining feedback from humans
- RLHF: Reward model
- RLHF: Fine-tuning with reinforcement learning
- Optional video: Proximal policy optimization
- RLHF: Reward hacking
- Scaling human feedback
- Lab 3 walkthrough
- Model optimizations for deployment
- Generative AI Project Lifecycle Cheat Sheet
- Using the LLM in applications
- Interacting with external applications
- Helping LLMs reason and plan with chain-of-thought
- Program-aided language models (PAL)
- ReAct: Combining reasoning and action
- LLM application architectures
- Optional video: AWS Sagemaker JumpStart
- Responsible AI
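To give a concrete flavor of the "Generative configuration" topic above, here is a minimal sketch, not taken from the course labs, of how sampling parameters such as temperature and top-p shape text generation. It uses the Hugging Face transformers library, and the small, publicly available gpt2 model and the parameter values are illustrative assumptions only; the course's own labs and model choices may differ.

```python
# Minimal sketch of generative configuration with the Hugging Face
# transformers library. The model ("gpt2") and the parameter values
# are illustrative choices, not taken from the course materials.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Explain what a large language model is in one sentence:"
outputs = generator(
    prompt,
    max_new_tokens=60,   # cap on the number of newly generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # lower values make the output more deterministic
    top_p=0.9,           # nucleus sampling over the top 90% of probability mass
)
print(outputs[0]["generated_text"])
```

Lowering the temperature or top-p narrows the set of tokens the model samples from, which is the kind of creativity-versus-consistency trade-off that generative configuration is concerned with.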