Generative AI with Large Language Models


What Will You Learn?

Gain foundational knowledge, practical skills, and a functional understanding of how generative AI works
Dive into the latest research on Gen AI to understand how companies are creating value with cutting-edge technology
Learn from expert AWS AI practitioners who actively build and deploy AI in business use cases today

About This Course

Provider: Coursera
Format: Online
Duration: Approximately 16 hours
Target Audience: Intermediate
Learning Objectives: By completing this free course, you'll be able to explain how generative AI works and describe the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection to performance evaluation and deployment
Course Prerequisites: Basics of machine learning
Assessment and Certification: Earn a Certificate upon completion from the relevant Provider
Instructors: Amazon Web Services, DeepLearning.AI
Key Topics: Generative AI, Large Language Models (LLMs), Machine Learning, Python Programming
Topics Covered:
  1. Course Introduction
  2. Generative AI & LLMs
  3. LLM use cases and tasks
  4. Text generation before transformers
  5. Transformers architecture
  6. Generating text with transformers
  7. Prompting and prompt engineering
  8. Generative configuration
  9. Generative AI project lifecycle
  10. Introduction to AWS labs
  11. Pre-training large language models
  12. Computational challenges of training LLMs
  13. Optional video: Efficient multi-GPU compute strategies
  14. Scaling laws and compute-optimal models
  15. Instruction fine-tuning
  16. Fine-tuning on a single task
  17. Multi-task instruction fine-tuning
  18. Model evaluation
  19. Benchmarks
  20. Parameter efficient fine-tuning (PEFT)
  21. PEFT techniques 1: LoRA
  22. PEFT techniques 2: Soft prompts
  23. Aligning models with human values
  24. Reinforcement learning from human feedback (RLHF)
  25. RLHF: Obtaining feedback from humans
  26. RLHF: Reward model
  27. RLHF: Fine-tuning with reinforcement learning
  28. Optional video: Proximal policy optimization
  29. RLHF: Reward hacking
  30. Scaling human feedback
  31. Lab 3 walkthrough
  32. Model optimizations for deployment
  33. Generative AI Project Lifecycle Cheat Sheet
  34. Using the LLM in applications
  35. Interacting with external applications
  36. Helping LLMs reason and plan with chain-of-thought
  37. Program-aided language models (PAL)
  38. ReAct: Combining reasoning and action
  39. LLM application architectures
  40. Optional video: AWS SageMaker JumpStart
  41. Responsible AI
