Create a Large Language Model from Scratch with Python (Free Course)

What Will You Learn?

In this course, you'll build your own large language model from the ground up, working through data handling, the underlying mathematics, and transformer architectures. Everything is implemented in Python, giving you hands-on experience with the mechanics of large language models.
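
To give a flavor of the early data-handling work, here is a minimal character-level tokenizer of the kind the course builds in topic 7. This is an illustrative sketch, not the course's verbatim code; the file name "wizard_of_oz.txt" is an assumption standing in for the course's sample corpus:

```python
# A minimal character-level tokenizer.
# "wizard_of_oz.txt" is an assumed file name for the sample corpus.
with open("wizard_of_oz.txt", "r", encoding="utf-8") as f:
    text = f.read()

chars = sorted(set(text))                      # every distinct character = the vocabulary
stoi = {ch: i for i, ch in enumerate(chars)}   # char -> integer id
itos = {i: ch for i, ch in enumerate(chars)}   # integer id -> char

def encode(s):
    """Turn a string into a list of integer ids."""
    return [stoi[c] for c in s]

def decode(ids):
    """Turn a list of integer ids back into a string."""
    return "".join(itos[i] for i in ids)

# Round trip: decoding an encoding recovers the original text
# (as long as every character of the input appears in the corpus).
sample = text[:20]
assert decode(encode(sample)) == sample
```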

About This Course

Provider: YouTube
Format: Online
Duration: Approximately 6 hours
Target Audience: Beginners
Learning Objectives: Upon completing this free course, you will be able to construct a large language model from scratch, with a working grasp of data handling, the underlying mathematics, and transformer architectures, all implemented in Python.
Course Prerequisites: None
Assessment and Certification: None
Instructor: freeCodeCamp
Key Topics: Large Language Models, Python Programming, Artificial Intelligence
Topics Covered:
  1. Introduction
  2. Install Libraries
  3. pylzma build tools
  4. Jupyter Notebook
  5. Download Wizard of Oz
  6. Experimenting with the text file
  7. Character-level tokenizer
  8. Types of tokenizers
  9. Tensors instead of Arrays
  10. Linear Algebra heads up
  11. Train and validation splits
  12. Premise of Bigram Model (see the bigram sketch after this list)
  13. Inputs and Targets
  14. Inputs and Targets Implementation
  15. Batch size hyperparameter
  16. Switching from CPU to CUDA
  17. PyTorch Overview
  18. CPU vs GPU performance in PyTorch
  19. More PyTorch Functions
  20. Embedding Vectors
  21. Embedding Implementation
  22. Logits and Reshaping
  23. Generate function and giving the model some context
  24. Logits Dimensionality
  25. Training loop + Optimizer + zero_grad explanation
  26. Optimizers Overview
  27. Applications of Optimizers
  28. Loss reporting + Train vs Eval mode
  29. Normalization Overview
  30. ReLU, Sigmoid, Tanh Activations
  31. Transformer and Self-Attention
  32. Transformer Architecture
  33. Building a GPT, not a Transformer model
  34. Self-Attention Deep Dive (see the attention-head sketch after this list)
  35. GPT architecture
  36. Switching to MacBook
  37. Implementing Positional Encoding
  38. GPTLanguageModel initialization
  39. GPTLanguageModel forward pass
  40. OpenWebText download and Survey of LLMs paper
  41. How the dataloader/batch getter will have to change
  42. Extract corpus with WinRAR
  43. Python data extractor
  44. Adjusting for train and val splits
  45. Adding dataloader
  46. Training on OpenWebText
  47. Training works well, model loading/saving
  48. Pickling
  49. Fixing errors + GPU Memory in task manager
  50. Command line argument parsing
  51. Porting code to script
  52. Prompt: Completion feature + more errors
  53. nn.Module inheritance + generation cropping
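
The first stretch of the course (roughly topics 12 through 28) culminates in a trainable bigram model in PyTorch. The sketch below shows the shape of that approach under assumed hyperparameters and names; it is an illustration, not the course's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"  # topic 16: CPU -> CUDA
block_size = 8    # context length (assumed value)
batch_size = 4    # topic 15: batch size hyperparameter (assumed value)

class BigramLanguageModel(nn.Module):
    """Predicts the next token from the current token alone."""

    def __init__(self, vocab_size):
        super().__init__()
        # Row i of this table holds the next-token logits for token i.
        self.token_embedding = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx, targets=None):
        logits = self.token_embedding(idx)  # (B, T, vocab_size)
        if targets is None:
            return logits, None
        B, T, C = logits.shape
        # cross_entropy wants (N, C) logits and (N,) targets, hence the reshape.
        loss = F.cross_entropy(logits.view(B * T, C), targets.view(B * T))
        return logits, loss

    @torch.no_grad()
    def generate(self, idx, max_new_tokens):
        for _ in range(max_new_tokens):
            logits, _ = self(idx)
            probs = F.softmax(logits[:, -1, :], dim=-1)   # last position only
            next_id = torch.multinomial(probs, num_samples=1)
            idx = torch.cat([idx, next_id], dim=1)        # append and continue
        return idx

def get_batch(data):
    """Sample random (input, target) blocks; targets are inputs shifted by one."""
    ix = torch.randint(len(data) - block_size, (batch_size,))
    x = torch.stack([data[i:i + block_size] for i in ix])
    y = torch.stack([data[i + 1:i + block_size + 1] for i in ix])
    return x.to(device), y.to(device)

# One training step: forward, zero_grad, backward, update (topic 25).
data = torch.randint(0, 65, (1000,))          # stand-in for the encoded corpus
model = BigramLanguageModel(vocab_size=65).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
xb, yb = get_batch(data)
logits, loss = model(xb, yb)
optimizer.zero_grad(set_to_none=True)
loss.backward()
optimizer.step()
```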

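Topics 31 through 39 then build the transformer itself, whose core ingredient is a head of masked self-attention. A minimal sketch of one such head (illustrative names and sizes, not the course's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    """One head of causal (masked) self-attention."""

    def __init__(self, n_embd, head_size, block_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        # Lower-triangular mask: position t may attend only to positions <= t.
        self.register_buffer("tril", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        k = self.key(x)      # (B, T, head_size)
        q = self.query(x)    # (B, T, head_size)
        v = self.value(x)    # (B, T, head_size)
        # Scaled dot-product scores, (B, T, T); scaling keeps softmax well-behaved.
        wei = q @ k.transpose(-2, -1) * k.shape[-1] ** -0.5
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float("-inf"))
        wei = F.softmax(wei, dim=-1)
        return wei @ v       # weighted sum of values, (B, T, head_size)

head = Head(n_embd=32, head_size=16, block_size=8)
out = head(torch.randn(4, 8, 32))   # -> (4, 8, 16)
```

Stacking several of these heads, adding positional encodings, normalization, and feed-forward layers yields the GPTLanguageModel the course assembles in its later topics.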