Large Language Models (LLMs)

What are LLMs?
  Definition and Intuition
  LLM vs. Standard Transformer
  Typical Sizes (Parameters, Layers, Data)
Pretraining LLMs
  Causal Language Modeling (CLM)
  Masked Language Modeling (MLM)
  Next Sentence Prediction and Others
  Self-supervised Learning Explained
Fine-tuning Strategies
  Task-specific Fine-tuning
  Instruction Tuning
  RLHF (Reinforcement Learning from Human Feedback)
  Parameter-Efficient Tuning (LoRA, Adapters)
Evaluation of LLMs
  Standard Benchmarks (GLUE, HELM, MMLU)
  Measuring Hallucination, Bias, Toxicity
  Tradeoffs: Size vs. Accuracy vs. Inference Cost
LLM Capabilities and Limitations
  Emergent Behaviors at Scale
  In-Context Learning
  Hallucinations and Failures
  Prompt Sensitivity
Use Cases and Applications
  Chatbots and Assistants
  Text Summarization
  Search + RAG (Retrieval-Augmented Generation)
  Reasoning, Coding, Math, and Beyond
Risks and Ethical Concerns
  Model Misuse and Jailbreaks
  Biases and Stereotyping
  Privacy, Copyright, and Data Leakage
  Responsible Deployment
Building with LLMs Today
  Open vs. Closed Models
  APIs (OpenAI, Anthropic, Cohere)
  Open-source (LLaMA, Mistral, Falcon)
  Model Selection Tips
Closing Notes
  Summary and Key Takeaways
  Next Up: Prompt Engineering (05)
  What to Practice
What are LLMs?

Definition and Intuition

LLM vs. Standard Transformer

Typical Sizes (Parameters, Layers, Data)
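
A rough back-of-envelope sketch of where parameter counts come from, assuming the common approximation of about 12 · n_layers · d_model² weights per model (4d² for the attention projections plus 8d² for a 4x-expanded MLP), ignoring biases and layer norms. The config below is hypothetical, chosen to land near GPT-3 scale.

```python
# Back-of-envelope parameter count for a decoder-only transformer.
# Assumes ~12 * d_model^2 parameters per block: 4*d^2 for the Q/K/V/O
# attention projections plus 8*d^2 for a 4x-expanded two-layer MLP.
def approx_params(n_layers: int, d_model: int, vocab: int = 50_000) -> int:
    block = 12 * d_model ** 2        # one transformer block
    embed = vocab * d_model          # token embedding matrix
    return n_layers * block + embed

# Hypothetical GPT-3-scale config: 96 layers, d_model = 12288.
print(f"{approx_params(96, 12288) / 1e9:.1f}B parameters")  # ~174.6B
```
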
Pretraining LLMs

Causal Language Modeling (CLM)
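
A minimal sketch of the CLM objective in PyTorch, assuming only that the model emits one logit vector per position: each position is trained to predict the next token, so the labels are the inputs shifted left by one. Random logits stand in for a real model's output here.

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 8, 100
input_ids = torch.randint(0, vocab, (batch, seq_len))
logits = torch.randn(batch, seq_len, vocab)  # placeholder model output

# Position i is scored against the token at position i + 1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),  # predictions for steps 0..n-2
    input_ids[:, 1:].reshape(-1),       # targets for steps 1..n-1
)
print(loss)  # ~ln(vocab) ≈ 4.6 for random logits
```
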
Masked Language Modeling (MLM)
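
A minimal sketch of BERT-style masking, assuming a BERT-like vocabulary where 103 is the [MASK] id (an assumption, not a requirement): roughly 15% of positions are corrupted, and only those positions contribute to the loss. Real BERT additionally leaves 10% of selected tokens unchanged and swaps 10% for random tokens.

```python
import torch

MASK_ID = 103                                # assumed [MASK] token id
input_ids = torch.randint(1000, 2000, (1, 12))
labels = input_ids.clone()

mask = torch.rand(input_ids.shape) < 0.15    # pick ~15% of positions
input_ids[mask] = MASK_ID                    # corrupt the inputs
labels[~mask] = -100                         # -100 = ignored by the loss
```
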
Next Sentence Prediction and Others

Self-supervised Learning Explained
Fine-tuning Strategies

Task-specific Fine-tuning

Instruction Tuning
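
A sketch of how instruction-tuning examples are commonly serialized into a single training string. The "### Instruction / ### Response" template below is illustrative (Alpaca-style), not any particular model's official format.

```python
example = {
    "instruction": "Summarize the text in one sentence.",
    "input": "LLMs are transformers pretrained on large text corpora...",
    "output": "LLMs are large transformer language models pretrained on text.",
}

# Render the pair into one string; the model is trained to produce the
# response given everything that precedes it.
text = (
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Input:\n{example['input']}\n\n"
    f"### Response:\n{example['output']}"
)
```
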
RLHF (Reinforcement Learning from Human Feedback)
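
A sketch of the reward-modeling step inside RLHF, using the standard pairwise preference loss -log σ(r_chosen - r_rejected). The scalar rewards below are placeholders for a real reward model's outputs.

```python
import torch
import torch.nn.functional as F

r_chosen = torch.tensor([1.3])    # reward for the human-preferred reply
r_rejected = torch.tensor([0.2])  # reward for the rejected reply

# Pairwise preference loss: push the preferred reply's reward higher.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
print(loss)  # small once the model ranks the pair correctly
```
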
Parameter-Efficient Tuning (LoRA, Adapters)
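
A minimal sketch of a LoRA layer, assuming the usual formulation: the pretrained weight stays frozen and a trainable low-rank update B·A (scaled by alpha/r) is added, so only r · (d_in + d_out) parameters are learned per adapted layer. Class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained layer
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus scaled low-rank update x @ (B @ A)^T.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))   # same output shape as the base layer
```
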
Evaluation of LLMs

Standard Benchmarks (GLUE, HELM, MMLU)

Measuring Hallucination, Bias, Toxicity

Tradeoffs: Size vs. Accuracy vs. Inference Cost
LLM Capabilities and Limitations

Emergent Behaviors at Scale

In-Context Learning
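
A sketch of few-shot in-context learning: the "training data" lives entirely in the prompt, and no weights change. The reviews below are made up.

```python
prompt = """Classify the sentiment as positive or negative.

Review: The plot dragged and the acting was flat.
Sentiment: negative

Review: A warm, funny film with a fantastic cast.
Sentiment: positive

Review: I walked out halfway through.
Sentiment:"""
# Sent to a capable LLM, this should elicit "negative" as the completion.
```
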
Hallucinations and Failures

Prompt Sensitivity
Use Cases and Applications

Chatbots and Assistants

Text Summarization

Search + RAG (Retrieval-Augmented Generation)
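
A minimal sketch of the RAG pattern: embed the query, retrieve the most similar document, and prepend it to the prompt. The bag-of-words "embedding" here is a toy stand-in; real systems use a trained embedding model and a vector store.

```python
import math
from collections import Counter

docs = [
    "LoRA adds trainable low-rank matrices to frozen weights.",
    "RLHF fine-tunes models against a learned reward signal.",
    "RAG grounds generation in retrieved documents.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = "How does RAG ground its answers?"
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"  # fed to the LLM
```
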
Reasoning, Coding, Math, and Beyond
Risks and Ethical Concerns

Model Misuse and Jailbreaks

Biases and Stereotyping

Privacy, Copyright, and Data Leakage

Responsible Deployment
Building with LLMs Today

Open vs. Closed Models

APIs (OpenAI, Anthropic, Cohere)
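
A sketch of calling a hosted model through the OpenAI Python SDK (v1-style client). The model name is an illustrative placeholder, and OPENAI_API_KEY is assumed to be set in the environment; other providers' SDKs follow a similar shape.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any available chat model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain RAG in one sentence."},
    ],
)
print(response.choices[0].message.content)
```
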
Open-source (LLaMA, Mistral, Falcon)
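
A sketch of running an open-weight model locally with Hugging Face transformers. The checkpoint name is illustrative and assumes you have access to it and the hardware to run it.

```python
from transformers import pipeline

# Any causal-LM checkpoint works here; Mistral-7B-Instruct is one example.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1")
result = generator("Explain LoRA in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])
```
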
Model Selection Tips
Closing Notes

Summary and Key Takeaways

Next Up: Prompt Engineering (05)

What to Practice