Part 1 completed!
This is the end of the first part of the course! Part 2 will be released on November 15th with a big community event; see more information here.
You should now be able to fine-tune a pretrained model on a text classification problem (single sentences or pairs of sentences) and upload the result to the Model Hub. To make sure you've mastered this first section, you should do exactly that on a problem that interests you (and not necessarily in English, if you speak another language)! You can find help in the Hugging Face forums and share your project in this topic once you're finished.
We can’t wait to see what you will build with this!