In this short workshop, you'll get to fine-tune a language model on a custom dataset. We'll cover the main challenges and the building blocks of the fine-tuning procedure: model quantization, parameter-efficient fine-tuning (PEFT) and low-rank adapters (LoRA), chat templates and dataset formatting, and training arguments such as gradient checkpointing, gradient accumulation, sequence length, and optimizers. We'll use Google Colab, BitsAndBytes, and several Hugging Face libraries (peft, datasets, and transformers).
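
As a first illustration of the quantization and LoRA building blocks, here is a minimal QLoRA-style sketch using BitsAndBytes and peft. The base model checkpoint, rank, and target modules are illustrative assumptions, not necessarily the ones used in the workshop notebooks:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical base model; the workshop may use a different checkpoint
model_id = "microsoft/Phi-3-mini-4k-instruct"

# 4-bit NF4 quantization via BitsAndBytes: weights are stored in 4 bits,
# while computation happens in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Prepare the quantized model for training and attach LoRA adapters:
# only the small low-rank matrices are trained, the base weights stay frozen
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                      # rank of the low-rank update matrices (assumed value)
    lora_alpha=16,            # scaling factor applied to the update
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # layers that receive adapters
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```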
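
For the chat-template and dataset-formatting step, a short sketch of how a conversational dataset can be rendered into the model's prompt format with `datasets` and the tokenizer's chat template. The dataset name and the `messages` column are assumptions for illustration only:

```python
from datasets import load_dataset

# Hypothetical dataset with a "messages" column of {"role": ..., "content": ...} dicts
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

def format_example(example):
    # apply_chat_template renders the conversation into the model's expected prompt format
    text = tokenizer.apply_chat_template(
        example["messages"], tokenize=False, add_generation_prompt=False
    )
    return {"text": text}

dataset = dataset.map(format_example)
print(dataset[0]["text"][:500])  # inspect one formatted example
```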
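
Finally, a sketch of training arguments that exercise gradient checkpointing, gradient accumulation, and a memory-efficient optimizer. The specific values are assumptions; what fits depends on the GPU available in Colab and on the chosen sequence length:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=1,
    per_device_train_batch_size=1,     # small micro-batch to fit in Colab memory
    gradient_accumulation_steps=8,     # effective batch size = 1 x 8
    gradient_checkpointing=True,       # trade extra compute for lower activation memory
    learning_rate=2e-4,
    optim="paged_adamw_8bit",          # memory-efficient optimizer backed by bitsandbytes
    logging_steps=10,
    bf16=True,
)
```

These arguments are then passed to a `Trainer` together with the LoRA-wrapped model and the formatted dataset.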