dvgodoy/FineTuningLLMs101_ODSC_Europe2024

Fine-Tuning LLMs 101 ODSC Europe 2024

In this short workshop, you'll get to fine-tune a language model on a custom dataset. We'll cover the main challenges and the building blocks of the fine-tuning procedure: model quantization, parameter-efficient fine-tuning (PEFT) and low-rank adapters (LoRA), chat templates and dataset formatting, and training arguments such as gradient checkpointing, gradient accumulation, sequence length, and optimizers. We'll use Google Colab, BitsAndBytes, and several Hugging Face libraries (peft, datasets, and transformers).

Open in Colab
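The building blocks listed above can be sketched as a single setup script. This is a minimal, illustrative configuration only, not the workshop's actual notebook: the model id ("meta-llama/Llama-3.2-1B-Instruct") and all hyperparameter values (LoRA rank, learning rate, accumulation steps) are assumptions chosen for the sketch.

```python
# Sketch of the fine-tuning building blocks: quantization, LoRA adapters,
# chat templates, and memory-saving training arguments.
# NOTE: model id and all hyperparameters below are illustrative assumptions.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # illustrative model choice

# 1) Model quantization: load the base model in 4-bit NF4 so it fits
#    in the memory of a free Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# 2) PEFT + LoRA: freeze the quantized base weights and train only small
#    low-rank adapters attached to the attention projections.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical Llama-style targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# 3) Chat template: render a conversation into the prompt format the
#    model was trained with (requires a tokenizer that ships a template).
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What is LoRA?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# 4) Training arguments: gradient checkpointing and gradient accumulation
#    trade extra compute for lower memory; a paged 8-bit optimizer from
#    bitsandbytes keeps optimizer states small.
training_args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size of 8
    gradient_checkpointing=True,
    max_steps=100,
    learning_rate=2e-4,
    optim="paged_adamw_8bit",
)
```

The script stops short of calling a trainer; in practice these pieces would be handed to a `Trainer` (or TRL's `SFTTrainer`) together with a formatted dataset from the `datasets` library.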
