Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
https://huggingface.co/blog/4bit-transformers-bitsandbytes

Tim Dettmers Hugging Face Profile – https://huggingface.co/timdettmers

QLORA: Efficient Finetuning of Quantized LLMs Paper – https://arxiv.org/pdf/2305.14314.pdf

LoRA config parameters – https://huggingface.co/docs/peft/conceptual_guides/lora

PEFT Supported Models – https://huggingface.co/docs/peft/index#causal-language-modeling

Google Colab – https://colab.research.google.com/drive/1Vvju5kOyBsDr7RX_YAvp6ZsSOoSMjhKD?usp=sharing
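The linked blog post and Colab combine 4-bit NF4 quantization (bitsandbytes) with a LoRA adapter (PEFT). A minimal sketch of that setup, assuming a placeholder model name and illustrative LoRA hyperparameters (r, alpha, target_modules are not the notebook's exact values):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with double quantization, per the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Illustrative LoRA hyperparameters -- tune r / lora_alpha / target_modules
# for your model architecture
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Model name is a placeholder assumption; any PEFT-supported causal LM works
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights train
```

With this setup only the small LoRA matrices are updated in 16-bit while the frozen base weights stay in 4-bit, which is what makes finetuning fit on a single consumer GPU.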

❤️ If you want to support the channel ❤️
Support here:
Patreon – https://www.patreon.com/1littlecoder/
Ko-Fi – https://ko-fi.com/1littlecoder
