Instructions for using 8bit-coder/alpaca-7b-nativeEnhanced with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Adapters
How to use 8bit-coder/alpaca-7b-nativeEnhanced with Adapters:
```python
from adapters import AutoAdapterModel

# Note: the page rendered "undefined" here; substitute the base checkpoint ID
# (e.g. the LLaMA-7B base this adapter was trained on) before running.
model = AutoAdapterModel.from_pretrained("undefined")
model.load_adapter("8bit-coder/alpaca-7b-nativeEnhanced", set_active=True)
```
- Notebooks
- Google Colab
- Kaggle
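Once the model and adapter are loaded, Alpaca-style fine-tunes are typically prompted with the Stanford Alpaca instruction template. A minimal sketch of building such a prompt (the helper name is illustrative, and it is an assumption that this particular fine-tune follows the standard template):

```python
def build_alpaca_prompt(instruction: str, model_input: str = "") -> str:
    """Build a prompt in the standard Stanford Alpaca template.

    Assumption: this fine-tune follows the common Alpaca format with
    "### Instruction:" / "### Input:" / "### Response:" sections.
    """
    if model_input:
        # Variant used when the task has additional context ("input").
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{model_input}\n\n"
            "### Response:\n"
        )
    # Variant used when the instruction stands alone.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the following text.", "Alpaca is a 7B model.")
```

The resulting string would then be tokenized and passed to `model.generate`; the model is expected to continue the text after the `### Response:` marker.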
Discussions:
- "I got this error when using the model" (#10, opened about 2 years ago by favioespinosav)
- "Tuning error" (#9, opened about 3 years ago by EllNiko)
- "13b, 30b?" (#8, opened about 3 years ago by tensiondriven, 2 replies)
- "Loading the model 26gb?" (#7, opened about 3 years ago by MooCow27, 2 replies)
- "Can we get the GPTQ quantized model?" (#5, opened about 3 years ago by TheYuriLover, ❤️ 3, 2 replies)