kernels-community/quantization-bitsandbytes
This is the repository card of kernels-community/quantization-bitsandbytes, pushed to the Hugging Face Hub. It was built to be used with the kernels library. This card was automatically generated.
How to use
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

# fetches the compiled kernel from the Hub on first use
kernel_module = get_kernel("kernels-community/quantization-bitsandbytes")

# the module exposes the 4-bit GEMM forward pass
gemm_4bit_forward = kernel_module.gemm_4bit_forward
gemm_4bit_forward(...)
Available functions
gemm_4bit_forward
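Running the kernel itself requires a CUDA device, but the computation `gemm_4bit_forward` performs can be sketched in plain NumPy: blockwise absmax quantization of the weights to 4-bit values, followed by a matmul against the dequantized weights. Everything below (helper names, the block size, and the plain signed-int4 grid used in place of bitsandbytes' NF4/FP4 codebooks and packed storage) is an illustrative assumption, not the kernel's actual API.

```python
import numpy as np

def quantize_4bit_blockwise(w, block_size=64):
    """Blockwise absmax quantization to signed 4-bit integers in [-8, 7].

    Illustration only: the real kernel uses NF4/FP4 codebooks and packs
    two 4-bit values per byte, not a plain int4 grid stored as int8.
    """
    flat = w.reshape(-1, block_size)
    absmax = np.abs(flat).max(axis=1, keepdims=True)  # one scale per block
    absmax = np.where(absmax == 0, 1.0, absmax)       # avoid division by zero
    q = np.clip(np.round(flat / absmax * 7), -8, 7).astype(np.int8)
    return q, absmax

def dequantize_4bit_blockwise(q, absmax, shape):
    """Map the 4-bit codes back to floats using the per-block scales."""
    return (q.astype(np.float32) / 7 * absmax).reshape(shape)

def gemm_4bit_reference(x, q, absmax, w_shape):
    """Reference for what a 4-bit GEMM computes: x @ dequant(W).T."""
    w = dequantize_4bit_blockwise(q, absmax, w_shape)
    return x @ w.T

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64)).astype(np.float32)  # weight matrix
x = rng.standard_normal((4, 64)).astype(np.float32)   # input activations

q, absmax = quantize_4bit_blockwise(W)
y = gemm_4bit_reference(x, q, absmax, W.shape)

# the quantized result tracks the full-precision matmul
err = np.abs(y - x @ W.T).max()
```

The fused CUDA kernel never materializes the dequantized weights; it decodes blocks on the fly inside the matmul, which is where the memory savings come from.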
Benchmarks
No benchmark available yet.