Instructions for using Lil-R/UMA_LLM_Engine_V2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use Lil-R/UMA_LLM_Engine_V2 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then apply the UMA_LLM_Engine_V2 adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("Lil-R/2_PRYMMAL-ECE-2B-SLERP-V1")
model = PeftModel.from_pretrained(base_model, "Lil-R/UMA_LLM_Engine_V2")
```
- Notebooks
- Google Colab
- Kaggle
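Once the adapter is loaded, the wrapped model generates text like any other transformers causal LM. A minimal sketch, assuming the base model repo provides the tokenizer (adapter repos usually do not ship one); the prompt and `max_new_tokens` value are illustrative, and running this downloads the base model and adapter weights:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tokenizer is taken from the base model repo, not the adapter repo.
tokenizer = AutoTokenizer.from_pretrained("Lil-R/2_PRYMMAL-ECE-2B-SLERP-V1")
base_model = AutoModelForCausalLM.from_pretrained("Lil-R/2_PRYMMAL-ECE-2B-SLERP-V1")
model = PeftModel.from_pretrained(base_model, "Lil-R/UMA_LLM_Engine_V2")

# Illustrative prompt and generation length.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For inference only, `model.eval()` and `torch.no_grad()` around the `generate` call avoid tracking gradients.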