Instructions to use khazarai/Quran-R1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use khazarai/Quran-R1 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="khazarai/Quran-R1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("khazarai/Quran-R1")
model = AutoModelForCausalLM.from_pretrained("khazarai/Quran-R1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use khazarai/Quran-R1 with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "khazarai/Quran-R1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "khazarai/Quran-R1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker
docker model run hf.co/khazarai/Quran-R1
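Since the server exposes an OpenAI-compatible API, the official openai Python client can be used instead of curl. A minimal sketch, assuming the vllm serve instance above is listening on localhost:8000:

# Minimal sketch: call the vLLM server with the openai client
# (pip install openai); assumes the server from above on port 8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="khazarai/Quran-R1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)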
- SGLang
How to use khazarai/Quran-R1 with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "khazarai/Quran-R1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "khazarai/Quran-R1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "khazarai/Quran-R1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "khazarai/Quran-R1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
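SGLang also ships an offline engine that skips the HTTP server entirely. A minimal sketch, assuming a recent sglang release where sgl.Engine and its generate/shutdown methods are available:

# Minimal sketch: SGLang offline engine (no HTTP server).
import sglang as sgl

llm = sgl.Engine(model_path="khazarai/Quran-R1")

prompts = ["What is the capital of France?"]
sampling_params = {"temperature": 0.6, "top_p": 0.95, "max_new_tokens": 64}

outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
    print(prompt, "->", output["text"])

llm.shutdown()  # release GPU resources when done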
- Unsloth Studio
How to use khazarai/Quran-R1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for khazarai/Quran-R1 to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for khazarai/Quran-R1 to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for khazarai/Quran-R1 to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="khazarai/Quran-R1",
    max_seq_length=2048,
)
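The loaded model and tokenizer follow the standard transformers interface, so generation works as usual. A minimal follow-up sketch (the prompt is illustrative):

# Minimal sketch: chat with the FastModel-loaded model via the
# standard transformers generate API.
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))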
- Docker Model Runner
How to use khazarai/Quran-R1 with Docker Model Runner:
docker model run hf.co/khazarai/Quran-R1
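Docker Model Runner also exposes an OpenAI-compatible HTTP API. A minimal sketch with requests, assuming host TCP access is enabled on its default port 12434 and that the endpoint lives under /engines/v1 (verify both against your Docker version):

# Minimal sketch: call Docker Model Runner's OpenAI-compatible API.
# Port 12434 and the /engines/v1 path are assumptions to verify.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/khazarai/Quran-R1",
        "messages": [{"role": "user", "content": "Who are you?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])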
Model Card for Quran-R1
Model Details
This model is a fine-tuned version of Qwen/Qwen3-0.6B on the musaoc/Quran-reasoning-SFT dataset. It is designed to perform reasoning and question-answering tasks related to the Quran, providing structured reasoning steps along with the final answer.
Model Description
- Language(s) (NLP): English
- License: MIT
- Fine-tuning method: Supervised fine-tuning (SFT)
- Finetuned from model: Qwen3-0.6B
- Dataset: musaoc/Quran-reasoning-SFT
Uses
The model is intended for:
- Educational purposes: Assisting with structured reasoning about Quranic content.
- Research: Exploring reasoning capabilities of small LLMs fine-tuned on religious text.
- QA Systems: Providing answers with reasoning traces.
Not intended for:
- Authoritative religious rulings (fatwas)
- Sensitive or controversial theological debates
- High-stakes decision making
Out-of-Scope Use
- Scope: The model is limited to the reasoning dataset it was trained on. It may not generalize to broader Quranic studies.
Bias, Risks, and Limitations
- Bias: Outputs reflect dataset biases and may not represent all scholarly interpretations.
- Hallucination risk: Like all LLMs, it may generate incorrect or fabricated reasoning.
- Religious sensitivity: Responses may not align with every sect, school, or interpretation. Use with caution in sensitive contexts.
How to Get Started with the Model
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("khazarai/Quran-R1")
model = AutoModelForCausalLM.from_pretrained(
    "khazarai/Quran-R1",
    device_map={"": 0},  # place the whole model on GPU 0
)

question = "How does the Quran address the issue of parental authority and children’s rights?"

messages = [
    {"role": "user", "content": question}
]

# Build the prompt with the chat template; enable_thinking=True asks the
# Qwen3-style template to emit a reasoning trace before the final answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

# Stream generated tokens to stdout as they are produced.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=512,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
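The output interleaves a reasoning trace with the final answer. Assuming the model inherits the Qwen3 convention of wrapping the trace in <think>...</think> tags (an assumption worth verifying on real outputs), the two parts can be separated like this:

# Minimal sketch: split a generated string into reasoning trace and
# final answer, assuming the Qwen3-style <think>...</think> convention.
import re

def split_reasoning(generated):
    match = re.search(r"<think>(.*?)</think>", generated, flags=re.DOTALL)
    if match is None:
        return "", generated.strip()  # no trace found; all answer
    return match.group(1).strip(), generated[match.end():].strip()

reasoning, answer = split_reasoning(
    "<think>Verse 17:23 commands kindness to parents.</think> The Quran..."
)
print("Reasoning:", reasoning)
print("Answer:", answer)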
Training Data
Dataset: musaoc/Quran-reasoning-SFT
The Quranic Reasoning Question Answering (QRQA) Dataset is a synthetic dataset designed for experimentation and for training and evaluating models capable of answering complex, knowledge-intensive questions about the Quran, with a strong emphasis on reasoning. It is particularly well suited to supervised fine-tuning (SFT) of large language models (LLMs), enhancing their understanding of Islamic scripture and their ability to provide thoughtful, reasoned responses.
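To inspect the data, the dataset can be pulled directly from the Hub with the datasets library. A minimal sketch; the "train" split name and record layout are assumptions to check against the dataset card:

# Minimal sketch: load the SFT dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("musaoc/Quran-reasoning-SFT", split="train")
print(ds)     # column names and row count
print(ds[0])  # one raw record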