Instructions to use RinggAI/Transcript-Analytics-SLM0.5b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use RinggAI/Transcript-Analytics-SLM0.5b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RinggAI/Transcript-Analytics-SLM0.5b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RinggAI/Transcript-Analytics-SLM0.5b")
model = AutoModelForCausalLM.from_pretrained("RinggAI/Transcript-Analytics-SLM0.5b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
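Since the model is tuned for call-transcript analytics, a more representative prompt is an actual transcript. A minimal sketch; the transcript and prompt wording here are illustrative, not the exact training format:

# Sketch: running the pipeline on a call transcript instead of a generic prompt.
# The prompt wording is illustrative; see the response schema further down this
# card for the structured output the model was tuned to produce.
from transformers import pipeline

pipe = pipeline("text-generation", model="RinggAI/Transcript-Analytics-SLM0.5b")

transcript = (
    "Agent: Hello, main Ringg se bol raha hoon. Kya aapko delivery mil gayi?\n"
    "Customer: Haan mil gayi, but box damaged tha. Replacement chahiye."
)
messages = [
    {"role": "user", "content": f"Analyze this call transcript:\n{transcript}"},
]
print(pipe(messages, max_new_tokens=256))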
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use RinggAI/Transcript-Analytics-SLM0.5b with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RinggAI/Transcript-Analytics-SLM0.5b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RinggAI/Transcript-Analytics-SLM0.5b",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
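Because the vLLM server is OpenAI-compatible, you can also call it from Python with the openai client. A minimal sketch; the empty API key is a placeholder for a local, unauthenticated server:

# Sketch: calling the local vLLM server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RinggAI/Transcript-Analytics-SLM0.5b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)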
Use Docker
docker model run hf.co/RinggAI/Transcript-Analytics-SLM0.5b
- SGLang
How to use RinggAI/Transcript-Analytics-SLM0.5b with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "RinggAI/Transcript-Analytics-SLM0.5b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RinggAI/Transcript-Analytics-SLM0.5b",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "RinggAI/Transcript-Analytics-SLM0.5b" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RinggAI/Transcript-Analytics-SLM0.5b",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'

- Unsloth Studio
How to use RinggAI/Transcript-Analytics-SLM0.5b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for RinggAI/Transcript-Analytics-SLM0.5b to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for RinggAI/Transcript-Analytics-SLM0.5b to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for RinggAI/Transcript-Analytics-SLM0.5b to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="RinggAI/Transcript-Analytics-SLM0.5b",
    max_seq_length=2048,
)
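Once loaded, the checkpoint can be switched to inference mode and prompted directly. A minimal sketch, assuming Unsloth's for_inference helper and the tokenizer's built-in chat template:

# Sketch: generating with the FastModel-loaded checkpoint.
# ASSUMPTION: FastModel.for_inference enables Unsloth's fast inference path.
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="RinggAI/Transcript-Analytics-SLM0.5b",
    max_seq_length=2048,
)
FastModel.for_inference(model)

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:]))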
- Docker Model Runner
How to use RinggAI/Transcript-Analytics-SLM0.5b with Docker Model Runner:
docker model run hf.co/RinggAI/Transcript-Analytics-SLM0.5b
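Docker Model Runner also exposes an OpenAI-compatible API. A minimal sketch, assuming host TCP access is enabled on the default port 12434; both the port and the /engines/v1 path depend on your configuration:

# Sketch: calling Docker Model Runner's OpenAI-compatible endpoint.
# ASSUMPTION: TCP host access is enabled on the default port 12434;
# adjust base_url to match your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed",  # local server, no auth
)
response = client.chat.completions.create(
    model="hf.co/RinggAI/Transcript-Analytics-SLM0.5b",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.choices[0].message.content)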
As calling operations scale, it becomes clear that dialing and talking are not enough. Even with a strong voice AI + telephony architecture, the real value shows up only when post-call actions are captured and executed in a robust, dependable, and consistent way. Closing the loop matters more than just connecting the call.
To support that, we’re releasing our Hindi + English transcript analytics model, tuned specifically for call transcripts.
You can plug it into your calling or voice AI stack to automatically extract:
• Enum-based classifications (e.g., call outcome, intent, disposition)
• Conversation summaries
• Action items / follow-ups
It’s built to handle real-world Hindi, English, and mixed Hinglish calls, including noisy transcripts, as in the sketch below.
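A sketch of that flow end to end, assuming the model is served behind the vLLM OpenAI-compatible endpoint shown above and returns JSON matching the response schema documented below; the transcript and field values are illustrative:

# Sketch: extracting structured analytics from a Hinglish call transcript.
# ASSUMPTIONS: a local vLLM server (see above) and JSON output that follows
# the response_schema shown later in this card.
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

transcript = (
    "Agent: Hello ma'am, EMI payment reminder call hai. Due date Friday hai.\n"
    "Customer: Haan pata hai, main Thursday ko pay kar dungi."
)
response = client.chat.completions.create(
    model="RinggAI/Transcript-Analytics-SLM0.5b",
    messages=[{"role": "user", "content": f"Analyze this transcript:\n{transcript}"}],
)
result = json.loads(response.choices[0].message.content)
print(result["summary"])           # conversation summary
print(result["classification"])    # enum-based fields (outcome, intent, ...)
print(result.get("action_items"))  # follow-ups, may be null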
Finetuning Parameters:
rank = 64
lora_alpha = rank * 2
target_modules = [
    "q_proj", "k_proj", "v_proj", "o_proj",
    "gate_proj", "up_proj", "down_proj",
]

SFTConfig(
    dataset_text_field = "prompt",
    per_device_train_batch_size = 32,
    gradient_accumulation_steps = 1,  # Use GA to mimic a larger batch size
    warmup_steps = 5,
    num_train_epochs = 3,
    learning_rate = 2e-4,
    logging_steps = 50,
    optim = "adamw_8bit",
    weight_decay = 0.001,
    lr_scheduler_type = "linear",
    seed = SEED,
    report_to = "wandb",
    eval_strategy = "steps",
    eval_steps = 200,
)
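For context, here is a sketch of how these parameters wire together in an Unsloth + TRL SFT run; the dataset file, split, and SEED value are illustrative placeholders, not the actual training setup:

# Sketch: wiring the parameters above into an Unsloth + TRL SFT run.
# The dataset file, split, and SEED are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastModel

SEED = 3407  # placeholder

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/Qwen2.5-0.5B-Instruct",
    max_seq_length=2048,
)
model = FastModel.get_peft_model(
    model,
    r=64,            # rank
    lora_alpha=128,  # rank * 2
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="transcripts.jsonl", split="train")
splits = dataset.train_test_split(test_size=0.05, seed=SEED)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # processing_class= in newer TRL versions
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    args=SFTConfig(
        dataset_text_field="prompt",
        per_device_train_batch_size=32,
        gradient_accumulation_steps=1,
        warmup_steps=5,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=50,
        optim="adamw_8bit",
        weight_decay=0.001,
        lr_scheduler_type="linear",
        seed=SEED,
        report_to="wandb",
        eval_strategy="steps",
        eval_steps=200,
    ),
)
trainer.train()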
The model was finetuned on ~100,000 curated transcripts across different domains and language preferences.
Provide the schema below for best output:
response_schema = {
"type": "object",
"properties": {
"key_points": {
"type": "array",
"items": {"type": "string"},
"nullable": True,
},
"action_items": {
"type": "array",
"items": {"type": "string"},
"nullable": True,
},
"summary": {"type": "string"},
"classification": classification_schema,
},
"required": ["summary", "classification"],
}
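classification_schema is supplied by the caller. A minimal sketch of an enum-based classification schema, plus validating a model response against the combined schema; the enum values here are illustrative, not the trained label set:

# Sketch: an illustrative enum-based classification schema plus validation.
# The enum values are examples only, not the model's trained label set.
import json

from jsonschema import validate

classification_schema = {
    "type": "object",
    "properties": {
        "call_outcome": {
            "type": "string",
            "enum": ["resolved", "follow_up_required", "escalated", "no_answer"],
        },
        "intent": {
            "type": "string",
            "enum": ["payment", "delivery", "complaint", "inquiry", "other"],
        },
    },
    "required": ["call_outcome", "intent"],
}

# response_schema is the dict shown above, with classification_schema plugged in.
raw_output = (
    '{"summary": "Customer confirmed EMI payment by Thursday.", '
    '"classification": {"call_outcome": "resolved", "intent": "payment"}}'
)
validate(instance=json.loads(raw_output), schema=response_schema)  # raises if invalid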
Developed by: RinggAI
License: apache-2.0
Finetuned from model: unsloth/Qwen2.5-0.5B-Instruct
Parameter decisions were made following:
Schulman, J., & Thinking Machines Lab. (2025). LoRA Without Regret. Thinking Machines Lab: Connectionism.
DOI: 10.64434/tml.20250929
Link: https://thinkingmachines.ai/blog/lora/
Model tree for RinggAI/Transcript-Analytics-SLM0.5b
Base model: Qwen/Qwen2.5-0.5B

