Instructions to use ARO-Lang/aro-coder-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use ARO-Lang/aro-coder-4bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("ARO-Lang/aro-coder-4bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use ARO-Lang/aro-coder-4bit with Pi:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "ARO-Lang/aro-coder-4bit"
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "ARO-Lang/aro-coder-4bit" }
      ]
    }
  }
}
```
Run Pi
```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use ARO-Lang/aro-coder-4bit with Hermes Agent:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "ARO-Lang/aro-coder-4bit"
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default ARO-Lang/aro-coder-4bit
```
Run Hermes
```shell
hermes
```
- MLX LM
How to use ARO-Lang/aro-coder-4bit with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "ARO-Lang/aro-coder-4bit"
```
Run an OpenAI-compatible server
```shell
# Install MLX LM
uv tool install mlx-lm

# Start the server (defaults to port 8080)
mlx_lm.server --model "ARO-Lang/aro-coder-4bit"

# Call the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ARO-Lang/aro-coder-4bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
ARO Coder
A fine-tuned code generation model specialised in the ARO (Action Result Object) programming language.
ARO is a domain-specific language where every statement follows the pattern:
```
Verb the <Result> preposition [the] <Object>.
```
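For illustration, a few statements in this shape, with verbs and objects drawn from the examples later in this card:

```
Log "Hello, world!" to the <console>.
Extract the <id> from the <pathParameters: id>.
Return an <OK: status> with <user>.
```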
| Property | Value |
|---|---|
| Base model | mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit |
| Quantization | 4-bit (MLX) |
| Language | ARO |
| Training samples | 2943 |
| Syntax pass rate | 57% |
| Source label | distill_student |
Links
- Website: arolang.github.io/aro
- GitHub: github.com/arolang/aro
- Documentation: Wiki
- Language Guide (PDF): Download
- Discussions: GitHub Discussions
Quick Start
MLX (Apple Silicon)
```python
from mlx_lm import load, generate

model, tokenizer = load("ARO-Lang/aro-coder-4bit")

messages = [
    {"role": "system", "content": "You are an expert ARO programmer."},
    {"role": "user", "content": "Write an ARO feature set that retrieves a user by ID and returns an OK response."},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
MLX Server (OpenAI-compatible API)
```shell
python -m mlx_lm.server --model ARO-Lang/aro-coder-4bit --port 8080
```
```shell
curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "aro-coder", "messages": [{"role": "user", "content": "Write hello world in ARO"}]}'
```
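The same request can be issued from Python with only the standard library. A minimal sketch, assuming the server above is running on `localhost:8080` (the helper name `build_chat_request` is our own, not part of any API):

```python
import json
import urllib.request

def build_chat_request(model, content, base_url="http://localhost:8080/v1"):
    """Build a chat-completions POST request for the local OpenAI-compatible server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("aro-coder", "Write hello world in ARO")
print(req.full_url)  # http://localhost:8080/v1/chat/completions

# With the server running, send it and read the reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```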
Ollama
```shell
ollama run aro-coder
```
Example Output
Prompt: Write an ARO Application-Start that starts an HTTP server.
```
(Application-Start: My API) {
  Log "Starting server..." to the <console>.
  Start the <http-server> with <contract>.
  Keepalive the <application> for the <events>.
  Return an <OK: status> for the <startup>.
}
```
What is ARO?
ARO is a DSL for expressing business features as Action-Result-Object statements.
Every program is a directory of .aro files with event-driven feature sets:
```
(getUser: User API) {
  Extract the <id> from the <pathParameters: id>.
  Retrieve the <user> from the <user-repository> where id = <id>.
  Return an <OK: status> with <user>.
}
```
Key features:
- Contract-first HTTP — routes are defined in `openapi.yaml`; feature sets match the `operationId`
- Event-driven — feature sets are triggered by events, not direct calls
- Immutable bindings — every transformation produces a new name
- Happy-path only — no error handling code; the runtime manages errors
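To make the immutable-bindings rule concrete, here is a hypothetical feature set in which every step introduces a fresh name rather than mutating an existing one (verbs modeled on the examples above; the identifiers and the `Compute` statement are illustrative, not taken from the language spec):

```
(getOrderTotal: Order API) {
  Extract the <user-id> from the <pathParameters: id>.
  Retrieve the <orders> from the <order-repository> where userId = <user-id>.
  Compute the <total> from the <orders>.
  Return an <OK: status> with <total>.
}
```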
Training
This model was trained with the ARO training pipeline:
- Corpus collection — 2943 samples from Examples, Book, Wiki, Proposals, and real-world ARO applications
- Supervised fine-tuning — LoRA on all code generation, debugging, Q&A, and explanation tasks
- DPO preference training — using `aro check` validation to build chosen/rejected pairs
- Iterative self-improvement — multiple rounds of generate-validate-retrain
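The pair-building step of DPO can be sketched as follows. This is a hypothetical helper, not the actual pipeline code: for each prompt it pairs generations that pass validation with ones that fail; the real pipeline uses `aro check` as the validator, for which a stub stands in here.

```python
# Sketch: build DPO chosen/rejected pairs from validated generations.
# `validates` stands in for the real `aro check` syntax validator (assumption).
def build_dpo_pairs(prompt_to_samples, validates):
    """For each prompt, pair every passing sample with every failing one."""
    pairs = []
    for prompt, samples in prompt_to_samples.items():
        passed = [s for s in samples if validates(s)]
        failed = [s for s in samples if not validates(s)]
        for chosen in passed:
            for rejected in failed:
                pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

# Demo with a toy validator: treat samples ending in "." as syntactically valid.
demo = {
    "Write hello world in ARO": [
        'Log "Hello, world!" to the <console>.',
        "Log hello world console",
    ]
}
pairs = build_dpo_pairs(demo, lambda s: s.endswith("."))
```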
License
This model and the ARO language are open source under the MIT License.