We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model that embeds an architecturally internalised sense of self-confidence directly into the transformer via our proprietary Native Entropy Gating (NEG) technology.
With only 9 billion parameters and 1× inference cost, Pure NEG jumps +12.63 %p over the same model without NEG. Going all-in with ensemble refinement pushes it to 84.34%, surpassing the published Qwen3.5-9B leaderboard score (81.7%) by +2.64 %p.
What makes NEG different from Multi-Turn Iteration (MTI)?
Classical MTI needs 3-8× extra inference passes. NEG instead lives INSIDE the single decoding loop. Two tiny modules ride with the transformer: NEG-Head predicts per-token entropy from the last hidden state, and NEG-Gate conditionally restricts the top-k choice when confidence is low. The gate activates on only 4.36% of tokens, so it is essentially free at inference time.
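The post doesn't publish NEG's internals, but the idea of shrinking top-k when per-token entropy signals low confidence can be illustrated with a minimal sketch. All names, thresholds, and k values below are assumptions for illustration, not the actual Darwin-9B-NEG implementation:

```python
import math
import random

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def neg_gate_sample(probs, entropy_threshold=1.0, restricted_k=2, default_k=50):
    """Hypothetical entropy-gated top-k sampling.

    When entropy is above the threshold (low confidence), the gate
    restricts sampling to a smaller top-k; otherwise it samples
    from the usual wider top-k. Parameter values are illustrative.
    """
    k = restricted_k if token_entropy(probs) > entropy_threshold else default_k
    # Keep the k most probable token ids, renormalise, and sample.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    r, acc = random.random() * total, 0.0
    for i in top:
        acc += probs[i]
        if r <= acc:
            return i
    return top[-1]
```

Because the gate only swaps the k used at each decoding step, it adds no extra forward passes, which is consistent with the 1× inference cost claim above.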
✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers, no extra engine
• +12.63 %p reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed
Introducing Unsloth Studio ✨ A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
The Qwen3.5 Multimodal Understanding Demo, powered by Qwen3.5-2B, is now available on HF Spaces! Qwen3.5-2B is a lightweight model designed for fast image and video reasoning. Built with Gradio, the demo showcases Image QA, Video QA, object detection, and 2D point tracking, along with real-time token streaming.