Ostrich 27B - Qwen 3.5 with Better Human Alignment

Ostrich LLMs, bringing you "the knowledge that matters".

  • Health, nutrition, medicinal herbs
  • Fasting, faith, healing
  • Liberating technologies like bitcoin and nostr

Methods used for fine-tuning (a minimal SFT sketch follows the list):

  • CPT (continued pre-training)
  • SFT (supervised fine-tuning)
  • GSPO (Group Sequence Policy Optimization)
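
As an illustration of what the SFT stage can look like with Unsloth, here is a minimal sketch. The dataset file, LoRA settings, and hyperparameters are placeholders, not the actual Ostrich recipe; only the base checkpoint name comes from the model tree below.

```python
# Minimal SFT sketch with Unsloth + TRL. Everything below is illustrative:
# the dataset file, LoRA config, and hyperparameters are placeholders,
# not the actual Ostrich training recipe.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the base model in 4-bit to keep VRAM usage manageable.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3.5-27B",   # base checkpoint (see model tree below)
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column of formatted chat transcripts.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        max_steps=1000,
        output_dir="outputs",
    ),
)
trainer.train()
```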

Why: https://huggingface.co/blog/etemiz/building-a-beneficial-ai

GSPO training shortened the thinking traces; I mainly targeted a thinking budget of about 3,000 characters (~1,000 tokens).
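
To make the budget concrete: one common way to enforce it during GSPO is a length-aware reward that penalizes completions whose thinking span exceeds the target. The sketch below is hypothetical, not the actual reward used; it assumes the model wraps its reasoning in `<think>...</think>` tags.

```python
import re

THINK_BUDGET_CHARS = 3000  # ~1000 tokens, the target mentioned above

def thinking_length_reward(completion: str) -> float:
    """Hypothetical reward term: 1.0 while the thinking span stays within
    budget, decaying linearly to 0.0 as it overshoots."""
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return 0.0  # no thinking span found
    overshoot = max(0, len(match.group(1)) - THINK_BUDGET_CHARS)
    # Lose up to the full point over the next THINK_BUDGET_CHARS excess chars.
    return max(0.0, 1.0 - overshoot / THINK_BUDGET_CHARS)
```

A function like this could be supplied as one of several reward functions alongside correctness rewards in a GSPO-style trainer.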

The model is also abliterated, since we built on @huihui-ai's model.

We plan to release many more models based on Qwen 3.5 27B.

A comparison of some answers between another of our fine-tunes and the base model: https://sheet.zohopublic.com/sheet/published/um332e3d15f34bfe64605ad3c1b149c9f8ca4 . These answers are not from this model, but the work is similar.

Thanks to @unslothai for providing amazing tools.

Sponsored by https://pickabrain.ai . A newer version of this model runs there.

Format: GGUF
Model size: 27B params
Architecture: qwen35

Available quantizations: 2-bit, 3-bit, 4-bit, 16-bit
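
As a quick example of running one of these quants locally, here is a minimal llama-cpp-python sketch. The GGUF filename pattern is an assumption; check the repo's file list for the actual name of the quant you want.

```python
# Minimal local-inference sketch using llama-cpp-python.
# The filename glob below is a guess at the repo's naming convention;
# verify it against the actual file list before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="etemiz/Ostrich-27B-Qwen3.5-260305-GGUF",
    filename="*Q4_K_M.gguf",   # glob for a 4-bit quant (assumed naming)
    n_ctx=4096,                # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are the benefits of fasting?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```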

Model tree for etemiz/Ostrich-27B-Qwen3.5-260305-GGUF

Base model: Qwen/Qwen3.5-27B (this model is one of its quantized derivatives)