We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model that embeds an architecturally internalised sense of self-confidence directly into the transformer via our proprietary Native Entropy Gating (NEG) technology.
With only 9 billion parameters and 1× inference cost, Pure NEG jumps +12.63 %p over the same model without NEG. Going all-in with ensemble refinement pushes it to 84.34 %, surpassing the published Qwen3.5-9B leaderboard score (81.7 %) by +2.64 %p.
🔬 What makes NEG different from Multi-Turn Iteration (MTI)?
Classical MTI needs 3-8× the inference passes. NEG instead lives INSIDE the single decoding loop. Two tiny modules ride along with the transformer: NEG-Head predicts per-token entropy from the last hidden state, and NEG-Gate conditionally restricts top-k sampling when confidence is low. The gate activates on only 4.36 % of tokens, making it essentially free at inference time.
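NEG itself is proprietary, so the sketch below is purely illustrative of the mechanism described above: the module names, the linear entropy probe, the 2.0 threshold, and the restricted k of 5 are all made-up placeholders, not Darwin's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NEGHead(nn.Module):
    """Tiny probe predicting per-token entropy from the last hidden state.
    (Illustrative stand-in: the real head's architecture is not public.)"""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the predicted entropy non-negative.
        return F.softplus(self.proj(last_hidden)).squeeze(-1)

def neg_gate(logits: torch.Tensor, predicted_entropy: torch.Tensor,
             threshold: float = 2.0, k_restricted: int = 5) -> torch.Tensor:
    """When predicted entropy is high (low confidence), mask everything
    outside the top-k so sampling sticks to strong candidates.
    threshold and k_restricted are placeholder values."""
    if predicted_entropy.item() <= threshold:
        return logits  # confident token: gate stays open (the ~95.6 % case)
    topk = torch.topk(logits, k_restricted, dim=-1)
    gated = torch.full_like(logits, float("-inf"))
    gated.scatter_(-1, topk.indices, topk.values)
    return gated

# One decode step at batch size 1, with stand-in tensors for the model's
# final hidden state and LM-head logits:
h = torch.randn(1, 4096)         # last hidden state for the current position
logits = torch.randn(1, 32000)   # vocabulary logits
head = NEGHead(hidden_size=4096)
next_token = torch.multinomial(
    F.softmax(neg_gate(logits, head(h)), dim=-1), num_samples=1
)
```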
✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers, no extra engine required
• +12.63 %p reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed
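Because the gating lives in the checkpoint itself, loading should look like any other causal LM. A minimal generation sketch with transformers, assuming a placeholder Hub repo id (the official path isn't stated in this post):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Darwin-9B-NEG"  # placeholder, not a confirmed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
          "more than the ball. How much does the ball cost?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```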
• Upload up to 6 photos - multi-view input for accurate reconstruction
• No photos? No problem - type a prompt and FLUX.1-Schnell generates your reference images
• AI vision pipeline - Qwen2.5-VL analyzes your angles and synthesizes the optimal 3D description
• Wireframe inspector - review topology before you export
• GLB export - drop it straight into Blender, ZBrush, Maya, Unity, or Unreal

🔑 Bring your own HF token. Nothing is stored server-side.

Works great as a starting mesh for retopology - pair it with [8VIEW AI Studio](ArtelTaleb/8view-ai-studio) to generate your character reference sheets first, then build the 3D asset here.

👉 ArtelTaleb/splat-explorer
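If you'd rather drive the Space from code than from the UI, one option is the gradio_client library. Nothing below comes from the post except the Space id: the endpoints and their parameters are undocumented here, so inspect them with view_api() before calling anything.

```python
# Hedged sketch: programmatic access to the Space via gradio_client.
# Endpoint names and parameters are unknown; list them first.
from gradio_client import Client

client = Client("ArtelTaleb/splat-explorer", hf_token="hf_...")  # bring your own token
client.view_api()  # prints the Space's actual endpoints and argument names
```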