# Darkmere-14B-v0.1
A fine-tune of Ministral 3 14B Instruct 2512 for roleplay and creative writing. The 8B version is available here.
## Why this exists
There's a noticeable gap between 8B and 24B LLMs fine-tuned for roleplay. Aside from aging Mistral Nemo 12B fine-tunes, there are very few options that comfortably fit into 16 GB of VRAM at a decent quantization. This model is an attempt to fill that 14B "sweet spot" and deliver a better RP experience.
The SillyTavern preset is available here.
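For reference, a minimal loading sketch with Hugging Face transformers, not the author's documented usage: the model ID is the one this card is published under, the 4-bit bitsandbytes config is just one way to stay within the 16 GB VRAM budget mentioned above (GGUF or EXL2 quants are common alternatives), the sampling settings are placeholders, and loading the text stack via `AutoModelForCausalLM` is an assumption given the model's retained vision encoder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "0xA50C1A1/Darkmere-14B-v0.1"

# NF4 4-bit quantization to keep the 14B weights within a 16 GB VRAM budget
# (assumption: a bitsandbytes-capable runtime; quantized GGUF/EXL2 builds
# are the usual alternatives for local backends).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Standard chat-template generation; prompt conventions for RP use live in
# the SillyTavern preset linked above. Sampling values here are placeholders.
messages = [{"role": "user", "content": "Describe a fog-bound harbor town at dusk."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```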
## Training Notes
Trained on a small private dataset I've been building for the last month or two. It's a mix of manually cleaned synthetic data, human-written stories, and RP logs. While the dataset leans toward a Dark Fantasy aesthetic, this fine-tune is versatile and behaves well across various genres.
- Training method: full fine-tuning (not LoRA)
- Context length: 16,384 tokens
- Learning rate: 5e-6
- Vision: the vision encoder was frozen during training, so the model retains its native vision capabilities (see the sketch after this list).
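The training stack itself isn't named in this card, so the sketch below is illustrative rather than a reproduction: the `"vision"` substring filter is an assumption about how the multimodal weights are namespaced, the base-model ID is the checkpoint this fine-tune derives from, and everything in `TrainingArguments` besides the learning rate is a placeholder (the 16,384-token context would be enforced in the data pipeline, not here).

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

def freeze_vision_encoder(model) -> int:
    """Freeze vision weights so only the language stack is trained.
    Assumption: vision parameters live under a "vision"-prefixed module,
    as is common for multimodal checkpoints in transformers."""
    frozen = 0
    for name, param in model.named_parameters():
        if "vision" in name:
            param.requires_grad = False
            frozen += param.numel()
    return frozen

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-3-14B-Base-2512",  # base checkpoint for this fine-tune
    torch_dtype=torch.bfloat16,
)
print(f"Froze {freeze_vision_encoder(model):,} vision parameters")

# Only the learning rate comes from the card; the rest are placeholders.
args = TrainingArguments(
    output_dir="darkmere-14b-ft",
    learning_rate=5e-6,
    bf16=True,
    num_train_epochs=1,
)
```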
## Special Thanks
This fine-tune wouldn't be possible without the incredible work of the community:
- p-e-w for developing Heretic, an essential tool for censorship removal.
- Mistral AI for their Ministral 3 weights.
- AMD for their Instinct™ MI300X GPU.