todo
- Qwen/Qwen2.5-Coder-32B-Instruct • Text Generation • 33B • 1.3M downloads • 2.01k likes
- mistralai/Mistral-Small-24B-Instruct-2501 • 120k downloads • 950 likes
- AlSamCur123/Mistral-Small3-24B-Instruct • 24B • 44 downloads • 1 like
- Qwen/Qwen2.5-Coder-14B-Instruct • Text Generation • 15B • 1.01M downloads • 150 likes
MindOfJay
AI & ML interests
programming, developer tools, local models, edge computing, multiple small models in a trenchcoat
Organizations
None yet
Tools
- sphiratrioth666/SillyTavern-Presets-Sphiratrioth • 276
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B • Text Generation • 15B • 511k downloads • 634 likes
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B • Text Generation • 2.1M downloads • 855 likes
- LLM Model VRAM Calculator 📈 • Space (Running) • 505 likes • Calculate VRAM needed to run LLMs on your GPU
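The idea behind a VRAM calculator can be sketched with a back-of-envelope estimate: weight memory is roughly parameter count times bytes per weight, scaled by an overhead factor for activations and KV cache. The function and numbers below are an illustrative assumption, not the Space's actual formula.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 16,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB.

    Weight memory: 1B parameters at 8 bits each is ~1 GB, so
    params_billion * bits_per_weight / 8 gives weights in GB.
    The overhead factor (assumed 1.2 here) loosely covers
    activations and KV cache; real usage varies with context length.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

# Example: a 32B model quantized to 4 bits per weight
print(round(estimate_vram_gb(32, bits_per_weight=4), 1))  # ~19.2 GB
```

By this rough estimate, the 32B coder model above fits a 24 GB GPU only when quantized to around 4 bits, while the 14B/15B models fit at 8-bit precision.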
Models: 0 (none public yet)
Datasets: 0 (none public yet)