Active filters: AceMath
Each entry: model ID · task · parameter count · downloads · likes (where shown). Two entries lost their model IDs during extraction and are marked accordingly.

nvidia/AceMath-1.5B-Instruct · Text Generation · 2B params · 1.43k downloads · 16 likes
nvidia/AceMath-7B-Instruct · Text Generation · 8B params · 1.56k downloads · 32 likes
nvidia/AceMath-72B-Instruct · Text Generation · 73B params · 1.29k downloads · 21 likes
(model ID missing) · Text Generation · 71B params · 1.21k downloads · 10 likes
(model ID missing) · Text Generation · 7B params · 1.56k downloads · 7 likes
inarikami/AceMath-72B-Instruct-GGUF · Text Generation · 73B params · 18 downloads
NikolayKozloff/AceMath-7B-Instruct-Q8_0-GGUF · Text Generation · 8B params · 8 downloads · 1 like
mradermacher/AceMath-7B-Instruct-GGUF · 8B params · 99 downloads · 1 like
mradermacher/AceMath-1.5B-Instruct-GGUF · 2B params · 78 downloads
mradermacher/AceMath-1.5B-Instruct-i1-GGUF · 2B params · 172 downloads
mradermacher/AceMath-7B-Instruct-i1-GGUF · 8B params · 159 downloads
iamcoder18/AceMath-7B-Instruct-Q4_K_M-GGUF · Text Generation · 8B params · 4 downloads
mradermacher/AceMath-72B-Instruct-GGUF · 73B params · 58 downloads
mradermacher/AceMath-72B-Instruct-i1-GGUF · 73B params · 290 downloads · 1 like
IntelligentEstate/DeRanger-1.5B-iQ5_K_S-GGUF · Text Generation · 2B params · 14 downloads · 1 like
tensorblock/AceMath-1.5B-Instruct-GGUF · Text Generation · 2B params · 15 downloads
Mungert/AceMath-1.5B-Instruct-GGUF · Text Generation · 2B params · 5 downloads
Mungert/AceMath-7B-Instruct-GGUF · Text Generation · 8B params · 15 downloads · 1 like
tslim1/AceMath-7B-Instruct-mlx-8Bit · Text Generation · 8B params · 14 downloads
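The catalog above can be handled as structured records, for instance to rank repos by download count. A minimal sketch in Python, using only a subset of the entries and counts listed here (the abbreviated-count parser assumes Hugging Face's "1.43k"-style formatting):

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated count such as '1.43k' to an integer."""
    text = text.strip().lower()
    if text.endswith("k"):
        # round() guards against float artifacts like 1.43 * 1000 == 1429.999...
        return int(round(float(text[:-1]) * 1000))
    return int(text)

# A subset of the catalog entries above, as records.
MODELS = [
    {"name": "nvidia/AceMath-1.5B-Instruct", "params": "2B", "downloads": "1.43k"},
    {"name": "nvidia/AceMath-7B-Instruct", "params": "8B", "downloads": "1.56k"},
    {"name": "nvidia/AceMath-72B-Instruct", "params": "73B", "downloads": "1.29k"},
    {"name": "mradermacher/AceMath-72B-Instruct-i1-GGUF", "params": "73B", "downloads": "290"},
]

# Rank by downloads, highest first.
ranked = sorted(MODELS, key=lambda m: parse_count(m["downloads"]), reverse=True)
for m in ranked:
    print(f'{m["name"]}: {parse_count(m["downloads"])} downloads')
```

On this subset, the 7B Instruct model ranks first with 1,560 downloads; the same pattern extends to the full listing if every entry is transcribed into `MODELS`.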