nm-testing/tinyllama-marlin24-w4a16-group128 · Text Generation · 0.3B · Updated · 9 downloads
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t-alt · Text Generation · 0.9B · Updated · 10 downloads
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t · Text Generation · 1B · Updated · 56 downloads · 1 like
nm-testing/llama3-8b-w8_channel-a8_tensor-compressed · Text Generation · 8B · Updated · 12 downloads
nm-testing/tinyllama-one-shot-w4a16-group-compressed · Text Generation · 1B · Updated · 6 downloads
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test · 8B · Updated · 5.53k downloads · 1 like
nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf · Text Generation · 8B · Updated · 57 downloads
nm-testing/deepseekv2-lite-awq · 16B · Updated · 4 downloads · 1 like
nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat · 8B · Updated · 42 downloads
nm-testing/Meta-Llama-3-70B-Instruct-FBGEMM-nonuniform · Text Generation · 71B · Updated · 715 downloads · 1 like
nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform · Text Generation · 8B · Updated · 13 downloads
nm-testing/llama-3-8b-instruct-fbgemm-test-model · Text Generation · 8B · Updated · 8 downloads