llama-cpp-cuda version: 8766 (547765a) Getting Error: "Failed to parse input at pos 0: {\"type"
#6 opened 8 days ago by wedgeshot
This is broken with Unsloth stable (and the unstable build is broken anyway)
#5 opened about 1 year ago by Ransom
RAM requirements for running Llama-3.3-70B-Instruct-Q5_K_M.gguf
#4 opened over 1 year ago by hyadav22
Ollama run command doesn't work
#3 opened over 1 year ago by babakgh
Can vLLM launch this model?
#2 opened over 1 year ago by chopin1998
Is this a quant of your own finetuned model, or of the original?
#1 opened over 1 year ago by supercharge19