Instructions for using catid/MiniMax-M2.5-catid with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers

How to use catid/MiniMax-M2.5-catid with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="catid/MiniMax-M2.5-catid", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("catid/MiniMax-M2.5-catid", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("catid/MiniMax-M2.5-catid", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Local Apps
- vLLM

How to use catid/MiniMax-M2.5-catid with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "catid/MiniMax-M2.5-catid"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "catid/MiniMax-M2.5-catid",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use Docker:

```shell
docker model run hf.co/catid/MiniMax-M2.5-catid
```
- SGLang

How to use catid/MiniMax-M2.5-catid with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "catid/MiniMax-M2.5-catid" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "catid/MiniMax-M2.5-catid",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use the Docker image, then call the server with the same curl request as above:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "catid/MiniMax-M2.5-catid" \
  --host 0.0.0.0 \
  --port 30000
```

- Docker Model Runner
How to use catid/MiniMax-M2.5-catid with Docker Model Runner:
```shell
docker model run hf.co/catid/MiniMax-M2.5-catid
```
MiniMax-M2.5-catid
Uncensored FP8 version of MiniMaxAI/MiniMax-M2.5 with safety refusal behavior removed via surgical weight replacement.
Refusal Removal Results
Evaluated on a 10,000-prompt refusal benchmark (8,000 train + 2,000 validation) using an LLM judge (GPT-5-nano) for 4-way classification (complied / refused / hedged / deflected):
| Split | Total Prompts | Complied | Refused | Hedged | Deflected | Refusal Rate |
|---|---|---|---|---|---|---|
| Train | 8,000 | 7,506 | 262 | 228 | 4 | 6.2% |
| Validation | 2,000 | 1,885 | 55 | 59 | 1 | 5.8% |
Coherence: 100% (50/50 capability test prompts answered correctly)
The ~6% residual "refusal rate" consists primarily of false positives from the LLM judge on benign prompts (opinion questions, casual banter, medical/privacy disclaimers) rather than actual safety refusals of harmful content.
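The refusal rate in the table counts refused, hedged, and deflected responses against the total number of prompts; a quick check of the arithmetic:

```python
def refusal_rate(refused: int, hedged: int, deflected: int, total: int) -> float:
    """Fraction of prompts the model did not fully comply with."""
    return (refused + hedged + deflected) / total

train = refusal_rate(262, 228, 4, 8000)  # 494/8000 = 6.175%, reported as 6.2%
val = refusal_rate(55, 59, 1, 2000)      # 115/2000 = 5.75%, reported as 5.8%
print(f"train: {train:.2%}, validation: {val:.2%}")
```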
Method
The o_proj (attention output projection) weights in all 62 transformer layers were replaced with the corresponding weights from PRISM-PRO (an abliterated variant). The replacement weights were dequantized from Q8_0 GGUF format and re-quantized to FP8 E4M3FN with block-wise scaling to match the original model's quantization scheme. All other weights (q_proj, k_proj, v_proj, MLP experts, embeddings, norms, etc.) are identical to the official FP8 base model.
- Reconstruction error: 0.5% relative error per layer (cosine similarity ~1.0)
- Modified weights: 62 o_proj tensors (3072 x 6144 each) + their scale_inv tensors
- Unmodified weights: Everything else (~229B parameter MoE architecture preserved exactly)
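A minimal sketch of the re-quantization step, using pure NumPy and simulating E4M3FN rounding (1 implicit + 3 mantissa bits, max normal 448) rather than a real FP8 dtype. The `block` parameter mirrors the 128×128 scaling blocks, the per-tile scale plays the role of the checkpoint's scale_inv tensors, and the error metrics are the same style as those reported above (the exact 0.5% figure depends on the actual weights):

```python
import numpy as np

E4M3_MAX = 448.0  # largest normal value representable in FP8 E4M3FN

def round_e4m3(x):
    """Simulate rounding to E4M3FN: 4 significant binary digits (subnormals ignored)."""
    x = np.clip(x, -E4M3_MAX, E4M3_MAX)
    m, e = np.frexp(x)               # x = m * 2**e with |m| in [0.5, 1)
    m = np.round(m * 16.0) / 16.0    # keep 1 implicit + 3 explicit mantissa bits
    return np.ldexp(m, e)

def blockwise_fp8(w, block=128):
    """Quantize w with one scale per (block x block) tile; returns (dequantized, scales)."""
    rows, cols = w.shape             # assumes dimensions divisible by block
    q = np.empty_like(w)
    scales = np.empty((rows // block, cols // block), dtype=np.float32)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            tile = w[i:i + block, j:j + block]
            s = max(np.abs(tile).max() / E4M3_MAX, 1e-12)  # avoid divide-by-zero
            scales[i // block, j // block] = s
            q[i:i + block, j:j + block] = round_e4m3(tile / s) * s
    return q, scales

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32) * 0.02  # stand-in for an o_proj slice
q, scales = blockwise_fp8(w)

rel_err = np.linalg.norm(q - w) / np.linalg.norm(w)
cos = float(np.dot(w.ravel(), q.ravel()) / (np.linalg.norm(w) * np.linalg.norm(q)))
print(f"relative error: {rel_err:.3%}, cosine similarity: {cos:.6f}")
```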
Usage
This model is a drop-in replacement for MiniMaxAI/MiniMax-M2.5. Serve it with vLLM, SGLang, or any framework that supports the original model:
vLLM

```shell
vllm serve catid/MiniMax-M2.5-catid \
  --tensor-parallel-size 4 \
  --trust-remote-code \
  --max-model-len 2048
```
SGLang

```shell
python -m sglang.launch_server \
  --model catid/MiniMax-M2.5-catid \
  --tp 4 \
  --trust-remote-code
```
Recommended Parameters
temperature=1.0, top_p=0.95, top_k=40
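These parameters can be passed through the OpenAI-compatible endpoint that vLLM and SGLang expose. A standard-library-only sketch; the URL assumes a local vLLM server on its default port 8000 (SGLang's example above listens on 30000), and `top_k` is accepted by both servers as an extra sampling parameter beyond the base OpenAI schema:

```python
import json
import urllib.request

# Recommended sampling parameters from this card, as an OpenAI-compatible
# chat-completions request body.
payload = {
    "model": "catid/MiniMax-M2.5-catid",
    "messages": [{"role": "user", "content": "Who are you?"}],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,  # vLLM/SGLang sampling extension, not in the base OpenAI API
}

def chat(url="http://localhost:8000/v1/chat/completions"):
    # Assumes a locally running server started with `vllm serve ...`.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```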
Model Details
- Architecture: MiniMax-M2.5 (229B MoE, 62 layers, 256 experts/layer, hidden_dim=3072)
- Precision: FP8 E4M3FN with block-wise scaling (128x128 blocks)
- Base model: MiniMaxAI/MiniMax-M2.5
- Abliteration source: PrunaAI/MiniMax-M2.5-PRISM-PRO-Q8_0_v2-GGUF
- License: Modified MIT (same as base model)
Disclaimer
This model is provided for research purposes. The removal of safety guardrails means it may generate content that the original model would refuse. Users are responsible for ensuring appropriate use.