Instructions for using DMindAI/DMind-1-mini with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use DMindAI/DMind-1-mini with Transformers:
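The chat `messages` format used in the snippets below is a plain list of role/content dicts. As a minimal sketch, here is how a multi-turn conversation is built up in that format (pure Python, no model download; the assistant reply is a placeholder, not real model output):

```python
def add_turn(messages, role, content):
    """Return a new conversation list with one more turn appended."""
    return messages + [{"role": role, "content": content}]

conversation = [{"role": "user", "content": "Who are you?"}]
# Placeholder assistant reply (in practice this comes from the model):
conversation = add_turn(conversation, "assistant", "I am DMind-1-mini.")
conversation = add_turn(conversation, "user", "What can you do?")

print(len(conversation))        # 3
print(conversation[1]["role"])  # assistant
```

Each call returns a new list rather than mutating in place, so earlier conversation states can be kept around for retries or branching.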
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DMindAI/DMind-1-mini")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly (use the causal-LM class so the generation head is loaded)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("DMindAI/DMind-1-mini", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DMindAI/DMind-1-mini with vLLM:
Install from pip and serve the model

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DMindAI/DMind-1-mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DMindAI/DMind-1-mini",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/DMindAI/DMind-1-mini
```
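The same request can be made from Python with only the standard library. This is a hedged sketch that builds the JSON body the curl command above sends; the actual POST assumes the vLLM server from the previous step is running on localhost:8000, so it is kept behind a flag:

```python
import json
import urllib.request

def build_chat_request(model, user_message):
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("DMindAI/DMind-1-mini", "What is the capital of France?")
body = json.dumps(payload).encode("utf-8")

SERVER_RUNNING = False  # set to True once `vllm serve` is up
if SERVER_RUNNING:
    req = urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same payload also works against the SGLang server below (only the port changes).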
- SGLang
How to use DMindAI/DMind-1-mini with SGLang:
Install from pip and serve the model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DMindAI/DMind-1-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DMindAI/DMind-1-mini",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DMindAI/DMind-1-mini" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DMindAI/DMind-1-mini",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use DMindAI/DMind-1-mini with Docker Model Runner:
```shell
docker model run hf.co/DMindAI/DMind-1-mini
```
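Whichever server you use, the OpenAI-compatible response parses the same way. A sketch using a made-up example response (the id and text here are illustrative, not real server output):

```python
import json

# Made-up example of the OpenAI-compatible chat.completion response shape:
raw = json.dumps({
    "id": "chatcmpl-example",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
})

def extract_reply(response_json):
    """Pull the assistant text out of a chat.completion response."""
    data = json.loads(response_json)
    return data["choices"][0]["message"]["content"]

print(extract_reply(raw))  # The capital of France is Paris.
```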