Instructions for using B34st/Snowball with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use B34st/Snowball with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="B34st/Snowball",
    filename="Llama-3.1-8B-Lexi-Uncensored_V2_Q4.gguf",
)

# create_chat_completion expects a list of role/content messages:
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ]
)
```
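The call returns an OpenAI-style completion dictionary; a minimal sketch of capturing the reply, continuing from the snippet above (the user message is only an illustration):

```python
# `llm` is the model loaded above.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF file is."},
    ]
)
# llama-cpp-python mirrors the OpenAI chat schema, so the generated
# text lives at choices[0]["message"]["content"].
print(response["choices"][0]["message"]["content"])
```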
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use B34st/Snowball with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf B34st/Snowball

# Run inference directly in the terminal:
llama-cli -hf B34st/Snowball
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf B34st/Snowball

# Run inference directly in the terminal:
llama-cli -hf B34st/Snowball
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf B34st/Snowball

# Run inference directly in the terminal:
./llama-cli -hf B34st/Snowball
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf B34st/Snowball

# Run inference directly in the terminal:
./build/bin/llama-cli -hf B34st/Snowball
```
Use Docker
```sh
docker model run hf.co/B34st/Snowball
```
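Whichever install route you choose, `llama-server` exposes an OpenAI-compatible API, by default on port 8080. A minimal sketch querying it with the `openai` Python client (the port, placeholder API key, and model name are assumptions based on llama.cpp defaults and the commands above):

```python
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default and
# accepts any API key, so a placeholder is fine.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="B34st/Snowball",  # illustrative; the server answers with whichever model it loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```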
- LM Studio
- Jan
- Ollama
How to use B34st/Snowball with Ollama:
```sh
ollama run hf.co/B34st/Snowball
```
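Once pulled, Ollama also serves a local REST API; a minimal sketch using `requests` (the default port 11434 and the model tag matching the `ollama run` command above are assumptions, and the message content is illustrative):

```python
# pip install requests
import requests

# Ollama's local daemon listens on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/B34st/Snowball",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])
```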
- Unsloth Studio
How to use B34st/Snowball with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for B34st/Snowball to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for B34st/Snowball to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for B34st/Snowball to start chatting.
- Pi
How to use B34st/Snowball with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf B34st/Snowball
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "B34st/Snowball" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use B34st/Snowball with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf B34st/Snowball
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default B34st/Snowball
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use B34st/Snowball with Docker Model Runner:
```sh
docker model run hf.co/B34st/Snowball
```
- Lemonade
How to use B34st/Snowball with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull B34st/Snowball
```
Run and chat with the model
```sh
lemonade run user.Snowball-{{QUANT_TAG}}
```

List all available models

```sh
lemonade list
```
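Lemonade also exposes an OpenAI-compatible server. A sketch of querying it from Python; the base URL, port, and path here are assumptions, so check the Lemonade documentation for your install, and the `{{QUANT_TAG}}` placeholder is left unfilled as above:

```python
# pip install openai
from openai import OpenAI

# Assumption: Lemonade's OpenAI-compatible endpoint; verify the
# host, port, and path for your Lemonade version.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="none")

response = client.chat.completions.create(
    model="user.Snowball-{{QUANT_TAG}}",  # as pulled above; QUANT_TAG left unfilled
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```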
VERSION 2 Update Notes:
- More compliant
- Smarter
- For the best responses, use this system prompt (feel free to expand on it as you wish):
Think step by step with logical reasoning and intellectual sense before you provide any response.
For a more uncensored and compliant response, you can expand the system message differently, or simply enter a dot "." as the system message; a sketch of both options follows below.
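For example, with the llama-cpp-python setup shown earlier, passing either system message looks like this (a sketch; `llm` is the loaded model from the example above and the user content is illustrative):

```python
# Option 1: the recommended reasoning system prompt.
# Option 2: a bare "." system message for more compliant output.
system_prompt = (
    "Think step by step with logical reasoning and "
    "intellectual sense before you provide any response."
)  # or simply "."

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hello!"},
    ]
)
print(response["choices"][0]["message"]["content"])
```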
IMPORTANT: Upon further investigation, the Q4 quant sometimes shows refusal issues; some of the fine-tune appears to be lost to quantization. I will look into it for V3. Until then, I suggest running F16 or Q8 if possible.
GENERAL INFO:
This model is based on Llama-3.1-8B-Instruct and is governed by the META LLAMA 3.1 COMMUNITY LICENSE AGREEMENT.
Lexi is uncensored, which makes the model highly compliant: it will follow any request, even unethical ones. You are advised to implement your own alignment layer before exposing the model as a service.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed under Meta's Llama license. I grant permission for any use, including commercial use, that is in accordance with Meta's Llama 3.1 license.
IMPORTANT:
Use the same template as the official Llama 3.1 8B Instruct model. System tokens must be present during inference, even if you set an empty system message. If you are unsure, just add a short system message of your choice; see the sketch below.
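Concretely, the official Llama 3.1 chat template places the system block before the first user turn even when its content is empty; a sketch of the raw prompt format (chat APIs such as `create_chat_completion` apply this template for you):

```python
# Raw Llama 3.1 Instruct prompt format. The system header block is
# always present, even when the system message is empty or just ".".
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt(".", "Hello!"))
```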
FEEDBACK:
If you find any issues or have suggestions for improvement, feel free to leave a review and I will look into them for the next version.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 27.93 |
| IFEval (0-Shot) | 77.92 |
| BBH (3-Shot) | 29.69 |
| MATH Lvl 5 (4-Shot) | 16.92 |
| GPQA (0-shot) | 4.36 |
| MuSR (0-shot) | 7.77 |
| MMLU-PRO (5-shot) | 30.90 |