Instructions to use CalderaAI/Hexoteric-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use CalderaAI/Hexoteric-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CalderaAI/Hexoteric-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CalderaAI/Hexoteric-7B")
model = AutoModelForCausalLM.from_pretrained("CalderaAI/Hexoteric-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use CalderaAI/Hexoteric-7B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CalderaAI/Hexoteric-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalderaAI/Hexoteric-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/CalderaAI/Hexoteric-7B
```
- SGLang
How to use CalderaAI/Hexoteric-7B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CalderaAI/Hexoteric-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalderaAI/Hexoteric-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "CalderaAI/Hexoteric-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CalderaAI/Hexoteric-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use CalderaAI/Hexoteric-7B with Docker Model Runner:
```shell
docker model run hf.co/CalderaAI/Hexoteric-7B
```
Missing model card?
The model is missing a model card, or any info at all...
An unfortunate setback; Hex' was not intentionally left mysterious and completely devoid of information. The meticulous notes kept during the experiment were destroyed along with our original compute assets. It's a long story, but Hexo' (the O is silent) won't be seeing much love beyond its initial ambitions: six major player models fine-tuned from Mistral, each selectively fused with a different RP LoRA, were systematically merged in parallel until two large models remained; those two were merged, and a final LoRA was fused on top to cap it off, adding some logic to drive the whole ensemble of information and behaviors into a coherent chain of thought.
I would see this as a setback, but as with the experiments before it, the lessons learned will carry on to our future releases, including prepping the model card before a cavalier troupe of hungry badgers ravages the precious workstation holding both our work and our notes. Offsite backups and better preparation will be the next step forward, along with a refocus on priorities such as script optimization and new innovation in model merge scripts. [MergeKit is the new king; it has a bit of everyone's findings in it and seems awesome. I intend to find some new approaches of my own, including agitating tensors with a noise pattern before a spherical merge as a mutation-and-iteration method, though that one is a ways off on the radar.]
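The noise-then-spherical-merge idea mentioned above can be sketched roughly like this. This is a minimal illustration on single flattened tensors, not MergeKit's actual API; the function names, noise scale, and seeding scheme are assumptions, and a real merge would apply this per-tensor across two full checkpoints:

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # near-parallel tensors: fall back to plain lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

def agitate_then_merge(a, b, t=0.5, noise_scale=0.01, seed=0):
    """Mutation step: perturb each tensor with small Gaussian noise, then slerp."""
    rng = np.random.default_rng(seed)
    a = a + rng.normal(0.0, noise_scale * np.abs(a).mean(), size=a.shape)
    b = b + rng.normal(0.0, noise_scale * np.abs(b).mean(), size=b.shape)
    return slerp(a, b, t)
```

In a mutation-and-iteration loop, each candidate merge would use a different seed, with the best-scoring result kept as a parent for the next round.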
Back on the question: this model was a case of "What if we took what we did with Naberius-7B and threw everything good into a new mix?"
Stanford Alpaca is the right instruct format to drive this model.
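For reference, a small prompt builder for the Stanford Alpaca instruct format mentioned above. The template text is the standard Alpaca one; the helper function itself is just an illustrative sketch:

```python
def alpaca_prompt(instruction, context=None):
    """Format a request in the Stanford Alpaca instruct style."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Who are you?"))
```

The model's completion is whatever it generates after the `### Response:` header.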
Losing access to part of our infrastructure, temporary as it may be, sadly means losing the specifics, keeping Hex embroiled in mystery, even for us.
Lessons learned. This pet project is on me, Digitous, not anyone else at Caldera.
Apologies for not having the more definitive credits and process on the model card as intended. I'll leave this model up for toying and experiments, but I no longer have the notes involved.
No worries at all! Looking forward to your next projects.
More are coming.
After a few months' hiatus we'll be back soon with some new experiments. Thank you for your patience!