Instructions to use anupam413/phi2_qlora_emailGen_bitsandbytes with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use anupam413/phi2_qlora_emailGen_bitsandbytes with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="anupam413/phi2_qlora_emailGen_bitsandbytes", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("anupam413/phi2_qlora_emailGen_bitsandbytes", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("anupam413/phi2_qlora_emailGen_bitsandbytes", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
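The pipeline loaded in the Transformers snippet above expects a plain text prompt. A minimal sketch of composing an email-reply prompt follows; the `### Input Email:` / `### Generated Email:` framing is an assumption, since the actual prompt template this model was trained with is not documented, and the pipeline call itself is left commented out because it downloads the full model:

```python
# Sketch: compose an email-reply prompt for the pipeline loaded above.
# The "###" section markers are assumed, not documented by the model card.
input_email = "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash"
prompt = f"### Input Email:\n{input_email}\n\n### Generated Email:\n"

# With the pipeline from the snippet above (requires downloading the model):
# reply = pipe(prompt, max_new_tokens=200)[0]["generated_text"]
print(prompt)
```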
- Local Apps
- vLLM
How to use anupam413/phi2_qlora_emailGen_bitsandbytes with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "anupam413/phi2_qlora_emailGen_bitsandbytes"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "anupam413/phi2_qlora_emailGen_bitsandbytes",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
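Both vLLM and SGLang expose an OpenAI-compatible `/v1/completions` endpoint, so the curl call above can also be made from Python with only the standard library. A sketch, assuming the vLLM server from the previous snippet is running on `localhost:8000` (the HTTP call is left commented out so the snippet runs without a server):

```python
import json
import urllib.request

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request(
    "anupam413/phi2_qlora_emailGen_bitsandbytes",
    "Once upon a time,",
)

# Uncomment once the vLLM server is running (use port 30000 for SGLang):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(json.dumps(payload))
```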
- SGLang
How to use anupam413/phi2_qlora_emailGen_bitsandbytes with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "anupam413/phi2_qlora_emailGen_bitsandbytes" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "anupam413/phi2_qlora_emailGen_bitsandbytes",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "anupam413/phi2_qlora_emailGen_bitsandbytes" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "anupam413/phi2_qlora_emailGen_bitsandbytes",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use anupam413/phi2_qlora_emailGen_bitsandbytes with Docker Model Runner:
```shell
docker model run hf.co/anupam413/phi2_qlora_emailGen_bitsandbytes
```
Model Description
This model generates a reply template based on the body of an email or message. It uses Microsoft's Phi-2 as the base model and was finetuned for 2 epochs on a Google Colab Tesla T4 GPU.
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Anupam Wagle
- Model type: Text Generation
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: Microsoft Phi-2
Uses
Use this model to generate a reply message based on a previous email or message.
Bias, Risks, and Limitations
The model was finetuned on a small dataset for only 2 epochs, which limits output quality. For better results, increase the size of the dataset and the number of training epochs.
Training Details
Training Data
The dataset used for finetuning has the following format:

```json
[
  {
    "input_email": "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash",
    "generated_email": "Hi Eve,\n\nThank you for the invitation. I'd love to come to the party tonight after 6 PM. Looking forward to it!\n\nBest,\nAdam"
  },
  ...
]
```
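A small sketch of loading records in this format with the standard library and serializing each pair into a single finetuning string. The `### Input Email:` / `### Generated Email:` markers are an assumed convention, since the actual template used during finetuning is not documented:

```python
import json

# One hypothetical record in the dataset format shown above.
raw = """[
  {
    "input_email": "Hello Adam,\\n\\nCan you come to the party tonight after 6 PM?\\nBest,\\nSubash",
    "generated_email": "Hi Eve,\\n\\nThank you for the invitation. I'd love to come to the party tonight after 6 PM. Looking forward to it!\\n\\nBest,\\nAdam"
  }
]"""

records = json.loads(raw)

def to_training_text(record):
    # Serialize one input/output pair into a single finetuning string.
    # The "###" section markers are an assumption, not documented.
    return (
        f"### Input Email:\n{record['input_email']}\n\n"
        f"### Generated Email:\n{record['generated_email']}"
    )

texts = [to_training_text(r) for r in records]
print(texts[0])
```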
Technical Specifications
This model was finetuned on Google Colab's Tesla T4 GPU for a total of 2 epochs.
Model Architecture and Objective
The base model is Microsoft's Phi-2, quantized using bitsandbytes. Its primary objective is to generate messages based on previous messages.
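The card does not state the exact quantization settings, so the values below are an assumption: a typical 4-bit NF4 QLoRA setup with float16 compute, as commonly used when finetuning Phi-2 with bitsandbytes. The `try/except` keeps the sketch runnable even without `transformers` installed:

```python
# Hypothetical 4-bit quantization settings (assumed, not documented by the card).
quant_settings = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_compute_dtype": "float16",
    "bnb_4bit_use_double_quant": True,
}

try:
    import torch
    from transformers import BitsAndBytesConfig

    # Map the settings above onto the transformers quantization config.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )
except ImportError:
    # transformers/torch not installed; the dict above still documents the sketch.
    bnb_config = None
```

With such a config, the base model would be loaded via `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` before attaching the LoRA adapters.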