# Syntaxa-Prompt-Gen (Phi-3.5-mini-instruct Fine-Tuned)
Syntaxa is a specialized fine-tuned version of Microsoft's Phi-3.5-mini-instruct. It is designed to act as a "Prompt Generator," turning simple persona descriptions into detailed, high-quality system prompts for other LLMs.
## Model Details
- Developed by: Saleh (saleen)
- Model type: Causal Language Model (Transformer-based)
- Base Model: microsoft/Phi-3.5-mini-instruct
- Finetuning Technique: LoRA (Low-Rank Adaptation)
- Training Focus: Instruction following for Persona-based prompt generation.
## Intended Use
Syntaxa is intended to help users bridge the gap between a simple idea and a professional prompt.
- Input Format: `### Instruction: Act as a [Persona]. Write a prompt for yourself.\n\n### Response:`
- Output: A comprehensive, structured system prompt including variables and specific constraints.
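As a minimal illustration, the expected input string can be assembled like this (the helper name and the example persona are placeholders, not part of the model card):

```python
def build_prompt(persona: str) -> str:
    """Build the instruction-format string Syntaxa expects."""
    return (
        f"### Instruction: Act as a {persona}. "
        "Write a prompt for yourself.\n\n### Response:"
    )

# Example persona; any role description works here.
print(build_prompt("Senior Web Developer"))
```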
## Training Procedure
The model was fine-tuned using the following configuration:
- Epochs: 3
- Batch Size: 2 (with Gradient Accumulation Steps: 4)
- Learning Rate: 2e-4
- Scheduler: Cosine
- Precision: FP16
- Dataset: Custom instruction-set focusing on the "Awesome ChatGPT Prompts" structure.
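The configuration above can be sketched with the `peft` and `transformers` APIs. This is an illustrative reconstruction, not the exact training script: the LoRA rank, alpha, dropout, and target modules are assumptions, while the epochs, batch size, accumulation steps, learning rate, scheduler, and precision come from the list above.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter settings: r, alpha, dropout, and target_modules are assumed
# values for illustration; the card does not specify them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3.5 attention projections
    task_type="CAUSAL_LM",
)

# Hyperparameters as listed in the card.
training_args = TrainingArguments(
    output_dir="syntaxa-lora",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    fp16=True,
)
```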
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "saleen/Syntaxa_Final_Full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=False
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "### Instruction: Act as a Senior Web Developer. Write a prompt for yourself.\n\n### Response:"
print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```
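The pipeline returns the full text, prompt included. A small helper (hypothetical, not part of the model card) can strip the echoed prompt and keep only the generated system prompt:

```python
def extract_response(generated: str) -> str:
    """Return only the text after the '### Response:' marker."""
    marker = "### Response:"
    _, _, response = generated.partition(marker)
    return response.strip()

# Example with a mock generation; real output comes from the pipeline above.
sample = (
    "### Instruction: Act as a Poet. Write a prompt for yourself.\n\n"
    "### Response: You are a poet..."
)
print(extract_response(sample))  # -> "You are a poet..."
```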