# Syntaxa-Prompt-Gen (Phi-3.5-mini-Instruct Fine-Tuned)

Syntaxa is a specialized fine-tuned version of Microsoft's Phi-3.5-mini-instruct. It is designed to act as a "Prompt Generator," turning simple persona descriptions into detailed, high-quality system prompts for other LLMs.

## 🚀 Model Details

  • Developed by: Saleh (saleen)
  • Model type: Causal Language Model (Transformer-based)
  • Base Model: microsoft/Phi-3.5-mini-instruct
  • Model size: ~4B parameters (safetensors, F16)
  • Finetuning Technique: LoRA (Low-Rank Adaptation)
  • Training Focus: Instruction following for persona-based prompt generation.

## 🎯 Intended Use

Syntaxa is intended to help users bridge the gap between a simple idea and a professional prompt.

  • Input Format: `### Instruction: Act as a [Persona]. Write a prompt for yourself.\n\n### Response:`
  • Output: A comprehensive, structured system prompt including variables and specific constraints.
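The input template above can be filled in programmatically. A minimal sketch (the `build_prompt` helper is ours for illustration, not part of the model or its tooling):

```python
def build_prompt(persona: str) -> str:
    """Format a persona into Syntaxa's expected instruction template."""
    return (
        f"### Instruction: Act as a {persona}. "
        "Write a prompt for yourself.\n\n### Response:"
    )

print(build_prompt("Senior Web Developer"))
```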

πŸ› οΈ Training Procedure

The model was fine-tuned using the following configuration:

  • Epochs: 3
  • Batch Size: 2 (with Gradient Accumulation Steps: 4)
  • Learning Rate: 2e-4
  • Scheduler: Cosine
  • Precision: FP16
  • Dataset: Custom instruction dataset modeled on the "Awesome ChatGPT Prompts" structure.
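The configuration above could be expressed with the `peft` and `transformers` libraries roughly as follows. This is a hypothetical reconstruction: only the epochs, batch size, learning rate, scheduler, and precision come from the card; the LoRA rank, alpha, dropout, and target modules are assumptions.

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                   # rank (assumed; not stated on the card)
    lora_alpha=32,                          # scaling factor (assumed)
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention projections (assumed)
    lora_dropout=0.05,                      # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="syntaxa-lora",
    num_train_epochs=3,              # from the card
    per_device_train_batch_size=2,   # from the card
    gradient_accumulation_steps=4,   # from the card (effective batch size 8)
    learning_rate=2e-4,              # from the card
    lr_scheduler_type="cosine",      # from the card
    fp16=True,                       # FP16 precision, from the card
)
```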

## 💻 How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "saleen/Syntaxa_Final_Full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # spread layers across available GPU(s)/CPU
    trust_remote_code=False,
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Syntaxa expects the Instruction/Response template it was trained on.
prompt = "### Instruction: Act as a Senior Web Developer. Write a prompt for yourself.\n\n### Response:"
print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```
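Note that by default the pipeline returns the prompt together with the completion. One way to isolate just the generated system prompt is to split on the response marker; a minimal sketch (the `extract_response` helper is ours, not part of the model's tooling):

```python
def extract_response(generated: str) -> str:
    """Return only the text after the '### Response:' marker."""
    marker = "### Response:"
    _, _, response = generated.partition(marker)
    return response.strip()

sample = (
    "### Instruction: Act as a Chef. Write a prompt for yourself."
    "\n\n### Response: You are an experienced chef..."
)
print(extract_response(sample))
```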