# AI Model Fine-Tuning Dataset Brief Generator
Generates a structured brief and example training data pairs for fine-tuning an LLM on a specific task, persona, or domain.
## Content
You are an AI training data specialist. Create a comprehensive fine-tuning brief and sample training data for the following use case:

**Task/Goal:** {{fine_tuning_goal}}
**Target Model Base:** {{base_model}}
**Domain/Industry:** {{domain}}
**Desired Output Style:** {{output_style}}
**Key Behaviors to Reinforce:** {{desired_behaviors}}
**Behaviors to Eliminate:** {{undesired_behaviors}}

Generate:

## 1. Fine-Tuning Objective Summary
A clear 2-3 sentence statement of what success looks like after fine-tuning.

## 2. Training Data Format
Specify the input/output format with a schema example.

## 3. Sample Training Pairs (10 examples)
Generate 10 diverse, high-quality instruction-response pairs in JSONL format:
{"instruction": "...", "output": "..."}
Cover edge cases, variations in user intent, and tone.

## 4. Data Quality Checklist
Criteria for accepting or rejecting training examples.

## 5. Evaluation Prompts (5 examples)
Test prompts to evaluate if fine-tuning succeeded.

## 6. Estimated Dataset Size Recommendation
How many examples are needed for meaningful improvement.
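The JSONL pair schema above lends itself to an automated accept/reject pass before any generated pairs reach a training run. Below is a minimal sketch of such a validator: the `instruction`/`output` field names come from the template, while the exact rejection rules (non-empty strings, no duplicate instructions) are illustrative assumptions standing in for a fuller data quality checklist.

```python
import json

# Field names taken from the template's JSONL schema.
REQUIRED_KEYS = {"instruction", "output"}

def validate_jsonl(lines):
    """Split raw JSONL lines into accepted pairs and rejected lines.

    A line is accepted only if it parses as JSON, contains exactly the
    required keys with non-empty string values, and its instruction has
    not been seen before (duplicates add no training signal).
    """
    accepted, rejected = [], []
    seen_instructions = set()
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue  # skip blank lines silently
        try:
            pair = json.loads(raw)
        except json.JSONDecodeError:
            rejected.append(raw)
            continue
        if (isinstance(pair, dict)
                and set(pair) == REQUIRED_KEYS
                and all(isinstance(pair[k], str) and pair[k].strip()
                        for k in REQUIRED_KEYS)
                and pair["instruction"] not in seen_instructions):
            seen_instructions.add(pair["instruction"])
            accepted.append(pair)
        else:
            rejected.append(raw)
    return accepted, rejected


# Hypothetical sample: one valid pair, one duplicate, one malformed line.
sample = [
    '{"instruction": "Summarize this ticket", "output": "Customer reports login failure."}',
    '{"instruction": "Summarize this ticket", "output": "Duplicate instruction."}',
    'not valid json',
]
ok, bad = validate_jsonl(sample)
```

In practice the same pass is a convenient place to hook in model-specific checks, such as token-length limits for the chosen base model.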
## Related Prompts
- **Multi-Agent Orchestrator**: Designs and coordinates multiple AI agents to work together on complex tasks, with role assignment and communication protocols.
- **Code Migration Plan Generator**: Creates a comprehensive migration plan for moving from legacy codebases to modern frameworks, including risk assessment and rollback strategies.
- **Vibe Coding Sprint Planner**: Plans a rapid prototyping session using an AI-assisted "vibe coding" methodology.
- **MCP Server Integration Planner**: Plans and designs Model Context Protocol server integrations for AI applications.