Prompt Engineering
Craft system prompts, few-shot examples, chain-of-thought strategies, and structured output schemas for production AI systems on OpenAI, Anthropic, and Google Gemini.
When to use
- Writing or refining system prompts for chat applications
- Designing few-shot examples that steer model behavior
- Implementing chain-of-thought reasoning for complex tasks
- Extracting structured data from unstructured inputs
- Optimizing prompt performance (accuracy, cost, latency)
- Building evaluation datasets and regression tests for prompts
When NOT to use
- The task is simple enough that default model behavior works (no custom system prompt needed)
- You need deterministic, rule-based logic — use code instead of prompts
- The "prompt engineering" is really just API configuration (temperature, max_tokens, inference tier)
- You're trying to make the model do something it fundamentally can't (real-time sensor feeds, external side-effects without an orchestrator)
- The problem is better solved by fine-tuning or a custom model than prompt design
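As a concrete illustration of the "API configuration" point above: decoding parameters live alongside the prompt in the request but are not part of prompt design. The sketch below uses parameter names from the OpenAI chat-completions API; the model name and all values are illustrative placeholders, and no API call is made.

```python
# Request configuration vs. prompt content: tuning these decoding
# parameters is API configuration, not prompt engineering.
# (Parameter names follow the OpenAI chat-completions API; the model
# name and values are placeholders.)
request = {
    "model": "gpt-4o-mini",   # which model to call (placeholder)
    "temperature": 0.2,       # sampling randomness
    "max_tokens": 300,        # response length cap
    # Only this part is the actual prompt:
    "messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

# Everything outside "messages" can change without touching the prompt.
config_keys = {k for k in request if k != "messages"}
print(sorted(config_keys))
```

If an "improvement" only touches the keys outside `messages`, it belongs in configuration management and A/B testing, not in prompt iteration.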
Core concepts
System prompt anatomy
┌─────────────────────────────────────────────┐
│ SYSTEM PROMPT                               │
├─────────────────────────────────────────────┤
│ 1. Role definition (who the model is)       │
│ 2. Task description (what it should do)     │
│ 3. Output format (how to structure results) │
│ 4. Constraints (what to avoid)              │
│ 5. Examples (few-shot demonstrations)       │
│ 6. Edge case handling (ambiguity rules)     │
└─────────────────────────────────────────────┘
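The six sections above can be assembled mechanically. Below is a minimal sketch of such a template, assuming a hypothetical `build_system_prompt` helper; the classifier scenario and all section text are placeholder content, not a prescribed prompt.

```python
def build_system_prompt(role, task, output_format, constraints, examples, edge_cases):
    """Assemble a system prompt from the six anatomy sections, in order."""
    sections = [
        role,                                         # 1. Role definition
        task,                                         # 2. Task description
        f"Output format:\n{output_format}",           # 3. Output format
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),  # 4. Constraints
        "Examples:\n" + "\n\n".join(examples),        # 5. Few-shot demonstrations
        f"Edge cases:\n{edge_cases}",                 # 6. Ambiguity rules
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    role="You are a support-ticket classifier.",
    task="Classify each ticket as 'billing', 'technical', or 'other'.",
    output_format='Reply with a single JSON object: {"label": "<category>"}',
    constraints=["Never invent categories.", "Do not explain your answer."],
    examples=['Ticket: "I was charged twice." -> {"label": "billing"}'],
    edge_cases="If the ticket fits no category, use 'other'.",
)
print(prompt)
```

Keeping the sections as separate parameters makes each part independently testable: you can vary the examples or constraints in an eval harness while holding the role and task fixed.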