The Universal Formula for Prompt Engineering: Core Principles and Structured Frameworks
In the previous two articles, we explored the theoretical foundations of large language models and the mathematical essence of prompts. Now it's time to turn that knowledge into practical power. This article shifts entirely to practice, giving you a systematic framework for building prompts: a "toolbox" you can apply immediately after reading.
Opening Experiment: The Dramatic Difference Between Two Prompts
Let’s start with a comparative experiment. Both prompts ask AI to write an article about climate change, but see how these two prompts produce vastly different results:
Prompt A (Weak Version)
Write an article about climate change.
Prompt B (Strong Version)
Assume you are a senior environmental science journalist writing a cover story for National Geographic magazine. The article should be aimed at general readers, explaining the causes of climate change and its specific impact on global coral reefs. The article should have a clear structure, include scientific data and vivid analogies, and end with a hopeful call to action. Approximately 1000 words.
The difference in results is astounding:
- Prompt A typically produces generic, superficial content
- Prompt B delivers professional, specific, and well-structured high-quality articles
Why is there such a dramatic difference? The answer lies in Prompt B’s use of the three core frameworks we’ll learn today. Let’s analyze these reusable design patterns one by one.
Core Framework One: Role Prompting
Principle Analysis: Who You Are Determines What You Can Do
Role prompting is one of the most powerful techniques in prompt engineering. When you assign a specific role to an AI model, you’re actually activating the knowledge subsets and linguistic styles associated with that role within the model. The power of this technique lies in:
- Knowledge Focus: Narrowing the model’s “thinking scope” to concentrate on specific domains
- Style Consistency: Obtaining language expressions that match the role’s characteristics
- Enhanced Professionalism: Activating relevant professional knowledge and experience patterns
Template Patterns
# Basic Templates
"Assume you are a [role]..."
"You are a [role], your task is to..."
"Acting as a [role], please..."

# Enhanced Template
"You are a [role] with [experience/background]. Your expertise includes [professional domain]. Please..."
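Templates like these are easy to fill in programmatically once you reuse them often. Here is a minimal Python sketch; the function name and signature are my own, not a standard API:

```python
def role_prompt(role, task, background=None, expertise=None):
    """Fill the role-prompt templates above with concrete values.

    Uses the enhanced template when background/expertise are given,
    otherwise falls back to the basic "Assume you are a [role]..." form.
    """
    if background or expertise:
        opening = f"You are a {role}"
        if background:
            opening += f" with {background}"
        parts = [opening + "."]
        if expertise:
            parts.append(f"Your expertise includes {expertise}.")
        parts.append(f"Please {task}")
        return " ".join(parts)
    return f"Assume you are a {role}. {task}"

# Example: the enhanced template, filled in
print(role_prompt(
    "senior software architect",
    "review this code for performance, security, and maintainability issues.",
    background="15 years of experience",
))
```

Keeping role definitions in a helper like this makes it trivial to swap roles while holding the task constant, which is exactly what the comparison table below does by hand.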
Real-World Comparison Cases
| Scenario | Generic Prompt | Role-Playing Prompt | Improvement |
|---|---|---|---|
| Explaining Economic Concepts | "Explain inflation" | "Assume you are a central banker explaining inflation to high school students. Use simple analogies and avoid financial jargon." | More accessible, with concrete analogies |
| Code Review | "Check this code" | "You are a senior software architect with 15 years of experience. Please review this code for performance, security, and maintainability issues." | More comprehensive professional analysis |
| Creative Writing | "Write a story" | "You are a bestselling mystery novelist known for intricate plot twists. Write a short story that keeps readers guessing until the final paragraph." | More suspenseful and literary |
Role Selection Strategy
1. Professional Roles
- Doctors, lawyers, engineers, teachers, etc.
- Suitable for tasks requiring specialized knowledge
2. Creative Roles
- Writers, artists, designers, directors, etc.
- Suitable for creative and expressive tasks
3. Functional Roles
- Analysts, consultants, assistants, mentors, etc.
- Suitable for analytical and guidance tasks
Core Framework Two: STAR Structured Template
What is the STAR Template?
STAR is a universal structured template that ensures prompts contain all necessary information:
- S (Situation/Scenario) - Context/Background: Set the background information
- T (Task) - Task: Clearly specify the specific task for the model to complete
- A (Action/Constraints) - Actions/Constraints: Specify execution steps, format requirements, things to avoid, etc.
- R (Result) - Result: Define the required output format
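The four components above map naturally onto a small builder function. This is a hedged sketch (the function and formatting choices are illustrative, not a standard):

```python
def star_prompt(situation, task, constraints, result):
    """Assemble a STAR-structured prompt from its four components."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{situation}\n\n"
        f"Task: {task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output: {result}"
    )

# The opening "Strong Prompt B", rebuilt from its STAR parts
prompt = star_prompt(
    situation="Assume you are a senior environmental science journalist "
              "writing for National Geographic magazine.",
    task="Explain the causes of climate change and its specific impact "
         "on global coral reefs.",
    constraints=[
        "Aimed at general readers",
        "Clear article structure",
        "Include scientific data and vivid analogies",
        "End with a hopeful call to action",
        "Approximately 1000 words",
    ],
    result="A cover story article.",
)
print(prompt)
```

Forcing yourself to supply all four arguments is the point: an empty `constraints` list or a vague `result` is immediately visible in code, whereas it hides easily in freehand prose.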
STAR Template Practical Analysis
Let’s break down the opening “Strong Prompt B” using the STAR framework:
【S - Situation】Assume you are a senior environmental science journalist writing for National Geographic magazine
【T - Task】Explain the causes of climate change and its specific impact on global coral reefs
【A - Action/Constraints】
- Aimed at general readers
- Clear article structure
- Include scientific data and vivid analogies
- End with a hopeful call to action
- Approximately 1000 words
【R - Result】A cover story article
STAR Template Application Example
Scenario: Developing Marketing Strategy for a Startup
❌ Poor Version: “Create a marketing plan”
✅ STAR Optimized Version:
【S】You are a marketing director specializing in B2B SaaS, developing a marketing strategy for a startup that provides project management tools.
【T】Design a 6-month digital marketing plan with the goal of acquiring 1,000 paid users.
【A】
- Budget limited to $100,000
- Focus on the SME market
- Must include content marketing, social media, and paid advertising
- Provide specific KPI metrics and timelines
【R】Output a detailed marketing plan in table format, including channels, budget allocation, timeline, and expected results.
STAR Template Variations
Simplified Version (for simple tasks):
- Context + Task + Format
Extended Version (for complex tasks):
- Background + Objective + Method + Constraints + Output + Examples
Core Framework Three: Chain-of-Thought (CoT)
Principle: Forcing the Model to “Think Step by Step”
For complex reasoning tasks (mathematics, logic, programming, analysis), directly asking for answers often yields poor results. Chain-of-thought technique requires the model to display the reasoning process step by step, which significantly improves the accuracy of the final answer.
Methods to Trigger Chain-of-Thought
1. Magic Phrases
"Let's think step by step."
"Please analyze step by step."
"Show your reasoning process."
2. Explicit Process Requirements
"First, analyze... Then, consider... Finally, conclude..."
"Please think through the following steps: 1) Analyze the current situation 2) Identify problems 3) Propose solutions"
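Both trigger styles are just strings appended to the question, so they are easy to keep in one place and swap without rewriting the task. A minimal sketch (names are illustrative):

```python
# Both trigger styles from above, kept together so they can be swapped
# without touching the question itself.
COT_TRIGGERS = {
    "magic": "Let's think step by step.",
    "explicit": ("Please think through the following steps: "
                 "1) Analyze the current situation "
                 "2) Identify problems "
                 "3) Propose solutions"),
}

def add_cot(question, style="magic"):
    """Append a chain-of-thought trigger and leave the "A:" slot open
    for the model to fill in."""
    return f"Q: {question}\n{COT_TRIGGERS[style]}\nA:"

print(add_cot("There are 5 apples in a basket. You take away 2, then put "
              "in 3 more, then take away 1. How many are left?"))
```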
Chain-of-Thought Comparison Experiment
Mathematical Reasoning Task:
❌ Zero-Shot Version:
Q: There are 5 apples in a basket. You take away 2, then put in 3 more, then take away 1. How many are left?
A:
✅ CoT Version:
Q: There are 5 apples in a basket. You take away 2, then put in 3 more, then take away 1. How many are left?
Let's think step by step.
A:
Typical CoT Version Output:
Let me calculate step by step:
1. Starting with: 5 apples
2. Take away 2: 5 - 2 = 3 apples
3. Put in 3 more: 3 + 3 = 6 apples
4. Take away 1: 6 - 1 = 5 apples
So there are 5 apples left.
Chain-of-Thought Application Scenarios
1. Mathematical and Logical Reasoning
- Calculations, proofs, logical reasoning
2. Complex Analysis Tasks
- Business analysis, problem diagnosis, decision making
3. Multi-step Operations
- Programming debugging, process design, project planning
4. Creative Ideation
- Story creation, design thinking, solution generation
Advanced Chain-of-Thought Techniques
1. Self-Verification
"Please solve this problem, then check if your answer is correct."
2. Multi-Perspective Analysis
"Analyze this product feature from three perspectives: technical, business, and user experience."
3. Hypothesis Testing
"Propose three possible explanations, then evaluate the likelihood of each explanation."
Comprehensive Practice: Prompt Reconstruction Exercise
Now let’s combine the three frameworks and demonstrate live how to reconstruct a bland prompt into a powerful tool.
Original Prompt
Help me write a cover letter.
Reconstruction Process
Step One: Add Role Playing
You are a career coach specializing in the tech industry.
Step Two: Apply STAR Structure
【S】You are a career coach specializing in the tech industry.
【T】Write a cover letter for a software engineer with 3 years of Python experience applying to an AI startup.
【A】Constraints:
1. Highlight machine learning projects
2. Confident but not arrogant tone
3. Keep to one page
【R】Professional business format
Step Three: Introduce Chain-of-Thought
Please first outline the key points you will include, then write the full letter.
Final Reconstructed Version
You are a career coach specializing in the tech industry. Your task is to write a cover letter for a software engineer with 3 years of experience in Python applying to a startup focused on AI.
Constraints:
1. Highlight projects involving machine learning
2. Tone should be confident but not arrogant
3. Keep it to one page
4. Address the hiring manager professionally
Output: Please first outline the key points you will include, then write the full letter in professional business format.
Advanced Techniques: Framework Combination Strategies
1. Role + STAR Combination
Suitable for most professional tasks, providing structured professional output.
2. Role + CoT Combination
Suitable for complex problems requiring professional reasoning.
3. STAR + CoT Combination
Suitable for multi-step structured tasks.
4. All Three Frameworks Combined
Suitable for the most complex and important tasks.
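As a sketch of the full combination, role prompting, the STAR structure, and a chain-of-thought instruction can be composed in a single builder. All names and formatting choices here are illustrative assumptions, not a standard API:

```python
def full_prompt(role, task, constraints, output_format, reasoning_first=True):
    """Combine role prompting (who), STAR structure (what/how), and a
    chain-of-thought instruction (think first) into one prompt."""
    lines = [f"You are a {role}. Your task is to {task}", "", "Constraints:"]
    lines += [f"{i}. {c}" for i, c in enumerate(constraints, 1)]
    output = f"Output: {output_format}"
    if reasoning_first:
        # The CoT element: outline the reasoning before producing the result
        output = ("Output: Please first outline the key points you will "
                  f"include, then {output_format}")
    lines += ["", output]
    return "\n".join(lines)

# Rebuilding the cover-letter example from the reconstruction exercise
print(full_prompt(
    role="career coach specializing in the tech industry",
    task="write a cover letter for a software engineer with 3 years of "
         "Python experience applying to an AI startup.",
    constraints=[
        "Highlight projects involving machine learning",
        "Tone should be confident but not arrogant",
        "Keep it to one page",
        "Address the hiring manager professionally",
    ],
    output_format="write the full letter in professional business format.",
))
```

The `reasoning_first` flag reflects Pitfall 4 below: for simple or highly creative tasks you can switch the chain-of-thought element off rather than always paying for it.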
Practice Exercises: Immediate Application
Exercise 1: Role Playing Reconstruction
Reconstruct the following prompts using role playing techniques:
- “Explain blockchain technology”
- “Design a mobile app”
- “Analyze stock market trends”
Exercise 2: STAR Structuring
Reconstruct using STAR template:
- “Write a business plan”
- “Create a fitness plan”
- “Prepare for an interview”
Exercise 3: Chain-of-Thought Application
Add chain-of-thought to the following tasks:
- “Choose the best investment portfolio”
- “Diagnose website performance issues”
- “Design user experience flow”
Common Pitfalls and Avoidance Strategies
Pitfall 1: Overly Broad Roles
❌ "You are an expert"
✅ "You are a senior iOS engineer specializing in mobile application development"
Pitfall 2: Unclear Constraints
❌ "Write it better"
✅ "Use concise language, no more than 3 sentences per paragraph, include specific data support"
Pitfall 3: Ignoring Output Format
❌ "Give me an analysis"
✅ "Output in table format with three columns: problem, cause, solution"
Pitfall 4: Overusing Chain-of-Thought
- Simple tasks don’t need chain-of-thought
- Creative tasks may be constrained by chain-of-thought
Iterative Optimization: Continuous Improvement of Prompts
Optimization Loop
- Initial Version: Apply basic frameworks
- Test Output: Evaluate result quality
- Identify Issues: Find unsatisfactory aspects
- Adjust and Optimize: Modify roles, constraints, or structure
- Test Again: Verify improvement effects
Optimization Strategies
1. Progressive Refinement
- Start with simple frameworks
- Gradually add constraints and requirements
2. A/B Testing
- Prepare multiple versions
- Compare output quality
3. Feedback-Driven
- Adjust based on actual usage effects
- Collect user feedback
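The A/B strategy above can be wired into a tiny harness. In this sketch, `call_model` is a placeholder for whatever LLM client you actually use, and the scoring function (raw output length) is only a stand-in for a real quality metric:

```python
def ab_test(variants, call_model, score):
    """Run each named prompt variant through the model and rank the
    results by a user-supplied scoring function (higher is better)."""
    results = [(name, score(call_model(prompt)))
               for name, prompt in variants.items()]
    return sorted(results, key=lambda r: r[1], reverse=True)

# Toy demonstration: a fake "model" that echoes its prompt, scored by
# length -- replace both with a real client and a real metric.
def fake_model(prompt):
    return prompt

ranking = ab_test(
    {"weak": "Write an article about climate change.",
     "strong": "Assume you are a senior environmental science journalist "
               "writing a cover story for National Geographic magazine..."},
    call_model=fake_model,
    score=len,
)
print(ranking[0][0])  # name of the higher-scoring variant
```

In practice the scoring function is the hard part; a human rating or an evaluation rubric usually works better than any automatic proxy.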
Tool Recommendations: Prompt Management
1. Template Library Development
Establish personal or team prompt template libraries, including:
- Common role definitions
- STAR structure templates
- Chain-of-thought trigger phrases
2. Version Control
- Record prompt iteration history
- Mark improvement points for each version
3. Effect Evaluation
- Establish output quality assessment standards
- Regular review and optimization
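A template library does not need special tooling to start with: a plain data structure checked into version control already gives you the three recommendations above. This layout is illustrative, not prescriptive:

```python
# A minimal in-code template library: named roles, structure templates,
# and CoT triggers that a team can version-control and review.
TEMPLATE_LIBRARY = {
    "roles": {
        "architect": "You are a senior software architect with 15 years of experience.",
        "coach": "You are a career coach specializing in the tech industry.",
    },
    "structures": {
        "star": "Task: {task}\nConstraints: {constraints}\nOutput: {result}",
    },
    "cot_triggers": ["Let's think step by step.",
                     "Show your reasoning process."],
}

def build(role_key, structure_key, trigger=0, **fields):
    """Look up a role and a structure, fill in the fields, and append
    one chain-of-thought trigger."""
    role = TEMPLATE_LIBRARY["roles"][role_key]
    body = TEMPLATE_LIBRARY["structures"][structure_key].format(**fields)
    return f"{role}\n{body}\n{TEMPLATE_LIBRARY['cot_triggers'][trigger]}"
```

Because each role, structure, and trigger lives under a stable key, version-control diffs show exactly which building block changed between iterations, which is the iteration history the previous section asks you to record.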
Summary: Principles Over Memorization
Rather than memorizing 100 scattered techniques, master these three core principles:
- Assign Roles: Let AI know “who I am”
- Clear Structure: Let AI know “what to do” and “how to do it”
- Guide Thinking: Let AI know “how to think”
Remember: Few prompts are perfect on the first try. Iterative optimization is the norm in prompt engineering—continuously adjust and refine your instructions based on output results.
After mastering these “sword techniques,” you now possess the core ability to build powerful prompts. In the next article, we’ll cultivate “internal skills” by learning advanced prompting techniques such as zero-shot/few-shot learning, self-criticism, and external tool integration, taking your prompt engineering abilities to the next level.
References
- Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS.
- Brown, T., et al. (2020). Language Models are Few-Shot Learners. NeurIPS.
- Reynolds, L., & McDonell, K. (2021). Prompt Programming for Large Language Models. arXiv preprint.
- Liu, P., et al. (2023). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods. ACM Computing Surveys.
- OpenAI. (2023). GPT-4 Technical Report. OpenAI.
- Anthropic. (2023). Constitutional AI: Harmlessness from AI Feedback. Anthropic.