AI Development

Stop Writing Terrible Prompts: A Developer's Guide to Actually Getting What You Want

Your AI prompts are probably garbage. Here's how I learned to write prompts that actually work after thousands of failures.

January 16, 2026 · 5 min read

Your first prompt to ChatGPT probably looked like mine: "Write me a React component." And like me, you probably got back something that was technically React but absolutely useless for your actual project.

I've written thousands of prompts over the past two years building AI-powered features for clients. Most were terrible. But the ones that worked taught me something important: good prompting isn't about being polite to the AI—it's about being precise about what you actually need.

[Image: developer laptop screen]

Context Is Everything (And You're Probably Skipping It)

The biggest mistake I see developers make is jumping straight to the ask. "Generate a login form." "Write a function that processes payments." "Debug this code."

But here's what I've learned: AI models are context-hungry beasts. They need to understand not just what you want, but why you want it, what constraints you're working within, and what success looks like.

Here's a prompt I used last week:

```text
I'm building a Next.js 14 app with TypeScript and Tailwind CSS. I need a reusable Modal component that:
- Uses React portals for proper DOM placement
- Supports both controlled and uncontrolled modes
- Has proper focus management for accessibility
- Can be dismissed with Esc key or backdrop click

Here's my current Button component for style reference: [component code]

The modal will be used for confirmation dialogs, forms, and image previews.
```

Compare that to "create a modal component" and you'll see why the results are so different. The AI understands your stack, your constraints, your use cases, and even your design preferences.

The Three-Layer Prompt Structure That Actually Works

After analyzing my most successful prompts, I noticed a pattern. The ones that worked best had three distinct layers:

Layer 1: The Setup

Who you are, what you're building, what tools you're using. This isn't fluff—it's crucial context that shapes every suggestion the AI makes.

Layer 2: The Specific Ask

Exactly what you want, with concrete examples and constraints. Be weirdly specific here.

Layer 3: The Success Criteria

How you'll know if the output is good. What should it do? What should it avoid?
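Here's how those three layers might look assembled into a single prompt. This is an illustrative sketch, not a prompt from a real project; the hook name and requirements are made up for the example:

```text
[Setup] I'm a full-stack developer building an internal admin dashboard
in Next.js 14 with TypeScript. No state management library—just hooks.

[Specific ask] Write a `usePagination` hook that takes a total item count
and a page size, and returns the current page, total page count, and
goToNext/goToPrevious helpers.

[Success criteria] It should be fully typed, have no external
dependencies, and never return a page number below 1 or above the last
page, even if the caller passes garbage input.
```

Each layer answers a different question the AI would otherwise have to guess at: who's asking, what exactly they want, and how to tell if the answer is good.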

Show, Don't Just Tell

One technique that's dramatically improved my results: providing examples of what good looks like.

Instead of asking for "clean, maintainable code," I'll paste a function I like and say "write the new function in this style." Instead of asking for "good error handling," I'll show how I handle errors elsewhere in the codebase.

This works because AI models are pattern-matching machines. Give them a good pattern to match, and they'll follow it surprisingly well.

```typescript
// Instead of: "Add error handling to this function"

async function fetchUserProfile(id: string): Promise<UserProfile | null> {
  try {
    const response = await api.get(`/users/${id}`);
    return response.data;
  } catch (error) {
    if (error.status === 404) {
      return null; // User not found is expected
    }
    logger.error('Failed to fetch user profile', { userId: id, error });
    throw new Error('Unable to fetch user profile');
  }
}

// Now add similar error handling to this payment function: [your code]
```

[Image: code programming screen]

The Power of Constraints

Here's something counterintuitive: the more constraints you give an AI, the better its output becomes. Unlimited freedom leads to generic solutions.

I always include:

  • Technical constraints: "Must work with React 18", "No external dependencies", "Keep bundle size under 5KB"
  • Business constraints: "Users are mostly on mobile", "Needs to work offline", "Must be GDPR compliant"
  • Style constraints: "Use our existing error handling pattern", "Follow our naming conventions", "Match the tone of our existing copy"

These constraints don't limit creativity—they channel it toward solutions that actually work in your specific situation.
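In practice, I'll often gather constraints into their own labeled block inside the prompt. A sketch of what that might look like (the project details here are hypothetical):

```text
Constraints:
- Must work with React 18 and TypeScript strict mode
- No new dependencies beyond what's already in package.json
- Most users are on mid-range Android phones, so keep it lightweight
- Follow the error handling pattern in the attached utils/errors.ts
```

Listing them explicitly, rather than scattering them through prose, makes it harder for the model to silently drop one.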

Iterative Prompting: Your Secret Weapon

I rarely get perfect output on the first try. Instead, I treat prompting like code review. I look at what came back, identify what's good and what needs work, then iterate.

"This is close, but the error handling is too aggressive. Users should be able to retry failed requests without getting logged out."

"Good structure, but make it more defensive. What happens if the API returns an unexpected data shape?"

"Perfect logic, but the variable names are confusing. Use more descriptive names that match our codebase style."

Each iteration gets me closer to something I'd actually ship.

Common Patterns That Keep Working

After thousands of prompts, certain patterns consistently produce better results:

  • Role definition: "You are an experienced React developer reviewing this code..."
  • Output format specification: "Respond with only the code, no explanations" or "Explain your reasoning step by step"
  • Negative examples: "Don't use any/unknown types" or "Avoid inline styles"
  • Incremental complexity: Start simple, then ask for specific improvements
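Putting a few of these patterns together, the opening of a code-review prompt might read like this (illustrative, not from a real session):

```text
You are an experienced React developer reviewing this code for
performance issues.

Respond with a numbered list of problems, each with a one-line fix.

Don't suggest rewriting in a different framework, and avoid any/unknown
types in your suggested fixes.
```

Role, output format, and negative examples in three lines—before the code is even pasted, the shape of the answer is already constrained.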
[Image: developer workspace laptop]

What I Wish I'd Known Earlier

Prompt engineering isn't about finding magic words that unlock better AI responses. It's about communication. The same skills that make you good at writing clear code comments, documenting APIs, or explaining technical concepts to non-technical stakeholders will make you better at prompting.

The AI isn't trying to be difficult—it's trying to help with incomplete information. Your job is to provide complete information.

Practical Takeaways

  • Always include your tech stack and constraints in the prompt
  • Show examples of what good output looks like
  • Be specific about what success means for your use case
  • Iterate on responses instead of expecting perfection immediately
  • Use constraints to guide the AI toward practical solutions
  • Treat prompting like documentation—clear, specific, contextual

The difference between developers who get value from AI and those who don't isn't technical sophistication. It's communication skills. Write prompts like you're onboarding a smart but unfamiliar teammate, and you'll be surprised how much better your results become.

Ibrahim Lawal

Full-Stack Developer & AI Integration Specialist. Building AI-powered products that solve real problems.