
Prompt Engineering for Developers: Writing Better AI Code Instructions

ICE Felix Team · 6 min read

Your AI assistant is only as good as the instructions you give it. Many developers treat prompt engineering as an afterthought—a quick question thrown at ChatGPT or Copilot—then wonder why the generated code needs heavy rework. The truth is simpler: precision in prompts correlates directly with usable output. When you engineer your prompts with the same rigor you'd apply to code review, AI productivity skyrockets.

This isn't about magic words or tricks. It's about thinking like a technical specification writer, which you probably already do when documenting requirements or reviewing pull requests.

The Anatomy of a Strong Development Prompt

A weak prompt sounds like this: "Build me a login function." You'll get something generic—boilerplate that ignores your stack, your error handling strategy, and your security baseline.

A strong prompt includes four components:

1. Context and constraints. Tell the AI what you're building, what framework or language you're using, and what constraints matter. For a Romanian fintech startup, this might be:

I'm building a payment verification endpoint in Node.js (Express) for a B2B SaaS platform. We handle EUR transactions, use JWT tokens, and need to comply with PSD2 open banking standards. The endpoint receives a transaction ID and returns verification status.

2. Specific requirements. List the exact behavior you need, not loose ideas:

  • Input validation (reject invalid transaction IDs, check user permissions)
  • Error responses (distinguish between "invalid format," "transaction not found," and "permission denied")
  • Logging (log all verification attempts for audit trails, don't log sensitive data)
  • Performance targets (must respond within 200ms, handle 500+ concurrent requests)

3. Integration points. Show the AI how this code fits into your system:

This endpoint sits between our frontend and our partner bank's API (via the bankClient module). We'll cache results for 5 minutes using Redis. Route should be POST /api/transactions/:id/verify.

4. Code style and quality expectations. Reference your actual standards:

Use TypeScript with strict mode. Follow our error handling pattern (see the ValidationError class in /lib/errors.ts). Write unit tests with Jest that cover the happy path and all three error cases.

This isn't verbose—it's precise. In our experience, that precision can cut rework time roughly in half.

Iterative Refinement: Getting to Production-Ready Code

AI-assisted development isn't "ask once, deploy once." It's a conversation. Treat it like code review.

First pass: Structure and logic. Start broad. Ask for the implementation framework without worrying about polish:

Generate the endpoint handler. I'll handle imports and configuration separately.

Review the output. Does the logic flow make sense? Does it match your domain? Adjust and clarify.

Second pass: Integration and edge cases. Once the core logic works, push into details:

Now add input validation using the joi schema we use in other endpoints (check /lib/validators/transaction.ts). Handle the case where the bank API times out—should we retry or return "pending"?

Third pass: Quality and performance. Polish the code:

Add error logging. Make sure we're not logging the JWT token or transaction amounts. The response time is critical—can we parallelize the bank API call and the audit log write?

This iterative approach works because each prompt builds on concrete output. You're not describing an abstract idea; you're saying "here's what we have, here's what's missing."
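That last third-pass request, parallelizing the two calls, is worth seeing in code. Here is a minimal sketch with both operations stubbed out; the real versions would hit the bank API and the audit store:

```typescript
// Stand-ins for the real bank client and audit logger.
async function fetchBankStatus(transactionId: string): Promise<string> {
  return "verified"; // stub: the real call goes to the partner bank's API
}

async function writeAuditLog(transactionId: string): Promise<void> {
  // stub: the real write records the attempt, never the JWT or the amount
}

async function verifyWithAudit(transactionId: string): Promise<string> {
  // Promise.all starts both operations before awaiting either, so total
  // latency is max(bank call, audit write) instead of their sum.
  const [status] = await Promise.all([
    fetchBankStatus(transactionId),
    writeAuditLog(transactionId),
  ]);
  return status;
}
```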

Engineering Best Practices in Your Prompts

The best prompts mirror how senior engineers communicate:

Be explicit about trade-offs:

We need to choose between strong consistency (slower, simpler) or eventual consistency (faster, more complex). For this payment reconciliation job, we're willing to accept a 30-second delay if it means we can handle 10x the load. Design accordingly.

AI assistants will often choose the middle ground unless you nudge them toward a decision.

Reference your codebase:

We use a custom middleware for rate limiting. Check how applyRateLimit works in /middleware/rateLimiter.ts and use the same pattern here.

This saves the AI from inventing its own solution that won't match your style.

Ask for what you'll actually review:

Generate the function with JSDoc comments. I'll paste this into a pull request, so make sure it's linted and documented as if a teammate wrote it.

This shifts the output from "helpful suggestion" to "production code."

Test-driven prompting: Ask the AI to write tests first, then the implementation:

Write Jest unit tests for a function that validates IBAN format (EU-wide requirement). It should accept valid IBANs and reject invalid ones with a specific error message. Then implement the function to pass those tests.

This forces clarity on requirements and produces testable code.
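For reference, here is a sketch of an implementation that would pass such tests. The shape check and the mod-97 checksum below are the standard ISO 13616 IBAN validation steps; the prompt's specific error messages are left out to keep the sketch short:

```typescript
// Returns true if the string passes the standard IBAN checks:
// country/check-digit shape plus the ISO 13616 mod-97 checksum.
function isValidIban(iban: string): boolean {
  const normalized = iban.replace(/\s+/g, "").toUpperCase();
  // Shape: two letters, two check digits, then up to 30 alphanumerics
  if (!/^[A-Z]{2}\d{2}[A-Z0-9]{1,30}$/.test(normalized)) return false;
  // Move the first four characters to the end, then map A=10 … Z=35
  const rearranged = normalized.slice(4) + normalized.slice(0, 4);
  const digits = rearranged.replace(/[A-Z]/g, (ch) =>
    String(ch.charCodeAt(0) - 55)
  );
  // Compute mod 97 digit by digit to avoid overflowing Number
  let remainder = 0;
  for (const d of digits) {
    remainder = (remainder * 10 + Number(d)) % 97;
  }
  return remainder === 1;
}
```

The commonly published example IBAN "DE89 3704 0044 0532 0130 00" passes this check, while any single-digit change fails the checksum.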

Real-World Example: A Romanian SMB Scenario

Imagine you're building a document processing pipeline for an accounting SaaS. You need to extract invoice data and validate against Romanian regulatory formats.

A weak prompt:

Generate code to read PDFs and extract invoice info.

A strong prompt:

Build a TypeScript function that:

  • Accepts a PDF file path and returns structured invoice data
  • Uses the pdf-parse library (already installed)
  • Validates against Romanian invoice requirements (CIF, VAT ID, invoice number format)
  • Handles corrupted PDFs gracefully (log the error, return null, don't crash)
  • Returns an object with { invoiceNumber, amount, date, companyName, taxId }

Reference /lib/validators/romanianTax.ts for the VAT ID validation pattern.

I'll test this with files in /test/fixtures/invoices/.

The second prompt gets you code you can actually ship—or at least code worth refining.
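Here is a sketch of the contract that strong prompt pins down: the return shape and the log-and-return-null failure mode. The parser is stubbed with invented pipe-delimited text so the sketch stays self-contained; the real function would run pdf-parse output through actual field extraction, and the file names and sample values are purely illustrative.

```typescript
interface InvoiceData {
  invoiceNumber: string;
  amount: number;
  date: string;
  companyName: string;
  taxId: string;
}

// Stub for the pdf-parse step; throws to simulate a corrupted file.
function parsePdfText(path: string): string {
  if (path.endsWith(".corrupt.pdf")) throw new Error("bad xref table");
  return "Factura FV-2024-001 | SC Example SRL | RO12345678 | 1500.00 | 2024-03-15";
}

function extractInvoice(path: string): InvoiceData | null {
  try {
    const text = parsePdfText(path);
    const [rawNumber, companyName, taxId, amount, date] = text
      .split("|")
      .map((field) => field.trim());
    return {
      invoiceNumber: rawNumber.replace(/^Factura\s+/, ""),
      amount: Number(amount),
      date,
      companyName,
      taxId,
    };
  } catch (err) {
    // The prompt's failure mode: log the error and return null, don't crash
    console.error(`Failed to parse ${path}:`, (err as Error).message);
    return null;
  }
}
```

Every bullet in the strong prompt shows up as a checkable property of this function, which is what makes the output reviewable.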

The ROI of Prompt Engineering Discipline

Here's what happens when you engineer prompts well:

  • First-pass usability climbs, in our experience from roughly 20% to 70%. You're not throwing away AI output; you're building on it.
  • Code review cycles compress. Fewer conceptual mismatches between what you asked for and what you got.
  • Team velocity compounds. New team members learn your standards from the prompts—they see how to communicate requirements clearly.
  • Knowledge spreads. Your prompt library becomes internal documentation on "how we do things."

The upfront time investment—thinking clearly about what you want—pays back immediately.

Your Next Step

Start small. Pick one upcoming feature and engineer your prompts deliberately. Write out the four components: context, requirements, integration points, and quality standards. Ask your AI assistant. Review the output like it's a pull request. Iterate.

You'll notice the difference in one sprint.

At ICE Felix, we've helped Romanian and EU SMBs integrate AI-assisted development into their workflows without losing engineering rigor. If you're scaling a team or accelerating delivery, strong prompt engineering is often where the gains hide. Let's talk about how AI can fit into your development process—we've seen what works and what doesn't. Reach out.
