
The AI Coding Workflow That Actually Works

The practical coding workflow with AI: what to hand the model, what to review line by line, and when to throw the output away.

AI-assisted coding looks easy in demos. Type a prompt, get perfect code, ship it. Reality is messier. The model gives you something that compiles, passes a quick test, and falls apart in edge cases. Or it solves the wrong problem. Or it introduces a subtle bug you don’t catch until production. The gap between “it works in the chat” and “it works in your codebase” is where most people get stuck.

Here’s the workflow that actually works, based on real experience, not demos.

What to Hand the Model

Context matters more than prompt craft. The model can’t see your codebase, your conventions, or your constraints. You have to provide them.

Give it the relevant code. Don’t paste your entire 10,000-line codebase. Extract the function, class, or module you’re working on. Include the imports. Include the types or interfaces it needs. The model works with what you give it. Garbage in, garbage out.

State the constraints. “We use React 18 and functional components.” “This runs in a serverless environment with a 10-second timeout.” “We can’t add new dependencies.” The model will happily suggest solutions that violate your constraints if you don’t spell them out. It has no memory of your stack.

Be specific about the task. “Add error handling” is vague. “Add a try-catch that logs the error and returns a 500 with a generic message” is specific. The more precise you are, the less you’ll have to fix.
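The specific ask above can be sketched as code. This is a minimal, hypothetical handler (not tied to any particular framework) showing exactly what "a try-catch that logs the error and returns a 500 with a generic message" means; the names `getUser` and `lookup` are illustrative:

```typescript
type Response = { status: number; body: { message: string } };

// Hypothetical handler: catch failures, log the real error internally,
// and return a 500 with a generic message so no details leak to the client.
function getUser(
  lookup: (id: string) => { name: string },
  id: string
): Response {
  try {
    const user = lookup(id);
    return { status: 200, body: { message: user.name } };
  } catch (err) {
    console.error("getUser failed:", err); // log the specifics server-side
    return { status: 500, body: { message: "Internal server error" } };
  }
}
```

A prompt at this level of precision leaves the model almost nothing to guess at, which is the point.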

What to Review Line by Line

Never trust the output blindly. The model optimizes for statistical plausibility, not correctness. It will confidently produce code that looks right and isn’t.

Read every line. Yes, every line. Especially the parts you didn’t ask for. The model might “helpfully” add error handling that swallows exceptions, or a dependency you don’t need, or a security hole. Skimming is how bugs slip through.
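The "helpful" error handling worth watching for looks like this. A hedged sketch of the anti-pattern (the function names are illustrative): the first version silently swallows the failure, the second surfaces it:

```typescript
// Anti-pattern the model sometimes adds unasked: a catch that swallows
// the error. The caller gets an empty config and no one ever finds out.
function parseConfigSilent(json: string): Record<string, unknown> {
  try {
    return JSON.parse(json);
  } catch {
    return {}; // failure disappears here
  }
}

// What to insist on instead: fail loudly with context attached.
function parseConfig(json: string): Record<string, unknown> {
  try {
    return JSON.parse(json);
  } catch (err) {
    throw new Error(`Invalid config JSON: ${(err as Error).message}`);
  }
}
```

If you only skim, both versions look like "error handling." Only a line-by-line read catches the difference.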

Run the code. Locally. With real data. Edge cases. The model doesn’t execute. It predicts. Your test suite might pass while production fails. Run it yourself.
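Here is what "run it yourself with edge cases" looks like in practice. A hypothetical model-written helper that passes the demo input but hides a boundary bug; the function is illustrative, the habit of probing boundaries is the point:

```typescript
// Model-written helper that "looks right": truncate a string to n chars,
// appending an ellipsis when it cuts.
function truncate(s: string, n: number): string {
  return s.length > n ? s.slice(0, n - 1) + "…" : s;
}

// Exercise the boundaries, not just the demo input:
truncate("hello world", 5); // typical case: "hell…"
truncate("", 5);            // empty string: ""
truncate("hello", 5);       // exactly at the limit: "hello"
truncate("hi", 0);          // zero budget: "h…" — two chars, longer than n. Bug.
```

The happy path passed; the zero-length budget didn’t. The model predicted a plausible function; only execution exposed the edge.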

Check the dependencies. Did it add an import? A new package? Verify it exists, it’s maintained, and it doesn’t conflict with what you have. I’ve seen models suggest packages that were deprecated years ago.

When to Throw the Output Away

Sometimes the best move is to delete everything and start over. Know the signs:

The approach is wrong. The model solved a different problem than the one you have. Refining the prompt won’t fix it. Start fresh with a clearer problem statement.

The fix is more work than rewriting. You’re patching so many issues that a clean implementation would be faster. Cut your losses.

You don’t understand it. If you can’t explain the code to a colleague, don’t ship it. You’ll own it when it breaks. Either learn it or replace it.

It’s overengineered. The model loves to add abstractions, factories, and “flexibility” you don’t need. Simple code is easier to maintain. If the output feels like a framework when you needed a function, simplify.
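The pattern above is easy to recognize once you’ve seen it. A hedged sketch (names invented for illustration) of the factory-and-strategy shape models tend to produce, next to what the task actually needed:

```typescript
// Overengineered: an interface, a class, and a factory for one operation.
interface SlugStrategy {
  slugify(title: string): string;
}
class DefaultSlugStrategy implements SlugStrategy {
  slugify(title: string): string {
    return title
      .toLowerCase()
      .trim()
      .replace(/[^a-z0-9]+/g, "-")
      .replace(/^-|-$/g, "");
  }
}
class SlugFactory {
  static create(): SlugStrategy {
    return new DefaultSlugStrategy();
  }
}

// What the task needed: one function.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}
```

Both produce identical output. One is three declarations you now maintain; the other is the function you asked for.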

Completions, Generation, and Iteration

The workflow differs by task:

Completions (inline suggestions): Great for boilerplate, repetitive patterns, and finishing lines you’ve started. Low risk. You’re still driving. Review as you accept. Don’t tab through blindly.

Generation (full functions or files): Higher risk. Use for scaffolding, tests, and well-defined problems. Always review. Iterate with follow-up prompts if something’s off (“add null checks” or “use our logging utility instead of console.log”), but don’t iterate forever. If you’re on the third revision and it’s still wrong, rewrite.

Refactoring: Tricky. The model might change behavior while “improving” structure. Diff carefully. Run the full test suite. Refactoring with AI is a collaboration, not a handoff.

The Meta-Rule

The workflow that works is the one where you stay in control. AI is a lever. You’re the one who decides where to push, what to keep, and what to throw away. The people who get the most from these tools are the ones who treat the output as a draft: useful, but never final until they’ve made it theirs.