I Tried Every AI Coding Tool. Here's What They Actually Cost.

Marcin Fratczak

Apr 14, 2026 · 7 min read

"Just pay $20 a month for an AI subscription. You're making a mistake if you don't."

I've seen this take everywhere lately. And I agree with the conclusion. But nobody talks about what actually happens after you start paying. The tool-hopping. The limits. The moment you're in flow at midnight and the tool says "come back in 5 hours."

I've been coding with AI for over a year. I've used Cursor, Claude Code, Gemini, GitHub Copilot, and Codex. Sometimes three or four of them in the same week. Here's what that journey looked like — and what it cost.

Cursor AI: where it started

I started with Cursor AI. Free tier, 100 requests. I burned through them in two days.

So I went Pro at $20/month. That gave me enough requests to actually work, and I started learning how to prompt. It was slow. I had to unlearn the habit of writing everything myself and figure out what AI could handle on its own.

It worked. I built a significant chunk of my application with Cursor. Then the better models arrived.

Cursor tripled the cost for premium models. One premium request counted as three regular ones. I was getting roughly 140 premium requests per month. That sounds like a lot until you're deep in a refactoring session and watching the counter drop.

Worse: when you exceeded your included requests, Cursor charged per token. A few extra requests could cost $10-15. That's hard to budget around.

Claude Code: first attempt

I switched to Claude Code for two months. The direct-from-Anthropic tool, no middleman.

At the time, the solid VS Code integration didn't exist (it does now). But the bigger issue was limits. The 5-hour usage window? I'd burn through it in 90 minutes of focused work. There were also daily, weekly, and monthly caps.

I work evenings — after my main B2B job. My productive window is 8 PM to 1 AM. When you hit a limit at 9:30 PM, there's nothing to do but stop. That kills momentum. And for someone with ADHD, regaining focus the next day isn't guaranteed.

The Gemini detour

Google launched Antigravity with Gemini, and it was free with generous limits. Perfect timing.

It worked well for about two weeks. Gemini was actually great for certain tasks — sometimes better than Claude for fixing specific issues. But then the limits tightened. I was hitting walls again. As a primary tool, it wasn't reliable enough.

GitHub Copilot: the longest stay

I switched to Copilot next. The reasoning was simple:

  1. Similar request-based model to Cursor
  2. Better VS Code integration
  3. Predictable overflow costs

That last point mattered most. When I exceeded my monthly limit on Cursor, each extra request was billed per token — unpredictable and expensive. On Copilot, every extra request was a flat 4 cents (12 cents for premium). I could calculate my monthly spend in advance, even when going over the limit.
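The predictability is easy to see in code. Here's a minimal sketch of that overflow math, using the flat per-request rates mentioned above ($0.04 standard, $0.12 premium); the function name and example counts are my own, purely illustrative:

```python
# Flat per-request overage rates, as described in the text.
STANDARD_RATE = 0.04  # $ per extra standard request
PREMIUM_RATE = 0.12   # $ per extra premium request

def copilot_overflow_cost(extra_standard: int, extra_premium: int) -> float:
    """Worst-case monthly overage under flat per-request pricing."""
    return extra_standard * STANDARD_RATE + extra_premium * PREMIUM_RATE

# e.g. 100 extra standard + 50 extra premium requests:
print(f"${copilot_overflow_cost(100, 50):.2f}")  # $10.00
```

With token-based billing, no function like this exists — the cost of a request depends on how much context the tool decides to send, which you can't know in advance.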

I stayed on Copilot the longest out of all the tools I tried.

The multi-tool circus

At one point, I was juggling four tools simultaneously:

  • Claude Code — for complex reasoning (when I had quota left)
  • Gemini — for fixes where Claude struggled
  • Codex — as a backup when everything else hit limits (I had GPT Pro)
  • Copilot — the daily driver

This sounds productive. It wasn't. Context-switching between tools meant re-explaining your codebase every time. Each tool had different strengths, different quirks, different ways of understanding your project. The cognitive overhead was real.

What most people don't realize about models

Here's something worth knowing: tools like Cursor and Copilot are good — but they run with smaller context windows and more restrictions than using the model directly. They add their own layer of constraints on top of what the provider offers.

Claude Code gives you full access to Anthropic's models — larger context, fewer artificial limits. Codex gives you the same with OpenAI. The difference shows up when you're doing complex reasoning, large refactors, or architectural decisions where context matters.

For autocomplete and quick fixes, the intermediary tools work fine. For building entire features with AI as a pair programmer, going direct is worth the premium.

Coming back to Claude Code Max

Three months ago, I went back to Claude Code. This time with the Max plan at 90 EUR/month.

Is that expensive? Yes. But here's what changed:

The Max 5x tier gives me a full 5-hour window without hitting limits. I usually don't even use the whole window. And now there's proper VS Code integration, so the workflow is smooth.

The result: no more interruptions that aren't my fault.

That sounds small. It's not. When your tool never stops you, your work sessions are bounded only by your own energy and focus. No more "I was in the zone but ran out of requests." No more switching to a backup tool and losing context.

For someone who works evenings with limited hours and an ADHD brain that's hard to restart — uninterrupted flow is worth the premium.

The real cost isn't money

After a year of tool-hopping, here's what I think people get wrong about AI coding costs:

The subscription price is not the real cost. The real cost is:

  • Hitting a limit mid-session and losing 30 minutes of momentum
  • Switching tools and re-explaining your codebase
  • Mental overhead of tracking which tool has quota left
  • Unpredictable bills that make you hesitate before each prompt

A $20/month tool where you constantly hit walls costs more than a $90/month tool where you don't. Not in money — in time, focus, and work that actually ships.
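That claim can be made concrete with a back-of-the-envelope calculation. All the numbers below are illustrative assumptions (hours lost, hourly rate), not measured data from my logs:

```python
# Hypothetical break-even: cheap plan + lost time vs. premium plan.
# All inputs are made-up assumptions for illustration.

def effective_monthly_cost(plan_price: float, hours_lost: float,
                           hourly_rate: float) -> float:
    """Subscription price plus the value of productive time lost to limits."""
    return plan_price + hours_lost * hourly_rate

# Say limits cost you 5 hours a month and your time is worth $50/hour:
cheap = effective_monthly_cost(20.0, 5, 50)     # $270.00
premium = effective_monthly_cost(90.0, 0, 50)   # $90.00
print(cheap > premium)  # True
```

Under these (admittedly hand-picked) assumptions, the cheap plan only wins if limits cost you less than about 1.4 hours a month.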

What I'd tell someone starting today

If you're just exploring AI coding, start with a $20/month tier on any tool. Cursor, Copilot, Claude Code Pro — they're all good enough to learn the workflow. Focus on learning to prompt well. That skill transfers across every tool.

If you're building a real product, pay for the direct provider tools. Claude Code Max or Codex. The models are better, the context windows are bigger, and you'll hit fewer arbitrary walls.

If you're using Copilot and want to get more from it, check out copilot-collections — it's an open-source set of skills and agents built by my team at The Software House. It orchestrates workflows to get better results from Copilot's capabilities.

Whatever you choose, don't stay on a plan that interrupts your flow. The $20 you save per month costs you hours of lost productivity. Pay for uninterrupted work.

Marcin Fratczak

Solo founder building SaaS products. 18+ years in software, now focused on shipping fast, learning marketing, and sharing the journey.

