From mid-2023 onwards we have seriously tried nearly every AI coding assistant out there. The three we actually use day to day are GitHub Copilot, Cursor, and Claude Code. Different philosophies, different contexts, no fanboyism.
GitHub Copilot
Copilot is the eldest of the three and, paradoxically, the one many took for granted for years. With recent model updates and Copilot Chat in the IDE, it is competitive again. It is also the most discreet: it completes while you type and does not ask questions. For anyone living in VS Code or JetBrains, it has the lowest barrier to entry.
We use it for: routine refactors, short contextual completions, unit test generation, DTO/payload translations. For larger edits we hand off to the other two.
Cursor
Cursor is a VS Code fork built around AI. The difference is not the model (you choose it), it is the UX. Commands like Ctrl+K for inline edits, Ctrl+L for chat with file context, and Composer for multi-file edits with preview are designed for developer flow, and they work.
We use it for: intense work sessions on a feature, where context continuity and multi-file edits matter. It has become the default for those on the team who write new code for hours.
Claude Code
Claude Code is the youngest and the most "agentic". It lives in the terminal and the IDE, but its style is different: you hand it a task, it explores the repo, proposes a plan, and executes. What surprised us most is its autonomy: you can ask it to "refactor this feature using pattern X" and it reads the files, learns the pattern, and proposes a coherent change.
We use it for: large or domain-wide refactors (e.g., pages → app router), bug hunts in unfamiliar code, realistic seed/fixture generation, end-to-end acceptance testing.
The shared limit
All three are only as useful as the context you give them. A project with strict TypeScript, a configured linter, existing tests, and well-written CLAUDE.md or Cursor rules files is one where AI shines. A messy project is one where AI does damage.
Corollary: refactoring to make them effective pays back twice. Improving the codebase for humans, so AI can work in it, is now a legitimate reason to invest the time.
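As a concrete illustration, a minimal CLAUDE.md might look like the sketch below. The commands and paths are hypothetical, not from our actual repos, and the same conventions can be expressed as Cursor rules.

```markdown
# CLAUDE.md (example — commands and paths are hypothetical)

## Commands
- Build: `npm run build`
- Test: `npm test` (run before proposing a commit)
- Lint: `npm run lint` (strict TypeScript, no `any`)

## Conventions
- App Router only; do not add files under `pages/`
- Data access goes through `src/lib/repositories/`, never raw queries in components
- Every new module gets a unit test next to the file it tests
```

The point is not the specific rules but that they are written down where the tool will read them, instead of living in someone's head.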
Costs (Apr 2025)
- Copilot Business: $19/user/month
- Cursor Pro: $20/user/month
- Claude Code (via the Anthropic API or Claude Pro plans): variable, typically $20-100/user/month depending on usage
For our team the combo is Copilot + Claude Code, with Cursor optional for those who prefer it. It is worth it economically: the time saved offsets the cost within a couple of days per month.
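A quick back-of-the-envelope check makes the break-even obvious. The hourly rate and the mid-range Claude Code spend below are assumptions for illustration, not measured figures:

```python
# Back-of-the-envelope break-even per developer per month.
# All numbers are illustrative assumptions, not measured data.

def breakeven_hours(monthly_cost: float, hourly_rate: float) -> float:
    """Hours of saved work needed to pay for the tooling."""
    return monthly_cost / hourly_rate

combo_cost = 19 + 40   # Copilot Business + an assumed mid-range Claude Code spend
hourly_rate = 60       # assumed fully loaded developer cost per hour

hours = breakeven_hours(combo_cost, hourly_rate)
print(f"Break-even: {hours:.1f} hours saved per month")
```

Under these assumptions the tooling pays for itself in about an hour of saved work per month, which is why "a couple of days" of savings makes the decision easy.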
How we manage risk
- No generated code copied from third-party repos: we check with provenance tools and generally ask the model to rewrite instead of copying.
- Human code review always: AI can write, it cannot approve.
- Secrets out of context: no API keys, no real customer tables in prompts.
- Guidelines in the repo (CLAUDE.md, cursor rules): AI reads them and behaves better.
Three tools, three flows. Anyone using only one is missing something.