An Open Letter to Cursor.ai: How to Make AI Coding Assistants Actually Helpful
To the Cursor.ai Team
By Alexander Mills
Dear Cursor.ai Team,
After extensive use of your AI coding assistant, I've identified several critical areas where the product could be significantly improved. These aren't just minor tweaks—they're fundamental issues that affect productivity and code quality.
Critical Issues That Need Immediate Attention
1. Never Write Mock Calls in Production Code
Mock calls let tests pass with false positives, which is terrible for code quality. It's fine to create temporary files for testing with mocks, but never modify the user's actual codebase with mock calls. That creates a false sense of security and breaks real functionality.
2. Stop Using Destructive Commands
Never use rm or rm -rf commands. You may use git rm and git rm -rf (or whatever VCS CLI command is appropriate) only after the files have been committed at least once, ensuring each file is stored in version control before it is deleted.
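The safe-deletion workflow described above can be sketched as follows. The repository and file names are illustrative; the point is that git rm only runs after a commit, so the file remains recoverable from history:

```shell
#!/bin/sh
# Sketch of the workflow the letter asks for: never plain `rm -rf`.
# Commit first, then remove through git, so the file stays recoverable.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "temporary scaffold" > scratch.txt
git add scratch.txt
git commit -qm "add scratch.txt"   # the file is now in history

git rm -q scratch.txt              # safe: the blob is already committed
git commit -qm "remove scratch.txt"

git show HEAD~1:scratch.txt        # the content is still retrievable
```

If the file had never been committed, git rm would be just as destructive as rm, which is why the commit-first step matters.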
3. Stop Saving Screenshots in Version Control
Stop saving screenshots from Puppeteer and Playwright runs in version control by default; they clutter repositories and serve no purpose there. At minimum, write them to a tmp/ folder and advise users to add it to .gitignore.
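A minimal sketch of the suggested default, assuming a tmp/screenshots/ location (the path is illustrative): create the folder and make sure tmp/ is listed in .gitignore before any screenshots are written:

```shell
#!/bin/sh
# Sketch: keep browser-automation screenshots out of version control.
set -e
proj="$(mktemp -d)"   # stand-in for the user's project root
cd "$proj"

# Where Puppeteer/Playwright output should go instead of the repo root.
mkdir -p tmp/screenshots

# Append tmp/ to .gitignore only if it is not already listed.
grep -qx 'tmp/' .gitignore 2>/dev/null || echo 'tmp/' >> .gitignore
```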
4. More Confirmations and Dialogue
Provide more confirmations and dialogue before executing tasks to confirm direction. Users need to understand what you're about to do before you do it, especially for destructive operations or major changes.
5. Stop Single-Line Returns, Throws, and If/Else
Stop emitting single-line (same-line) returns, throws, and if/else statements. In any language, this is bad practice and at best a vanity style. Always use proper braces and multi-line formatting for better readability and maintainability.
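As a small illustration of why compact one-liners hurt, here is a shell sketch (the letter's point applies to any language; shell is used only as the example here). The && / || chain people reach for as a same-line if/else is not even semantically equivalent to the explicit form:

```shell
#!/bin/sh
cd "$(mktemp -d)"   # empty sandbox: data.txt does not exist here

# Compact one-liner that looks like if/else -- but the `|| echo` branch
# also fires whenever `cat` itself fails, so it is not a true if/else:
#   [ -f data.txt ] && cat data.txt || echo "missing"

# Explicit multi-line form: the control flow is unambiguous.
if [ -f data.txt ]; then
  result="found"
else
  result="missing"
fi
echo "$result"
```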
6. Don't Mock Out I/O Operations
Do not replace the most challenging parts of the code, the I/O operations, with mock implementations. We have test/stage environments for that purpose. This is the same as #1, but it applies specifically to I/O.
7. Start New Threads Proactively
Start new threads on your own initiative: make recommendations, propose fixes, and open new threads for them. The user decides whether and when to proceed, but you should be proactive about identifying opportunities.
8. Automatically Detect Upstream Changes
Automatically detect when the upstream has new commits and advise the user, so they can pull more often. This prevents merge conflicts and keeps users in sync.
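The check described above can be sketched with plain git commands. This is a throwaway demo, not Cursor internals: it builds a bare origin and two clones purely to show the fetch-and-count step at the end:

```shell
#!/bin/sh
# Sketch: detect how far a working copy is behind its upstream.
set -e
work="$(mktemp -d)"
git init -q --bare "$work/origin.git"

# First clone ("alice") seeds the origin with one commit.
git clone -q "$work/origin.git" "$work/alice" 2>/dev/null
cd "$work/alice"
git config user.email a@example.com
git config user.name alice
echo one > file.txt
git add file.txt
git commit -qm "first"
git push -q origin HEAD

# Second clone ("bob") starts in sync with the origin.
git clone -q "$work/origin.git" "$work/bob"

# Alice pushes another commit that bob does not have yet.
cd "$work/alice"
echo two >> file.txt
git commit -qam "second"
git push -q origin HEAD

# Bob's side: this is the check an assistant could run periodically.
cd "$work/bob"
git fetch -q origin
behind="$(git rev-list --count HEAD..@{upstream})"
echo "behind upstream by $behind commit(s)"
```

An assistant could run the fetch-and-count pair in the background and surface a notice whenever the count is nonzero.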
Additional Issues Based on Real Usage
9. Stream Metadata in Realtime
Stream metadata in real time, for example the number of bytes being written, so I can tell whether you are stuck. The agent gets stuck a lot, and I at least need to know whether the worker is stuck or making progress.
10. Stop Over-Engineering Simple Solutions
When a user asks for a simple fix, don't create a complex system. If they want to add a button, add a button. Don't create a whole component architecture unless specifically requested.
11. Respect Existing Code Patterns
Before making changes, analyze the existing codebase patterns and follow them. Don't introduce new patterns that conflict with the established codebase style.
12. Ask Before Modifying Critical Files
Some files are more critical than others (like middleware, authentication, logging). Always ask before modifying these files, even if the change seems minor.
13. Provide Context for Changes
When making changes, explain why you're making them and how they fit into the broader system. Don't just make changes without context.
14. Stop Assuming Test Environments
Don't assume users have test environments set up. Ask about their testing setup before suggesting test-related changes.
15. Respect File Organization
Don't create files in random locations. Follow the existing project structure and ask where files should go if it's not clear.
16. Stop Over-Explaining Simple Concepts
If a user asks for a simple change, don't provide a 500-word explanation of how the change works. Be concise and focus on the actual implementation.
The Bottom Line
These issues aren't just about convenience—they're about trust and reliability. When an AI assistant makes destructive changes without warning, introduces mocks into production code, or ignores established patterns, it becomes more of a liability than a help.
The goal should be to make developers more productive, not to create more work for them to fix the AI's mistakes.
"The best tool is one that gets out of your way and lets you work, not one that creates more problems to solve."
What Success Looks Like
Imagine a Cursor.ai that:
✅ Respects your codebase and follows existing patterns
✅ Asks before making destructive changes
✅ Never introduces mock code into production
✅ Provides real-time feedback on what it's doing
✅ Proactively identifies issues and suggests improvements
✅ Keeps you in sync with upstream changes
✅ Writes clean, maintainable code that follows best practices
This is the Cursor.ai we want. This is the Cursor.ai that will truly revolutionize software development.
Looking for Alternatives? Here Are 5 Options
While we hope Cursor.ai addresses these issues, developers have other options worth exploring:
1. GitHub Copilot
The original AI pair programmer. Integrates directly into VS Code and other IDEs. Great for code completion and suggestions, though it lacks the full codebase awareness that Cursor aims for.
✅ Mature product with strong backing
✅ Excellent code completion
❌ Less contextual awareness of full codebase
2. Cody by Sourcegraph
Enterprise-focused AI coding assistant with deep codebase understanding. Excels at explaining and navigating large codebases.
✅ Excellent codebase context and search
✅ Strong enterprise features
❌ Can be overkill for smaller projects
3. Continue.dev
Open-source AI code assistant that works with multiple LLMs. Highly customizable and privacy-focused.
✅ Open source and self-hostable
✅ Works with multiple AI models
❌ Requires more setup and configuration
4. Tabnine
Privacy-first AI code completion that can run entirely on your machine. Great for teams with strict data policies.
✅ Strong privacy controls
✅ Can run locally without cloud
❌ Less powerful than cloud-based alternatives
5. Windsurf by Codeium
Newer entrant focused on "flow state" coding with AI. Emphasizes context-aware suggestions and collaborative editing.
✅ Modern interface and UX
✅ Good balance of features and simplicity
❌ Relatively new, still maturing
Have your own issues with Cursor.ai? Share them in the comments below. Let's make this tool better together.