What's the deal with Cursor? I see a lot of praise, but I found it pretty underwhelming

I’ve used Claude and Chad Jupiter before and found them helpful for specific tasks:

  1. Identifying an algorithm I can’t recall the name of.
  2. Getting help training a model without deep machine-learning knowledge; they can sketch a learning path, even if it's not optimal.

However, I think they are inefficient overall because reviewing their generated code often takes more brain power than writing my own.

Having seen Cursor mentioned as top-tier, I tried the free version, and it turned out to be quite disappointing.

For example, I prompted:

    # I need some Elixir code that aligns characters or groups of characters in pairs of strings. Here’s an example:
    # Input:
    # ```
    # abc,LlmNN
    # ccb,NNNNm
    # ac,LlNN
    # bab,mim
    # ```
    # Output:
    # ```
    # a b c,Ll m NN
    # c c b,NN NN m
    # a c,Ll NN
    # b a b,m i m
    # ```

I didn’t craft the perfect prompt, so that may be an issue, but in comparison:

  • Claude gave runnable code but didn’t solve my request.
  • Chad Jupiter attempted but faltered.
  • Cursor? It didn’t seem to grasp the task at all.

It produced a function that merely read a file and split it on newlines, which isn't even close to what I asked for. Is this the standard for Cursor, or am I missing something?

Here’s Claude’s flawed code output:

    defmodule CharacterAligner do
      # ... long code here
    end

    # Example usage
    input = """
    abc,LlmNN
    ccb,NNNNm
    ac,LlNN
    bab,mim
    """
    result = CharacterAligner.process(input)
    IO.puts(result)

The returned output was:

    a L b l c m
    c N c N b N
    a L c l
    b m a i b m

I expected:

    a b c,Ll m NN
    c c b,NN NN m
    a c,Ll NN
    b a b,m i m

What gives?

You’re using it the wrong way. Cursor’s autocomplete is designed to complete your existing code as you type. For generating new code or adding functions to existing files, use the sidebar chat, and the changes get applied automatically.

Your experience seems tied to the LLM rather than Cursor itself. Cursor excels at autocompletion.

Teegan said:
Your experience seems tied to the LLM rather than Cursor itself. Cursor excels at autocompletion.

I’m using whatever the default is on Cursor Pro. It feels a lot like Tabnine, but it only completes lines without doing much more.

What do you mean by ‘nothing to do with Cursor’? I’m using it as advertised.

@Kiran
The default LLM is Claude 3.5 Sonnet, so it gives answers similar to what you’d get on Anthropic’s own site. Cursor’s chat is a wrapper around those LLMs; if you’re dissatisfied with the output, that’s a broader LLM issue, not something specific to Cursor.

@Teegan
Cursor, like Copilot chat, provides additional context from your coding environment, which can influence responses from the AI.

@Teegan
I finally found that Cmd+K is where I should enter prompts, rather than typing them as comments in the editor. However, the responses were just scaffolding with placeholders, which isn’t what I needed. I still find the autocomplete unhelpful; I hope the service improves.

@Kiran
Autocomplete becomes useful once you’ve written some code for it to work from, though it may not fit your workflow.

@Kiran
The autocomplete excels at refactoring, helping maintain consistency across files.

I think Cursor’s praise predominantly comes from less experienced developers who are not yet tackling complex software-building.

Sterling said:
I think Cursor’s praise predominantly comes from less experienced developers who are not yet tackling complex software-building.

Curiously, I’ve got 20 years of experience, and I find I can build complex software quicker than before thanks to AI tools. Your approach might not be suitable for all users.

@Zev
You might be overestimating the complexity of your coding tasks.

Issues in code generation often stem from not articulating the concepts appropriately or lacking reference material. Preparation helps you get better results from Cursor.

Rory said:
Issues in code generation often stem from not articulating the concepts appropriately or lacking reference material. Preparation helps you get better results from Cursor.

Does all that prep really save enough time to be worth it?

@Van
Absolutely. I leverage AI for boilerplate, generate components quickly, and check edge cases. The output saves significant time compared to doing it all manually.

Rory said:
@Van
Absolutely. I leverage AI for boilerplate, generate components quickly, and check edge cases. The output saves significant time compared to doing it all manually.

The evolving usage of these tools makes coding more efficient, especially with repetitive tasks.

Your example is exactly the kind of problem LLMs struggle with: character-level pattern recognition is hard for them because they process text as multi-character tokens rather than individual characters. They perform much better when the instructions are explicit. After a decade of coding, I now generate most of my code and am highly productive.
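
For an alignment task like this, I’d honestly skip the LLM and write a small backtracking search. Here’s a rough Elixir sketch under my own reading of the task (one right-hand group per left-hand character, with repeated left characters forced to receive identical groups). It resolves the repeated-character lines, but lines whose left characters are all distinct stay ambiguous without cross-line information, so this isn’t a complete solution:

```elixir
defmodule AlignSketch do
  # Segment `right` into one group per character of `left`, requiring
  # repeated left characters to map to identical groups. Returns the
  # first valid segmentation found (trying longer groups first), or nil.
  def align(left, right) do
    solve(String.graphemes(left), right, %{})
  end

  defp solve([], "", _map), do: []
  defp solve([], _leftover, _map), do: nil

  defp solve([c | cs], rest, map) do
    case Map.fetch(map, c) do
      {:ok, group} ->
        # This left char was seen before: the same group must repeat here.
        if String.starts_with?(rest, group) do
          case solve(cs, String.slice(rest, String.length(group)..-1//1), map) do
            nil -> nil
            tail -> [group | tail]
          end
        else
          nil
        end

      :error ->
        # Leave at least one character of `rest` for each remaining left char.
        max_len = String.length(rest) - length(cs)

        if max_len < 1 do
          nil
        else
          Enum.find_value(max_len..1//-1, fn len ->
            group = String.slice(rest, 0, len)

            case solve(cs, String.slice(rest, len..-1//1), Map.put(map, c, group)) do
              nil -> nil
              tail -> [group | tail]
            end
          end)
        end
    end
  end
end

# Repeated left characters pin down the segmentation:
IO.inspect(AlignSketch.align("bab", "mim"))   # ["m", "i", "m"]
IO.inspect(AlignSketch.align("ccb", "NNNNm")) # ["NN", "NN", "m"]
# All-distinct lines are ambiguous, so longest-first just picks one:
IO.inspect(AlignSketch.align("abc", "LlmNN")) # ["Llm", "N", "N"]
```

To get the exact grouping in your expected output you’d additionally need to propagate constraints across lines (e.g. learn c → NN from `ccb,NNNNm` and reuse it for `abc,LlmNN`), which is where the real difficulty of the task lives.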

@Scout
Debugging generated code can be challenging.

I find Cursor excellent for autocomplete and debugging assistance but wouldn’t use it for general programming queries.

The # characters in your prompt might confuse the model, since in Markdown they denote headings. Try structuring your request without them, or provide more examples.