
[Feature]: Add symbol indexing (classes, methods, variables, etc.) for prompt referencing using the @ tag. #2386

@cironolaenvision

Description


Before submitting

  • I searched existing issues and did not find a duplicate.
  • I am describing a concrete problem or use case, not just a vague idea.

Area

apps/desktop

Problem or use case

Although the file referencing is quite powerful (much better than Claude Code, by the way), I miss a way to reference symbols inside the code, in the same way Cursor does.

Proposed solution

The solution will probably be per-language, written with good abstractions so that people in the community can contribute "symbol stream" providers for a given language.
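To make the "symbol stream provider" idea concrete, here is a minimal sketch of what the common abstraction could look like. Every name in it (`CodeSymbol`, `SymbolStreamProvider`, `registerProvider`) is a hypothetical illustration, not the project's actual API:

```typescript
// Hypothetical symbol kinds; a real provider would likely cover more.
type SymbolKind = "class" | "method" | "function" | "variable" | "interface";

// Hypothetical record produced by a provider for each symbol it finds.
interface CodeSymbol {
  name: string;
  kind: SymbolKind;
  filePath: string;
  startLine: number; // 1-based, inclusive
  endLine: number;   // 1-based, inclusive
}

// Hypothetical contract each per-language provider would implement.
interface SymbolStreamProvider {
  /** Language ids handled, e.g. ["typescript", "tsx"]. */
  languages: string[];
  /** Lazily yield symbols so indexing can run in the background. */
  streamSymbols(filePath: string, source: string): AsyncIterable<CodeSymbol>;
}

// Simple registry so community providers can plug themselves in.
const providers = new Map<string, SymbolStreamProvider>();

function registerProvider(provider: SymbolStreamProvider): void {
  for (const lang of provider.languages) providers.set(lang, provider);
}
```

A TypeScript observer and a .NET observer would then just be two implementations of the same interface, registered for their respective language ids.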

I would start with:

  1. Create the common abstraction and the background execution of the asynchronous symbol-parsing stream.
  2. Create the main observers to start with. My specific use case is React/TypeScript and .NET.
  3. Plug the symbol index into the already implemented file index.
  4. Adapt the UI to display symbols with different icons (I'm not quite the best UI/UX guy).
  5. Create rich documentation so others can contribute symbol stream observers for other languages.
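The background-execution part of step 1 could look roughly like this. `IndexedSymbol`, the batch size, and the `Map`-based index are assumptions made for the sketch:

```typescript
// Hypothetical symbol record; a real index entry would carry more fields.
interface IndexedSymbol { name: string; file: string; line: number }

// Consume a provider's async symbol stream in batches, yielding back to the
// event loop between batches so the UI stays responsive.
async function indexSymbols(
  stream: AsyncIterable<IndexedSymbol>,
  index: Map<string, IndexedSymbol[]>,
  batchSize = 64,
): Promise<void> {
  let batch: IndexedSymbol[] = [];
  for await (const sym of stream) {
    batch.push(sym);
    if (batch.length >= batchSize) {
      flush(batch, index);
      batch = [];
      // Let pending UI work run before processing the next batch.
      await new Promise<void>((resolve) => setTimeout(resolve, 0));
    }
  }
  flush(batch, index); // remaining partial batch
}

function flush(batch: IndexedSymbol[], index: Map<string, IndexedSymbol[]>): void {
  for (const sym of batch) {
    const list = index.get(sym.name) ?? [];
    list.push(sym);
    index.set(sym.name, list);
  }
}
```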

Why this matters

  1. As you know, programmers think much more in terms of the entities inside their solution than in terms of the files themselves; this is my opinion, at least. Of course, when you follow good standards you end up with one file per class, but a class can have multiple methods, overrides, variables, etc. And it is not always the case that you have one file per class.

  2. It makes the @ tag much more magical: it helps programmers type what they need based on what they remember about the code, and makes it easier and faster to reference the relevant parts of the code for the LLM.

  3. It allows the application to send references to snippets of code to the LLM instead of the whole file. I understand that the contents of files referenced in the prompt are not sent to the LLM when I click send; the LLM must then grep its way to the relevant snippet. With the proposed solution, line ranges would be sent instead, making the LLM's job much easier and faster.
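For point 3, a symbol reference only needs a file path plus a line range. The shape below is a hypothetical illustration of what could be sent instead of whole-file contents (`SymbolRef`, `formatRef`, and `extractSnippet` are made-up names):

```typescript
// Hypothetical reference: enough for the LLM to jump straight to the symbol.
interface SymbolRef {
  file: string;
  symbol: string;
  startLine: number; // 1-based, inclusive
  endLine: number;   // 1-based, inclusive
}

// Render a compact reference string for the prompt.
function formatRef(ref: SymbolRef): string {
  return `${ref.file}#${ref.symbol} (L${ref.startLine}-L${ref.endLine})`;
}

// Cut the referenced lines out of a file's source text.
function extractSnippet(source: string, ref: SymbolRef): string {
  return source.split("\n").slice(ref.startLine - 1, ref.endLine).join("\n");
}
```

Whether the app sends the formatted reference, the snippet itself, or both would be a design decision; either way the LLM no longer has to search the whole file.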

Smallest useful scope

In my case this is not a problem; it is a new feature.

However, the minimum scope would be:

  1. Create the common abstraction and the background execution of the asynchronous symbol-parsing stream.
  2. Start with the TypeScript observer using: https://github.com/microsoft/typescript/wiki/using-the-language-service-api
  3. Plug the symbol index into the already implemented file index.
  4. Adapt the UI to display symbols with different icons (I'm not quite the best UI/UX guy).
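As a sketch of step 2: the linked Language Service API is the fuller route, but even a plain AST walk with the TypeScript compiler API can pull out symbols and their line ranges. The `TsSymbol` output shape is an assumption, not an existing API:

```typescript
import * as ts from "typescript";

// Illustrative output shape, not an existing API.
interface TsSymbol { name: string; kind: string; startLine: number; endLine: number }

// Walk a source file's AST and collect named classes, functions, and methods
// together with their (1-based) line ranges.
function collectSymbols(fileName: string, source: string): TsSymbol[] {
  const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
  const out: TsSymbol[] = [];
  const visit = (node: ts.Node): void => {
    let name: string | undefined;
    let kind = "";
    if (ts.isClassDeclaration(node) && node.name) {
      name = node.name.text; kind = "class";
    } else if (ts.isFunctionDeclaration(node) && node.name) {
      name = node.name.text; kind = "function";
    } else if (ts.isMethodDeclaration(node) && ts.isIdentifier(node.name)) {
      name = node.name.text; kind = "method";
    }
    if (name) {
      out.push({
        name,
        kind,
        startLine: sf.getLineAndCharacterOfPosition(node.getStart(sf)).line + 1,
        endLine: sf.getLineAndCharacterOfPosition(node.getEnd()).line + 1,
      });
    }
    ts.forEachChild(node, visit);
  };
  visit(sf);
  return out;
}
```

A real observer would cover variables, interfaces, enums, etc., and would reuse the project's language service program instead of re-parsing files.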

Alternatives considered

I don't see another workaround.

Risks or tradeoffs

  1. CPU and memory consumption of the background service. The good news is that most symbol parsers are already asynchronous, and they do most of the job.
  2. Keeping the already implemented files/folders autocomplete at the same speed and quality.
  3. To be honest, I didn't go through the code to check what data structure you are using to build the file index. However, since adding symbols will grow the searchable space, I would implement a ternary search tree or a splay tree (the latter is good because it will surface recently touched entries first).
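As a sketch of the ternary search tree option from point 3, here is a minimal version supporting the prefix lookups an @-tag autocomplete needs (illustrative only; the splay-tree variant and any recency ordering are not shown):

```typescript
// One node per character; left/right order siblings, mid descends the word.
class TstNode {
  left?: TstNode;
  mid?: TstNode;
  right?: TstNode;
  isWord = false;
  constructor(public ch: string) {}
}

class TernarySearchTree {
  private root?: TstNode;

  insert(word: string): void {
    if (word.length === 0) return;
    this.root = this.insertAt(this.root, word, 0);
  }

  private insertAt(node: TstNode | undefined, word: string, i: number): TstNode {
    const ch = word[i];
    if (!node) node = new TstNode(ch);
    if (ch < node.ch) node.left = this.insertAt(node.left, word, i);
    else if (ch > node.ch) node.right = this.insertAt(node.right, word, i);
    else if (i < word.length - 1) node.mid = this.insertAt(node.mid, word, i + 1);
    else node.isWord = true;
    return node;
  }

  /** All stored words starting with `prefix`. */
  withPrefix(prefix: string): string[] {
    const out: string[] = [];
    let node = this.root;
    let i = 0;
    // Walk down to the node matching the last prefix character.
    while (node && i < prefix.length) {
      const ch = prefix[i];
      if (ch < node.ch) node = node.left;
      else if (ch > node.ch) node = node.right;
      else {
        if (i === prefix.length - 1) break;
        node = node.mid;
        i++;
      }
    }
    if (!node || i !== prefix.length - 1) return out;
    if (node.isWord) out.push(prefix);
    this.collect(node.mid, prefix, out);
    return out;
  }

  // In-order traversal below a prefix node, emitting completed words.
  private collect(node: TstNode | undefined, prefix: string, out: string[]): void {
    if (!node) return;
    this.collect(node.left, prefix, out);
    if (node.isWord) out.push(prefix + node.ch);
    this.collect(node.mid, prefix + node.ch, out);
    this.collect(node.right, prefix, out);
  }
}
```

Lookups then stay proportional to the prefix length plus the number of matches, which is what keeps autocomplete snappy as the searchable space grows.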

Examples or references

(image attached)

Contribution

  • I would be open to helping implement this.


Labels

enhancement (Requested improvement or new capability), needs-triage (Issue needs maintainer review and initial categorization)
