First, thank you for sharing this helpful link, but LOL at needing to use a third-party server to search plain text data that could fit in RAM (at least on this developer's machine). JavaScript- and JSON-based developer tooling is a terrible idea.


I'm sure it can do a plaintext search just fine. What the author is talking about is language-aware features like "go to definition". Holding an entire web browser's C++ parse tree in memory is a lot bigger ask than just holding its plain text.


You're assuming that the only way to go to the definition of an identifier is to 1) parse the entire source tree, 2) keep the entire source tree in memory, and 3) use that in-memory tree to jump to the definition. If you accept those constraints and then implement it in a slow language, then yes, it won't work.


You’re rather underestimating the vast expanse that is the Chromium codebase, I think. That said, distributing tags files was a common thing once, and a dedicated symbol package you could just download and feed into your language server (instead of being constantly tethered to a symbol server) could make for a nice affordance today.
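For reference, the old tags-file workflow was roughly this (using universal-ctags; the paths and the symbol name are made-up examples):

    # build a tags file for the whole checkout once; slow, but it's a one-off
    ctags -R --languages=C++ -f chromium.tags src/
    # later, resolve a symbol without parsing anything or holding an AST in memory
    readtags -t chromium.tags RenderFrameHost

The lookup is just a binary search in a sorted text file, which is why it stays fast even on a Chromium-sized checkout.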


So this actually exists, as it turns out. You can run clangd-indexer to produce a static index and then load it with clangd -index-file. The caveat is that

> [r]unning clangd-indexer is expensive and produced index is not incremental.
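If memory serves, the docs show something like this (paths are placeholders, and you need a compile_commands.json for the checkout first):

    # one-off, expensive: index every translation unit in the compilation database
    clangd-indexer --executor=all-TUs compile_commands.json > chromium.idx
    # then point clangd at the prebuilt index
    clangd -index-file=chromium.idx

For a Chromium-sized tree that first step is the expensive part the docs warn about, and since the index isn't incremental it goes stale as the tree changes.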


It's not searching plaintext though.

VSCode itself can deal with large amounts of text being thrown at it; this will be down to some of the language server stuff.


I'm glad VSCode handles large text files better for you than it does for me. Editing anything large in VSCode slows it to a crawl on my machines.


Interesting.

The only time I had problems was with very big JSON files that had everything on one line, but prettifying fixed that. (Or was it XML? I forget. One of those.)
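If anyone hits the same thing, one quick way to prettify a single-line blob before opening it (file names are just examples):

    python3 -m json.tool big.json > pretty.json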



