
I'm in the same camp, but in the end it turned out we were not putting it to the actual, hard, real-world test.

VSCode is very fast for me when I open it in the morning and am just starting my day.

But once I've opened the main project and seven supporting libraries' projects, and I'm on a video call in Chrome sharing my screen (something that eats CPU for breakfast), and I'm live-debugging a difficult-to-reproduce scenario while changing code on the fly, then the conditions are really in place to notice the differences between slow/heavy and fast/lightweight software.

Things like slowness in syntax highlighting, or jankiness when opening different files. Not to mention what happened when I wanted to show my colleagues the step-by-step debugging of the software.

In summary: our modern computers' sheer power is camouflaging poor software performance. The difference between using native apps and Electron apps is a much lower ceiling on how many things you can do at once on your machine, on how much heavy-load work your system can handle before it breaks.



> In summary: our modern computers' sheer power is camouflaging poor software performance. The difference between using native apps and Electron apps is a much lower ceiling on how many things you can do at once on your machine, on how much heavy-load work your system can handle before it breaks.

The same can be said of a lightweight web page versus a React SPA with tons of routers and a vdom. Maybe the page is fine when it is the only one open, but when other SPAs are also open, even typing becomes sluggish. Please don't use modern computers' sheer power to camouflage poor software performance. Always make sure the code uses as few resources as possible.


That brings to mind a Python "performance" talk I was recently listening to on YouTube. The first point the presenter brought up was that he thinks developers' laptops need to be more modern so that Python isn't so slow. I had to stop the video right there, because that attitude gets us nowhere.


You know what? I actually believe in having developers work (or at least test) on slower computers (when writing native apps) or with crippled networking (when doing web work), to force them to consider the real-world case of not being in a comfy office with top-notch computers and an ultra-high-bandwidth connection.


I agree with this approach. I used to always have hardware that was no more than two years old and mid-to-high spec. When I helped troubleshoot devices and internet connections for my family and extended family, I saw how normal people suffered on slow systems and networks. I've since switched to older devices and don't have gigabit internet at home. Every web and app designer should have to build or test with constraints.


I think dev containers can help here. You have a laptop that can run your editor and a browser. The actual build is done on a remote machine, so we're not kneecapping you by making you compile Kotlin on a mid-range machine, but your laptop still needs to be able to run the site.



I totally agree. However, I feel like this is ageism :-) Are you 40+, perhaps? :-)


Heheh, no. I'm in my 30s. My opinion comes from experience. I like to travel a lot, and several trips have brought me to places where a subpar connection is the norm. Waiting 30 seconds for the simplest bloatware-infested blog, one that doesn't even display text without JavaScript enabled, teaches you a thing or two about being judicious with technology choices.


This is giving me flashbacks to editors of yore: Emacs, "Eight Megabytes And Constantly Swapping". I remember reading almost exactly the same comments on Usenet in the 80s and 90s.


Flashbacks? It's 2024 and Emacs is still single-threaded.


It's also 2024 and you still can't share JavaScript objects between threads. Do not underestimate the horror that is tracing garbage collection with multiple mutator threads. (Here[1] is Guile maintainer Andy Wingo singing the praises of a new, simpler way to do it... in 2023, referring to a research paper from 2008 that he'd come across a year before that post.)

[1] https://wingolog.org/archives/2023/02/07/whippet-towards-a-n...
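
For the unfamiliar, here is a minimal Node sketch of what "can't share objects" means. The worker_threads module and its behavior are real; the file name is just for illustration. Plain objects passed to a worker are structured-cloned, while a SharedArrayBuffer is genuinely shared:

    // threads.mjs (run with: node threads.mjs)
    import { Worker, isMainThread, workerData } from "node:worker_threads";

    if (isMainThread) {
        const obj = { counter: 0 };               // plain objects get COPIED to workers
        const shared = new SharedArrayBuffer(4);  // raw bytes CAN be shared
        const view = new Int32Array(shared);
        const worker = new Worker(new URL(import.meta.url), {
            workerData: { obj, shared },          // obj is structured-cloned right here
        });
        worker.on("exit", () => {
            console.log(obj.counter);             // 0: the worker mutated its own copy
            console.log(Atomics.load(view, 0));   // 1: the shared memory really changed
        });
    } else {
        workerData.obj.counter = 1;               // touches the clone, not the original
        Atomics.store(new Int32Array(workerData.shared), 0, 1);
    }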


And it still performs better than vscode.


That's not entirely surprising. Emacs's UI is a character-cell matrix with some toolkit-provided fluff around it; VSCode's is an arbitrary piece of graphics. One of these is harder to render than the other. (Not harder in proportion to how much slower VSCode is, but still a hell of a lot.)


I use Emacs in text mode. It's super fast! But I've also never found VS Code slow, and that's with multiple large log files open at the same time.


Sure. Just allocate 10x the engineering resources and I can make it as fast and bug-free as you like.


Getting the same number of engineers, or possibly fewer, who actually care and know about performance could work. There's a reason applications are so much slower, relatively speaking, than they were in the 80s. It's crazy.


Anyone who believes this can prove it by taking down an existing popular product with a better-engineered, better-performing competitor built for the same cost.

I was using computers in the 80s. They did a very small fraction of what we ask them to do now and they didn't do it fast.


I have had to open the parent folder of all the different code bases I need in a single VSCode window, instead of having an individual window for each.

I much prefer having an individual window for each code base, but my laptop's 32GB of RAM is not enough for that.

If I were to run multiple instances of VSCode, then the moment I need to share my screen or run specs, some of them would start crashing due to OOM.


I don't notice much of a problem from multiple windows. I sometimes have a dozen going.

It's the language extensions in the windows that can cause me problems, e.g. rust-analyzer is currently using more than 10GB! If windows are just for reading code and I'm feeling some memory pressure, then I kill the language server / disable the extension for that window.

I have more problems with JetBrains. 64GB isn't enough for a dev machine to work on tens of MBs of code any more...


Like the sibling, I have no problem with keeping multiple windows open and I only have 16GB RAM (MacBook Pro). It must be language extensions or something like that.


It's a Prisoner's Dilemma. Since apps are evaluated in isolation, there is an incentive to use all the resources available so as to appear as performant as possible. There is a further incentive to be as feature-rich as possible, to appeal to the biggest audience reachable.

That this is detrimental to the overall outcome is not a coincidence; that's the nature of the dilemma.


There's no extra apparent performance in using Electron. A truly more performant solution will still be more performant under load from other applications.


The extra performance is on the side of the app's developers. They can use a technology they already know (the web stack) instead of learning a new one (e.g. Rust) or hiring somebody who knows it.


> In summary: our modern computers' sheer power is camouflaging poor software performance

I somewhat disagree. Features sell the product, not performance[1], and for most of software development's history you could count on the rising CPU tide to lift all poorly performing apps. But now the tide has turned to drought, and optimizing makes a hell of a lot of sense.

[1] It's more of a negative sell, and relative to other products at feature parity. No one left Adobe Photoshop for Paint, no matter how much faster Paint was. But you could if feature parity were closer, e.g. Affinity vs Photoshop.


Performance is a feature.


Yes, but more in a QoL way. I say "negative" as in: if you don't have it you lose a customer, rather than gaining one if you have it.

If performance is a feature, then it's not an important one. Otherwise, people would use Paint for everything.

Or to put it another way: you want to do task X1. It's editing a picture to remove some blemishes from skin. You could use a console to edit individual pixels, but it would take months or a year to finish the task if you are making changes blindly and then checking. It could take several days with Paint. Or you could do it in Photoshop in a few minutes. What difference do a few ms make if you'd otherwise lose hours?

Now, this is only task X1, editing blemishes; repeat this for every conceivable task and take the average. What percentage of each task is lost to those ms?


> if you don't have it you lose a customer, rather than gaining one if you have it

I completely agree with that take. That's exactly the reason why, for example, whenever I'm about to do some "Real Work" with my computer (read: heavyweight stuff), all Electron apps are the first to go.

My work uses Slack for communications, and it is fine sitting there for the most part, but I close it when doing demanding tasks because it takes an unreasonable amount of resources for what it is: a glorified chat client.


I use Slack (and Spotify) exclusively in the browser, because I need a browser open anyway. I've never encountered anything that required the desktop client.


Well, I think you are missing a subtle issue. They may not switch, but they might pay more if it's faster. They also might not switch to Paint, but if Photoshop performed terribly they might switch to a dozen different tools for different purposes. This kind of thing already happens.


Yeah, all I need to do to reliably show the drastic performance difference is open 5 different windows with 5 different versions of our monorepo. I frequently need to do that when e.g. reviewing different branches and, say, running some of the test suites or whatever — work where I want to leave the computer doing something in that branch, while I myself switch to reviewing or implementing some other feature.

When I start VS Code, it often re-opens all the windows, and it is slow as hell right away (on Linux with a 14900K, a fast SSD, and 64GB RAM, or on macOS on a Mac Studio M2 Ultra with 64GB RAM).

I'll save a file and it will be like jank...jank... File Save participants running with a progress bar. (Which, tbh, is better than just being that slow without showing any indication of what it is doing, but still.)

I've tried to work with it using one window at a time, but in practice I found it is better for my needs to just quit and relaunch it a few times per day.

I try Zed (and Sublime, and lapce, and any other purportedly performant IDE or beefed-up editor that I read about on this website or similar) every couple of months.

But VS Code has a very, very large lead in features, especially if you are working with TypeScript.

The remote development features are extremely good; you can sit at one workstation while all the actual work happens on remote Linux containers — builds and local servers, git, filesystem, shell. That also means you can sit down at some other machine and pick up right where you left off.

The TypeScript completion and project-wide checking are indeed way slower than we want them to be, but they're also a lot better than in any other editor I've seen (in terms of picking up the right completions, jumping to definition, suggesting automatic imports, and flagging errors). It all works in monorepos containing many different projects, without explicit config.

And then there are the extensions. I don't use many (because I suspect they make it even slower), but the few I do use I wouldn't want to be without (e.g. Deno, Astro, dprint). Whatever your sweet set is, the odds are there's a VS Code extension for it, but maybe not one for the less popular editors.

So there is this huge gravity pulling me back to VS Code. It is slow. It is, in fact, hella fucking slow. Like 100x slower than you want, at many basic day-to-day things.

But for me so far just buying the absolute fastest machine I can is still the pragmatic thing to do. I want Zed to succeed, I want lapce to succeed, I want to use a faster editor and still do all these same things — but not only have I failed so far to find a replacement that does all the stuff I need to have done, it also seems to me that VS Code's pace of development is pretty amazing, and it is advancing at a faster clip than any of these others.

So while it may be gated in some fundamental way on the performance problem, because of its app architecture, on balance the gap between VS Code and its competitors seems to be widening, not shrinking.


VSCode is very snappy for me on a less powerful machine, a Ryzen 3900 (Ubuntu, X11). I have a good experience running multiple instances, big workspaces, and 70+ actively used extensions, plus even more that I selectively enable when I want them. It's only the MS C# support that behaves poorly for me (intentional sabotage?!).

I wonder if you have some problem with your machine/setup? I'd investigate it; try some benchmarking. It's open source, so don't be afraid to look under the hood to see what's happening.

> I'll save a file and it will be like jank...jank... File Save participants running with a progress bar.

I don't see that at all. Saving is instant/transparent to me.

There is so much possible configuration that could cause an issue. E.g. if you have "check on save" from an extension, then you enter "JS jank land", where plugins take plugins that take plugins, all configured in files with dozens of options and weird rules that change format every 6 months. E.g. your linter might take plugins from your formatter, your test framework, your UI test framework, your hot-reload framework, your bundler, your transpile targets...

If saving is really slow, then I would suspect something like an extension wandering around node_modules. Probing file access when you see the jank might reveal that.
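
If you want numbers rather than vibes, here's a quick sketch of a throwaway extension that times each save. The two workspace events are the real VS Code API; the wiring around them is just an illustration:

    // extension.ts: rough save-latency timer (assumes one save in flight at a time)
    import * as vscode from "vscode";

    export function activate(context: vscode.ExtensionContext) {
        let start = 0;
        context.subscriptions.push(
            // fires before save participants (formatters, code actions) run
            vscode.workspace.onWillSaveTextDocument(() => { start = Date.now(); }),
            // fires once the document has actually been saved
            vscode.workspace.onDidSaveTextDocument((doc) => {
                console.log(`save of ${doc.fileName}: ${Date.now() - start}ms`);
            }),
        );
    }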


I have that kind of fast, smooth experience with VS Code too, but only when I open my small hobby monorepo, or when I don't leave it open all day. When I open a big work monorepo (250k files, maybe 10GB in size, or 200MB when you exclude all the node_modules and cache dirs), it isn't slow instantly, but it becomes slow after "a while" — an hour, or two.

I do actually benchmark it regularly and test with no/minimal extensions, because I share responsibility for tooling for my team, but the fact that it takes an hour or two to repro makes that sort of testing too cumbersome. (We don't mandate any specific editor, either, but most of my team uses VS Code, so I am always trying to help solve pain points if I can.)

And it's not just the file saves that become slow; it's anything, or seemingly so. Like building the auto-import suggestions, or jumping to a definition by ⌘-clicking a symbol. Right after launch, it's snappy. After 2-3 hours and a couple hundred files having been opened, it's click, wait, wait... jump.

Eventually, even typing will lag or stutter. Quitting and restarting it brings it back to snappy-ish for a while.

It is true that maybe we have some configuration that I don't change, so even with no or minimal extensions, there might be something about our setup that triggers the problems. Like, we have a few settings defined at the monorepo root. But very few:

    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {},
But before you think "aha, the formatter!", know that I have tried every formatter under the sun over the past 5 years. (Because Prettier gave my team a lot of problems. Although we now use it again.)

We have a huge spelling dictionary. I regularly disable the spelling extension, though. But what if there were an edge-case bug where having more than 1000 entries in your "cSpell.words" caused a memory leak on every settings lookup, even when the extension wasn't running? I mean... it's software, anything is possible.

But I suspect it is the built-in support for TypeScript itself: as you work with a very large number of files, it has to build out a model of the connections between apps and libs, and that just causes everything to slow down.

But then, like I mentioned, nothing else I've seen quite has the depth of TypeScript support. Or the core set of killer features (to us), which is mainly the remote/SSH stuff for offloading the actual dev env to some beefy machine down the hall (or across the globe).

To us, these things are worth just having to restart the app every few hours. It's kinda annoying, sure, but the feature set is truly fantastic.


> Eventually, even typing will lag or stutter. Quitting and restarting it brings it back to snappy-ish for a while.

Hmm. I've not experienced that. Something is leaking, which can be identified and fixed. There are quick things you could do to narrow it down, e.g. restart the extension host or the language server, or kill background node processes, etc.

I generally have it running for weeks... although I do have to use "reload window" for my biggest/main workspace fairly often, because rust-analyzer debugging gets screwed up and that's the quickest fix from a keyboard shortcut. I may not be seeing your issue for other reasons :)

FWIW I can recommend "reload window" because it only applies to the instance you have a problem with and restores more state than quit/restart, e.g. your terminal windows and their content, so it's not intrusive to your flow.

> but the fact that it takes an hour or two to repro makes that sort of testing too cumbersome

Yeah, I know what you mean. I now schedule time each day for "sharpening my tools" and make a deliberate effort to fix issues and create PRs for pain points. I used to live with problems for way too long because "I didn't have time". It's not a wall-clock productivity win, but the intangibles (enjoying the tools more, less pain, feeling in control, learning from other projects) are making me happy.


It's too bad VSCode doesn't "hydrate" features on an as-needed basis. Imagine it opens by default with just text editing and syntax highlighting, and you opt in to all the bells and whistles, as you need them, with a keystroke or a click.
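
Something like the sketch below, presumably. The idea is just lazy loading at the feature level; the module names and shapes here are entirely hypothetical, not VS Code's actual loading code:

    // Hypothetical lazy "hydration": heavy features load on first use, not at startup.
    type Feature = { dispose(): void };

    const loaders: Record<string, () => Promise<Feature>> = {
        // a dynamic import defers parsing and executing the module until requested
        typescript: async () => (await import("./features/typescript.js")).activate(),
        debugging:  async () => (await import("./features/debugging.js")).activate(),
    };

    const active = new Map<string, Promise<Feature>>();

    // the first keystroke/click triggers the load; later calls reuse the same instance
    function enable(name: string): Promise<Feature> {
        let feature = active.get(name);
        if (!feature) {
            feature = loaders[name]();
            active.set(name, feature);
        }
        return feature;
    }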



