The JS ecosystem definitely has a big problem inherited from the IE6-era culture, when people wrote so many packages to work around the limited language and runtime. But React does have part of the blame here. The way it’s designed forces everything into its proprietary model instead of web standards, so you end up with tons of components duplicating other projects but in React, or providing shims for those projects. Facebook’s big devrel push prioritized getting started quickly on a proof of concept rather than maintaining a larger app, so you had things like Create React App adding nearly 40k dependencies before you had written a single line of code. And the culture of favoring JavaScript over built-in browser functionality (which made some sense in the 2000s when you had users stuck with IE6) means that you’re doing a lot of work in runtime JavaScript rather than in the browsers’ heavily-optimized C++ – and it’s often hard to change that because it’s not a direct dependency but a nested chain.
This is also why it’s slow and memory hungry: it’s not just the inherent inefficiency of the virtual DOM but also that having such a deep dependency tree makes it hard to simplify – and since interoperability makes it cheaper to switch away, framework developers have conflicting incentives about making that easier.
Same (I started writing JavaScript when it was called LiveScript in the Netscape betas), and I remember how the vDOM hype was conspicuously short on rigorous benchmarks – people would compare it to heavyweight frameworks that did things like touch elements repeatedly or run innerHTML cycles, and say it was fast.
More specifically, they would use the default font, which IE in particular had set to Times New Roman, so that is what most people saw. To add insult to injury, there was no way to configure it for a very long time.
To this day I wonder if this particularly strange choice of a serif font that is very clearly intended primarily for printed documents rather than on-screen legibility is why this entire notion of using user-selected fonts for web pages has largely withered. What if they went with, say, Verdana instead?
Same, except my web backgrounds were in sepia. I had one of those old sepia monochrome monitors, so no grey for me. Or colors for that matter.
I even made my first website on that monitor (complete with animated gifs and <blink>, of course) - and seeing it finally on a color monitor was... interesting.
I was around for DHTML days, and as I recall, it was just a generic term for the ability to manipulate the actual (not virtual) DOM programmatically from JS.
Can’t help being sarcastic: I have seen a couple of “I ditched React” post-mortems that apparently start with “I decided to stop adding poorly vetted dependencies with poor package maintenance practices”, just worded differently.
It is unsurprising to me if the router library is the first accused. When I was starting a new project using React, I went through a bunch of router libraries. There are tons of them; it seems like low-hanging fruit, with many implementations and many people trying to make a living off theirs (can’t blame them for it, unless they make changes for the sake of making changes and to incentivise people to pay for support). Ultimately, I found something off in every one, so I… just decided to not use any!
That is the thing: React is a small rendering library[0] and you are free to build whatever you want around it with as many or as few dependencies as you want. If the ecosystem is popular enough, there will be dependency tree monsters (simply because the ecosystem is extensive and using many dependencies allows package authors to make something impressive with less effort); switching to a less popular ecosystem as a way of dealing with that seems like a solution, but a bit of a heavy-handed one.
[0] Though it does seem to suffer from a bit of feature creep under Vercel (RSC and all that), it is still pretty lean and, as pointed out, has two packages total in its dependency tree (some might say that’s two too many, but it is a far cry from dependency hell).
Sorry, can't agree. React is a state management library that also implements efficient rendering on top of the DOM diff it computes as it propagates the state changes.
This allows React apps to remain so simple (one mostly linear function per component) and so composable without turning into an unmanageable dish of callback / future spaghetti.
There are a number of other VDOM libraries, but what sets React apart is the data / state flow strictly in one direction. This allows you to reap many of the benefits of functional programming along the way, like everything the developer sees being immutable; not a coincidence.
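To make the “one mostly linear function per component” shape concrete, here is a minimal sketch (the component is invented for illustration): state lives in the component, flows down into the markup, and changes flow back up through a single callback – one direction only.

```tsx
import { useState } from "react";

// State flows down into the markup; changes flow back up via setCount.
function Counter({ label }: { label: string }) {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount((c) => c + 1)}>
      {label}: {count}
    </button>
  );
}
```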
Regarding the size, preact [1] is mostly API-compatible, but also absurdly small (3-4 kB minified), actually smaller than HTMX (10 kB). But with preact you likely also want preact-iso, so the size grows a little bit.
> implements efficient rendering on top of the DOM diff
Here your definition of React diverges from reality, I believe.
React does implement state management – it sort of has to, to be of any use. It is a flavour of immediate mode GUI, and some way of managing what changes and what does not change between renders is necessary.
However, React famously does not know about DOM or even Web. (Unless you meant DOM in some more general sense?) People use React to make command-line interfaces, output to embedded LCD screens, etc.
Yes, coupling it to Web and DOM and abandoning the separation of concerns that makes core React so general-purpose could probably make React a bit smaller, but I think projects like Preact are welcome to do it instead.
You are right, and my calling it imGUI is a stretch, since it actually does rely on state to know when to render.
If you think about it, the end goal is rendering, and React facilitates that, but the actual rendering (as in, changing page DOM, terminal buffer, etc.) happens outside of React. So my definition may need to be adjusted.
I still think it’s a stretch to call React a framework, though!
Which always seemed a bit ironic, considering those libraries are presumably intended to make state management work better if you really have to do a lot of it (i.e., have a large product).
At MPOW we have a bunch of state-managing components for complex cases, like multi-page forms or API-backed grids with tons of filters, etc. They take some effort to get the hang of, but after that they save large amounts of boilerplate.
React comes with useState, useMemo, and useCallback, which is actually enough, but it may be too low-level when you think e.g. in terms of a huge interactive form. It's easy to write your own useWhatever based on these which would factor out your common boilerplate.
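For instance, a sketch of such a useWhatever (the name and shape are hypothetical) that bundles a form field’s value, change handler, and dirty flag, built only on useState and useCallback:

```tsx
import { useCallback, useState } from "react";
import type { ChangeEvent } from "react";

// One hook per field instead of repeating this trio for every input.
function useField(initial: string) {
  const [value, setValue] = useState(initial);
  const [dirty, setDirty] = useState(false);

  const onChange = useCallback((e: ChangeEvent<HTMLInputElement>) => {
    setValue(e.target.value);
    setDirty(true);
  }, []);

  return { value, onChange, dirty };
}

// Usage inside a component:
//   const name = useField("");
//   return <input value={name.value} onChange={name.onChange} />;
```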
I suspect HTMX also does not come with every possible battery included, judging by the proliferation of libraries for HTMX-based projects. Modularity is a strength.
It is enough in the sense that NAND gates are enough to build computers. Yes, you can write a complex application using only those, and yes, it’s easy to write hooks to keep the boilerplate low – although Context still feels like too much boilerplate – but as complexity grows it’s quite natural to want to share state between distant parts of the application, and then you’re left choosing between lifting state and prop drilling (not great) or Context and massive, frequent re-renders. Hence the need for third-party state management solutions.
Are you sure contexts causing rerenders is not solvable by moving useContext() calls into hooks that each return only the part of the context that is required and ensure reference equality of returned value (meaning any component will not have a reason to rerender unless there is an actual change in that part of the context)?
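Roughly what I have in mind, as a sketch (the context shape and hook names are made up; the hooks return primitives, so reference equality comes for free):

```tsx
import { createContext, useContext } from "react";

type AppState = { user: { name: string }; cart: { items: string[] } };
const AppContext = createContext<AppState | null>(null);

// Narrow hook: consumers depend on one slice, not the whole context object.
function useUserName(): string {
  const ctx = useContext(AppContext);
  if (!ctx) throw new Error("useUserName must be used inside AppContext");
  return ctx.user.name; // a primitive, stable unless the name itself changes
}

function useCartCount(): number {
  const ctx = useContext(AppContext);
  if (!ctx) throw new Error("useCartCount must be used inside AppContext");
  return ctx.cart.items.length; // again, a primitive slice
}
```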
I thought about it more and my reasoning is as follows (I may be wrong):
— useContext() returns an object. This object is new any time anything in the context changes. If the object has a nested sub-object, it may be new as well even if it did not change, though I suppose it may depend on how context provider works.
— All components where you use that context therefore will render any time it changes. (Takeaway 1: apply loose coupling & high cohesion principle to contexts, such that if you use the context and it changes in any way there is a high chance the change is relevant to wherever you use the context.)
— The render at that stage may be fine[0], especially if contexts are nicely organized, but care is needed because a downstream child that receives a nested sub-object from the context may render as well even if the sub-object is unchanged but referentially new (unless the child is wrapped in memo() and the memo handles reference equality, which may well be what you meant). (Takeaway 2: always remember that JavaScript is full of pointers and referential equality is important in React.)
— However, if part of the context is useMemo()ed for reference equality before being passed to a child, then the child will not have a reason to render from other unrelated context changes.
[0] It may make sense not to use context in large numbers of downstream leaf components (e.g., not use it in an item rendered in a map, but use it in parent list and pass relevant props to list items).
This may be frustrating to deal with in a large project, but it may be that the effort put into organizing contexts strategically and using them with care would lead to a more solid, refactorable and reusable architecture compared to state sprinkled around the place as essentially an equivalent of global variables. It depends.
Not sure – assuming a change in context through useContext() directly counts as a state change, and memo does not prevent re-renders on state changes…
Generally, re-renders should not be a problem (assuming nothing changed for this component, it is a no-op as far as its DOM is concerned), but that is a separate issue, I suppose. I did have to worry about re-renders on a few occasions (and it never feels great when you put effort into memoing each prop for reference equality, but something still causes a rerender from within).
This is my main complaint about React – the “just don’t worry about rerenders!” model works well until it really does not, but then you’re left with very little help from the tooling to understand and fix it: “why did this render happen” is still a surprisingly difficult question to answer, and if you really want to take control of this you have to very carefully micromanage useCallbacks, useMemos, memo(), probably lie about your useEffect dependencies, check every single hook, and hope that your dependencies do the same. In the words of Ben Lesh[1]: React is not a pit of success.
That said, I fear your solution would not work - your usePartOfTheContext() would re-render every time the useContext() inside did, not helping with avoiding re-renders. But if you only passed the part of context to descendants that use memo(), it _should_ work. Having children of context providers always use memo() is probably a good rule of thumb.
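Something like this sketch (names invented): the component that reads the context takes the re-render, while the memo()ed descendant is shielded as long as the slice it receives is unchanged.

```tsx
import { createContext, memo, useContext } from "react";

type Settings = { theme: string; locale: string };
const SettingsContext = createContext<Settings>({ theme: "light", locale: "en" });

// memo() skips the render whenever the incoming props are shallowly equal.
const ThemeBadge = memo(function ThemeBadge({ theme }: { theme: string }) {
  return <span>{theme}</span>;
});

function Toolbar() {
  // Toolbar itself re-renders on every SettingsContext change...
  const { theme } = useContext(SettingsContext);
  // ...but ThemeBadge only re-renders when `theme` actually changed.
  return <ThemeBadge theme={theme} />;
}
```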
This uncertainty is why I find it much more productive to just slap shared state inside Jotai, so I can be reasonably certain that rerenders will have the smallest granularity without any more work.
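For illustration, a minimal Jotai sketch (the atom is made up): any component reading an atom re-renders only when that atom’s value changes, with no provider nesting or memo() gymnastics.

```tsx
import { atom, useAtom } from "jotai";

// Shared state lives outside the component tree.
const filterAtom = atom("");

function FilterBox() {
  // Only components that read filterAtom re-render when it changes.
  const [filter, setFilter] = useAtom(filterAtom);
  return <input value={filter} onChange={(e) => setFilter(e.target.value)} />;
}
```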
I am very hopeful about the compiler, which should help a lot with this, freeing up a lot of mental bandwidth, but also about useEffectEvent(), which will finally make useEffect sane.
> your usePartOfTheContext() would re-render every time the useContext() inside did, not helping with avoiding re-renders
If a hook returns the same value with a stable reference across renders, and it is passed as a prop to some downstream components, it does not matter whether the hook itself uses context or not: for the downstream components, the prop did not change and no render is triggered.
Thanks, I was not correct. Still, in my experience the impact from failing to ensure referential equality of props is usually a root cause of many issues.
If you make sure prop references are stable as early as possible, then if you run into poor performance you can always just wrap components in memo() (or some would just memo() all the things by default) – but you may not even need it, because renders also get cheaper when dependency diffing is effective and every hook does less work.
If prop references are unstable, things get messy in many ways.
In reality, though, making the case for using only the React library in a minimalist setup is just as hard with a team full of people who came up in the past 10 years of front-end development as convincing them to use HTMX or web components. Nowadays, when people use React they use the whole kit and caboodle, and when they say React they mean all of it.
Personally, I avoid React because I don't want a compile step. I do everything I can to avoid one. And if I do need to use a framework like React, I prefer to isolate it to exactly where I need it instead of using it to build a whole site.
For the first part: yes, but that is why I think it is important to stress dependency vetting.
For the second part, a couple of times when I had to add a bit of purely client-side reactivity to something pre-existing but did not want to introduce any build step, I simply aliased createElement() to el(). That said, personally I prefer TypeScript for a project of any size, so a build step is implied and I can simply not think about converting JSX. Webpack triggers bad memories, but esbuild is reasonable.
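The aliasing trick looks roughly like this (shown in module style for brevity; in a true no-build setup it would be const el = React.createElement off a plain <script> tag):

```ts
import { createElement as el } from "react";
import { createRoot } from "react-dom/client";

// Plain function calls instead of JSX, so nothing needs transpiling.
function Greeting(props: { name: string }) {
  return el("p", null, "Hello, ", props.name);
}

createRoot(document.getElementById("app")!).render(el(Greeting, { name: "web" }));
```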
I like it when it simply does not compile, which also helps if there are various other team members (or future me) who cannot always be trusted not to ignore typing errors. Also, I may be wrong, but I feel like JSDoc types are a bit limiting (and more verbose & extra effort) compared to inline TypeScript. Coming from Python, I really enjoy the typing power of TS and do not want to compromise on that…
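A small comparison sketch of what I mean by the verbosity (a trivial example of my own):

```ts
// Inline TypeScript:
function first<T>(items: T[]): T | undefined {
  return items[0];
}

// The JSDoc equivalent, as it would look in a plain .js file –
// same meaning, noticeably more ceremony:
//
//   /**
//    * @template T
//    * @param {T[]} items
//    * @returns {T | undefined}
//    */
//   function first(items) { return items[0]; }
```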
I prefer to just run tsc to check for type errors on GitHub commits instead of needing a compile for every change.
And yeah, inline types are more verbose, but I prefer to use .d.ts files for definitions and then declare with a comment (vim lets me move to definitions with ctrl-], which is nice).
I also come from a Go background so I actively don't like using the more esoteric and complex types that typescript provides.
Culturally, who writes React apps with only those dependencies? I’ve done it for quick benchmarks and the like, but actual sites almost always have a huge amount of code to load. It’s like saying that you can have a Java app using only builtin libraries: true in theory but rarely in practice.
I found this really noticeable while traveling over the summer with limited bandwidth: the sites which took 5 minutes and still failed to load completely all used React or Angular along with many, many other things while posturing as SPAs, but the fast sites were the classic server-side rendered PHP with a couple of orders of magnitude less JavaScript. It really made me wonder how we’ve gotten to the point where the “modern” web is basically unusable without a late-model iPhone and fast Wi-Fi or LTE, even when you’re talking about a form with a dozen controls.
Most of the problem there is people implementing their own timeouts in JavaScript instead of relying on the browser. The browser knows the difference between something taking 5 minutes while making no progress vs. something taking 5 minutes while making slow progress. Your application does not.
In this case, it’s simply putting a mountain of code into the critical path. If you have to load 30MB before the page works, it’s just not going to be a good experience. You can try to handle and retry errors but it’s better not to get into that situation in the first place.
That's what I mean. I've seen async loaders that wait 5s, don't see the file, then request it again. Before you know it, you're downloading 50 copies of the same file or making 100 API requests to the same endpoint.
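A sketch of that anti-pattern (hypothetical code, not from any real loader): the hand-rolled 5s timeout fires while the original request is still making slow progress, so each timeout stacks yet another request for the same resource.

```ts
const loaded = new Set<string>();

function loadWithRetry(url: string): void {
  fetch(url)
    .then((res) => res.text())
    .then(() => loaded.add(url));

  setTimeout(() => {
    // The browser knows the first fetch is still progressing; this code doesn't.
    if (!loaded.has(url)) loadWithRetry(url);
  }, 5000);
}
```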