If you really want to engineer web products for users at the edge of the abyss, the most robust experiences are going to be SSR pages that are delivered in a single response with all required assets inlined.
Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries, and otherwise jank up the UI.
From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path as a WebSocket-driven React experience that falls back to an ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
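A minimal sketch of what that flagging logic could look like, independent of any particular server framework (the `Session` shape, the EWMA smoothing factor, and the 2-second threshold are all illustrative assumptions, not a known middleware API):

```typescript
// Sketch: flag sessions whose responses consistently take too long to
// complete, using an exponentially weighted moving average (EWMA) of
// observed response times. Thresholds here are made-up illustrations.

type Session = { id: string; ewmaMs: number; degraded: boolean };

const ALPHA = 0.2;        // EWMA smoothing factor (assumption)
const DEGRADED_MS = 2000; // flag sessions averaging > 2 s per response

function recordResponseTime(s: Session, elapsedMs: number): Session {
  const ewmaMs = ALPHA * elapsedMs + (1 - ALPHA) * s.ewmaMs;
  return { ...s, ewmaMs, degraded: ewmaMs > DEGRADED_MS };
}

// A session flagged `degraded` would then get the one-shot SSR fallback
// instead of the WebSocket-driven experience on its next page load.
```

The EWMA keeps one slow request from instantly demoting a session, while a run of slow requests trips the flag fairly quickly.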
If you are dropping packets and losing data, why would it matter if you're making one request or several?
Even if I SSR and inline all the packages/content, that overall response could be broken up into multiple TCP packets that could also be dropped (missing parts in the middle of your overall response).
How does using SSR account for this?
I have to deal with this problem when designing TCP/UDP game networking, while streaming world data. Streaming a bunch of data (~300 KB) is similar to one big SSR render and send. This is because standard TCP packets max out at ~65 KB.
Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
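To make that concrete, here's the back-of-the-envelope arithmetic (the ~1460-byte MSS assumes a common 1500-byte Ethernet MTU minus IP/TCP headers; real paths vary):

```typescript
// Sketch: how many TCP segments a single HTTP response actually spans.
// In practice the payload per segment is bounded by the path MSS
// (~1460 bytes on a typical Ethernet path), far below the 65 KB cap.

function segmentsFor(responseBytes: number, mssBytes = 1460): number {
  return Math.ceil(responseBytes / mssBytes);
}

// A ~300 KB world-data stream (or SSR page) is nowhere near one packet:
// it spans a couple hundred segments, any of which can be lost and
// retransmitted independently.
```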
The point is you just need to finish the one request and you're done: the page is working.
If there are 15 different components sending 25 different requests to different endpoints, some of which are triggered by activities like scrolling, then the user needs a consistent connection to have a good experience.
Packet loss in TCP doesn't fail the whole request. It just means some packets need to be resent which takes more time.
> If you are dropping packets and losing data, why would it matter if you're making one request or several?
Let's just focus on the second part: why would it matter if you're making one request or several?
Because people make bad assumptions about the order that requests complete in, don't check that previous requests completed successfully, maybe don't know how (or care) because that's all buried in some frontend framework... maybe that's the point!
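Those bad ordering assumptions are easy to demonstrate without any network at all (the request names and delays below are made-up stand-ins, not a real API):

```typescript
// Sketch: why "fire several requests and assume send order" breaks.
// Two simulated requests complete in the opposite order they were sent;
// the only safe point to render from is after awaiting all of them
// (and checking each succeeded).

function fakeFetch(name: string, delayMs: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(name), delayMs));
}

async function completionOrder(): Promise<string[]> {
  const completed: string[] = [];
  // Sent first, but slower -- finishes last.
  const a = fakeFetch("user-profile", 50).then((r) => { completed.push(r); });
  const b = fakeFetch("feed", 10).then((r) => { completed.push(r); });
  await Promise.all([a, b]); // the safe synchronization point
  return completed;          // completion order, not send order
}
```

On a flaky connection the skew between requests gets much larger, so code that silently depends on send order fails much more often.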
> Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
What is a "network dev"? Unless they're using UDP... maybe you're thinking of DNS? Nah probably not. QUIC? Is that the entire internet for you? Oh. What about encryption? That takes whole handshakes.
"Send one packet, the recipient always receives one packet" is a gotcha I have to point out to experienced network administrators... along with the fact that DNS requires TCP as well as UDP these days, what with DNSSEC, attack mitigations, etc.
> This is because standard TCP packets max out at ~65 Kb.
BTW, frags are bad. DNS infra is still kneecapped by what turned out to be an extremely exuberant kicking of the can down the road packaged as "best practice". I think the architectural discussion must have been "100 nameservers for an AD domain, plus AUTHORITY and ADDITIONAL, not to mention DNSSEC..." "Oh UDP is fine. Frags aren't a problem, the routers and smart NICs will handle it fine." "4096 ought to be enough for anybody." "Good. I'll have another Old Fashioned then." And then the clever attacks begin.
Jumbos are great, but the PMTU has to support it. Localhost or a datacenter, maybe a local network. Somewhere between BIND 9.12.3 and BIND 9.18.21 the default for max-udp-size changed from 4096 to 1232. Just sayin....
A well-done PWA will absolutely beat SSR on a shitty connection if it's actually an app.
Cache-Control immutable the code and assets of the app and it will only be reloaded on changes. Offline-first and/or stale-while-revalidate approaches (as in the React swr library) can hugely help with interactivity while updating in the background, as quickly as possible, the things that have changed and can be synced. (A service worker can even update the app in the background so it's usable while being updated.)

HTTP3/QUIC solves the "many small requests" and especially the "head-of-line blocking" problems of earlier protocols (though only good app/API design can prevent waterfalls). The client can automatically redo bad connections/requests as needed. Once the app is loaded (you can still use code splitting), the API requests will be much smaller than redownloading the page over and over again.
Of course this requires a lot of effort in non-trivial cases, and most don't even know how to do it/that it is possible to do.
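The stale-while-revalidate idea can be sketched without any framework (the in-memory cache and fetcher below are illustrative stand-ins, not the actual swr library API):

```typescript
// Sketch: stale-while-revalidate. Serve whatever is cached immediately,
// then refresh the cache in the background so a later read is fresh.
// Only the very first read for a key ever waits on the network.

type Fetcher<T> = (key: string) => Promise<T>;

class SwrCache<T> {
  private cache = new Map<string, T>();

  async get(key: string, fetcher: Fetcher<T>): Promise<T> {
    const stale = this.cache.get(key);
    const refresh = fetcher(key).then((fresh) => {
      this.cache.set(key, fresh);
      return fresh;
    });
    // Serve stale data instantly if we have it; otherwise wait once.
    return stale !== undefined ? stale : refresh;
  }
}
```

On a bad connection this is exactly the win described above: the UI stays interactive on cached data while the slow refresh limps along in the background.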