> A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
That's a valid concern, but it's also the current baseline: everyone is already living with those round trips without much complaint. Shaving them off is a nice-to-have.
The problem OP presents is what the tradeoffs for that nice-to-have are. Are security holes an acceptable price?
I routinely have concerns about lag on mobile. It sucks to have to wait for 10 seconds for a basic app to load. And that adds up over the many many users any given app or website has.
Making the transport layer faster makes some architectures more performant. If you can simply swap out the transport layer, that's a far easier optimization than rearchitecting an app that is correct but slow.
But HTTP/1.1 doesn't allow you to multiplex that connection (pipelining is broken and usually disabled). So depending on the app's setup you could be losing quite a bit waiting on an API call when you could already be loading a CSS file.
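Roughly, here's what that difference looks like from the client side. This is only a rough sketch in Go's net/http with made-up URLs, and the HTTP/1.1 transport is capped at one connection to exaggerate the serialization (real browsers open several per host):

    // Rough sketch: serialized HTTP/1.1 requests vs. multiplexed HTTP/2 streams.
    // URLs are hypothetical; timings depend entirely on your network and server.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    // fetchAll issues all requests concurrently and returns the wall-clock time
    // until the last one completes.
    func fetchAll(client *http.Client, urls []string) time.Duration {
        start := time.Now()
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                if resp, err := client.Get(u); err == nil {
                    resp.Body.Close()
                }
            }(u)
        }
        wg.Wait()
        return time.Since(start)
    }

    func main() {
        urls := []string{ // hypothetical: one slow API call plus two static assets
            "https://example.com/api/feed",
            "https://example.com/app.css",
            "https://example.com/app.js",
        }

        // HTTP/1.1 only: a non-nil empty TLSNextProto disables HTTP/2, so the
        // three requests queue up one after another on the single connection.
        h1 := &http.Client{Transport: &http.Transport{
            MaxConnsPerHost: 1,
            TLSNextProto:    make(map[string]func(authority string, c *tls.Conn) http.RoundTripper),
        }}

        // HTTP/2: all three requests are multiplexed as streams over one
        // connection, so the CSS/JS don't wait for the API call to finish.
        h2 := &http.Client{Transport: &http.Transport{ForceAttemptHTTP2: true}}

        fmt.Println("HTTP/1.1, one connection:", fetchAll(h1, urls))
        fmt.Println("HTTP/2, multiplexed:     ", fetchAll(h2, urls))
    }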
Your static and dynamic assets should be served from different domains anyway, to reduce the overhead of authentication headers / improve cache coherency. https://sstatic.net/ quotes a good explanation, apparently mirrored at https://checkmyws.github.io/yslow-rules/. (The original Yahoo Best Practices for Speeding Up Your Web Site article has been taken down.)
Consider HTTP semantics. If there are cookies in the request, and those cookies change, the resource has to be re-requested every time. If there are no cookies, the request stays semantically identical, so the browser's internal caching proxy can just return the cached version.
There are other advantages: the article elaborates.
Per the official HTTP semantics[1,2], what you say is not true: the only header that's (effectively) always part of Vary is Authorization; the rest are at the origin server's discretion. So don't set Vary: Cookie on responses for static assets and you're golden. The article only says that some (all?) browsers will disregard this and treat Cookie as if it were listed there anyway.
Even so, the question is: what's worse, a cacheable request that has to go through DNS resolution and a slow-start phase because it's to an unrelated domain, or a potentially(?) noncacheable one that can reuse the existing connection? On a fast Internet connection, the answer is nonobvious to me.
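For what it's worth, the "no Vary: Cookie on static assets" approach amounts to a couple of headers on the static handler. A minimal sketch using Go's net/http, with made-up paths, routes, and lifetimes:

    // Minimal sketch: cacheable static assets with no Vary: Cookie.
    // Paths, max-age, and the API route are made up for illustration.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // Static assets: long-lived, cacheable by browsers and shared caches.
        // Note what is *not* set here: no Set-Cookie, no Vary: Cookie.
        files := http.StripPrefix("/static/", http.FileServer(http.Dir("./static")))
        mux.Handle("/static/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
            files.ServeHTTP(w, r)
        }))

        // Dynamic, per-user responses: keep these out of caches entirely,
        // rather than relying on Vary to sort it out.
        mux.HandleFunc("/api/me", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "private, no-store")
            w.Write([]byte(`{"ok":true}`))
        })

        log.Fatal(http.ListenAndServe(":8080", mux))
    }

Marking the per-user endpoint no-store sidesteps the question of what individual browsers do with Cookie and Vary entirely.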
Oh, would that anyone heeded the official HTTP semantics. (Web browsers barely let you make requests other than GET and POST! It's ridiculous.)
On a fast internet connection, the answer doesn't matter because the internet connection is fast. On a slow internet connection, cacheable requests are better.
Is HTTP the issue here though? Most of the time it seems to have more to do with the server taking ages to respond to queries. E.g. Facebook is just as poor on mobile as it is on my fibre-connected desktop (which I assume is using HTTP/3 as well), so I have my doubts that swapping HTTP versions will make a difference on mobile.
I did find it amusing that the author of the linked article says the megacorps are obsessed with improving performance (including using HTTP/3, apparently, to help with that). In my experience the worst-performing apps are the ones from the megacorps! I use Microsoft apps regularly at work and the performance is woeful even on a fibre connection, using an HTTP/3-capable browser and/or their very own apps on their very own OS.
Most people still use Google, so they're living the fast HTTP/3 life, switching to a slower protocol only when interacting with non-Google/Amazon/MSFT properties. If your product is a competitor but slower or less accessible, users are going to bounce off it and not even be able to tell you why.
MSFT provides some of the slowest experiences I've had, e.g. SharePoint, Teams, etc. I'm laughing at the assumption that non-MSFT/etc. properties are seen as slower when it is in fact MSFT that are the slowpokes. I haven't used Google much lately, but they can be pretty poor too.
AWS are pretty good though. However, it's notable that I get good speeds and low latency using S3 over HTTP/1.1 for backing up several hundred gigs of data, so I'm not sure HTTP/3 makes any difference when HTTP/1.1 is already good enough.
Nonsense, most of the web is not Google, Amazon, or MSFT. Many web apps already use CDNs, which will enable HTTP/3, and the browser will support it. Other parts like APIs will not benefit much if they hit the database/auth/etc. MSFT stuff is dead slow anyway, Amazon is out of date, Google is just ads (who uses their search anymore?)