I’m surprised at the negative knee-jerk reaction. I actually love this idea immediately. It encapsulates something I kind of already wanted when using HTTPS Everywhere.
This doesn’t guarantee the transport is end-to-end secure; I’m sure plenty will strip the encryption at an LB and then possibly send it back over the internet. But, I think it’s a good addition nevertheless. Here’s to hoping for more DoH and encrypted SNI adoption as well. No good reason to leave anything unencrypted if it doesn't have to be.
(I’m less happy with Firefox’s approach to DoH rollout, but I’m still glad to see DoH gaining some traction. Let’s hope the end result is worth it...)
I want my OS to do DNS - including DOH, not my browser. I want a single source for my DNS
I want my network to tell me a DNS server to use. As I own my computer I can override that, but much of the time I want to use the network provided DNS server.
> I want my OS to do DNS - including DOH, not my browser.
The cat is out of the bag, so to speak. I foresee a lot of adware, spyware, and malware leveraging DoH now to evade just about every DNS-based monitoring/blocking/provisioning solution.
Anyway, the right layer to monitor for Internet traffic has always been the IP layer (VPNs notwithstanding).
This has always felt like a strange concern to me. It’s a bit like refusing to have gloves in your house, so that a burglar can’t borrow your gloves to avoid leaving fingerprints.
Adware, spyware, and malware have always had the ability to avoid system DNS. At its most basic, they could hardcode lists of IPs into their malicious code. At its most complex, the same building blocks that DoH/DoT use were available to them: they could build similar tools that tunnel over HTTPS/SSH/etc, whichever protocol they felt was least conspicuous on their target system. Drop a list of hostnames into a pastebin post, tell your malware to check the list for updates, profit.
DoH in Firefox simply makes the above issue harder to ignore. Before this, an enterprise or individual sysadmin could implement DNS inspection at the border, see logs showing their users browsing naughty websites, and feel like they’d made progress. But they’d always be blind to attackers (or nefarious users) who just didn’t bother using the stock DNS offering to bootstrap their hijinks. Detecting that has always required device-level monitoring or further MITM of traffic; learning about DoH makes that more obvious to sysadmins, but it doesn’t materially change what adversaries were able to do already.
> At its most basic, they could hardcode lists of IPs into their malicious code.
Which makes the malware more fragile, because the hosts are often compromised machines themselves, or are the targets of takedowns. If they include only one IP at a time (as they can do with DNS) then when that machine gets cleaned by the owners, they have no way to switch to another one. If they list several machines then anyone analyzing the malware has a list of multiple compromised machines to go have them all cleaned at once. Also, then you can add the IP address to a block list and they can't update it like they can with DNS.
> At its most complex, the same building blocks that DoH/DoT use were available to them: they could build similar tools that tunnel over HTTPS/SSH/etc, whichever protocol they felt was least conspicuous on their target system.
Malicious javascript in a browser doesn't have access to SSH or similar. And that's assuming their code can even reach your machine if your Pi-hole is blocking the DNS name of the server hosting it.
> DoH in Firefox simply makes the above issue harder to ignore.
It makes it harder to prevent. Someone sends the user an email link to a URL containing an unpatched browser exploit or a link to a malicious binary. If the Pi-hole blocks the domain and/or the IP address in the email, the attack is prevented. If the browser bypasses the Pi-hole, you have malicious code actually running on the user's machine, and that's a much bigger problem.
Hardcoding the IP certainly has limitations but that is only the easiest example of bypassing DNS-based content blocking. A slightly less trivial solution where you grab the IP out of a file over HTTP instead could be easily implemented by any junior developer.
> Also, if they use an IP address then they can't be using SNI to host it on the same IP address as several other domains
Sure they can, just hardcode the "Host" header value as well
> Malicious javascript in a browser doesn't have access to SSH or similar.
They can access HTTP though, which is more than enough.
GitHub, PyPi, NPM, etc are all great options for hosting dynamic content from a fixed location which looks benign to scanning.
For non-tech companies, replace with any relatively popular wiki.
The goal is “make a connection over HTTPS or another normal-looking protocol, to a destination that is both justifiably relevant for normal user traffic and allows the attacker to seed the next step”. And it turns out there are lots of sites that fit that bill.
> GitHub, PyPi, NPM, etc are all great options for hosting dynamic content from a fixed location which looks benign to scanning.
They're not going to leave malware payloads sitting there. Pastebins etc. get abused as malware command-and-control systems all the time, and of course some gets through, but countermeasures happen. Whereas for a DoH server it would be operating as designed.
As a parallel reply noted, the goal isn’t to host a payload on GitHub or a pastebin.
The issue is “how do I tell my bootstrapper where to get a payload from in an inconspicuous way”. To solve that, you find a benign site with an existing business purpose (like GitHub or a wiki or a pastebin), and you insert your desired IP address. So maybe you add it as a new file in a repo under your cool new fake GitHub account, and the file just says “1.2.3.4”. Now you can tell your bootstrapper to make an HTTPS request for that file, which looks boring to any introspection because outbound requests to GitHub are normal, and to use the resulting IP to request the payload.
Your compromised payload host gets shut down? Spin up a new one, and update the file on GitHub with the new IP.
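A minimal sketch of that bootstrap step, just to show how little code it is (the URL below is hypothetical and this is purely illustrative):

```python
# Fetch a tiny text file from a benign, widely-used host and read the payload
# server's IP out of it. To any network observer this is one ordinary HTTPS
# request to a normal-looking site; no unusual DNS name is ever looked up.
import urllib.request

BOOTSTRAP_URL = "https://example.com/some-account/notes/raw/main/readme.txt"  # hypothetical

def current_payload_ip() -> str:
    with urllib.request.urlopen(BOOTSTRAP_URL, timeout=10) as resp:
        return resp.read().decode().strip()  # file content is just e.g. "1.2.3.4"

print(current_payload_ip())
```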
> They're not going to leave malware payloads sitting there.
They do though, I usually check where spam emails lead me to as a curiosity, and the payload is sometimes hosted on github.
It's not a malicious DoH server. It's just any DoH server which is bypassing your Pi-hole and therefore resolves the malicious name instead of blocking it.
That's not the point. Don't you need to hardcode some kind of identifier in order to use your malware's preferred DoH server instead of the user's preferred one (which could have content blocking applied)?
The issue is with Firefox overriding the system DNS with its own by default, when the system DNS may have content blocking applied and the Firefox default may not.
I think if the user wants that, they should choose to apply it. Not the network operator. Same as how I wouldn't want my network operator inspecting my HTTPS traffic for malware.
I'm not sure why HN won't allow me to reply to ori_b's question below you; however, DoH in Firefox (and in Chrome) has clearly spelled-out ways to disable it at the network level for those folks who are network operators and want to restrict it due to interference with filtering or split-horizon DNS.
Someone previously mentioned Pi-hole. Pi-hole supports the DoH canary domain in its default configuration, therefore if you run an up-to-date Pi-hole on your network you will have DoH disabled. It is recommended (and there are good instructions on the Pi-hole website) to set up cloudflared or a similar DNS resolver which implements DoH as the backend to Pi-hole, allowing it to use DoH when leaving your network but still act as the local network's authoritative name server.
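For anyone who wants to check their own setup, the canary is just an ordinary domain; a quick sketch (assuming the dnspython package and a hypothetical Pi-hole address) of testing whether a resolver blocks it:

```python
# Check whether a resolver answers Mozilla's DoH canary domain
# (use-application-dns.net). NXDOMAIN / no answer tells Firefox to keep its
# default DoH off on this network. Assumes the dnspython package.
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.1.2"]  # hypothetical Pi-hole address

try:
    resolver.resolve("use-application-dns.net", "A")
    print("Canary resolves: Firefox's default DoH may stay enabled here")
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("Canary blocked: Firefox's default DoH is disabled on this network")
```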
If you choose to be a network operator, there are some additional complexities you must think about. For those (majority of) users that choose not to be network operators, DoH is nearly transparent for them and provides significantly improved security and privacy by preventing local cache poisoning, DNS injection, and DNS snooping attacks on their browsing.
Additionally, while the default in Firefox is a non-filtering endpoint currently, you can manually configure the use of a filtering DoH endpoint like those provided by CloudFlare 1.1.1.1 for Families if you wanted to do so. Nothing prevents this, and it can be configured via policies network-wide in Firefox using the enterprise policy mechanisms.
> DoH in Firefox (and in Chrome) has clearly spelled-out ways to disable it at the network level for those folks who are network operators and want to restrict it due to interference with filtering or split-horizon DNS.
I still don't really understand this. Yes, it solves that problem, but how is it not also defeating the entire premise? If you had a DNS server from an adversarial network operator, they could just resolve the canary. So it means the browser's DoH is only secure if you can trust your network operator's DNS, at which point, what was its purpose supposed to be?
And the fear is that at some point someone is going to try to "correct" that by removing the check for the canary.
Right now is a transition phase, it is absolutely intended that some other mechanism becomes available in the future which is more resilient. The canary domain provides a clear pathway though for existing Do53 configurations to prevent DoH when necessary.
There are still some specific cases where DoH is problematic, but these are being eliminated over time as the technology matures. As an example, split-horizon DNS is a legitimate choice in corporate environments. Right now, the only realistic way to provide it is with Do53, so DoH must be disabled; but with Microsoft adding DoH support to the Windows 10 DNS client, they are likely soon going to support it on Windows DNS Server as well, which would enable businesses to use DoH internally and simply set the endpoint via enterprise policies.
DoH is a few different things, but at its most basic it's transport security for DNS, which is a fundamentally good idea any way you slice it. At some point in the future, plaintext DNS will effectively cease to exist and that's a /good thing/. There are several different implementations of encrypted DNS in the wild besides DoH (DoT and DNSCrypt, for example); whichever you choose, any option that encrypts DNS both on your local network and across external networks improves privacy and security.
It takes a pretty long time to transition fundamental protocols like this, but a future where more things are sent across encrypted transports is a good future. Plaintext DNS is incredibly vulnerable, and it's also in an area which isn't heavily visible to most users, which is not a recipe for success.
The user can choose to ignore the canary in the app's settings (not the default though), or they could ignore it at an OS level. And it is indeed intended to be removed and replaced with a different mechanism eventually
Currently, Firefox only has DoH rolled out in the US, so CloudFlare is in that same jurisdiction. Additionally, all DoH providers included in Firefox must meet the Trusted Recursive Resolver (TRR) guidelines, sign a contract to that effect, and undergo third-party audits to ensure they meet the requirements.
This is all publicly documented. As Mozilla rolls DoH out to other regions they will add additional TRR partners and CloudFlare won't be the only choice.
While I'm not glad about the current Cloudflare monopoly on encrypted DNS, and I hope more providers spring up soon, I'm inclined to think that Cloudflare are much better able to maintain privacy and resist government intervention (where possible) compared to any local operator.
The user can do whatever they want. It's their machine. But you know perfectly well that almost everybody is going to take the default because they don't even know what the setting does, and end up disabling content blocking when they didn't intend to.
And in many cases the user is the network operator. What the browser is doing is taking away the mechanism to set local policy. Ordinarily you hand out the DNS you want your devices to use via DHCP, and they use it. If you move that from the OS to the application and the application ignores the DHCP setting by default, now you've lost the ability to set a uniform policy for all your devices and applications. It becomes an arduous manual process to change the setting in every application on every device, and then you probably miss half of them.
Yes, and I am saying the default should be to use a guaranteed source of truth and not something set by the network operator's policy. Same as how your trusted root CA certificates don't come from a network policy for example. I don't think we should be making a convention of inspecting users' private traffic, regardless of whether it is by default or by opt-in, under the guise of protecting them from malware.
DNS has historically been set by network policy so that it's easy for network operators to map hostnames to their local network resources. The point of the design wasn't to enable traffic monitoring or content blocking by altering the results for hosts that aren't under your control.
The problem is "guaranteed source of truth" doesn't exist. When the network operator is you, or your family/company, you may trust the local DNS to respect your privacy more than you do Cloudflare. Not all names are intended to resolve the same everywhere -- sometimes the local DNS will give the RFC1918 address for a local server instead of the public one, or serve a set of local names that are only accessible on the local network. How do you propose to globally resolve the reverse lookup zones for 10.0.0.0/8 et al?
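To make the reverse-zone point concrete, a small sketch (assuming the dnspython package) of what a lookup for an RFC1918 address actually asks for:

```python
# The reverse-lookup name for an RFC1918 address only means something to a
# resolver that knows the local network; a global resolver has no useful answer.
# Assumes the dnspython package.
import dns.reversename

rev = dns.reversename.from_address("10.0.0.42")  # some host on the local LAN
print(rev)  # -> 42.0.0.10.in-addr.arpa.
# Only the local DNS that serves 10.in-addr.arpa can map this back to a name;
# Cloudflare, Google, etc. will return NXDOMAIN or nothing meaningful.
```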
> you may trust the local DNS to respect your privacy more than you do Cloudflare
For most people, their local DNS is someone like Comcast or Verizon, way less trustworthy than CloudFlare. We shouldn't reduce the privacy of the majority of people just to increase it for a small minority of people by default.
If the DNS set by DHCP were only used for local network resources and not internet resources, I wouldn't have a problem with that. That is what it is there for.
There isn't a bifurcation in the namespace for local resources. Any given name could resolve to a local address or a public one. There isn't even anything requiring a local-only server to have a private address -- it's common to have a public IPv6 address for something which is still only resolvable or accessible from the local network.
I'm the user and the network operator. Can you give me a comprehensive list of places I need to configure, and notify me when another place software can go around my configuration is added?
I never said there wouldn't be an increased configuration burden for that kind of setup. There was also an increased configuration burden when we moved to widespread HTTPS, but the benefits outweighed the costs.
As a user, what is the increased admin burden for using DoH, assuming you don't want to implement network level content blocking? Basically none.
What is the burden for using HTTPS assuming you DO want to be able to inspect and block HTTPS resources at a network level? Very extensive compared to plain HTTP.
>assuming you don't want to implement network level content blocking? //
Who doesn't want that? I have a pihole (and use OpenDNS). Bypassed it all yesterday, as an experiment, to look at an anime site my eldest started using and immediately was hit with a pretty impressive and convincing phishing attempt (mimicking my ISP).
Pretty much all of us on HN seem to use some form of 'DNS' based filtering (pihole, or even just hosts file). UK ISPs offer DNS based filtering as a default (and have to filter some pages by law, and are encouraged to filter others [through threat of law]) ... but a lot of people want that, many many families, schools, businesses.
Certainly I don't want it to be easier for advertising and malware to bypass my chosen DNS-style filtering.
I don't want it and I don't recommend it to my peers. I also don't think DNS-based content blocking is really as popular as the PiHole community would say. I use client-side content blocking like uBO instead, which I imagine is overwhelmingly the more common solution, and it is also more powerful and effective (and easier to use).
UK ISPs having mandatory DNS blocking is a great example of why DoH is important. The user should decide what they want, not the network operator.
Perhaps widespread use of encryption technologies means I can't do content blocking on proprietary appliances like the Chromecast for example. I think that's a necessary evil and I just won't buy such an appliance if it's that big of a problem. Only legislation can really fix that kind of problem anyway, it doesn't really matter whether Firefox implements DoH or not.
Make the libc resolver do DoH and drop it from Firefox, and half the problems go away.
The other half of the problems are around defaulting to allowing cloudflare to inspect your traffic, instead of comcast. And the lack of integrity and encryption for the rest of the DNS infrastructure. And the expansion of the complex and difficult to secure x509 certificate regime which we should be moving away from. (Something like gossamer seems to be a big step in the right direction)
I want encrypted dns, just not a half-assed hack of an implementation that is going to be impossible to fix once deployed.
I agree with you there, OS vendors should be implementing DoH. Thankfully they are finally getting around to it after pressure from browsers.
Regarding the rest of the DNS infrastructure, I agree there is more work to be done. But I see DoH/DoT/etc as a necessary step and not a "half-assed implementation" that needs to be "fixed".
That's not an issue with overriding the system DNS. It's the point of overriding the system DNS. Content blocking by anyone but the user is a bad thing, and if you the user want it, then install uBlock Origin or something.
> The cat is out of the bag, so to speak. I foresee a lot of adware, spyware, and malware leveraging DoH now to evade just about every DNS-based monitoring/blocking/provisioning solution.
But tunnelling X in Y is not new at all and has a long tradition (even in regular protocol design). Is this really a shift waiting to happen in malware? As far as I can tell, this has been available all along, except for browser-based malware.
Right. It is ridiculous to say that DoH enables malware because it has always been trivial to bypass DNS-based access control with or without technologies like DoH. In fact, if anything, using DoH would be a particularly cumbersome way of doing it when there are many simpler solutions. Like for example, just putting an IP in a text file on a REST endpoint.
I think the best case I can make is that it's not about the ability to tunnel DNS, it's that there's now many fast highly-available public DoH resolvers that bypass DNS filtering for free. A hypothetical malware author just needs to use any of them rather than set up their own tunnel.
I've always assumed any device or program I don't control will bypass anything I tell it to use and tunnel all its evil traffic.
It does feel like the world is moving away from a multi-level network towards running everything over TLS/TCP (and probably eventually mainly TLS/UDP), taking the power away from me as a network and device owner and giving it to the developers.
IBM z/OS has an interesting feature: AT-TLS (Application Transparent TLS). An app uses the OS sockets API to create plaintext sockets, and the OS adds TLS to them (based on policies configured by the sysadmin) transparent to the application. (There are IOCTLs that apps can call to discover this is going on, turn it on/off, configure it, etc, but the whole idea is you can add TLS support to some legacy app without needing any code changes, so adding those IOCTL calls to your app is totally optional.)
In some parallel universe, TLS would have been part of TCP not a separate protocol and the Berkeley sockets implementation in the OS kernel would handle all encryption, certificate validation, etc, on behalf of applications. AT-TLS is a bit like visiting that parallel universe
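For a rough feel of that parallel universe using today's tools, here's a sketch with Python's standard ssl module: the application asks for a plain TCP connection and a thin wrapper performs the handshake and certificate validation on its behalf (this is library-level rather than kernel-level, so it only approximates AT-TLS):

```python
# Sketch of "TLS handled below the application": the caller only names a host;
# the wrapper does the TLS handshake and certificate validation for it.
import socket
import ssl

def transparent_tls_connect(host: str, port: int = 443) -> ssl.SSLSocket:
    plain = socket.create_connection((host, port))
    ctx = ssl.create_default_context()  # system trust store + hostname checking
    return ctx.wrap_socket(plain, server_hostname=host)

conn = transparent_tls_connect("example.com")
conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(conn.recv(200))
conn.close()
```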
Linux kernel TLS, at least at the moment, only does the symmetric encryption part in the kernel, and the TLS handshake, certificate validation, has to be provided by user space. This is different from z/OS AT-TLS in which the OS does the TLS handshake, certificate validation, etc as well.
(Strictly speaking, I'm not sure if AT-TLS actually is implemented in the OS kernel – or "nucleus" to use the z/OS terminology – it may actually be implemented somewhere else, say in the C library. I know some of it is actually implemented by a daemon called PAGENT. But, from the application programmers viewpoint, it is provided by the OS, however exactly the OS provides it.)
I was thinking more of devices I don't control - IOT stuff that requires internet access to function (like a box to watch netflix). Clearly anything running on my machine is fine as it's under my control
So between the user and the network operator, who should have the final say? It seems like you're saying, "it should be the user when I'm the user and it should be the network operator when I'm the network operator". But I don't think we can have it both ways.
1) the ISP says what should be used
2) the lan either accepts that, or the operator puts their own values
3) the device either accepts that, or the operator puts their own values
Same as object inheritance. An ISP says "use this DNS server, it's nearer", the network operator says "no thanks, I'll run my own with an upstream of google as I don't like your NXDOMAIN injects", the user says "actually I'll use cloudflare because I don't trust any of you"
That's all fine, and gives the power to the user.
As an exception, if a device doesn't allow the network settings to be overwritten for some technical reason, it should simply accept what the lan sends it.
I think there will be some contention over 1 and 2 because we're at the point where both of these entities might be trusted (at home and work) but are inherently untrustworthy.
And so browsers are saying that the path should be:
1) The browser developers say what should be used.
2) The user either accepts that or puts their own values.
Ideally I think it should actually be something like
1) The OS vendor say what should be used.
2) The user either accepts that or puts their own values.
2.1) Lan operators can offer DNS servers which the user can (always) accept or (always) reject with reject being the default.
But this whole mess with browser DoH is because OS vendors are slow to move on this. Google Android does it kinda, Windows has it in beta, Linux works but it's a little DIY.
Only on a handful of domains. Most site admins are fearful of HPKP. The mitm proxy I am using to post this message only has to exclude a small handful of domains from mitm decrypting. A bunch of google domains, eff.org, paypal, a few mozilla domain names, dropbox and android. To my surprise, no traditional banks, no money management or stock trading sites, no "secure" portals used by law firms to share sensitive files. The sites I would expect to use pinning are the ones that do not.
On a side note, I would love to see a system that adds out-of-band validation of an entity. e.g. My bank should have a QR code behind glass on the wall that I can scan, import a key and further validate their site at the application layer, above and beyond HTTPS and CA certs.
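A rough sketch of that out-of-band idea (the key format, signature scheme, and delivery mechanism here are all hypothetical; it assumes the third-party cryptography package): the key scanned from the QR code pins an Ed25519 public key, and the client verifies a detached signature over the page on top of HTTPS:

```python
# Application-layer verification on top of HTTPS: a public key imported
# out-of-band (e.g. scanned from a QR code at the branch) checks a detached
# signature the site serves alongside its content. Everything here is a
# hypothetical scheme, not an existing standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_page(pinned_key_bytes: bytes, page_body: bytes, signature: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(pinned_key_bytes)
    try:
        key.verify(signature, page_body)
        return True          # content signed by the key we imported in person
    except InvalidSignature:
        return False         # tampered content, or wrong/rotated key
```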
Yep, and HSTS (if you don't trust that, you can use a Firefox addon to perform certificate pinning). I believe HSTS got introduced in the wake of the DigiNotar debacle (listen to Darknet Diaries #3). Mitmproxy also made it easy.
I guess this Mozilla Firefox change deprecates HTTPS Everywhere.
Either way, we've come a long way. For people in the Dutch security world, DigiNotar was a known joke. I knew about their terrible security (and the implications) back around 2000-2004. Although there have been more vulnerabilities too, such as Heartbleed and POODLE.
If we're going to talk about the "right" thing to do then don't run closed software or anything that doesn't do what you want and the problem goes away.
This is nothing that wasn't possible before. Taking the traditional example of hosts-blocking the Adobe activation servers, what was stopping them from just querying 1.1.1.1 from the app? Or even falling back on a hard-coded IP?
Especially with IPv6, bypassing DNS-based blocks is rather trivial - the main reason we don't see it all that much is that companies simply don't care to do it. Users of PiHole and similar are usually the kinds of users who will figure out a way to block whatever they want one way or another, so there's no use in trying to stop them. Until we get hardware-enforced app signing (big middle finger to Apple here) we can block anything, regardless of DoH.
You're right, I guess the question is did they bother before. If normal DNS works fine for 95% of users, the hurdle of implementing a non-standard workaround is too much. If DOH is the norm, then the hurdle becomes lower.
Of course you can just drop a rule blocking the IP address on your firewall, which will probably work for a while.
I would argue that the hurdle of bypassing DNS-based content blocking was already so vanishingly small that it doesn't make any sense to impede useful and practical privacy technologies on that basis.
You could make the exact same kind of argument about widespread use of HTTPS for example. Do we want to allow encryption technology if it means the enemy can use it too? As a society we have agreed that encryption is a net positive even though terrorists and criminals benefit from it, but when malware uses it then that's too far?
>I would argue that the hurdle of bypassing DNS-based content blocking was already so vanishingly small [...] //
That doesn't hold up under scrutiny. My pihole blocks ~11% of domain lookups (blocking ~1000 queries per day for our household); turning it off vastly increases the unwanted content. Bypassing it might seem like a low hurdle in theory, but it's a hurdle that works in practice.
I don't follow the reasoning that says this is a small barrier to malware so we'll remove it.
What are we, end users, getting out of routing all our domain lookups to Cloudflare and ceding control of filtering?
The next step is blocking all the traffic from all apps and whitelist the IP addresses app by app. I did it on my Android phone, a couple of phones ago. I don't remember the name of the app. It could be done on a desktop or server OS too.
I run my own networks and my own devices, I choose what options go in my DHCP server
If I were on a hostile network (say a hotel), then sure, I'll ignore their DNS server and use my own (or indeed just punch my way out via a VPN), but most of the time I use friendly networks, and I don't want to have to configure 20 different applications on a dozen different boxes to use a DNS provider of my choice. There's a reason I use DHCP in the first place.
You are able to run your own network and have the know-how to do so; typical Firefox users cannot.
Given your knowledge you can disable DoH, or even build Firefox with DoH disabled. It is a sensible default for the vast majority of users who do not know what DHCP is or control their network, and it is a trivial configuration change for people who do control their network and do not want the DoH service provider chosen by Firefox.
Also many places do not allow VPN traffic either. It is not as easy to bypass monitoring in a locked-down environment like typical corporate firewalls, college campuses etc. Yes, for every block in place (like deep packet inspection) there is usually some workaround, but these become increasingly difficult the more stringent the blocking becomes, and having options like DoH helps users who cannot or do not know how to run VPNs.
> Given your knowledge you can disable DoH, or even build Firefox with DoH disabled. It is a sensible default for the vast majority of users who do not know what DHCP is or control their network, and it is a trivial configuration change for people who do control their network and do not want the DoH service provider chosen by Firefox.
I have mixed feelings about the issue, but it's not that simple.
I run a variety of services on my LAN for my users and guests. That includes Unbound, so even though their browser doesn't know it, their queries are secure from my ISP. But more importantly I have other stuff behind hostnames (which are resolved by Unbound). For example, my guests can navigate to music/ and use a web interface to play any of the music in my library over the stereo.
Hopefully it's obvious why this is something useful to have.
Now that Firefox is intercepting DNS requests, a conversation with someone might go like this: "Oh, it's not working? Are you using Firefox? Yeah, Firefox broke this recently, let me get the IP address for you." And then I have to log in to a computer, ssh into the relevant system, and get its IP address on the LAN.
And that's just the beginning. Last I checked Cloudflare still can't resolve the archive.is / archive.today domains. Even though I use Cloudflare over TLS in Unbound, I fix this for myself and my users by sending these domains to Google instead. Anything as convenient and simplistic as Firefox just sending everything directly to Cloudflare can't do that.
If you follow the DNS specs this will not be a problem. If you use *.local for local domain names, DoH will never be triggered
From Mozilla documentation.
"localhost" and names in the ".local" TLD will never be resolved via DOH. [1]
LAN-based services are a pretty common use case. Mozilla is hardly going to release this feature without considering this.
The Cloudflare / archive.is point is an esoteric debate and not a common occurrence. DoH supports providers other than Cloudflare, so I'm not sure this is really a major concern
> If you follow the DNS specs this will not be a problem. If you use *.local for local domain names, DoH will never be triggered
I don't think PFsense + Unbound supports appending .local to every hostname automatically, so I'd have to change every last one of my hostnames to whatever.local and that seems like a real pain. (Surely most people are not using whatever.local in their /etc/hostname, right?)
> The Cloudflare / archive.is point is an esoteric debate and not a common occurrence. DoH supports providers other than Cloudflare, so I'm not sure this is really a major concern
Sure, but my general point is that there's all sorts of different reasons why it can be useful for a LAN administrator to override the remote DNS response in certain specific cases. Given that Firefox is using Cloudflare by default, the fact that you can change it also doesn't really help anything, since after all the thing I'm specifically complaining about is that stuff randomly breaks for any of your guests using Firefox.
> Specifies the domain name passed to the client to form its fully qualified hostname. If the Domain Name is left blank, then the domain name of the firewall is sent to the client.
The client will automatically try adding the domain when looking up hostnames. Normally on Linux the FQDN is not specified in /etc/hostname but only in /etc/hosts.
It is perhaps a reason for you (and others) not to want to migrate; however, there is nothing wrong with the way the DoH functionality is implemented.
Cloudflare broke a lot of traffic when the 1.1.1.1 DNS server came up, and Google broke a lot of dev setups when they got the .dev TLD, but it is not their fault.
The default behaviour is to fall back to the network DNS if DoH doesn't resolve the domain, so your use case would work fine.
archive.is didn't work for Cloudflare users because they purposely sabotaged the results for Cloudflare (they did actually return valid, but wrong, results).
EDIT: Although you might have a problem if the owners of the ".music" TLD decide to put something else there.
It does not matter what port you run the service on, including 443; deep packet inspection (DPI) can sniff out VPN traffic. Perhaps you have not encountered this type of firewall, as it is somewhat more expensive to run, both computationally and licensing-wise.
It is not possible to sniff out DoH traffic via DPI, as it looks exactly the same as regular HTTPS traffic.
Back when I ran Flash servers for media use in a corporate environment (when Flash was still a thing), I used to run into similar problems with RTMP/RTMPS constantly.
You can use an HTTPS "CONNECT" proxy to protect your VPN traffic in the same way (I assume that's the kind of setup they were referring to on port 443)
Why should Firefox want enable users and devices to bypass network owners configuration in this way? A company should control their network, just as a home network's owner should have control.
How can DNS be "full of spyware"? Or are you saying that it is used for spying on you?
But anyway, it is your decision to use them - you can use 1.1.1.1 (CloudFlare), 8.8.8.8 (Google - if you don't mind the tracking) or any other DNS provider.
I don't know what GP meant with "full of spyware" but the most popular ISP in my region (Telefónica) used to redirect to pages filled with ads when a domain couldn't be resolved. Changing to 8.8.8.8 wouldn't work because it was unencrypted, they intercepted the requests and still redirected to ads. They stopped doing it some time ago but any ISP or middle man has the ability to continue doing that if they want.
Yes, this is the same thing that happens to me: all unencrypted websites are redirected to adware. On my phone I use the 1.1.1.1 app from Cloudflare, but on other devices this is still an issue.
Aren't all these public DNS services getting unencrypted requests? I assume ISPs snoop the domain lookups already, regardless of whether Google/Cloudflare/OpenDNS/Yandex do so too.
That's the point of using DoH, to avoid sending unencrypted DNS requests so your ISP can't spy or intercept those requests. If you are using unencrypted DNS from Google/Cloudflare/etc you are just adding one more party that can see your requests. If you use DoH, in theory, you are replacing who can see your requests. In practice your ISP can still know what websites you visit thanks to unencrypted SNI or if the domain you are visiting is the only one on that IP (and probably other techniques I'm not aware of). There are many more variables than just DNS requests so if you really don't want your ISP knowing what websites you visit you have no choice but to use a VPN or Tor.
> I want my OS to do DNS - including DOH, not my browser. I want a single source for my DNS
I'll go further and say I want my router to handle this. It is unrealistic to expect every device on my network to natively support the standard, but it's pretty easy to have your local DNS endpoint reroute the traffic through DoH on its way out of your network. Right now I accomplish this by running cloudflared upstream of my Pihole.
I don't like applications that do their own DNS resolution. I use a text-only browser that relies on the OS to do DNS resolution.
But imagine your DNS is filtered. Would you still want only a single source, e.g., your ISP? In that case, wouldn't you want multiple sources?
When in a DNS-filtered environment, e.g., a hotel, DOH outside the browser can actually be useful. For example, I can retrieve all the DNS data for all the domains on HN in a matter of minutes using HTTP/1.1 pipelining from a variety of DOH providers. The data goes into a custom zone file served from an authoritative nameserver on the local network. Browsing HN is much faster and more reliable when I do not need to do recursive DNS queries to remote servers. The third party resolver cache concept is still really popular, but IME most IP addresses are long-lived/static and "TTL" is irrelevant. I rarely need to update the existing RRs in the zone files I create and the number that need to be updated is very small.
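For anyone curious, a single DoH lookup really is just an HTTPS request. Not the pipelined bulk version described above, but a minimal sketch against Cloudflare's documented JSON endpoint, assuming the third-party requests package:

```python
# Minimal DoH lookup via Cloudflare's JSON API; other providers expose the same
# style of endpoint. Assumes the third-party "requests" package.
import requests

def doh_lookup(name: str, rtype: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": rtype},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("news.ycombinator.com"))
```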
DOH servers are not the only alternative source of DNS data.
Ideally, I prefer to avoid third party DNS altogether. Nor do I even need to run a local cache. I wrote some utilities that gather DNS data "non-recursively", querying only authoritative nameservers and never setting the RD bit. It is very fast. The only true "single source" of DNS data is the zone file for the particular RR at the designated authoritative nameserver(s). Everyone else is a middleman.
I am not a fan of applications doing their own DNS resolution, even if they use DOH. But I have found the existence of DOH servers, i.e., third party DNS caches served over HTTP, can be useful.
I have a root-hints DNS server running in my home environment. I'd prefer to control DNS at the OS level. I also route my mobile devices through OpenVPN to cover when I'm not home. I control every aspect of all my devices' network traffic.
What you want would make censorship and surveillance easier against the vast majority of people. Networks I'm on shouldn't be able to tell which CloudFlare-hosted site I'm visiting, or to block some of them without blocking them all. Letting the network give me a DNS resolver instead of using a known-good one would allow exactly those bad things.
My perspective is this is my home network and this application is infringing on my freedom. I should have the right to monitor my network and my traffic. An application is a guest in my house/computer it does not set the rules.
> I should have the right to monitor my network and my traffic.
The key is that if it's really your traffic, then you can easily reconfigure Firefox so that you can monitor it. The benefit of DoH is that if someone else is using Firefox on their own computer, you can't snoop on or hijack their DNS just because they're on your network.
OK, what if you own the network? The corollary to that is 'you shouldn't be allowed to stop devices on your network from accessing malware, or exfiltrating data'.
How is this further centralising the Web? You can still use whatever DoH provider you want (and there's plenty of them); the choice just shouldn't be tied to the network you're on.
But you don't want it tied to a specific application either. It should be an OS level setting that lets you configure what DNS to use based on circumstance. This is possible today for power users (on Linux at least) and wouldn't be hard to implement for normal users.
This is a fair point, but the reality today is that basically every OS uses the network-provided DNS servers by default, so Firefox is completely right today to ignore the OS by default. If this ever changes such that it is common for the OS to use DoH instead of the network-provided DNS servers by default, then I'd agree that Firefox should follow the OS.
There is no reason they couldn't have done this without DoH being an available standard. Remember, DoH is just an ordinary HTTPS request. Anyone can hide almost anything in that, including their own homemade DNS.
Your concern about smart devices using DoH can be trivially avoided by not buying smart devices that use DoH, or even connect to the internet. Make sure that whatever you buy works with a local server such as Home Assistant and is actually under your own control. A system like this will be more reliable and customizable too.
Sometimes you don't own the computer on which the browser is running, e.g. public web browsing kiosk. Or else you sort of own the computer, but not the DNS.
If you work in a setting in which web surfing is intercepted, and outright breaks pages, the in-browser DNS can be a godsend.
So now I have to add fake DNS records for every application that decides to do their own special snowflake thing and ignore the source of truth for what DNS server to use?
Oh joy.
I run DHCP for a reason. That reason is telling devices on my network what their settings should be. I expect those settings to be honored, not them doing the electronic equivalent of "okay boomer" and using whatever arbitrary settings they were told to by a corporation.
Software and hardware vendors need to start honoring the local configuration choices of the owner of the hardware and network. At a certain point ignoring my decisions about what can traverse my network becomes a crime and should be prosecuted as such.
I think in 2020 we can declare that Californian companies dictate what you can and can't do on your computer, which DNS server to use and what goes through a VPN client and what does not. The same way they decide what is a fact, what is newsworthy and what you are allowed to read / post.
"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."
There are infinitely many observations. They don't select themselves. Humans do that, and not for neutral reasons [1].
Your comment was obviously trying to strike a blow for one side of a well-known argument that's going on right now. There are a few problems with that. First, blow-striking is not curious conversation; we can't have both, and HN exists for the latter [2]. Second, these arguments are so well known that the threads they lead to get increasingly predictable as they get more intense [3]. I'm sure you can see how that's bad for curious conversation too.
Fully support this argument and Mozilla's initiative.
I work for a firewall co and we had taken a strategic decision to not allow plaintext traffic onto the internet (from cloud deployments). It's just lazy on the client or server operator's part to not have it so.
Even "simple objects" can be MITM'd. I know I'm on the extreme theoretical edge, and so maybe your perspective is pragmatic enough to pass. But even small images, javascripts, etc. should be protected by HTTPS, not just "sensitive" pages.
As an end user, I don't want the possibility of anything being tampered with along the route. As a content owner / webmaster, I want the same. So publisher and consumer are both aligned in their desire, making HTTPS ideal for everyone except for people reading/manipulating traffic along the way.
Since it's becoming harder and harder to implement transparent proxies and caches, someone should define a local cache protocol so that network administrators can configure explicit shared caches for the devices on their networks.
Because now almost every Firefox user will be sending their DNS straight to one centralized provider, a large corporation, which makes them more vulnerable to various kinds of government interference.
There is nothing about DNS over HTTPS that requires you to use one centralized provider, and unencrypted DNS has always been easier for large corporations, ISPs, and the government to sniff.
I think people are just totally off-base on this. The instances of government/corporation reactions to DOH that we have seen suggest that untrustworthy organizations and governments largely oppose the change. They would not oppose the change if it was in their best interest.
Worth noting that anyone can set up a DOH server. You can even set up your own server in your own house and use it the same way that you would a Pi-hole. To the extent that malware providers or IoT providers will use this to circumvent blocks, they already had the ability to do that -- and IoT services like Chromecasts have already experimented in the past with setting their own DNS providers and ignoring network settings.
We do not need to open up our networks to MiTM attacks to avoid centralization. Unencrypted DNS is a bad idea, it isn't complicated.
Firefox made DoH to Cloudflare the default, right?
This is not responsive to my argument that it will impact most Firefox users. Most people won't change their defaults. Defaults matter. And that goes double when you need to dink with your own DNS server to override this crap.
Firefox made DoH default to Cloudflare, but that's because Firefox was the browser that pushed DoH the hardest when it came out, and at the time Cloudflare was (and arguably still is):
a) the best provider available
b) more importantly, the most private provider available
But there's nothing about the technology locking Mozilla into keeping Cloudflare the default in perpetuity, and in any case, the solution to your concern is to adjust the defaults, not to throw out DoH entirely.
There's nothing about DoH in specific that's causing the concerns you have. Mozilla could have set Cloudflare DNS to be the default even for regular, unencrypted DNS traffic. There's nothing inherent in DNS technology that would force them to respect your OS settings. If you're concerned about Cloudflare taking over, rejecting DoH isn't the solution to that problem.
It's not an underhanded centralization push, it just so happens that when DoH was still new, there was effectively one provider that was widely available, that could confidently handle a large upsurge in traffic, and that made extremely strong privacy guarantees compared to the rest of the industry. At the time Mozilla started pushing DoH, most ISPs in the US didn't even offer it as an option at all. As that changes, I expect that browser defaults will change as well.
Cloudflare is the default DoH provider for Firefox only in the US. NextDNS is another option in the Firefox preferences UI and more options (such as Comcast) are coming soon in the US and internationally. You can also specify a custom DoH provider URL.
The kind of user who wouldn't change their default setting is probably already happily (or unknowingly) sending all their DNS traffic to their ISP.
I get your concern about cloudflare, but I can tell you right now which of the two options I would trust more with this data, and it's not Comcast/AT&T/Cox/<insert shady ISP here>
No offence, but how is that more centralised than all of your traffic being plainly visible to your ISP, which in many countries cooperate fairly closely with law enforcement and government, if they're not straight up government owned depending on what country you're in? For example in the UK, a fairly liberal country, DoH still is very useful to avoid countless of the ISP level content blocking that happens, not to mention if you're in a place like Russia.
For most users on the planet Cloudflare is a hell of a lot better than your ISP
There are dozens or hundreds of competing ISPs. In the UK I certainly trust A&A a lot more than Cloudflare (who among other things are the #1 provider of fake HTTPS endpoints that send all your data in plaintext across the public internet).
This site is likely to get annoyed at Chrome as well.
Also, just because Chrome gets away with it doesn't mean its an action that should be accepted purely for that reasoning.
DoH changes who gets all your DNS traffic from your ISP and your router to (in practice) a single central DoH provider. Which of those you trust least depends on who you are.
There is nothing preventing your ISP from providing DoH itself. Firefox (currently) does not use your ISPs settings by default (which imo is the correct move for now), but Chrome will use your ISP/router's DoH settings if it provides DoH.
There is nothing about DoH that forces you to use a single company.
Depends on the client. Firefox doesn't support a fallback and tries unencrypted or fails depending on how you configured it. AdGuard Home can use fallbacks so you can run a local instance and point Firefox there. Some providers supply fallbacks (https://1.1.1.1/dns-query and https://1.0.0.1/dns-query for example).
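The fallback logic itself is simple; a sketch (assuming the third-party requests package, using Cloudflare's documented endpoint pair) of trying each configured DoH endpoint in turn:

```python
# Client-side DoH fallback: try each endpoint in order, moving on when one
# fails. Assumes the third-party "requests" package.
import requests

DOH_ENDPOINTS = ["https://1.1.1.1/dns-query", "https://1.0.0.1/dns-query"]

def resolve(name: str) -> list[str]:
    for endpoint in DOH_ENDPOINTS:
        try:
            resp = requests.get(
                endpoint,
                params={"name": name, "type": "A"},
                headers={"accept": "application/dns-json"},
                timeout=5,
            )
            resp.raise_for_status()
            return [a["data"] for a in resp.json().get("Answer", [])]
        except requests.RequestException:
            continue  # this endpoint failed; try the next one
    raise RuntimeError(f"all DoH endpoints failed for {name}")
```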
> This doesn’t guarantee the transport is end-to-end secure; I’m sure plenty will strip the encryption at an LB and then possibly send it back over the internet.
Hijacking this a bit - If plenty are stripping the encryption at an LB (like Cloudflare for example), how can you be sure that NSA is not wiretapping in Cloudflare's infrastructure where it is not encrypted? Seems like a really easy way to get unencrypted data and not care about if everything is HTTPS or not.
Are there any counterexamples to this? Do we know that Cloudflare or AWS is not doing this?
Agreed. I've been using this for several versions already (mentioned by other commenters - dom.security.https_only_mode). Very few websites break, and they should know better (e.g. an HTTPS page that redirects to HTTP, which then redirects to another HTTPS location).
I've often daydreamed of a new HN feature where non-HTTPS links have a preceding red marker "[HTTP only]" (or similar) but never could find the correct place to write it down. Considering that Firefox is now a minority browser :( perhaps there is still usefulness in this idea?
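In the meantime it's easy enough to approximate client-side; a rough sketch (assuming the third-party requests package) that flags plain-HTTP submissions on the front page via the public HN Firebase API:

```python
# List current HN front-page submissions whose URLs are plain HTTP, roughly
# what an "[HTTP only]" marker would flag. Uses the public HN Firebase API;
# assumes the third-party "requests" package.
import requests

API = "https://hacker-news.firebaseio.com/v0"

top_ids = requests.get(f"{API}/topstories.json", timeout=10).json()[:30]
for story_id in top_ids:
    item = requests.get(f"{API}/item/{story_id}.json", timeout=10).json() or {}
    url = item.get("url", "")
    if url.startswith("http://"):
        print(f"[HTTP only] {item.get('title')} -> {url}")
```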
100% with you that by itself, browser HTTPS-only mode (even by default) is A Good Thing. In isolation, this is a no-brainer and Mozilla's doing the right call.
I'm not happy with DoH though, at all. I fall in the crowd who wants to control my own DNS on my own devices (and I do realize that for those less technically knowledgeable, the status quo is putting that in the hands of the network admin or even ISP, but at least in principle they have the means to do so if they just figure out how, which is relatively trivial). DoH effectively completely cripples things like pihole. I'd have to start whitelisting IPs/hostnames for port 443 :/
Another practical negative consequence is the further centralization of TLS termination in (most notably) Cloudflare and Akamai, as I am sure this will be the default for those who are now rushed to TLS-enable currently uncompliant endpoints. Great sieves for XEyes and the private tracking industry.
I don't understand this criticism at all; could you clarify what the issue is?
None of this - DoH, nor HTTPS-only - is required. It's not even on by default (yet). If you have some specific wishes, it's trivial to pick a different DNS system, or leave https-only off.
Additionally, DoH and https-only aren't really closed or locked in in any way. There's a Cloudflare-based DoH option that's used by default, but just as you can pick your own DNS servers, you can also pick your own DoH servers. Sure, it's new (and thus not on by default), so selection is still fairly limited. But even ISPs are starting to offer DoH, and surely others will too. There's no reason to assume that you won't soon be able to pick from several choices for your DoH setup, including local or LAN options.
In other words: all change has some friction, but if you want local control, you still have it. The only thing you really lose is the ability for networks to hijack DNS of devices that don't trust them. And that's a feature, right? If you trust the network, sure, use that DNS if you want. But if you don't, better that it's easier to avoid control by that network.
> I had to disable DoH on all 5 of my machines because it was enabled automatically.
Some googling: it's on by default in the US now, not yet globally.
> pihole supports using Cloudflare as an upstream DoH provider, not acting as a DoH provider.
That's unfortunate; are you sure it's not just a little poorly documented? At least the underlying software they link to https://developers.cloudflare.com/1.1.1.1/dns-over-https/clo... clearly mentions a proxy functionality, but maybe that's not on yet?
In any case: the point is that just as DNS proxying became easier over time, so will DoH proxying. There's nothing wrong with sticking to old tech as you chose to if it's not yet convenient for you to switch; especially if pihole supports upstream DoH, since presumably you trust the network between your device and your pihole ;-) - rendering the rest of the DoH benefits moot.
I'm sure time will resolve issues like that, but in the meantime, the vast majority of people that didn't customize their DNS and don't choose at all, or who choose based on privacy or performance, get a more private & secure option by default. I mean, its transition pains are annoying, but it sounds like a good idea in general, so I suppose the transition costs are worth it in the long run, especially since the vast majority of users won't even notice. It's annoying to be in the camp that gets to bear the burden, for sure.
My objection is that the change was made without notice or explicit permission, and changed the chain of trust. Mozilla decided that I should trust Cloudflare and that I should not trust my own network or the corporate network.
Yeah, the rollout (as opposed to the feature) sounds poor. Since I didn't experience this - you're saying you upgraded FF and without notice your DNS settings were replaced? I.e. anything only resolvable on your LAN suddenly stopped working? That's pretty annoying to debug!
Not OP. Did we ever allow browsers to set their own DNS server for plain DNS requests? (Let's ignore IE and its mess of mixing OS settings + browser settings a la "Internet Settings").
For me at least, the biggest criticism of DoH is that it's not necessarily a centralized config at the OS level and seems to be a step in the direction of the general theme "Browser As Your OS".
> It's probably not that efficient anyway to be using pihole's filtering in Firefox compared with just ublock origin anyway.
Pihole operates at the network level. It can block Windows Telemetry, ads on your Roku, smart devices trying to phone home, etc. Any guest devices that connect to your network also benefit without you having to install blockers on them.
It's not a replacement for ublock, it's used in conjunction with it.
Yea, I understand that, which is why I keep the Pi-hole.
My argument though is in the context of Firefox, in which case the benefit of the Pi-hole is dubious when you can install uBlock Origin. Pi-hole doesn't provide many additional benefits that can't be achieved with uBlock Origin's advanced filters.
Granted, someone could ask what the benefit of using dnscrypt-proxy would be at that point. In that context, you benefit from better privacy (DNS requests are aggregated) and from caching.
Your Pi-hole working on other people's devices is a bad thing. After all, there's nothing stopping you from configuring your Pi-hole to filter political content you disagree with instead of just ads, trackers, and malware.
I wasn't aware of this part of dnscrypt-proxy, thanks for sharing. I find network-level blocking and browser extensions work complementary.
In a broader sense, I guess the larger concerns is malware and trackware on various devices where DoH is used maliciously. Especially smartphones. Granted there is nothing stopping them today, but the normalization of DoH will put it in arms reach for everyone.
I just hope we don't get compromised CA because a lot of governments currently try to fight encryption.
But agreed, it is a good idea. The only disadvantage I see is that some sites might not want to pay for a certificate and don't know how to easily obtain free ones. So it might kill some sites.
Not 100% sure why this is being downvoted, I think it is true that some sites, for one reason or another, probably will not adopt ACME/Let's Encrypt...
Could this or HTTPS Everywhere warn you when a site is known for encryption stripping? I think this happens on the free cloudflare tier and we can’t determine that.
How would you detect something like that? That's at the discretion of the party running the webserver/service and it's not directly observable. Cloudflare acts as a reverse proxy in front of the real origin server (which uses https), but that affects really a lot of internet sites nowadays (Cloudflare, Akamai, aws/gcp/azure and so on). CDN-origin connectivity is also encrypted; afaik you cannot downgrade and strip ssl (at least not with Akamai)
All HTTPS does is ensure security between your client and the server with the private key of the certificate. You typically trust certificate signers (globalsign, letsencrypt, etc) who have their own policies for ensuring who gets a certificate (in LE's case you have to prove ownership of the domain)
If a domain owner gets a certificate and gives it, and a private key, to a third party, then that's their business.
> All HTTPS does is ensure security between your client and the server with the private key of the certificate.
Just for completeness, HTTPS does not necessarily guarantee you are connected to the right server. This depends on the CAs you trust.
For example, larger enterprises commonly inject their own CA into their workstations in order to prevent loss of sensitive data. This allows SSL inspection proxies to terminate all SSL connections with a valid, trusted certificate from the perspective of that workstation.
I just briefly played with it and there is an option to enable this mode globally and exclude specific sites (permanently or temporarily). Which is exactly what I needed.
Great to see this built into Firefox. I have been using HTTPS Everywhere (https://www.eff.org/https-everywhere) to achieve similar results; it won't warn you if a site is not HTTPS (I think), but it will try to upgrade to HTTPS if it can. It is available for Chrome and Firefox.
What particularly annoyed me was connecting over HTTP to sites that supported HTTPS.
I remember years ago when Facebook wasn't using HTTPS and a bunch of articles came out on how to access someone's account if you're both on unencrypted public WiFi. Since then I've been a fan of HTTPS everywhere.
IIRC this was around the time when someone created Firesheep [0] which made it really easy to steal unencrypted session cookies via packet sniffing, which sparked all those articles you are talking about.
For me at least it warns me very blatantly, and I have to be asked before going to an insecure site if it's HTTP only. Perhaps we don't have the same configuration.
HTTPS Everywhere will attempt to connect to a site using SSL, and if that times out will pop out an error message and allow you to load the site over HTTP temporarily. At least that's how it works on Firefox + latest version of the extension.
As others pointed out, you can set it up to always upgrade to HTTPS and warn you when it's not supported. But that's not gonna work all the time: some websites will have self-signed certificates (which is better than nothing I guess), some will return 403 or 404 (over HTTPS only), some will just not load after trying for half a minute.
awww crap - I've got loads of low-traffic websites that don't need https[1] that I'm now going to have to spend time sorting out certificates for.
To be honest, it's about time that cert enablement is built into all web server configs (on all OSs) as a native feature, instead of having to manually roll the config using this week's currently-preferred Let's Encrypt script.
---
[1] Yes, yes, I know everyone on HN prefers everything to be https, but out in the real world, most people don't care if all they are doing is browsing for information.
The main problem with keeping sites http is that someone in the middle can modify the content and inject arbitrary code, be it ads, crypto mining or just a redirect to a worse website.
Therefore I believe it should be a social duty to make everything https so as to ensure that we don’t create something that can be used to harm others.
I didn't use to think like this until I actually tried it out by going to a mall and doing it myself. Whoever was accessing simple HTTP websites got the very short end of the stick (metaphorically speaking).
Funny how "security experts" here complain about DKIM's accidental non-repudiation misfeature, but being forced to do all the crazy crap HTTPS requires, when all you need is content signature verification, is apparently perfectly fine with those same people. Security is becoming a field dominated by some bizarre corporate ideology.
I don't understand your point, content signature verification would have the same consequences as HTTPS for non-repudiation, no?
Also it's really different from DKIM: the problem with DKIM is that the signature is part of the email itself, so unless the receiver bothers to strip it (why would they?) it's stored forever in the metadata, even though arguably its use as an anti-spam feature stops being relevant once the mail has been delivered to the MUA. So basically every time you send an email through Gmail you effectively also send a signature saying "I, Google, vouch that h4x0r@gmail.com did send this email", and the receiving end will keep that signature for as long as they keep the email.
HTTPS session keys however are not typically saved unless somebody goes out of their way to do it. As such it's a lot less likely to be used for blackmail in hindsight. In general people use archive.org to prove that some content used to exist in this scenario, not old HTTPS session dumps.
And like for DKIM the solution is fairly trivial if it's really an issue: every time you rotate your keys (which should be fairly frequent if you use something like letsencrypt) be sure to make the expired private keys available publicly to give you plausible deniability.
I have yet to hear a good argument against HTTPS everywhere honestly, it generally boils down to "but I don't want to do it" with some weak post-hoc justification for why it's bad.
It's perfectly fine only to the people who already did it; there are visibly annoyed comments from people who haven't yet because they think it's pointless, too. The same issue exists with DNS-over-HTTPS: most people don't want to have to take new and extra steps to monitor DNS traffic at home or at work, and the prospect of having to do that work ranges from annoying to offensive (whether they end up doing it or not).
DNS-over-HTTPS is one in a long series of decisions that contradict the assumption that network operators deserve cleartext access to your traffic. I suppose we can thank the NSA’s Room 641A for inspiring the tech world to pivot to this view all those years ago. It’s finally reaching critical mass, and endpoint network operators are furious at having their sniffing/spying capabilities hindered.
Captive WiFi portals are next on that list of institutions that are at risk of failing. I can’t wait, personally.
What "crazy crap" are you talking about? I get that HTTPS might be overkill for some situations, but it's not difficult to use and every client device supports it. If you came up with your own signature-but-no-encryption protocol (maybe something similar to DKIM, actually?), nobody would support it. Even if they did, I'm sure that some people would use it when HTTPS is a better solution, just because they don't understand the tradeoffs.
A standardized overkill solution that covers most use cases is probably better than n+1 standards with different tradeoffs.
I'm not sure if that "security expert" note is pointed at me (I have not talked about DKIM at all, nor do I consider myself a security expert by any means).
I'm merely pointing out what is, in my humble opinion, the main reason why HTTPS should be everywhere even when no sensitive data is traveling. If you believe there are other easy-to-deploy-and-maintain solutions with the same amount of relevant user outreach, then by all means suggest them.
(I could of course point out other reasons, like the fact that even if your website has no sensitive data, a third party can still scrape the traffic to build a profile of the visitor, etc.)
Doing content signature verification securely would require all the same "crazy crap" that HTTPS does (certificates, CAs, OCSP, ACME, etc). The PKI is the hard part; once you have that there's very little reason not to encrypt as well as sign.
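To make that concrete, here is a minimal sketch (Python with the third-party cryptography package, purely illustrative): producing and checking a signature over some content is the easy part; deciding which public key is allowed to speak for a given domain, and what happens when it is revoked, is the part the CA machinery exists to answer.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Signing is trivial once you have a key pair...
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    page = b"<html>static content</html>"
    signature = private_key.sign(page)

    # ...and so is verification, *if* you already know which public key to trust.
    try:
        public_key.verify(signature, page)
        print("content verified")
    except InvalidSignature:
        print("content was modified in transit")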
I've seen this said in several spots and I am curious. Could you point me to a resource that teaches me how to do this with a HTTP website please? I have no experience in doing this and am very interested to learn.
Thanks in advance!
The answer to your question will depend on the set of technologies that you are using, but a great place to start is the EFF's Certbot[1]. Certbot will, for many common web servers, verify your server's domain and install a cert that will work for ~3 months. It's free and mostly automated.
I've been getting certs for all of my side projects and it takes about ~10 minutes each. Highly recommend.
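Since Let's Encrypt certs only last about 90 days, it's also worth having something that tells you when renewal has silently failed. A rough sketch using only the Python standard library (example.com is a placeholder):

    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> int:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    print(days_until_expiry("example.com"), "days until the cert expires")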
Oh wait, I didn't put across what I wanted. I meant: how do I do a simple "attack" on an HTTP website that's at a mall, as OP said? I know how to use Certbot for the certificates (thank you, DigitalOcean docs).
I was wondering. If I run a simple http page on my home network, how can I, from another device, change it or make another client get a modified page with the same address?
There is a middle ground of authenticating traffic without encrypting it (cf. IPsec AH mode). Granted, this sort of thing isn't in HTTP, but it easily could be.
> has anyone got any source where someone has been victim of such attacks?
Yes, the most well-known victim was GitHub. A malicious MITM injected JavaScript code into an unrelated non-HTTPS page, making browsers which visited that page do a DDoS attack against GitHub. Quoting https://arstechnica.com/information-technology/2015/04/meet-... "[...] The junk traffic came from computers of everyday people who browsed to websites that use analytics software from Chinese search engine Baidu to track visitor statistics. About one or two percent of the visits from people outside China had malicious code inserted into their traffic that caused their computers to repeatedly load the two targeted GitHub pages. [...]"
Kazakhstan did not do that to inject ads. I believe they wanted to block webpages on a granular level (for example, some specific blogs). Right now they block complete websites, because it's not possible to find out which URL a user is visiting.
Others have pointed it out, but ISPs (especially on mobile, at least in Portugal) often do this. They add a little banner on the top to tell you some random bullshit about their data plan or some other such shit they want to sell you.
As for attacks, others have pointed out examples as well, and I can assure you you can go far with them (and often with some simple social engineering).
> Therefore I believe it should be a social duty to make everything https so as to ensure that we don’t create something that can be used to harm others.
Those who give up freedom for security deserve neither.
Benjamin Franklin, to the legislature of Pennsylvania, on the topic of basically letting the Penn family buy their way out of paying taxes indefinitely by providing enough hired mercenaries to protect the colony's frontier during the French and Indian war.
The freedom he was talking about was a legislature's freedom to govern.
The temporary security he was talking about was hired guns.
And none of it applies to the question of the tragedy of the commons that is "HTTP protocol is default embarrassingly insecure."
Indeed, if I were to torture the quote enough to make it fit, it looks more like "Those who would give up essential Liberty (Mozilla, in this case, to design the user agent the way they think best benefits their users and the World Wide Web as a whole) to purchase a little temporary Safety (... of their user count by delaying inconveniencing users in the corner cases where HTTPS can't be used), deserve neither Liberty nor Safety." But it barely fits.
Agree to disagree. I've met enough Americans to believe that if we made voting mandatory, we'd just end up with Optimus Prime at the top of the ticket.
You can make an act compulsory on the whole population but you can't legislate duty-of-care upon the whole population.
Voting should be mandatory but the first listed option for every office should be "I approve of no candidate and think none of them should win" (the disaffected vote), and the second listed option should be "I approve of every candidate and don't care who wins" (the apathetic vote). With our current system it's impossible to tease out how many voters are disaffected vs apathetic vs simply disenfranchised (in the above scheme, those who don't make it to the polls at all), which makes it impossible to confirm or deny that any candidate has the mandate of the people.
Technically those options are trivial with a non-defective system like approval voting, but assuming you're stuck with first-past-the-post, that's pretty much correct.
Not quite, even with normal approval voting there's no way to distinguish between voters who want to say "all these options are undesirable and everything is broken" and those who want to say "life is good, I'm cool with whoever" and those whose votes get lost or obstructed. The reason why the notion of a "protest vote" is so pointless under our current system is specifically due to the indistinguishability of the first from the latter ("people didn't stay home because they didn't like the candidates, they stayed home because they're so content!"). Additionally without mandatory voting it's difficult to determine where and how voter disenfranchisement is happening, but at the same time it would be unethical to force someone to cast a vote without giving them the option to voice discontent with the options.
What definition of "approval voting" are you using? I'm talking about a ballot where each candidate is marked "approve" or "reject". With that system, "all these options are undesirable and everything is broken" is reject-all, while "life is good, I'm cool with whoever" is approve-all (and lost or obstructed is obviously no ballot at all; there's not much you can do about that).
If you're ok with fronting your sites with Cloudflare you can get "fake" HTTPS by using the flexible option (HTTPS from the client to Cloudflare, and HTTP from them to your server)
This satisfies the need to be on HTTPS, without actually having to change anything in your server.
Not saying this is ideal, but for websites that don't _need_ it it could be the best/easiest approach.
It doesn't completely block users from your site. It just makes them extra aware that any middleman will be able to see what they are reading, see anything they send and could possibly modify the traffic.
If you are concerned about the power-users who enable this being aware of these facts then maybe you do need https after all.
(Of course I suspect this mode will be made the default at some point, but that is probably at least a couple of years off.)
Take a look at Caddy server. Every site is configured for HTTPS by default with auto-renewed Let's Encrypt certificates. It even has an nginx-compatible configuration (but I think you'll like its own (really simple) config more).
> about time that cert enablement is built into all web server configs
It feels like we're slowly getting there! +1 for Caddy for making HTTPS a surprisingly pleasant process. My home router and NAS also support (and enable) Letsencrypt out of the box, which was a nice surprise.
This is incredibly bad for the health of the web. In kneejerk response to the invasion of privacy from world governments we're handing absolute control of the web back to those very governments.
Everyone being forced to get permission from a centralized cert authority that is easily influenced, pressured, etc in order to host a visitable website is the end of the web as we know it. This is a slide into a total loss of autonomy.
I give it about 3 years before all commercial browsers stop allowing you to visit HTTP sites at all and Firefox only allows if you use their unstable beta build.
At that point you won't be able to host a visitable website without getting permission from someone else. And that's the end of the personal web.
No, I'm just assuming that you're going to follow through on that thought and realize that statistically no one does that for self-signed certs except corporations, which aren't human persons anyway.
The point isn't that you can run it yourself, it's that there isn't a central one and there are dozens or hundreds of different options already, based in different countries.
Firefox HTTPS-Only Mode is not enabled by default. The mode just shows the user an interstitial warning when navigating to non-HTTPS sites; the user is not blocked from visiting non-HTTPS sites.
I don't think so. My proposed change only affects manually entered urls without protocol/schema. HTTP urls (entered manually or from links) would still work as expected, while https mode blocks them. I believe this change is small enough that they can make it the default, while http mode will likely remain optional for several years.
They can't make it the default yet without breaking a lot of things since a bunch of marketing people decided to break the security properties of TLS by using HTTP only vanity redirect domains. While I've found HTTP-only to be most common, sometimes these redirects do support HTTPS but hand out the main site certificate without updating it to include the vanity domain, resulting in a certificate error (however, this new built in HTTPS only gives you a HTTP warning in this case rather than a certificate error, unlike HTTPS Everywhere's EASE). Some sites also have HTTP only redirect from example.com to www.example.com.
I tried out the "HTTPS Everywhere" Firefox extension but found it caused me more trouble than it was worth, then found "HTTPS By Default", which suits my use much better. It automatically upgrades all awesomebar requests to https:// by default; one can manually type http:// to bypass it.
I hope they aren't going to force users into HTTPS-only in the future. Software shouldn't cut off legacy content (old websites that aren't going to be upgraded to HTTPS) just because in theory it is more secure. If someone is surfing the web as an adult, they are responsible for themselves.
Other than this, there are historical components (web firewalls) that aren't going to work anymore... so security becomes a matter where someone has an interest (certificate sellers?).
Because browsers run code, they exist in that tricky space where some design decisions have to be made for the good of the commons.
If you visit an HTTP site and get MITM'd, it's not just that the attacker can put you at risk by spoofing a credential input box; it's that the attacker can put third parties at risk by having your browser fire XMLHttpRequests as fast as it can at someone else's site to try and DDoS them.
At that point, the calculus shifts and we see a world where user-agent engineers have to make decisions like Microsoft did (to start forcing people to install security patches to the most popular OS on the planet, because we have enough evidence from human behavior to know that at some point, forcing-via-inconvenience becomes necessary).
HTTP is fundamentally broken in that it can be abused to damage the network itself, and even though it's a deeply entrenched protocol, it's one that people have to be backing towards the exits on for that reason.
I think your comment gets specific, but I was talking generally. I don't really understand if you are arguing against my opinion... I don't know what to respond. Bye.
Hot take: HTTPS-only mode is a bad idea if it is not paired with first-class support for self-signed certificates authorized using DANE+DNSSec. It just forces everyone to use broken/redundant CA model.
Can you articulate precisely the problem you believe this will solve? From my perspective it seems like it’s just making the system more fragile and harder to fix since DNSSEC requires OS updates to improve, while not meaningfully preventing state-level attacks.
> The current number of CAs does not prevent state-level attacks either.
Right, so the question is why we should put a huge amount of effort into implementing and operating a system which doesn't make significant improvements.
> DNSSEC works
I don't really get that point.
It's mostly a layering question: if a new cryptographic algorithm is released or a problem with an old one comes out, browsers can update very quickly. Updating the operating systems and network hardware which implement DNSSEC takes considerably longer. DNSSEC lingered on 90s crypto for ages, key rotations were put off for years, etc. because everyone in this space has to be extremely conservative. That has security implications as well as delaying most attempts to improve performance or usability.
Similarly, browsers can have extensive UI and custom validation logic for HTTPS. A lot of that information isn't present if you use DNSSEC without implementing your own resolver, so you get generic error messages and you don't get control over the policies set by your network administrator. This is especially interesting both as a risk if you don't trust your ISP and for dealing with compromises: if I compromise your DNS server and publish DNSSEC records with a long TTL, your users are at risk until you can get every ISP with a copy to purge the cached records ahead of schedule.
All of those issues can be improved but it's not clear that there's enough benefit to be worthwhile.
> The “root of trust” problem is hard to solve. I kinda hear the DANE guys’ argument, I’d rather trust one authority than a thousand.
This is the best argument for DNSSEC but it's not clear to me how much difference it makes in practice when you're comparing the still nascent DNSSEC adoption to modern TLS + certificate transparency which also catches spoofing and is far more widely implemented.
Kinda off-topic, but I wonder if the adoption of DNS-over-HTTPS will eventually solve the ossification problem you're referring to by moving DNS resolution to the application level.
It can definitely help since you're removing the network operator from the critical path. Large enterprises and ISPs are, not without reason, very conservative about breaking legacy clients but a browser vendor only has to worry about their own software in the release they ship DoH in (with some caveats they've addressed about internal split-view DNS, etc.) so they don't have to deal with complaints if, say, including an extra header breaks 5% of old IoT devices which haven't had an update in a decade.
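For anyone who hasn't seen what "DNS at the application level" looks like in practice, here's a rough sketch of a DoH lookup against Cloudflare's JSON endpoint (the endpoint and parameters are Cloudflare's; other resolvers expose similar but not identical APIs; assumes the Python requests package):

    import requests

    def doh_lookup(name: str, record_type: str = "A") -> list:
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        # Each answer carries the record data, e.g. an IP address for A records.
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    print(doh_lookup("example.com"))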
Finally! I've been waiting for HTTPS to be the default for a while now. From a security standpoint it's annoying that bar something like HSTS it's trivial for a man in the middle to force a downgrade to non-secure HTTP. The fix is to force yourself as a user to look for the lock symbol in the address bar, but that's terrible from a usability perspective.
However, I'm not sure whether it'd be best to make this the mode the default for everyone. I imagine regular users would be quite scared/confused when encountering such a message, and that might lead to lots of valuable (mostly older) websites still running plain HTTP to be effectively cut off.
The real and true bothersome part, is that Firefox (and others) do not seem to allow permanent exceptions.
Going through multiple prompts, each and every time someone wants to do what they want, is problematic.
I have gear on my LAN. No SSL, or self-signed, expired certs. I don't care. Ever. Why would I spend 10 seconds setting such things up? They're locked behind a firewall (a secondary firewall), have no direct network access, and can't even be reached without port forwarding via SSH.
Yet, do you think I can tell my browser "never ever prompt for this again"? Nope. Nada.
I have zero issues with safer defaults, prompting when required. However, the idea that "Firefox knows best" is the sort of asinine behaviour that causes big tech issues all the time.
My point is, this is going to be annoying... not because of the feature, but instead, if I enable it, I'll be cut off from legacy sites by a wall of forever "Do you want to really do this?" with likely 10 clicks of 'yes' and 'ok' and 'i understand', combined with never storing this as a default.
That's why it's important for features like this to be enabled in at least a couple of major browsers at roughly the same time. That way, users will blame the website operator instead of the browser when they suddenly can't access an insecure website.
I wonder how it will work with websites like http://neverssl.com (which helps me log in to some WiFi portals; HTTPS Everywhere shows the prompt for a temporary exception).
An alternative I use is http://captive.apple.com (other OS vendors have their own). Which may have a higher chance of being detected by the portal (more likely to be white-listed) and triggering the prompt correctly.
Frustratingly it doesn't always work that way. One example I have seen that is just bizarre is Qantas inflight WiFi. It actually allows captive.apple.com to bypass the captive portal, so your iPhone, iPad or Mac thinks it has internet access. So you try to navigate to a page or use an app and just hit HTTPS certificate errors! You have to think of some other site that is HTTP only, or get the information card and enter the address it tells you to use to log in.
It's crazy, because somebody must have had to configure something to explicitly let that through (not understanding the purpose of it?) and it just completely breaks it! I've tried to leave feedback (there is a link from the portal page) that they've screwed it up, but it hadn't been fixed as of the last flight I went on.
It might be forging responses from captive.apple.com and not actually sending those out to the internet. If you set up your own intercept that responds 'Success', iOS will assume it has internet as well.
Like the sibling post said, you're probably seeing certificate errors caused by the portal, not traffic being allowed to captive.apple.com.
With http, a captive portal system will intercept your connection and redirect you to the portal authentication page. Most modern devices deal with it automatically by checking those plain http urls when the network comes up. For example, I think the way it works on iOS is that when you connect to a WiFi network the OS tries to hit http://captive.apple.com which triggers the redirect and prompts you for authentication.
With https, there's no way to have a valid TLS certificate for a random site the user is connecting to (ex: captive.apple.com), so you get a TLS error if you're attempting to connect to an https site while the portal is trying to redirect you for authentication.
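The probe itself is simple enough to sketch; this mimics (as I understand it) what the OS vendors do: fetch a known plain-HTTP URL and check whether the expected body comes back or whether a portal intercepted the request. The URL and expected text follow Apple's convention; other vendors use their own endpoints.

    import urllib.request

    PROBE_URL = "http://captive.apple.com/hotspot-detect.html"

    def behind_captive_portal() -> bool:
        try:
            with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return True  # no connectivity at all, or the portal broke the request
        # An untouched response contains "Success"; a portal returns its own page.
        return "Success" not in body

    print("captive portal detected" if behind_captive_portal() else "online")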
If I go to https://neverssl.com/ I get a warning explaining that this site doesn't have a certificate for neverssl.com but only for Cloudfront (presumably where it's hosted)
But if I try to go to http://neverssl.com/ then I get the message explaining that the HTTPS site doesn't work, do I want the insecure HTTP one instead?
If I specify http://whatever.com in the address bar, or if I follow a link to http://whatever.com, I'd expect it to attempt to connect to port 80 on whatever.com, and not redirect to https unless the page responds with a Location header
If I type "whatever.com", I'm happy with it to try port 443 first
I'm not sure whether an http/80 page should at least be HEADed to see if there's a redirect to https/443 before throwing up the "this is not secure" warning.
I can understand your use case, but I know a lot of people (often those who use computers infrequently) type out the full URL all the time. They don't know what HTTP or HTTPS is, they don't realise you can omit that part. They just want to access the website.
For those people, it makes sense that typing "http://" would take them to the "https://" site if available. Although they did specify HTTP, it isn't necessarily what they actually wanted.
I think the use case you describe (whilst valid) only applies to a relatively small pool of people. Most people don't really understand HTTP or HTTPS very well. They know it's part of the web address, and some know that "https://" is "secure", but that's about it.
I think it makes sense to direct people to the secure version of the site as much as possible, whilst of course providing a mechanism to switch to the HTTP version if necessary.
If the server says that the thing on port 80 is better served from port 443, then it can issue a 301 Moved Permanently redirect (plus an HSTS header on the HTTPS responses to make it stick). If the server offers different content on port 80 and port 443, then it can do that just fine too.
The browser should not try to second guess my explicit instructions.
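For what it's worth, the server-side behaviour being described is tiny. A rough sketch of the port-80 half in plain Python (the HSTS header itself belongs on the HTTPS responses, since browsers ignore it over plain HTTP):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            # Send everything that arrives over plain HTTP to the HTTPS origin.
            host = self.headers.get("Host", "example.com").split(":")[0]
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()

        do_HEAD = do_GET

    # Binding port 80 usually needs elevated privileges.
    HTTPServer(("", 80), RedirectToHTTPS).serve_forever()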
Many users don't know the difference between http and https, so if you're trying to get them redirected to a captive portal page it's a lot easier if the default is http.
That kind of sucks, because if a user misses the initial OS redirect for a captive portal login, the easiest way to get them back to the authentication page is to have them hit an HTTP site. However, things like HSTS make that really hard to do without having a site that does NOT use HTTPS, and defaulting to HTTPS is like having HSTS triggered on every site.
Having to tell them to click through a non-https warning is almost as bad as having to tell them to click through a TLS warning.
Captive Portals are the thing that sucks in this situation though.
If you offer "free" WiFi behind an annoying Captive Portal chances are I'll just use my 3G service if it works. Now, in my mind do you think I consider you offered me "free" WiFi? No, it was too annoying to use. So your competitor that didn't bother with a Captive Portal site and just posted their WiFi password on a chalkboard - they have free WiFi and you don't.
I'll have to remember that for next time I get "wireless on the train doesn't work, I get some security error" via SMS.
Last time I pointed them at one of my sub-domains that still serves plain HTTP to bring up the captive portal (which wasn't trying to charge, or apparently even advertise, the network just insisted you hit it at least once to be told "Hello!" and presumably have your MAC added to the whitelist for a time).
The name "neverssl" might confuse non-techies though. Maybe I'll register something like iswirelessbroken.com for doing the same thing.
Captive portals are not unique to wireless networks so if you are gonna register a new name you might wanna go with something more generic (like "isnetworkbroken.com" or something like that).
Firefox has its own such domain, detectportal.firefox.com, which would presumably be excluded. But otherwise, it looks like the user will just have to turn it off for these kinds of sites. The same goes for things like browsing APT update servers that don't use HTTPS by design.
Actually the future is ambient network access. But on the way there, the likely pathway is larger and larger federated network authentication. Most of the world's higher-education students and staff are enrolled in eduroam, so it doesn't matter whether they're in a classroom in Tokyo or London: the federated system concludes they are a legitimate user somewhere and so they can connect here. In these federated systems there's no use for a captive portal, since it could not safely achieve federated authentication, so there isn't one.
I personally access 10.0.0.1 and that works at numerous places with wifi portals. Especially useful when my device/browser doesn't automatically detect that there is a captive portal.
This feature is not enabled by default in Firefox 83.
To enable it, make sure you have Firefox 83 installed. Then go to about:preferences#privacy and scroll down to the section "HTTPS-Only Mode" and make your selection.
This is a great step, but I wish browsers would allow you to set domains that are considered to be secure origins in all cases. I have a decent intranet with transport security guaranteed by VPN, but because it isn't "HTTPS" I can't access tons of browser features.
Have a look at Let's Encrypt DNS challenge. I created a DNS wildcard certificate for a subdomain I own and use it for all my internal domains. A great way to get HTTPS on non-public networks.
HTTP over VPN is still weaker than HTTPS over VPN. For example HTTPS also handles authentication which HTTP doesn't. If you're outside of your VPN, a MitM could redirect you to http://my-internal-domain.example and resolve its DNS to an attacker's website. Your browser would not understand the difference between this and your actual website in your VPN. It would send all the site's cookies to the evil website, and if Service Workers[1] worked over HTTP, this would actually be a way to completely compromise an internal HTTP website. So it's important not to whitelist such HTTP sites as if they're secure.
This is generally the best way, as it allows you to use things like client certificates or some other form of authentication to enforce AAA, and reduce the requirement of using VPN (which itself increases the security as it reduces the number of holes in your network)
Of course it's still insecure from your proxy to the device, but that's a more manageable risk.
Also note that with DNS01 challenge you can add multiple wildcard domains under one certificate. There is a limit of total domains in a certificate but I still find it interesting and helpful.
I've set up an internal CA using minica [0] and trusted that CA in Chrome and Firefox with success. Each host got its own key, and I'm not even using a proper DNS server - I use Avahi, so all of my hosts are available as somehostname.local on all clients with Avahi/Bonjour installed.
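One small gotcha with a private CA like that: non-browser clients need to be pointed at it too. A minimal sketch (Python requests package; the CA path and hostname here are hypothetical):

    import requests

    # Hypothetical path to the minica root certificate.
    INTERNAL_CA = "/etc/ssl/local/minica.pem"

    resp = requests.get("https://somehostname.local/", verify=INTERNAL_CA, timeout=5)
    print(resp.status_code)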
HTTPS guarantees a higher level of security than "intranet", it works against on-path adversaries and provides end-to-end confidentiality & integrity & authenticity, plus provides forward security.
As a developer I likely won't use this feature much, considering most of our internal development sites are http only. For the general public it might be useful though, especially the auto-upgrade feature, protecting them from the lazy network operators that didn't add a proper auto-redirect.
According to another comment, you can still allow certain sites through http, so your Internet dev sites are still fine but the global sites will be blocked by default
I haven't tried out the release UX yet, but if it is just a couple of clicks inline when you first visit, this doesn't sound that bad. I would go through it for the added feeling of privacy and security whenever I am on a public connection.
> As a developer … most of our internal development sites are http only.
As a developer, should you not work against an environment closer to production behaviours? Otherwise you might miss performance issues (due to different caching behaviours between http/https) or other problems until your code is released.
All our internal services are HTTPS. Then if a new hack is found to weaken wireless protocols we have that extra line of protection. Security in depth.
We serve strongly regulated industries and are subject to in-depth audits by clients on occasion, so perhaps my level of paranoia would be less warranted elsewhere. I'd still HTTPS everything though, even if the potential payoff is small because the required effort is too.
Public wildcard cert for centrally managed things.
Of course only a trusted few have access to the private parts of the certificate that covers centrally managed things. For local dev instances I suggest having a local-only meaningless domain and a wildcard off that.
If we were using per name certs and name leaking were a significant issue we could instead sign with a local CA and push the signing cert out as trusted to all machines we manage.
I've used this for a few months now. It upgrades non-HTTPS connections on secure pages automatically. Very useful. Even big sites like Microsoft and Google Images serve things over HTTP.
I was interested in this as well. Unfortunately, excluded sites are configured using PermissionManager <https://bugzilla.mozilla.org/1640853> which stores data in permissions.sqlite.
I do worry about the sort of monoculture with Let's Encrypt. A second and third provider that do the same thing would reduce the blast radius for potential outages. Grateful for LE, but there's a lot riding on it. Similar for Cloudflare.
https://zerossl.com/ does offer wildcards, but it's slightly less compatible: you need to sign up with a web form and provide your credentials via the optional ACME EAB (External Account Binding) feature, so not all tooling will support it.
That's good news. I know there was some flip-flopping over this because of covid and gov sites requiring old encryption. Great to hear that it's now done with.
Once this sort of thing is widely accepted, we'll see various blogs and websites silenced by having a certificate revoked. Not right away but soon enough.
It's a very exciting development. It's managed to use the geek "Everything has to be like this!" fanaticism to drag in a mechanism of control.
I wonder which of the Four Horsemen it will be used against first.
"When Firefox autocompletes the URL of one of your search engines, you can now search with that engine directly in the address bar by selecting the shortcut in the address bar results."
HTTPS is not secure because it is centralized and it does not protect against MITM.
HTTP is the foundation of our civilization; it will never go away, no matter how much certificate sellers try.
But I would go one step further and point out that HTTP can be made secure manually and selectively, so that you only secure the things that need securing!
HTTPS wastes energy by encrypting cat pictures, and we don't have that much cheap energy left!
But don't worry, this will not kill HTTP, only Mozilla/Chrome. Chromium will always allow adblockers for free, and HTTP, because if they remove it, I'll fork it and add it back in, even if it takes 1 day to compile!
To me, HTTP is the wrong target. It would be much more interesting to replace IP, like Yggdrasil does (and I think gnunet, cjdns, hyperboria & others).
If your IP is a cryptographic identifier (a rough sketch follows the list below):
* It cannot be forged
* Anyone can generate a new one on-demand
* Every packet is authenticated, every packet can be encrypted
* TLS becomes redundant
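Very roughly, and not the actual cjdns/Yggdrasil derivation (their schemes differ in the details), the idea looks like hashing a public key into an address in a reserved range, so that the address itself is bound to the key:

    import hashlib
    import ipaddress
    import os

    public_key = os.urandom(32)               # stand-in for a real X25519/Ed25519 public key
    digest = hashlib.sha512(public_key).digest()
    addr_bytes = bytes([0xFC]) + digest[:15]  # force the address into fc00::/8
    print(ipaddress.IPv6Address(addr_bytes))  # an address you can prove you own with the key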
However, the DNS part remains a hard one. How do you securely link to websites you have never seen? Pet names seem like a way to do so. Asking users to type IP addresses isn't really an answer, I think, but I don't know if there are a lot of "basic" users who type URLs in nowadays; they all seem to rely on Google providing the right website anyway, or on the web browser itself.
It's not like DNS is also our single source of trust nowadays, but at least certificate providers are competent enough to make sure names are resolved correctly.
One option would be to make signed DNS records over a DHT: the root authority "." signs "com", "net", etc, that sign "ycombinator", etc. Publish to DHT, hash-indexed.
Of course, point-to-point connections have their weaknesses as well, it might be interesting to migrate to something like beaker browser (html on top of hypercore, formerly DAT, kind of like mutable torrents in a DHT). At the end of the day, the core issue is: migrating users is difficult if the benefits are not immediately obvious.
And yes, massively adopting anything else would literally "kill the old web", in the protocol sense. In the community or content sense? Not so sure.
IP is even harder to replace, it is completely fossilized by this point and that is a feature. The challenge is not to replace the pipes but to build something meaningful using the pipes we have without too much added complexity:
I built a realtime MMO stack using only HTTP/1.1 for networking.
You can always build a transition infrastructure on the top of IP, like https did by coexisting with http, and a bit like gemini does (it has https://gemini.circumlunar.space/ at least).
Then build some links without it, and have the compatibility layer the other way. If there are compelling reasons to use the new protocol, it might gain some traction. Maybe we'll turn off IP, but probably not within a century, unless some organization acquires a tremendous amount of control over the Internet.
WebExtensions have permissions and I doubt the EFF requests passwords and history for this extension. It’s also open-source, and although installing it from addons.mozilla.org could introduce some sort of MITM opportunity, as a recommended extension Mozilla puts it through a review process, so it’s about as tame as an extension this capable can get. But yes, it is always nice to reduce extensions installed.
I already trust Mozilla with everything. I do everything on my browser. It is always better to reduce extensions. I look forward to a day when I have nothing but uBlock Origin in my browser
> In summary, HTTPS-Only Mode is the future of web browsing!
It has certainly seemed like HTTPS is the future of the web for the last few years. I love this HTTPS-Only mode and wish it would become the default (with better downgrades and messages for users who may not understand what it means). With the number of HTTP-only sites dwindling, this could result in a faster experience for sites that do not want to (or haven’t figured out how to) use HSTS or HSTS Preload (no redirects from HTTP to HTTPS) and for users who haven’t heard of the HTTPS Everywhere [1] extension.
There are a lot of confusing comments in here. Maybe I'm in the minority but I use chrome, which seems to default convert to https on any site that supports it, and will provide visible warnings when the site doesn't support https.
Also man in the middle attacks seem massively overblown. If you are sitting at home on your private network, the likelihood of a man in the middle attack is stunningly small, such that it's completely irrelevant - especially in regards to the likely trivial content being viewed over http.
There's a phenomenon I observe quite regularly in tech. A problem exists and creative people develop an innovative solution to said problem. The solution then becomes popular and a singular goal of uncreative people who deploy said solution everywhere and push it to its logical extreme.
I remember seeing this in the mid-2000s when HTML tables were shunned in favour of "divs". I saw people reinventing tables using divs and CSS to display tabular data. Completely missing the point, of course.
This is an example of that for me. How can I possibly trust every single website I visit? It means nothing to connect to a news website, say, and see the "green padlock". Who am I trusting exactly? That I've successfully connected to some load balancer that is operated by "super-trustworthy-tech-news.com"? What's the use in that? Am I supposed to trust them more than some man-in-the-middle just because they own a domain name?
But maybe it's for privacy? If you want privacy you use tor. HTTPS does nothing for privacy when it's the same tech giant on the other end that is collecting all the data. It just means that said tech giant gets exclusive access to that data. Great.
All this does is train people to not care about security and to just trust us to do the right thing because they are too stupid to get it. Sooner or later there will be an event where a government compromises a CA. Bad luck. Some Americans already decided this was a solved problem and that this could never happen.
Couldn't agree more. Benefits of using HTTPS for most websites are doubtful, but the costs are real, in overhead and caching problems. We don't need the same level of security everywhere. (I'm not even sure we need that much "security" in general, but that's another topic.)
It's sad to see Firefox continuing to bother itself with solving non-problems while serious bugs go uncorrected for years.
HTTPS gives you three guarantees:
1. You are talking to the domain that you think you are.
2. No one else can see the traffic in transit.
3. No one has modified the traffic.
It does not provide any guarantee around the trustworthiness of the domain itself. (Well EV Certs try but as far as I am concerned that is worthless.)
1 and 3 are not helpful in your example of "Am I supposed to trust them more than some man-in-the-middle" but it does help once you establish other trust in that domain. For example a friend sent you the link, or you start using the site regularly.
However, personally, even just 2 is a tangible benefit when I am using public connections. I very much like the idea that my browser will not send out unencrypted data without my explicit approval.
Depends on the content of the website. A lot of websites wouldn't benefit at all from that assurance. A lot of blogs and random personal projects come to mind. My own blog does not care about authentication or MITM at all. It would be just an unnecessary complexity without any benefit.
> Am I supposed to trust them more than some man-in-the-middle just because they own a domain name?
The green padlock will not turn any unreliable fake news site of your choice into a trustworthy outlet, but it does make some guarantees about it being the same site as yesterday (barring security leaks or a missed DNS renewal).
AFAIU the elephant in the room is that if your DNS resolver is malicious and points all domains to a malicious IP, then HTTPS is completely useless.
I was oversimplifying my limited understanding. What I believe should be true is that you are connecting to a server that:
1. managed to obtain a valid certificate from some recognized authority
2. managed to steal a valid certificate for the domain in question from another server
3. managed to convince enough DNS resolvers to point to their IP for a given domain (so to get your traffic and/or pass LE challenges)
and/or other conditions. From an operational perspective it says very little, especially considering the third case where any public wifi network has in most cases total initial control over your DNS traffic.
Tables are better as well in the sense that they are a higher level representation than divs. The problem was that people were then using tables as a way to layout pages rather than to use them to display tabular data.
I think it is deeper, the problem is that HTML tables are serialized in row and columns separately, so for example if you wanted a cell to be 2 rows tall and 2 columns wide there wasn't a local change that could allow it.
To my understanding CSS Grid is meant to solve this
Because that's a lot of hidden UI for something people rarely want to do (use an insecure version of a website while a secure version exists), that might not work, and that they can still do relatively easily by typing "s".
A long time ago I suggested a new uri prefix - "secure://" - that would be a synonym for "HTTPS-only". If you visit a secure:// link, every single page load in the session would require strong encryption, secure cookies, etc.
The idea was to allow http if needed, and alternately allow strict https if needed, in a backwards compatible way (visiting a https:// url would work as before, but visiting secure:// would trigger the strict security for the rest of the session). This way you have the best of both worlds and the user (and server) get choice/agency.
The purpose was to stop MITM. The ability to MITM only requires blocking port 443 and letting the browser fall back to 80; this works even against HSTS because most people will just try other URLs until one works. So you need a way to avoid MITM at least for some specific requests. Banks, e-mail providers, etc would say "type in secure://mybank.com in your browser for strong security".
Another option was a "secure only" button on the browser. It seems they're moving towards this. They've buried it in preferences, which hopefully they'll change to the front UI. But I still think the secure:// links are easier for laypeople.
A lot of things would be better in theory, like adopting PAKE schemes (like "OPAQUE") that could perform two-way authentication and key negotiation over an insecure connection by just displaying a login prompt.
As always, the issue is adoption. What good is a solution if no browsers implement it? Catch-22, which is usually broken when a giant (google nowadays) decides to break it. And they need incentives to do so. Which is why we need the non-profit Mozilla giant. badly.
Why deprecate HTTP in the long term?
It should simply ask for an annoying user confirmation that takes up the whole screen.
Removing my ability to visit the old web is pure nonsense.
Where you see security, I see control.
A way to commoditize the launch of ideas and information.
Maybe 30 years from now, they will prohibit any type of communication that is not properly licensed and standardized, as they do with commercial imports and exports. In Brazil today, when you buy a product from another state of the federation, the tax goes partly to the origin of the shipped product and partly to the destination where it is purchased.
I guess, but only in the way your power provider has veto over the content. The CA doesn't care about the content, only about you proving that you own the domain.
Like CAs, the power company can be controlled by the government.
Let's say governments ban youtube-dl. The prosecution gets a court order to order CAs to not renew youtube-dl.org. If they're using a Let's Encrypt style 30 day certificate, youtube-dl has 30 days to comply or be soft-banned from the internet.
Why go to the CA when they can just order the domain itself to be shut down? If there's a court order against you, your site being HTTP won't save you.
Not every web service is easy to set up with HTTPS as a simple Let's Encrypt service. Take a game. It dynamically balances between servers rented and destroyed on the fly. It needs a wildcard certificate (obtained via a DNS challenge). Then all the sub-servers need to have that wildcard certificate.
In conclusion, deprecating HTTP makes it harder for people to get started on the web. How are you going to get certificates for an IP address's control dashboard, after all?
I have a side project that plays web radio streams. A lot of them don't have HTTPS; some links are even plain IP addresses, and Chrome won't allow me to request them from my HTTPS site. In the end it forced me to create a stream proxy that I have to host...
> Just give each server a separate domain and certificate as you create them. The matchmaking algorithm returns a url pointing to the server.
Do you mean a separate subdomain? (A separate domain would be very expensive.) I would need to figure out how to generate and provision certificates, in that case.
A game absolutely should be using HTTPS anyhow, for its own security.
But I'd say that anyone that has an operation so big that it dynamically creates and destroys servers to balance load should probably already be paying for their own wildcard cert anyhow.
> A game absolutely should be using HTTPS anyhow, for its own security.
The game has no log-in or sign-up; no account system. The only potentially sensitive data sent is the nickname the user enters.
> But I'd say that anyone that has an operation so big that it dynamically creates and destroys servers to balance load should probably already be paying for their own wildcard cert anyhow.
The game is free and ad-supported. Dynamically creating and destroying servers is a basic requirement.
Good feature in general. A few old sites will get a bit more annoying to use because that fact is now pointed out to the user. But otherwise no impact for users.
I hope it does not get too annoying for backend developers for running things locally because locally you typically don't set up https.
- DuckDuckGo Browser — for curious reasons if target website is not working in Termux/Links2 & Privacy Browser.
P.S. In conclusion, I'm happy to see Firefox is still growing, but every release just brings many "hardcoded" features that, in my opinion, should not be "hardcoded" in a free & open-source browser.
Like others, I'm not exactly inspired by this feature. I'm an advocate for HTTPS-everywhere, but I think we're quickly moving past the point of usefulness for most people.
On a personal level, as a developer, I actually find the ban on mixed connections on a web page much more frustrating. It's easy for me to get a cert for nginx for my side project. It's another thing entirely to figure out how to give my application server access to the certs in the "right" way so that the application can terminate wss:// connections. I have to figure it out, of course, because firefox will refuse to connect a ws:// connection on a https page.
I don't understand your problem. Nginx should proxy all connections including websocket ones. Just don't expose your application server and use nginx as a reverse proxy.
I don't understand my problem either - if I did it wouldn't be a problem! I'm not here fishing for tech support, but this is a real thing I'm encountering and I don't know that expressing disbelief feels productive?
The point I'm trying to express is that giving a cert to your web server is often only the first step in a relatively complicated process of securing all your assets. I wish browser makers would ask about blocking insecure connections instead of doing it by default.
If you're interested in the kind of thing I mean see below:
So, for example: I'm running a Quart server on Hypercorn for a side project. Just giving nginx the certs will not, for reasons I don't understand, allow a wss:// connection to successfully connect (it returns a 400). The developer conversation around this[1] suggests giving the certs to the application server. I can confirm that this works, but again I don't understand why.
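For anyone hitting the same wall, the "give the certs to the application server" approach boils down to handing the app an SSLContext. A rough sketch with the Python websockets package (not the poster's Quart/Hypercorn setup; the certificate paths follow the usual Certbot layout for a hypothetical example.com):

    import asyncio
    import ssl

    import websockets

    async def echo(ws):
        # Echo every message back; stands in for real application logic.
        async for message in ws:
            await ws.send(message)

    async def main():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(
            "/etc/letsencrypt/live/example.com/fullchain.pem",
            "/etc/letsencrypt/live/example.com/privkey.pem",
        )
        # Passing ssl= makes this a wss:// endpoint terminated by the app itself.
        async with websockets.serve(echo, "0.0.0.0", 8443, ssl=ctx):
            await asyncio.Future()  # run forever

    asyncio.run(main())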
One thing I've noticed in HTTPS-only mode is that sites where I used to lazily type "foobar.com" (resolving to "www.foobar.com" over HTTPS) do not necessarily auto-redirect anymore, instead displaying the scary message first. Whereas typing "www.foobar.com" directly does not trigger the message.
I’m not sure where the auto-switch from "foobar.com" to "www.foobar.com" occurs; if it’s in the browser, ideally Firefox would attempt this auto-correct first and try the HTTPS connection to the corrected location, to minimize the chance of triggering a warning.
The redirect gets sent by the server. foobar.com and www.foobar.com are technically different domains, even if conventionally one should always redirect to the other.
This is great and all, and cheers to the HTTPS Everywhere fans throughout this thread, but none of you are answering the question: "can I uninstall HTTPS Everywhere now as a result of this feature shipping?"
Do you want the most HTTPS connections you can get with almost no chance of inconvenience? Then use HTTPS Everywhere in its default mode. If you don't mind seeing that you are about to connect to an HTTP site and clicking OK if you want to continue, then yes, you can get rid of HTTPS Everywhere. In this case you will get occasional shocks like "Why is my bank's website giving me an HTTP-only warning? Oh, it is because there is an HTTP-only redirect to the www domain."
I wonder if that will get an update next January. I'd also be more interested in a list of popular sites that don't support HTTPS at all, i.e. the sites that will trigger a warning prompt under this new HTTPS-Only Mode.
"we expect it will be possible for web browsers to deprecate HTTP connections and require HTTPS for all websites"
Think about the implications of this. It will be impossible to host any web content unless you get blessed by a well-known CA (packaged in the platform/browser CA store).
That's why we have Let's Encrypt, right? Yes, thanks to them.
Now imagine a future where Let's Encrypt goes away, for whatever reason.
Now it's impossible to host any web content without getting approved by a commercial CA.
Firefox is my primary browser on desktop and Mobile.
On Windows 10, I use Cold Turkey to block distracting websites.
I often have issues with Firefox bypassing the blocks on Windows, ignoring the Hosts settings.
On Android, Block Site addons are not compatible with the new Firefox for Android.
I love Firefox, but some recent changes make me feel that I have less control over my browsing experience.
I hope they don't make things 'default' for our 'protection', rather leave some things to the user to decide as per their preferences.
An insecure connection can be trivially eavesdropped (e.g. by your network peers, router, ISP or intermediate hops). Much of the time this will include identifying information such as your IP address and browser cookies. Consider reading medical publications or other personal topics and having that logged by an unrelated third-party. TLS adds significant privacy to your browsing habits, even when not transmitting data, per se.
Edit: and, as others have said, the content can be modified by a man-in-the-middle attacker, which can inject fake content or malware.
No, you're still susceptible to MITM attacks. I could imagine an election info site, that said the election was November 3, being MITM'd by an adversary who changed the page to say November 4, causing many voters to miss the election by a day.
Just switch to a different provider for your certs. The startup I previously worked for switched from LetsEncrypt to AWS-provided certificates since we were already using their ALBs.
This is a "rewriting reality to fit our agenda" kind of a post.
> The majority of websites already support HTTPS
When I run a webserver on a machine I just set up, it has no certificate, certainly not one signed by anybody else, and there's no reason I need to be forced to use encryption with it.
> and those that don’t are increasingly uncommon.
False. Although - for fashionable Silicon Valley companies, "most websites" probably means something like Facebook, Google, Amazon, Wikipedia and a few others.
> Regrettably, websites often fall back to using the insecure and outdated HTTP protocol.
HTTP is "outdated"? "Fall back"? ... Seriously?
I guess we're just lucky FF's share has dropped so far that we shouldn't worry about this stuff. I just hope other browsers don't do this (although - who knows, right?)
How many people actually understand what security guarantees HTTPS provides? I remember clicking through the warnings not knowing what they meant, or thinking there was nothing I could do about it, or thinking no one could REALLY snoop except in theory.
And that is the problem.
HTTPS warning messages need to be reformed. How about: "This website and its data are observable and can be spied on by a third party along the network. Please be careful when entering data."
I agree. HTTPS is great, and definitely needed for a lot of things. But I don't need my cat pictures encrypted, I don't need lots of things encrypted, and frankly, I don't want things encrypted when it's not required; it's a waste of resources, both processing and network.
Then there is the case of all the old computers that either lack the processing power or support for modern algorithms.
If a page doesn't use HTTPS, even if it is cats, you cannot trust that the traffic has not been modified in transit. You try to load a cat but a network attacker can add malware or mining code or a worse exploit.
Every page needs HTTPS because you can't trust any content sent to you over HTTP. You don't know if it's "just a cat picture."
They don't intentionally execute any code, but they do sometimes have a vulnerability that allows memory corruption in a way that can be exploited to run attacker-provided code.
If you're not familiar with this omnipresent class of exploit, I wouldn't expect many people on HN to take your advice seriously on whether a security measure is needed or not. Even if your comments were underlined and flashing on the page instead of grayed out.
I'd be more receptive to this if ISPs weren't snooping on traffic and selling their customers' browsing history. As long as we have to operate under the assumption that every scrap of data we send or request will be picked apart and used against us whenever possible, I'd rather encrypt everything and have a little less to worry about.
Sorry to burst your bubble, but intelligence agencies are going to be monitoring your traffic regardless. The Internet is a global network; laws in specific countries or economic zones don't affect data in transit through other parts of the world.
When there's executable code there needs to be encryption.
JS
HTML
CSS
WASM
etc
...all need to be tamper-resistant.
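A related tamper-resistance mechanism worth mentioning is Subresource Integrity, though it complements TLS rather than replacing it: the page that carries the hash still has to arrive untampered. A small sketch of generating an SRI hash (the file name is a placeholder):

    # Sketch: compute a Subresource Integrity (SRI) hash for a script so the
    # browser can refuse a modified copy. "app.js" is a placeholder file name.
    import base64
    import hashlib

    with open("app.js", "rb") as f:
        digest = hashlib.sha384(f.read()).digest()

    tag = (f'<script src="app.js" '
           f'integrity="sha384-{base64.b64encode(digest).decode()}" '
           f'crossorigin="anonymous"></script>')
    print(tag)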
Processing power, meh. More of an issue is older devices not getting the updates to software for the newer algorithms, and not getting the updated certificates. I got rid of a perfectly good tablet for just this reason. A bit slow perhaps but workable.
You don't need your cat pictures encrypted per se, but you do want to ensure that your Webportal cannot MITM your communications with catpictures.com and inject malicious javascript into the webpage.
In an adversarial situation, you also want your opponent to spend time and resources storing or cracking gigabytes of cat pictures for every kilobyte of email they get.
Here is the thing. If you enter domain.com into the address bar of your browser, the browser will go to the HTTP site first, unless the domain is on the HSTS preload list (or you've visited it before and it set an HSTS header). Your first visit to a website is not secured by HTTPS. So lots of sites do an HTTP -> HTTPS redirect, which means an attacker can man-in-the-middle the plain-HTTP request and the HSTS header never gets seen in the first place. HTTPS is significantly less effective than it should be.
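A minimal sketch of that setup (standard library only, placeholder domain): the Strict-Transport-Security header is only ever sent on the HTTPS side, i.e. after the plain-HTTP hop that a man-in-the-middle could already have tampered with.

    # Sketch of the usual redirect-plus-HSTS arrangement (placeholder domain).
    # The plain-HTTP server below is exactly the hop a MITM can intercept; the
    # browser only learns about HSTS once it reaches the HTTPS side.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        """Port 80: blindly bounce everything to HTTPS."""
        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", "https://example.com" + self.path)
            self.end_headers()

    class SecureSiteHandler(BaseHTTPRequestHandler):
        """HTTPS side (TLS socket wrapping omitted to keep the sketch short)."""
        def do_GET(self):
            self.send_response(200)
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<p>hello over TLS</p>")

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()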
What makes it more expensive? A certificate is free (with Let's Encrypt, or self-signed), the performance impact is negligible and there's a clear reason why everyone should be using it.
It does add a "tax" of sorts, in the form of time or attention that must be paid to keep a website up. You can't just sling some files in a directory and be done -- you have to pay for certificates or pay (in time and executable capability) to keep LetsEncrypt up to date (a sketch of what that recurring chore looks like follows below).
And, as wonderful as LetsEncrypt is, it's not forever. At some point, they're gonna' get tired of messing with it or it will get taken over by private equity (see .org) and for whatever reason, it won't work any more.
And sure, that's always been true, new stuff obsoletes old and things fall by the wayside. But my current browser can access modern websites as well as sites from the dawn of the Web. But Firefox 85, 87 or 90 will probably make HTTPS mandatory -- and that amazing continuity is gone.
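To make that renewal "tax" concrete: something has to keep checking that the certificate hasn't quietly expired -- an ACME client, a cron job, a monitoring probe. A rough sketch of such a check, with a placeholder hostname:

    # Rough sketch of a recurring certificate-expiry check (hostname is a
    # placeholder). Something like this has to run somewhere, forever.
    import socket
    import ssl
    import time

    def cert_days_remaining(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        # ssl.cert_time_to_seconds parses the certificate's "notAfter" string.
        return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

    if __name__ == "__main__":
        print(f"certificate expires in {cert_days_remaining('example.com'):.0f} days")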
You cannot say that certificates are reliably free (especially in the long run), if there's only one entity providing them and that entity is dependent on corporate sponsors.
Tons of major websites rely on Let's Encrypt, so I think it's fair to say that they're probably not going anywhere soon. Free certificates are now standard on services like Cloudflare and Google App Engine. I think that AWS can generate free ones too.
It's part of the culture of making everything web terribly complicated, which has resulted in the death of all but three web browsers.
It's now practically impossible to write a new web browser from scratch, unless you're a mega corp with endless resources and a grudge against Google, and they're still adding more complexity every day.
The web started out as a very optimistic project with no security and a lot based on trust. As it evolved a lot of security had to be bolted on which now makes it a bit more complicated than in the early days. But what's the alternative?
Of course a perfect protocol where nothing needs to be added later would be great, but that's not very realistic.
The problem isn't just HTTPS, it's the ever-expanding array of APIs and technologies that "must" be implemented to be a "real" or "complete" browser. Even Firefox, which has been around for a long time and has a fairly large mind share, is at best an afterthought in many web projects.
The number of APIs that need to be implemented to be considered even a basic web browser is so huge that it's not an approachable project for almost any organization, and for an individual it's simply not possible.
Gemini is a monoculture. They almost say it in the FAQ:
> 2.5 Why not just use a subset of HTTP and HTML?
> [...] The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. [...]
The protocol itself has very strong opinions on what is allowed and what is not. It is simple but mandates TLS (so, not simple), because authors think encryption is important but other things are not. It is also deliberately non-extensible.
Not saying it is a bad thing, I mean, they didn't hurt anyone. But that protocol is clearly intended as a rallying point for like-minded individuals rather than something for everyone to use.
Therefore version 83 is 3 versions ahead of version 80. Note that "version" in this context is shorthand for "major version". It does not include minor patches that only fix a bug or security issue.
Each new major version of Firefox comes with new features and may occasionally deprecate or remove old features. They are currently released roughly every 4 weeks.
As long as there is an escape hatch this is great. I absolutely loathe having to set up HTTP-to-HTTPS redirects, because it means the first visit for many users is completely unsecured. HSTS preloading is a hack. Why not just connect to HTTPS first?
As other posters mention, you can disable this when you need to, but also browsers generally treat localhost and/or 127.0.0.1 as secure origins in themselves anyway, so I suspect that won't be necessary. See https://developer.mozilla.org/en-US/docs/Web/Security/Secure...:
As described in the article HTTPS-Only mode is opt in, you can also disable it at will, you can add exceptions on a site-by-site basis, and even when it's on you are prompted on whether or not you wish to proceed to non-HTTPS sites.
> For the small number of websites that don’t yet support HTTPS, Firefox will display an error message that explains the security risk and asks you whether or not you want to connect to the website using HTTP. Here’s what the error message looks like: ..
It would help greatly if this kind of thing (that, and security exceptions) did not apply to any host in the RFC1918 address space. It would make life easier for IT departments around the world.
To add to this: What interests me is how will they handle accessing the management interface of routers and various network equipment once HTTP gets deprecated.
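For what it's worth, the carve-out being asked for here is straightforward to express; a tiny sketch using Python's standard library (the sample addresses are arbitrary):

    # Sketch: decide whether a host falls inside the RFC 1918 private ranges.
    import ipaddress

    RFC1918_NETS = [ipaddress.ip_network(n)
                    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return ip.version == 4 and any(ip in net for net in RFC1918_NETS)

    if __name__ == "__main__":
        for candidate in ("192.168.1.1", "10.0.0.5", "203.0.113.7"):
            print(candidate, is_rfc1918(candidate))  # True, True, False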
That helps a bit. I could be wrong, but I think the common belief is that professionally-developed HTTP-only sites don't really have a place anymore on the 2020s internet due to HTTP/2, security, referral tracking, and SEO limitations.
Upon launch, Firefox 83 displays essentially a full-page ad for Pocket, with a tiny link at the bottom (have to scroll to it) to get the release notes for “what else” is new in Firefox.
So all kinds of significant enhancements in 83, including HTTPS mode, might essentially be missed by most users. (Heck, I only knew because of this HN post.)
Why do programs insist on “hijacking” things? Release notes seem particularly vulnerable to this, e.g. iPhone apps love to have “notes” that don’t actually tell you anything at all, just marketing-speak.
There had better be an about:config option to turn this stupidity off.
Perhaps one of the downvoters can explain why the implied opinion "Nobody should be able to access your site without clearance from a third-party gatekeeper" belongs on a site called "Hacker News."
And no, it won't be opt-in for long. Read the rest of the page: "Once HTTPS becomes even more widely supported by websites than it is today, we expect it will be possible for web browsers to deprecate HTTP connections and require HTTPS for all websites. In summary, HTTPS-Only Mode is the future of web browsing!"
This is something you should be speaking up against.
> Perhaps one of the downvoters can explain why the implied opinion "Nobody should be able to access your site without clearance from a third-party gatekeeper" belongs on a site called "Hacker News."
I didn't vote down, but ironically this is news to real hackers who will have a harder time doing mitm downgrade attacks once this is widespread.
I believe that web browsers should alert users if a website uses a less secure protocol than "nearly all" of the rest of the websites they visit, for some value of "nearly all".
It's not preventing the user from visiting, just saying "heads up, the assumptions you make about the websites you visit don't hold for this one."
Which means any pure HTML resources would either have to rely on let’s encrypt or pony up some certificate money. So basically a death knell for homepages.
especially if the displayed warning looks like a "ThIs Is An InSeCuRe SiTe" warning.
self signed cert? WaRnInG!!!111eleven
no https? WaRnInG!!!111eleven
cert expired 2 hours ago? WaRnInG!!!111eleven
HTTPS is not about gatekeeping, you can use "let's encrypt" for free certificates for any domain.
HTTPS-only is about forcing all traffic to be encrypted by banning clear-text traffic. I've been using the "HTTPS everywhere" extension for years and it's great.
Yes, it is. Someone has to give you a certificate which the user's browser accepts, even if it's free today.
Let's say there's a simple website which someone uses to display some holiday pictures. Why would we need HTTPS here, if there is no login or anything like that?
It just adds an extra hurdle for not-so-tech-savvy users and accelerates the disappearance of small private websites.
I don't know. Let's say that some non-technical family member goes to this site intending to look at vacation pictures.
Imagine if those pictures have been replaced by something else. If you can't think of a long list of replacement images that could be very useful for a spearphishing attack, then you're not having enough imagination.
This attack could also be used to get the poster of the photos in trouble.
If my choices are to implement a security control which forces a layer of security, or forgo that security control so Alice can upload her Holiday pictures to a host which doesn’t support HTTPS either, I know which one I’ll pick. Alice should either host her photos on Instagram, or learn how to run letsencrypt.
The day where certs are no longer freely obtainable is the day another self governed free TLS provider will appear and force their way into the market by providing installers to inject CAs into system cert stores.
> That's already pointless on Android, user-installed CAs are ignored by default unless an app developer opts in to using them.
And? App developers should opt in to ignoring transport security. I’m sure a bunch of Android shitware attempts to install CAs either via user interaction or exploitation.
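Loosely translating that "opt in" idea into code (not Android specifics; the file name and URL below are made up): the application explicitly chooses which CA bundle it trusts, instead of silently inheriting whatever was pushed into the user's store.

    # Sketch: an app that explicitly opts in to trusting a private CA bundle.
    # "corp-private-ca.pem" and the intranet URL are placeholders.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context(cafile="corp-private-ca.pem")
    with urllib.request.urlopen("https://intranet.example.com/", context=ctx) as resp:
        print(resp.status, resp.headers.get("Content-Type"))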
> Once we go down this path there's no turning back to the user-centric Web of the 1990s / 2000s
The landscape we live in now is very different to then. I’m all for a free web, but not at the cost of security. The web is now a multi-trillion dollar industry. Weakening security just so Bob can see Alice's holiday pics, in a situation where Alice can't figure out letsencrypt, is frankly unhinged.
If you want a ‘free web’ you’re welcome to disable any HTTPS enforcement and disable TLS cert checking entirely. Hell, fork a browser, be very clear about the security weaknesses and publish on github if you feel that strongly, I’ll even star it for you.
> The web is now a multi-trillion dollar industry.
Maybe your web service is, but mine isn't. Mine is a specialized embedded device server that now has an expiration date for no reason on God's green earth.
As a visitor to the website, how can I be sure it's only holiday pictures? If I get to your friendly website and it asks me for private information, and I'm willing to give it because I trust you, what tells me only you will receive it? How do I know it's your holiday pictures, and not some scam someone else wants to trick me into?
But you don't know if it's the real content. There's a problem even before entering information. What tells me it's your holiday pictures and not someone else's, if a person in the middle wants to tarnish your name? What if your ISP or your hosting provider adds ads to the page, or a MITM adds a link to a scam site?
Of all the parts involved in setting up a web server, is adding Let's Encrypt a significant further barrier? In what situation would a non-tech-savvy user ever be doing that in the first place?
Hint: not all web servers run inside Facebook or Google or Amazon data centers. Some of them run inside individual devices, which will now end up in the landfill once their certificates expire. Many such devices were, and are, just fine running plain old HTTP, but now they're all going to be subject to service life limits imposed by a third-party authority.
This is not how this was supposed to work. This is not how any of this was supposed to work. But it's hard to voice any objections over the proverbial thunderous applause.
> HTTPS-only is about forcing all traffic to be encrypted by banning clear-text traffic
Banning clear text might work for browsers, but it would break ACME clients that rely on plain HTTP (the HTTP-01 challenge) to validate a certificate request from Let's Encrypt.
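For context, the plain-HTTP dependency here is the ACME HTTP-01 challenge, where the CA fetches a token from /.well-known/acme-challenge/ over port 80. The common workaround is to keep port 80 open only for that path and redirect everything else; a rough sketch (placeholder domain and directory, nothing production-grade):

    # Sketch: serve ACME HTTP-01 challenge files over plain HTTP and redirect
    # every other request to HTTPS. Domain and challenge dir are placeholders.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CHALLENGE_DIR = "/var/www/acme"  # where the ACME client writes token files
    CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

    class AcmeOrRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith(CHALLENGE_PREFIX):
                token = os.path.basename(self.path)
                try:
                    with open(os.path.join(CHALLENGE_DIR, token), "rb") as f:
                        body = f.read()
                except OSError:
                    self.send_error(404)
                    return
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(301)
                self.send_header("Location", "https://example.com" + self.path)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 80), AcmeOrRedirect).serve_forever()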
You can use let's encrypt... until you can't. And then, after all browsers had deprecated HTTP, it will be time to seriously rake all website owners for certificate money. It is pretty brilliant, if you ask me.
At first it's opt-in then it becomes the default setting.
Are you a developer? If you are, then you should be used to thinking at least two steps ahead, not just seeing what's literally in front of you.
I kind of agree; I don't want to have to click through warnings all the time to do my job. Just re-architect everything to have legit public domain names and network access to get a Let's Encrypt cert; yeah, right, I'll get right on that.