
To be fair, the ignoreSslErrors function is not from the authors of qBittorrent; it comes from the Qt framework. The idea behind the function is that you provide it a small whitelist of errors you wish to ignore - for example, in a dev build you may well want to ignore self-signed certificate errors for your dev environment. The trouble is that you can also call it with no arguments, in which case you will ignore every error. This may have been misunderstood by the qBittorrent maintainers, maybe not.
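For illustration, here is roughly what the two call styles look like. The two overloads of QNetworkReply::ignoreSslErrors() are real Qt API; the handler function and the pinned knownDevCert certificate are assumptions for this sketch:

```cpp
#include <QList>
#include <QNetworkReply>
#include <QSslCertificate>
#include <QSslError>

// Hypothetical handler for QNetworkAccessManager::sslErrors.
// 'knownDevCert' stands in for a pinned dev certificate loaded elsewhere.
void onSslErrors(QNetworkReply *reply, const QSslCertificate &knownDevCert)
{
    // Narrow: ignore only the self-signed error for one known certificate.
    QList<QSslError> expected;
    expected << QSslError(QSslError::SelfSignedCertificate, knownDevCert);
    reply->ignoreSslErrors(expected);

    // Broad: the zero-argument overload suppresses *every* SSL error,
    // effectively disabling certificate validation altogether.
    // reply->ignoreSslErrors();
}
```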

Much more likely is that someone knew they had implemented this as a temporary solution while adding OpenSSL support to a project which previously never had SSL support - a major change with a lot of work involved - and every programmer knows that there is nothing more permanent than a temporary solution. Especially in this case. I can understand how such code would make it into the repo (I think you do too), and it's very easy for us to say we would then have immediately amended it in the next version to properly verify certs.

Having been in contact with the maintainers, I have to say I was disappointed in how seriously they took the issue. I don't want to say any more than that.

Source: author of the article


Temporary solutions can become more dangerous with time. Years ago, in one of our projects, someone wrote a small helper class, HTTPClient, to talk to one of our internal subsystems. The subsystem in the dev environment used self-signed certificates, so one of the devs simply disabled SSL validation; whether SSL errors were ignored or not was controlled by a config setting. Later, someone messed up while editing the configs, and SSL validation got disabled in the live environment too. No one noticed, because nobody writes tests to check whether SSL validation is enabled. But that was only part of the story: at this point the HTTPClient class was still only used to communicate with our internal subsystem on our own network.

The real problem came later when the next generation of developers saw this HTTPClient class and thought, "Hey, what a nifty little helper!", and soon they were using it to talk to pretty much everything, including financial systems. I was shocked when I discovered it. An inconsequential temporary workaround had turned into a huge security hole.
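A minimal sketch of that trap, assuming Qt for the HTTP layer (the class name, constructor flag, and overall shape are invented for illustration, not the actual code from that project):

```cpp
#include <QList>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QObject>
#include <QSslError>

// Hypothetical helper: the flag is read from a config file at startup,
// so one flipped value silently disables certificate validation for
// every caller of this class.
class HTTPClient
{
public:
    explicit HTTPClient(bool ignoreSslErrors)
    {
        QObject::connect(
            &m_manager, &QNetworkAccessManager::sslErrors,
            [ignoreSslErrors](QNetworkReply *reply, const QList<QSslError> &) {
                if (ignoreSslErrors)
                    reply->ignoreSslErrors();  // no whitelist: ignores everything
            });
    }

    QNetworkAccessManager &manager() { return m_manager; }

private:
    QNetworkAccessManager m_manager;
};
```

Nothing in the class's interface hints that a config typo can switch validation off, which is exactly why a later generation of developers saw no reason not to reuse it everywhere.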


This is interesting. I haven't ever used the Qt framework, but I'm surprised that it would even have an SSL implementation - that sounds a bit out of scope for a GUI toolkit. I think I'd prefer to do all my networking separately and provide the fetched data to Qt.

Edit (just noticed this was the author): I'm curious, what torrent client do you prefer? I like Deluge, but I mostly go to it because it's familiar.


Qt isn't just a GUI toolkit - it's an everything toolkit. It's intended to be usable on its own with C++ for building a wide variety of apps, and it includes modules like Bluetooth, Network, Multimedia, OAuth, Threading and XML.

See a full list: https://doc.qt.io/qt-6/index.html
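For a sense of how self-contained the networking side is, here is a minimal sketch of an HTTPS fetch using only Qt, with no GUI involved (the URL is a placeholder):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QNetworkAccessManager manager;
    QNetworkReply *reply =
        manager.get(QNetworkRequest(QUrl("https://example.com/")));

    // Quit the event loop once the reply finishes, success or failure.
    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        if (reply->error() == QNetworkReply::NoError)
            qInfo() << "fetched" << reply->readAll().size() << "bytes";
        else
            qWarning() << "request failed:" << reply->errorString();
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}
```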


It's in the spirit of the C++ compiler frameworks that were quite common during the 1990s, before C++98. Then we got a rather thin standard library instead, plus a mess around managing third-party dependencies that is still being sorted out.


(Most?) programming languages have a way to flag these scenarios, something like `#warning | TODO | FIXME` (a sketch follows below)...

I understand temporary, but 14 years seems a bit... too long.
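In C and C++ that could look like the line below (`#warning` was standardized in C23/C++23 and had long been supported as an extension by GCC and Clang; the message text is just an example):

```cpp
// Emitted on every single build, so the hack can't be forgotten silently.
#warning "TEMPORARY: ignoring all SSL errors until cert verification lands"
```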


How much notification did you give the developers before you disclosed? Did you enforce a timeline?


In total it was about 45 days from the initial conversation. I waited for a patched version to be released, because the next important milestone after that would be finished backports to the older versions still in use - which is clearly going to take a long time, as it is not being prioritized - so I wanted to inform users.

Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to publish a security advisory and got no answer. Some discussions on the repo showed they were considering this as a theoretical issue. Now it's CVE-2024-51774, which was assigned within 48 hours of disclosure.


> Some discussions on the repo showed they were considering this as a theoretical issue.

That's hilarious. It's all theoretical until it's getting exploited in the wild...


Any proof that actually happened, or are you just wearing a tin foil hat? Crypto enforcement en masse matters; intercepting highly specific targets using BitTorrent does not.


I feel as though there is a generational gap developing between people who do and do not remember how prevalent Firesheep used to be.


Lol wait till you get personally targeted by a 0-day in extremely popular software for that sentiment to make you look stupid both ways.


I think a better question is: why are you demanding evidence (not proof!) from me for something you are supposing?


Honestly, I think full disclosure with a courtesy heads-up to the project maintainers/company is the most ethical strategy for everyone involved. “I found a thing. I will disclose it on Monday. No hard feelings.” With ridiculous 45-90 day windows it’s the users who take on almost all the risk, and in many ways that’s just as unethical, if not more so, than some script kids catching wind before a patch is out. Every deployment of software is different, and downstream consumers should be able to make an immediate call on how to handle vulns that pop up.


Strongly disagree. Giving the authors 45 days to fix a bug that has been present for over a decade adds very little risk for users: 45 days is about 1% on top of the time the bug has already existed (45 out of roughly 5,100 days over 14 years). Maybe someone was exploiting it, but that extra window is a drop in the bucket, whereas releasing the bug immediately puts all users at high risk until a patch can be developed and released, and users update their software.

Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.

The caveat: if you have evidence of active exploitation, then immediate disclosure makes sense.


What if we changed the fundamental equation of the game: no more "responsible" disclosures, or define responsible as immediate and as widely published as possible (ideally with a PoC). If anything, embargoes and timelines are irresponsible, as they create unacceptable information asymmetry. An embargo is also an opportunity to back-room sell the facts of the embargo to the NSA or another national security apparatus on the downlow. An embargoed vulnerability will likely have a premium valuation model following something that rhymes with Black-Scholes. Really, really think about it...


Warning shots across the bow in private are the polite and responsible way, but malicious actors don't typically extend such courtesies to their victims.

As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.

On average, the "bigger party" inherently has more resources to respond compared to the reporter. This remains true even in open source software.


This is a pretty dangerous take. The reality is that the vast majority of security vulnerabilities in software are not actively exploited, because no one knows about them. Unless you have proof of active exploitation, you are much more likely to hurt users by publicly disclosing a 0-day than by responsibly disclosing it to the developer and giving them a reasonable amount of time to come out with a patch - even if the developers are acting badly. Making a vulnerability public puts a target on every user, not on the developer.


Your take is the dangerous one. I don’t disagree that

> the vast majority of security vulnerabilities in software are not actively exploited

However, I’d say your explanation that it’s

> because no one knows about them

is not necessarily the reason why.

If the vendor or developer isn’t fixing things, going public is the correct option. (I agree some lead time / attempt at coordinated disclosure is preferable here.)


> (I agree some lead time / attempt at coordinated disclosure is preferable here.)

Then I think we are in agreement overall. I took your initial comment to mean that as soon as you discover a vulnerability, you should make it public. If we agree that the process should always be to disclose it to the project, wait some amount of time, and only then make it public - then I think we are actually on the exact same page.

Now, for the specific amount of time: ideally, you'd wait until the project has a patch available, if they are collaborating and prioritizing things appropriately. However, if they are dragging their feet and/or not even acknowledging that a fix is needed, then I also agree that you should set a fixed deadline as a last-ditch attempt to get them to fix it (say, "2 weeks from today"), and then make it public as a 0-day.


Indeed, we’re in agreement. Though I’d suggest a fixed disclosure timeframe at the time of reporting, maybe with an option to extend in cases where the fix is more complex than anticipated.


> Unless you have proof of active exploitation

Wouldn’t a “good criminal” just exploit it forever without getting caught? Your timeline has no ceiling.


My point is: if you found a vulnerability and know that it is actively being exploited (say, you find out through contacts, or see it on your own systems, or whatever), then I would agree that it is ethical to publicize it immediately, maybe without even giving the creators prior notice: the vulnerability is already known by at least some bad actors, and users should be made aware immediately and take action.

However, if you don't know that it is being actively exploited, then the right course of action is to disclose it privately to the creators and work with them to coordinate a timely patch before any public disclosure. Exactly how timely will depend on your and their judgement of many factors. Even if the team is showing very bad judgement from your point of view and acting dismissively - even if you have a history with them of doing this - you still owe it to the users of the code to at least try, and to give some unilateral but reasonable timeline in which you will disclose.

Even if you don't want to do this free work, the alternative is not to publicly disclose: it's to do nothing. In general, the users are still safer with an unknown vulnerability than they are with a known one that the developers aren't fixing. You don't have any responsibility to waste your own time to try to work with disagreeable people, but you also don't have the right to put users at risk just because you found an issue.


100%

It’s unethical to withhold critical information from users who are at risk.

If McDonald's had an E. coli outbreak and a keen doctor picked up on it, you wouldn't withhold that information from the public while McDonald's developed a nice PR strategy and quietly waited for the storm to pass, would you?

Why is security, which is genuinely a public safety issue, any different?


It's different because bad actors can take advantage of the now-public information.

The point of a disclosure window is to allow a fix before _all_ bad actors get access to the vulnerability.


And some may already be taking advantage. This is a perfect example where users are empowered to self-mitigate: you're relatively okay on private networks, but definitely not on public ones. If I know when the bad actors know, then I can e.g. not run qBittorrent at a coffee shop until it's patched.


What about a pre-digital bank? If you came across knowledge of a security issue potentially allowing anyone to steal stuff from their vault, would you release that information to the public? Would everyone knowing how to break in make everyone's valuables safer?

Medicine and biosafety are PvE. Cybersecurity is PvP.

