Acceptable security that people actually get today - because it's usable - is better than superior security that could theoretically have been gained, but wasn't, because it was too difficult to set up.
In particular, reviewing open source code has repeatedly proven to be a much harder task than the proponents of this strategy paint it to be. If you want an auditable codebase, you pretty much have to throw Linux, Chromium/Firefox, and Gnome/KDE out the window - there's just way too much code.
Auditable code is naturally always preferable to non-auditable code, but you need to choose your trade-offs - or at least stop pretending you can read a hundred million lines in your lifetime.
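To put a number on that, here's a back-of-envelope sketch in Python. Both reading rates are purely illustrative assumptions, not measured figures:

    # How long would a solo audit of ~100M lines take?
    TOTAL_LINES = 100_000_000

    # Illustrative rates (assumptions, not measurements)
    for lines_per_day, label in [(10_000, "skimming"), (1_000, "careful review")]:
        years = TOTAL_LINES / lines_per_day / 365
        print(f"{label}: ~{years:,.0f} years for a single pass")

    # skimming: ~27 years; careful review: ~274 years

Even at the generous rate, a single pass takes a working lifetime - and that's before any of the code changes underneath you.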
On top of that - do you know a single non-tech person who knows how to set up a VPS, or knows what VeraCrypt is? OTOH I can just show my wife: click here to enable backups.
Let me reframe the problem: What is your threat model? How much effort are you willing to commit to mitigate the dangers?
This is a succinct explanation of the problem. Do we give the vast majority of users extremely easy, frictionless access to very high levels of security and privacy? Or do we give them a fundamentally insecure solution that, with lots of learning and configuring and time, can have very, very high levels of security and privacy?
The crazy thing is that Apple hardware beats most other hardware, too, at a high price. Better phones, better tablets, better laptops. A more secure, more private OS than the popular consumer alternatives (Windows, Android). Arguably a much better OS all around, too (at least IMO -- iOS beats even stock Pixel Android on usability, and macOS vs. Windows is like the Harlem Globetrotters playing the Washington Generals).
> stop pretending you can read a hundred million lines in your lifetime.
For me, and I assume most others, it's not that we expect to read all the code ourselves. It's that there's a large developer community and security researchers who have access to the code who will collectively read it all. Of course this isn't a guarantee that there are no security flaws, and you still have the pipeline problem of ensuring the binaries you get actually come from the code you think they do. But all else being equal, I think open source provides a significant level of threat mitigation.
Even if you fully trust Apple not to intentionally backdoor anything, there are far fewer eyeballs on their code. Given that access to source code also has the potential to reveal security holes that may have gone unexploited, there's of course a trade-off here too.
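The pipeline problem mentioned above is worth making concrete. The weakest mitigation is checking a published digest - which only proves the binary matches what the publisher claims, not that it was built from the source you read (reproducible builds are the stronger answer). A minimal sketch, with a hypothetical file name and a placeholder digest:

    import hashlib

    def sha256_of(path: str) -> str:
        # Hash the file in chunks so large binaries don't need to fit in memory
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "<digest published by the project>"   # placeholder
    actual = sha256_of("some-release-binary")        # hypothetical file name
    print("match" if actual == expected else "MISMATCH: do not run this binary")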
> It's that there's a large developer community and security researchers who have access to the code who will collectively read it all. Of course this isn't a guarantee that there are no security flaws.
Yeah, about that, I'm as much of an Open Source buff as anyone, but:
> Analysis of the source code history of Bash shows the Shellshock bug was introduced on 5 August <<1989>>, and released in Bash version 1.03 on 1 September 1989.
[...]
> The presence of the bug was announced to the public on <<2014-09-24>>, when Bash updates with the fix were ready for distribution, though it took some time for computers to be updated to close the potential security issue.
Older Open Source software especially tends to have maintainers who haven't adopted modern software development practices, so we're back to square one - and most of this older software, like Bash, is foundational technology.
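For reference, Shellshock could be detected with a one-line probe against bash's environment-variable parsing. Here's that well-known check wrapped in Python - a sketch to run only against a bash you control:

    import os
    import subprocess

    # CVE-2014-6271 probe: an env var whose value abuses bash's exported-
    # function syntax. A vulnerable bash executes the trailing command while
    # importing the environment, before the -c command even runs.
    env = {**os.environ, "x": "() { :;}; echo vulnerable"}
    result = subprocess.run(["bash", "-c", "echo test"],
                            env=env, capture_output=True, text=True)
    print("vulnerable" if "vulnerable" in result.stdout else "appears patched")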
I'm not sure I understand the concern. I don't think it's at all unlikely that there are equally long-standing bugs in closed source software that's been around the same amount of time. We might just never hear about them, or those bugs might never be found. Of course, I have no proof that's the case, but I'm not convinced that finding longstanding bugs in open source software is evidence of inferior quality (this is what you seem to be implying, but I may be mistaken).
> but I'm not convinced that finding longstanding bugs in open source software is evidence of inferior quality (this is what you seem to be implying, but I may be mistaken).
I'm not implying inferior quality, I'm implying no correlation.
There was a very strong assumption, dating back to 1999, that "given enough eyeballs, all bugs are shallow", with a focus especially on security.
In reality, there's no correlation.
You need those eyes to actually be looking at stuff proactively: you want automated scans, you want modern software development practices and CI/CD pipelines, you want those eyes to be qualified to understand what they're looking at, etc.
Just putting stuff out there and assuming "people will look at its insides" is a bad assumption.
In my experience, Open Source is not inherently superior to proprietary software from a security perspective.
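As a concrete example of the kind of proactive automation I mean: a minimal gate script you could run in any CI pipeline, assuming the Bandit static analyzer is installed (the script and its role here are illustrative, not a prescription):

    import subprocess
    import sys

    def run_scan(path: str) -> int:
        # "bandit -r" recursively scans Python sources for common security
        # issues; a non-zero exit code (findings or error) should fail the job.
        return subprocess.run(["bandit", "-r", path]).returncode

    if __name__ == "__main__":
        sys.exit(run_scan(sys.argv[1] if len(sys.argv) > 1 else "."))

The point isn't this particular tool - it's that eyeballs only help when something makes them look, every commit, automatically.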