Nothing is secure. Once we remember that, we can stop nitpicking every improvement.
Use your own server? Great, it's secure software-wise, but if someone breaks into your house, it's all of a sudden the worst liability ever. The next thing you know, your entire identity, your photos, everything is stolen. You have excellent technical security, but perhaps the weakest physical security.
So new plan, you use a self-hosted NextCloud instance on a VPS somewhere. That's actually not much smarter than using iCloud - VPS providers handle data warrants all the time. They also move your data around as they upgrade hardware, relocate servers, and so forth.
So new plan, you use iCloud with E2E encryption. You have to trust that Apple does as they say, and that their algorithms function correctly. Maybe you don't want to do that, so new plan:
You use a phone running GrapheneOS, with data stored on a VPS, with your own E2E setup. Great - except you need to trust your software, and all the dependencies it relies on. Are you sure GrapheneOS isn't a CIA plant like ArcaneOS was? Are you sure your VPN isn't a plant, like Crypto AG? And even if the VPN is legitimate, how do you know the NSA doesn't have wiretaps on the data going in and out, which would greatly narrow the pool of suspects? And even if the GrapheneOS developers are legitimate, are you sure the CIA didn't steal the signing key long ago? Apple's signing key might be buried in an HSM in Apple Park, requiring a raid, but with the GrapheneOS lead developer publicly known, perhaps a stealthy hotel visit would do the trick.
So new plan, you build GrapheneOS yourself, from source. Except, can you really read it all? Are you sure it is safe? After all, Linux was nearly backdoored in 2003 with just two inconspicuous lines slipped into the kernel via a compromised CVS mirror.
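To give a sense of how little that attempt took: it was essentially a single "=" where an "==" belongs, buried in what looked like a routine error check. Here's a minimal sketch of the same trick - the names and the flag value are illustrative, not the actual sys_wait4() code:

    /* Sketch of the 2003-style kernel trick: an assignment hiding inside
       a condition. Illustrative only, not the real code. */
    #include <stdio.h>

    static int uid = 1000;                /* pretend this is the caller's uid */

    static int fake_wait(int options)
    {
        /* "(uid = 0)" ASSIGNS zero (root) and evaluates to false, so the
           branch never fires; the only effect is the silent privilege
           change, and only for callers passing this odd flag combination. */
        if ((options == 0x7f) && (uid = 0))
            return -1;
        return 0;
    }

    int main(void)
    {
        fake_wait(0x7f);
        printf("uid is now %d\n", uid);   /* prints 0 */
        return 0;
    }

A reviewer skimming a diff can easily read that as a legitimate sanity check, which is exactly the point.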
So... if you read it all and verify that it is perfect, can you trust your compiler? Your compiler could have a backdoor (remember Ken Thompson's "login" demo from Reflections on Trusting Trust?), so you've got to check that too.

At this point, you realize that maybe your code and compiler are clean - but it's all written in C, so maybe there are memory overflows that haven't been detected yet, and the CIA could get in that way (kind of like with Pegasus). In which case, you might as well carefully rewrite everything in Rust and Go, just to be sure.
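The kind of bug being worried about there is the classic unchecked copy into a fixed-size buffer. A toy example, purely illustrative and not from any real codebase:

    /* Toy memory-safety bug: fixed-size stack buffer, unchecked copy. */
    #include <string.h>

    static void parse_name(const char *input)
    {
        char buf[16];
        strcpy(buf, input);   /* anything longer than 15 bytes overruns buf
                                 and corrupts adjacent stack memory */
    }

    int main(void)
    {
        parse_name("short and fine");
        /* parse_name() on a long, attacker-controlled string would smash
           the stack - the class of bug that exploit chains like Pegasus
           are built on. */
        return 0;
    }

Rust and Go turn this exact class of mistake into a compile error or a clean runtime panic, which is the appeal.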
But at that point, you realize that your GrapheneOS phone relies on Google's proprietary bootloader, which is signed by Google and can't be replaced. Can you trust it? You can't, and then you realize that the chip itself could have countless backdoors that no software can fix (say, Intel ME, or even just a secret register bit), so new plan. You immediately design and build your own CPU, your own GPU, and your own silicon for your own device. Now it's your own chip running your own software. Surely that's safe.
But then you realize there's no way, even after delidding the chip, to verify that the fabrication plant didn't tweak your design. In which case, you might need your own fabrication plant... but then you realize there's the risk of insider attacks... and how do you even know the chip-making machines themselves are safe? How do you know the CIA didn't come knocking, make a few minor changes to your design, and then gag the factory with a National Security Letter so you'd never get a whiff of it?
But even if you managed to get that far - great, you've got a secure device - how do you know you can securely talk to literally anyone else? Fake HTTPS certificates from shady vendors are a thing (remember TrustCor?). You've got the most secure device in the world, and it's terrified to talk to anybody or anything. You might as well start your own Certificate Authority now and have everyone trust you. Except... aren't those people... in the same boat now... as yourself? And how do you know the NSA hasn't broken RSA and the entire encryption ecosystem with those supercomputers and mathematicians of theirs? How do you know we aren't sitting on a whole new Dual_EC_DRBG, and that Curve25519 isn't rigged?
The rabbit hole will never end. This doesn't mean that we should just give up - but it does mean we shouldn't be so ready to nitpick the flaws in every step forward, as there will be no perfect solution.
Oh, and did I mention your cell service provider knows where you are, and who you are, at all times, regardless of how secure your device is?
Edit @INeedMoreRAM:
For NextCloud: from a technical perspective it's fantastic, but your data is basically always going to be vulnerable to a technical breach of Linode, an insider threat within Linode, or a warrant served on Linode (either a real warrant or a fraudulent one, which does happen).
You could E2E encrypt it with NextCloud (https://nextcloud.com/endtoend/), which would solve the Linode side of the problem, though there are limitations you need to look into. Also, if a warrant were served (and one where police physically show up is more likely to be authentic than one served remotely for your data), you could always have your home raided, your recovery keys found, and the data accessed that way. Of course, you could destroy the keys and rely only on your memory - but what a thing to do to your family if you die unexpectedly. Ultimately, there's no silver bullet.
Personally... it's old school, but I use encrypted Blu-rays. They take forever to burn, but they come in sizes up to 100GB (and 128GB in rare Japanese versions), they sit offline in my home, and I replace them every 5 years. That's coupled with a NAS. It's not warrant-proof, but I'm not doing anything illegal - it is, however, fake-warrant-resistant and resistant to insider threats at tech companies, and I live in an area where I feel relatively safe (even though this is, certainly, not break-in-proof). You could also use encrypted tape.
So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.
_________________
Do Read:
1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.
2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills, rather than being a platform-specific book targeted at something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.
3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security: everyone says they have read them, everyone definitely should read them, and it's evident that almost no one actually has.
4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.
5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.
_________________
You Can Skip:
1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book from online resources and, honestly, from common sense. A lot of it is infosec porn, i.e. "Wow, I can't believe that happened." It's not a bad book, per se; it's just not particularly helpful for a lot of technical security work. If it interests you, read it; if it doesn't, skip it.
2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous treatment) or Practical Malware Analysis.
3. The Art of Deception - See above for Social Engineering.
4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.
_________________
What's Not Listed That You Should Consider:
1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.
2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.
3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.
4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.
5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.
6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.