> To make a cluster useful for the average workload a menagerie of add-ons will be required. Some of them almost everyone uses, others are somewhat niche.
This is the concern I have with k8s. All this complexity introduces operational and security concerns, while adding more work to do before you can just deploy business value (compared to launching on standard auto-scaling cloud instances).
If you are using a managed kubernetes cluster from a cloud provider you mostly don't need to worry about these sorts of things. If you're not and are deploying to bare metal, the main things you need to worry about are load balancers, storage & monitoring. If you're large enough that you can effectively run kube on bare metal, you probably already have enterprise solutions for load balancing [0], storage [1] & monitoring [2] that you've validated as secure and stable.
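For smaller bare-metal setups without an enterprise load balancer, the gap is usually filled with something like MetalLB. A minimal sketch of its layer-2 mode (the pool name and address range here are illustrative, and layer-2 mode assumes your nodes share an L2 segment):

```yaml
# Give MetalLB a pool of LAN addresses to hand out to Services
# of type LoadBalancer, and advertise them via ARP (layer 2).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool          # illustrative name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # illustrative range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```

With that in place, a plain `Service` of `type: LoadBalancer` gets an external IP from the pool, same as it would on a cloud provider.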
If you want to go all out you can also grab an operator to manage rolling out databases for you (postgres [2], mongo, etc).
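To make the operator point concrete, here's roughly what handing Postgres to an operator looks like, assuming CloudNativePG as the operator (the cluster name, replica count, and storage size are illustrative):

```yaml
# Declare a replicated Postgres cluster; the operator handles
# provisioning, failover, and rolling updates from this spec.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db        # illustrative name
spec:
  instances: 3            # one primary, two replicas
  storage:
    size: 10Gi            # illustrative size
```

Other operators (Zalando's for postgres, the official MongoDB operator, etc.) follow the same pattern: you declare the desired cluster and let the operator reconcile it.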
A lot of the complexity people bump into with kube really comes from poorly planned tools like Istio, which have way too many features, an overly complex mode of operation (out of the box it breaks CronJobs!!!), and sub-standard community documentation. If you avoid Istio, or anything that injects sidecars and init containers, you'll find the experience enjoyable.
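The CronJob breakage is the classic sidecar problem: the injected istio-proxy container never exits, so Job pods never reach Completed. The usual workaround is Istio's standard annotation to opt the Job's pod template out of injection (the job name, schedule, and image below are made up):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # hypothetical job
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            # Keep istio-proxy out of this pod so the Job can complete
            sidecar.istio.io/inject: "false"
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example/report:latest   # hypothetical image
```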
It's the classic trade-off of cost vs benefit. In places I've worked, the benefit has been worth it. The kind of add-ons mentioned in the article are in keeping with the decision to keep the orchestrator (which is already complex) from trying to do absolutely everything. I feel that is a good thing.
It's kind of like an API gateway with traditional microservice instances. If you have DNS and load balancers pushing requests directly to your services, you might wonder why you would ever need such a thing. Until you do.
As a rule of thumb, the engineering quality of the core Kubernetes distribution is rock solid, but anything outside it is in varying stages of maturity.
How much Kubernetes-adjacent code you actually need to adopt (and therefore, how much risk you take on) varies from project to project and organization to organization.