Not yet. We are still deluding ourselves that the 3x cost increase, and the insane complexity we can barely manage to keep spinning, are actually a business benefit.
Note: this isn't everyone's end game but I suspect it's realistic for a lot of people.
I would like to go back to cleanly divided, well-architected IaaS and Ansible. It was fast, extremely reliable, cheaper to run, carried a much lower cognitive load and had a million fewer footguns. Possibly more important: not everything can be wedged into containers cleanly, despite the promises.
Also a big fan of sticking to Ansible and plain VMs, at least for most cases I've encountered. To me, a VM in the cloud already feels like a container, and you can use the cloud provider's APIs to scale virtual instances up and down as needed.
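With boto3, for example, that's roughly this much code (untested sketch; the Auto Scaling group name "app-asg" and the region are made up):

    # Nudge an existing EC2 Auto Scaling group up or down.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale out to 4 instances; scale back in later the same way.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="app-asg",
        DesiredCapacity=4,
        HonorCooldown=False,
    )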
> To me, a VM in the cloud already feels like a container
This is the mental abstraction I've been operating with for over a decade now.
All of our products are monolithic binaries that can be installed on bare-ass Windows or Linux machines. For all intents & purposes, basic AWS/Azure/et al. VM hosting is our containerization strategy. We just pushed the tricky bits down into our software.
95% of our pain is resolved by using a modern .NET stack and leaning hard on its self-contained deployment model. Our software has zero external dependencies at deploy time, so there isn't much to orchestrate. Anything that talks to a 3rd-party system is managed purely via configuration in our software.
Agreed. But I do think there are places for containers. I will often package single binaries in containers for the built-in distribution and rolling-upgrade capabilities, especially for tooling that relies on a lot of externalities that can taint the system. Python applications, as an example, are much easier to deploy and manage this way than by trying to provision them correctly with Terraform / Ansible. Even if you're just using host networking and good ol' Docker, there is a ton of operational upside with very low maintenance overhead (mental and otherwise).
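For instance, with the Docker SDK for Python (rough sketch; the registry, image name and tag are invented):

    # pip install docker -- talks to the local Docker daemon.
    import docker

    client = docker.from_env()

    # Run the packaged app with plain host networking and an always-restart
    # policy; a rolling upgrade is just pulling the next tag and replacing
    # this container.
    client.containers.run(
        "registry.example.com/myapp:1.4.2",
        name="myapp",
        detach=True,
        network_mode="host",
        restart_policy={"Name": "always"},
    )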
I'm working with a product now that's made its k8s deployment the standard, and all it's done is create bigger issues. Ops got behind on Strimzi, so we got stuck on Kubernetes 1.21 because we were locked to the Strimzi version and couldn't upgrade. That caused issues because of Log4j, and we quickly ran into a wall with customers on GCP as soon as 1.22 went GA. Honestly, I'm not sure we're getting much, if any, overhead advantage, since I feel the app has become bloated due to container creep.
That, and supporting four different ways to provision storage across customers on every cloud and on-prem, is a nightmare. Installing applications into customers' own k8s environments is a nightmare today.
Unless you have massive scale, VMs are your best option. If you need VM configuration on startup (elastic scaling), you may need to maintain your own image. SaltStack and/or Fabric are good alternatives to Ansible.
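A Fabric deploy script can stay tiny, something like this (sketch only; the host, paths and service name are invented):

    # pip install fabric (2.x); assumes passwordless sudo on the target.
    from fabric import Connection

    with Connection("app1.example.com") as c:
        c.put("myapp.tar.gz", "/tmp/myapp.tar.gz")
        c.sudo("tar -C /opt/myapp -xzf /tmp/myapp.tar.gz")
        c.sudo("systemctl restart myapp")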
You could look at containerization without K8s (Podman or Docker), especially if you use Python and don't want to mess with the distribution's native Python installation.
Unless you have money to burn, K8s excels compared to VMs in my experience.
Its original purpose wasn't elastic scaling or anything like that - it was to binpack workloads onto a set of nodes, and not everyone has Silly Valley money to pay Silly Valley prices (especially when one's currency is weak against the dollar).
A considerable portion of the internet, even people who supposedly know k8s, has this weird notion that it's for "scaling up"... except they never talk about scaling what a single engineer can do, just less useful things like dynamically adding lots of servers ;)