Uh. Because the parent said that Kubernetes and VMs are different because "with VMs you have to configure things [..] like networking performance".

But you configure the exact same things with Kubernetes as with VMs.

Network performance (as per OP) is not configurable on either.

You just accept whatever accidental default you happen to get; it's not a conscious decision people are making, and it's an odd claim that you have to think about it.

And if you do have to think about it, that doesn't go away with Kubernetes anyway; if anything it probably gets worse.



Uh, no. If network performance is an issue then you configure some labels (slow, medium, mega-fast). You then launch your services on “hardware with network:slow” labels, and new instances will be brought up based on the labels they need.
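Concretely, that's just node labels plus a nodeSelector; a minimal sketch, where the "network" label key and the image name are invented for illustration:

  # label a node out of band, e.g.:
  #   kubectl label nodes worker-1 network=slow
  apiVersion: v1
  kind: Pod
  metadata:
    name: batch-job
  spec:
    nodeSelector:
      network: slow        # only schedule onto nodes carrying this label
    containers:
    - name: worker
      image: example.com/worker:latest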

Even better: you define a “bandwidth” resource and each service requests a slice of that. Kubernetes takes care of the bin packing. If you care enough, you can then enforce it in a number of different, flexible ways depending on your infrastructure or requirements. At the end of the day it’s no different to CPU, memory or GPU requests.
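A sketch of that, assuming a made-up "example.com/bandwidth" extended resource that has already been advertised on each node (caveat: the scheduler only does the accounting; nothing actually throttles bandwidth unless you add that enforcement yourself):

  apiVersion: v1
  kind: Pod
  metadata:
    name: api-server
  spec:
    containers:
    - name: app
      image: example.com/app:latest
      resources:
        requests:
          example.com/bandwidth: "100"   # units are whatever you define
        limits:
          example.com/bandwidth: "100"   # extended resources: limit must equal request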


Leaving aside the fact that I don’t believe anyone does this.

You may have just proved the point I originally asserted: the things you configure on Kubernetes are the same things you configure on cloud VMs.


> Leaving aside the fact that I don’t believe anyone does this.

Everyone who uses GPUs with Kubernetes does exactly this. GPUs are not native to Kubernetes; they’re exposed as exactly this kind of user-defined (“extended”) resource by a device plugin.
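The GPU case is the canonical example: the device plugin advertises nvidia.com/gpu as an extended resource and pods request it (the image name here is invented):

  apiVersion: v1
  kind: Pod
  metadata:
    name: trainer
  spec:
    containers:
    - name: train
      image: example.com/train:latest
      resources:
        limits:
          nvidia.com/gpu: 1    # lands only on nodes advertising a free GPU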

> You just maybe proved the point that I originally asserted: the things you configure on kubernetes are the same things you configure on cloud VMs

You are of course entirely missing the point, and I’m not sure if you’re doing it on purpose or not.

You have 100 units of work that you need to run. A unit of work is some “thing” that needs a certain number of CPU cores, memory, GPUs and other user-defined resources. Each unit of work also needs an individual identity, distinct from other units of work.

Go and code something to run that workload on the minimum number of cloud VMs, taking into account cost and your own user-defined scaling policies, minimizing the amount of unused resources. Now make it handle adapting to changes in the quantity and definitions of those units of work. Now make it handle over-committing, allowing units of work to have hard and soft limits that depend on the utilization of the underlying hardware. Now make it provision some form of secure identity per unit of work.

After you’ve spent time coding that, you’ll realize that:

1. It’s hard

2. You’ve re-invented part of Kubernetes

3. Your implementation is shit

4. It’s very much not “the same things you can configure on cloud VMs”
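For contrast, a rough sketch of what those 100 units look like when Kubernetes does the packing (all names are placeholders; requests sitting below limits is what enables the over-committing described above):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: unit-of-work
  spec:
    replicas: 100
    selector:
      matchLabels:
        app: unit-of-work
    template:
      metadata:
        labels:
          app: unit-of-work
      spec:
        serviceAccountName: unit-of-work   # a distinct identity for this workload
        containers:
        - name: work
          image: example.com/work:latest
          resources:
            requests:          # soft: what the scheduler bin-packs on
              cpu: 500m
              memory: 1Gi
            limits:            # hard: the enforced ceiling
              cpu: "1"
              memory: 2Gi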


Except for two things:

1. Kubernetes manifests still need "requests" (mem/CPU allocation) specified for scheduling to work properly.

2. Getting 100 identical VMs is not difficult on the cloud.

The point I'm making is that the cloud has already abstracted a lot of these things away, and on top of Kubernetes we abstract the exact same things once more.

If K8s were running on bare metal I'd agree with you, though.


If you can’t understand why it’s more expensive and less efficient to run 1 unit of work per server on 100 servers as opposed to fitting them into 20 larger servers (say, five 1.5-core units packed onto an 8-core box is ~94% utilization, versus one unit per 2-core VM at 75%), then I’m not sure what to say.


I'm not sure why we're shifting around so much; I never claimed "efficiency", and especially not "efficiency of micro workloads".

This whole thread is discussing the "mental overhead" of managing VMs vs Kubernetes.

If you have to define the "size" of your workload, it hardly matters whether it's a VM or k8s: you still need to define the size.

Kubernetes can be more fine-grained (I want 1/4th of a CPU!) but you still define it.
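For reference, the fractional version is first-class in the pod spec; a minimal sketch with invented names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: small-worker
  spec:
    containers:
    - name: app
      image: example.com/app:latest
      resources:
        requests:
          cpu: 250m    # millicores: 250m = a quarter of a core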

I'm not talking about cost, or really anything else; only that the original claim I responded to, "hurr durr, with VMs I have to configure everything!", applies just the same on Kubernetes.

VMs are already a pretty good abstraction if you're looking to carve up compute resources. My "frustration", if I even have one, is that we are doing both, one on top of the other, which feels extremely wasteful.

But like everything, it depends on your workloads, and I'm used to things that consume entire CPU cores, not 1/4th of one. (I'm also not making web services these days, and Kubernetes is optimised primarily for that kind of stateless workload.)


I'm also used to workloads that consume entire CPU cores, and as such I'd like the number of CPU cores dedicated to log aggregation, system monitoring, metrics etc. to be as small as possible. I'd also like not to spin up a bunch of new VMs to do a rollout, and I'd like to run all those small satellite workloads that always appear on the same platform. Oh, and I'd rather not run something that needs 3 GB of memory and 3 cores on a machine with 4 GB of memory and 4 cores just because I'm constrained by AWS instance sizes.

Mixed workloads on fewer, larger machines are great for this.
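Those satellite workloads are typically DaemonSets; a sketch of a log shipper that costs a small, fixed slice of every node (image and numbers are illustrative):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: log-shipper
  spec:
    selector:
      matchLabels:
        app: log-shipper
    template:
      metadata:
        labels:
          app: log-shipper
      spec:
        containers:
        - name: shipper
          image: example.com/log-shipper:latest
          resources:
            requests:
              cpu: 100m       # a tenth of a core per node, not a whole VM
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi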

With VMs you do need to configure everything, compared to a stripped-down baseline AMI/image that runs nothing but Docker and the kubelet.

Yes, you can emulate Kubernetes with a bunch of custom tooling. No, it's not better. Yes, it is harder.



