Hacker News

If AWS truly cared about customers, they would implement spending limits. Note the plural: customers don't want their S3 data deleted just because some GPU workload went crazy.

But AWS prefers profit.



How would that actually work? When you reach your spending limit, delete the S3 data you're being charged for? Stop allowing egress traffic? Stop allowing any API calls that cost money? Stop your EC2 instance?

AWS has over 200 services. How would you implement that conceptually?

I know it’s a real concern for most people learning AWS. I first learned AWS technologies at a 60-person company where I had admin access to the AWS account from day one, and then went to AWS itself, where I can open as many accounts as I want for learning. So I haven’t had to deal with that issue.

But what better way would you suggest than Lightsail, where you have known costs up front?


I think it could be done reactively, as long as two things are true:

1. Spending limits are fine-grained: rather than one global budget for your entire AWS project, each billable SKU inside a project would have its own separately configurable spending limit. The goal here isn't to say "I ran out of money; stop trying to charge me more money"; it's rather to say "I have budgeted X for the base spend on static resources, which I will continue paying; but I have budgeted Y for the unpredictable/variable spend, and have exceeded that limit, so stop allowing anything to happen that will generate unpredictable/variable spend."

This way, you can continue to pay for e.g. S3 storage, while capping spend on S3 download (which would presumably make reading from buckets in the project impossible while this is in effect); or you can continue paying for your EC2 instances, while capping egress fees on them (which would presumably make you unable to make requests to the instances, but they'd still be running, so you wouldn't lose the state for any ephemeral instances.)

2. AWS "eats" the credit-spend events of a billing SKU between the time it detects a budget overrun on that billing SKU and the time it finishes applying policy to the resource that will stop it from generating any more credit-spend events on that billing SKU. (This is why this kind of protection logic can never be implemented the way people want by a third party: a third party can only watch AWS audit events and react by sending API requests; it has no authority to retroactively say "and anything that happened in between the two, disregard that at billing time, since that spend was our fault for not reacting faster.")
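The two points above can be sketched together in a toy billing ledger. This is purely illustrative: the SKU names, timestamps, and data model are hypothetical, not AWS's actual billing internals.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class SkuBudget:
    limit: float                     # budget Y for this SKU's variable spend
    spent: float = 0.0
    cutoff_ts: Optional[int] = None  # set when the limit is first exceeded


class BillingLedger:
    """Per-SKU budgets (point 1) plus retroactive forgiveness (point 2)."""

    def __init__(self, budgets: Dict[str, SkuBudget]) -> None:
        self.budgets = budgets
        self.events: List[Tuple[str, int, float]] = []  # (sku, ts, cost)

    def record(self, sku: str, ts: int, cost: float) -> None:
        budget = self.budgets[sku]
        self.events.append((sku, ts, cost))
        budget.spent += cost
        # Point 1: the limit is per-SKU, not one global project budget.
        if budget.cutoff_ts is None and budget.spent > budget.limit:
            budget.cutoff_ts = ts  # policy now starts disabling this SKU

    def invoice(self) -> float:
        # Point 2: spend recorded after the cutoff is "eaten" by the
        # provider, since it accrued while policy was still propagating.
        total = 0.0
        for sku, ts, cost in self.events:
            cutoff = self.budgets[sku].cutoff_ts
            if cutoff is not None and ts > cutoff:
                continue  # dropped from the invoice
            total += cost
        return total
```

Storage keeps billing normally while egress gets capped; any egress events that slip in after the cutoff never reach the invoice.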

Note that implementing #2 actually makes implementing #1 much easier. To implement #1 alone, each service would need some internal accounting-quota system that predicts how much spend "would be" happening in the billing layer, and can respond by disabling features in (soft) realtime for specific users in response to those users exceeding a credit quota configured in some other service. But if you add #2, then that accounting logic can be handled centrally and asynchronously in an accounting service which consumes periodic batched pushes of credit-spend-counter increments from other services. The accounting service could emit a CQRS command "disable services generating billable SKU X for customer Y starting from timestamp Z" to a message queue, and the service itself could see it (and react by writing to an in-memory blackboard that endpoints A/B/C are disabled for user Y); but the invoicing service could also see it, and recompute the invoice for customer Y for the current month, with all spend events for billing SKU X after timestamp Z dropped from the invoice.
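A minimal sketch of that asynchronous design, with a shared append-only command log standing in for the fan-out message queue that both enforcement and invoicing would subscribe to. All names are illustrative, not real AWS internals.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

# "Disable SKU X for customer Y starting from timestamp Z".
Command = Tuple[str, str, int]  # (sku, customer, cutoff_ts)


class AccountingService:
    """Central, asynchronous budget accounting.

    Consumes periodic batched pushes of credit-spend counter increments
    from other services; when a (customer, SKU) counter exceeds its quota,
    emits one CQRS-style disable command to the log.
    """

    def __init__(self, quotas: Dict[Tuple[str, str], float],
                 log: List[Command]) -> None:
        self.quotas = quotas
        self.counters: Dict[Tuple[str, str], float] = defaultdict(float)
        self.tripped: Set[Tuple[str, str]] = set()
        self.log = log

    def ingest_batch(self, ts: int,
                     increments: Dict[Tuple[str, str], float]) -> None:
        for (customer, sku), amount in increments.items():
            key = (customer, sku)
            self.counters[key] += amount
            quota = self.quotas.get(key, float("inf"))
            if key not in self.tripped and self.counters[key] > quota:
                self.tripped.add(key)               # emit at most once
                self.log.append((sku, customer, ts))


class ServiceBlackboard:
    """What each service keeps in memory: which endpoints are disabled."""

    def __init__(self, log: List[Command]) -> None:
        self.log = log
        self.offset = 0
        self.disabled: Set[Tuple[str, str]] = set()

    def catch_up(self) -> None:
        # Each consumer reads the shared log at its own pace.
        for sku, customer, _ts in self.log[self.offset:]:
            self.disabled.add((customer, sku))
        self.offset = len(self.log)

    def allowed(self, customer: str, sku: str) -> bool:
        return (customer, sku) not in self.disabled
```

The invoicing service would be a second consumer of the same log, using each command's timestamp Z to drop later spend events from the invoice, as in the previous sketch.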



