Hacker News

> Lambda are easily two of their 'simplest'

Not if you want to build something production ready. Even a simple thing like, say, static IP ingress for a Lambda is very complicated. The only AWS-native way to do it is Global Accelerator -> Application Load Balancer -> VPC Endpoint -> API Gateway -> Lambda.
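
For the curious, here is a hedged sketch of just the last two hops of that chain in CloudFormation. All resource names and the subnet/security-group references are illustrative, and the Global Accelerator and ALB halves are omitted:

```yaml
# Illustrative fragment (names are made up): the API Gateway end of the
# GA -> ALB -> VPC Endpoint -> API Gateway -> Lambda chain.
ApiVpcEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref AppVpc
    ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
    VpcEndpointType: Interface
    SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
    SecurityGroupIds: [!Ref ApiEndpointSg]

PrivateApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: my-private-api
    EndpointConfiguration:
      Types: [PRIVATE]
      VpcEndpointIds: [!Ref ApiVpcEndpoint]
```

And that still leaves the accelerator, the ALB, its target group pointed at the endpoint ENIs, and the Lambda itself.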

There are so many limits on everything that it is very hard to run production workloads without painful time wasted re-architecting around them, and the support teams are close to useless when you try to raise any limits.

Just in the last few months, I have hit limits on CloudFormation stack size, ALB rules, API Gateway custom domains, Parameter Store size limits, and on and on.

That is not even touching on the laughably basic tooling both SAM and CDK provide for local development if you want to work with Lambda.

Sure, Firecracker is great, the cold starts are not bad, and nobody else in the cloud space is even close. Azure Functions is unspeakably horrible, Cloud Run is just meh. Most open-source stacks are either super complex, like Knative, or struggle to match the cold-start performance.

We are stuck with AWS Lambda with nothing better, yes, but oh so many times I have come close to just giving up and migrating to Knative despite the complexity and performance hit.



> Not if you want to build something production ready.

>> Gives a specific edge case about static IPs and a serverless API backed by Lambda.

The most naive solution you'd use on any non-cloud vendor (just have a proxy with a static IP that routes traffic wherever it needs to go) would also work on AWS.

So if you think AWS's solution sucks, why not just go with that? What you described doesn't even sound complicated when you consider the networking magic that has to happen behind the scenes if you ever scale to 1 million TPS.
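
That naive approach is trivially expressible on AWS too. A minimal CloudFormation sketch, assuming you already have a public subnet and a pre-baked proxy AMI (both names here are hypothetical):

```yaml
# Hypothetical sketch of the "naive" route: one proxy box holding a
# static IP, forwarding traffic wherever it needs to go.
ProxyInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref ProxyAmi        # AMI with nginx/HAProxy baked in
    InstanceType: t3.small
    SubnetId: !Ref PublicSubnet

ProxyStaticIp:
  Type: AWS::EC2::EIP
  Properties:
    InstanceId: !Ref ProxyInstance
```

The Elastic IP survives instance replacement, which is the whole point of the exercise.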


> Production ready

Don’t know what you think it should mean, but for me it means:

1. Declarative IaC in either CF/Terraform

2. Fully automated recovery that can achieve RTO/RPO objectives

3. Be able to do blue/green, percentage-based, or other rollouts
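
For concreteness, point 3 is roughly what SAM's gradual deployment support gives you. A hedged fragment, with the handler, runtime, and alarm all placeholders:

```yaml
# Sketch only: SAM function with a percentage rollout via CodeDeploy.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    CodeUri: src/
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Linear10PercentEvery1Minute   # shift 10% of traffic per minute
      Alarms:
        - !Ref ErrorsAlarm                # roll back automatically on alarm
```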

Sure, I can write Ansible scripts, run custom EC2 images with HAProxy and multiple nginx load balancers in HA as you suggest, or move all of that to EKS or a dozen other "easier" solutions.

At that point, why bother with Lambda? What is the point of being cloud native and serverless if you have to literally put a few VMs/pods in front to handle all traffic? Might as well host the app runtime there too.

> doesn’t even sound complicated

It is complicated because you need a full-time resource: an AWS architect who keeps up with release notes, documentation, and training, and who constantly works to scale your application, because every single component has a dozen quotas/limits and you will hit them.

If you spend a few million a year on AWS, then spending 300k on an engineer to just do AWS is perhaps feasible.

If you spend a few hundred thousand on AWS as part of a mix of workloads, it is not easy or simple.

The engineering of AWS, impressive as it may be, has nothing to do with the quality of the products being offered. There is a reason why Pulumi, SST, and AWS SAM itself exist.

Sadly, SAM is so limited that I had to rewrite everything in CDK within a couple of months. CDK is better, but now I find myself monkey-patching around CDK's limits with SDK code; while that is possible, the SDK code will not generate CloudFormation templates.


> Don’t know what you think it should mean, but for me it means

I think your inexperience is showing if that's what you mean by "production-ready". You're making a storm in a teacup over features you pick up automatically by going through an intro tutorial, and "production-ready" typically means far more than a basic run-of-the-mill CI/CD pipeline.

As is so often the case, the most vocal online criticism comes from those with the least knowledge and experience of the topic they are railing against, and their complaints mainly boil down to criticizing their own inexperience and ignorance. There are plenty of things to criticize AWS for, such as cost and vendor lock-in, but being unable and unwilling to learn how to use basic services is not one of them.


> Even a simple thing like say static IP ingress for the Lambda is very complicated.

Explain exactly what scenario you believe requires you to put a Lambda behind a static IP.

In the meantime, I recommend you learn how to invoke a Lambda, because a static IP is something that is extremely hard to justify.


Try telling that to customers who can only make outbound API calls to whitelisted IP addresses.

When you are working with enterprise customers or integration partners, it doesn't even have to be a regulated sector like finance or healthcare; these are basic asks you cannot get away from.

People want to be able to whitelist your egress and ingress IPs, or pin certificates. It is not up to me to judge the efficacy of these rules.

I don’t make the rules of the infosec world, I just follow them.


> Try telling that to customers who can only do outbound API calls to whitelisted IP addresses

Alright, if that's what you're going with, then you can just follow an AWS tutorial:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-v...

Provision an Elastic IP to get your static IP address, set up a NAT gateway to handle the traffic, and plug the Lambda into the NAT gateway.
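
Roughly, those moving parts in CloudFormation terms (resource names are mine, and the function's code, role, and the rest of the VPC wiring are elided):

```yaml
# Sketch of the tutorial's setup: static egress IP for a Lambda in a VPC.
NatEip:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc

NatGateway:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatEip.AllocationId
    SubnetId: !Ref PublicSubnet

PrivateDefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway

# Placing the function in the private subnet sends its egress through the
# NAT gateway, so it shows up to callers as NatEip.
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    # ... Code/Runtime/Handler/Role elided ...
    VpcConfig:
      SubnetIds: [!Ref PrivateSubnet]
      SecurityGroupIds: [!Ref LambdaSg]
```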

Do you think this qualifies as very complicated?


This architecture[1] requires setting up 2 NAT gateways (one in each AZ), a routing table, an Internet Gateway, 2 Elastic IPs, and also the VPC. Since, as before, we cannot use Function URLs for the Lambda, we will still need API Gateway to accept the HTTP calls.

The only part we are swapping is `GA -> ALB -> VPC` for `IG -> Router -> NAT -> VPC`.

Is it any simpler? It doesn't seem like it to me.

Going the NAT route also means you need intermediate networking skills to manage a routing table (albeit a simple one); half the developers today have never used iptables or chained rules.

---

I am surprised at the amount of pushback on a simple point that should be painfully obvious.

AWS (Azure/GCP are no different) has become overly complex, with no first-class support for higher-order abstractions, and framework efforts like SAM or even CDK seem to have gotten very little love in the last 4-5 years.

Just because they offer and sell all these components independently doesn't mean they should not invest in higher-order abstractions for people with neither the bandwidth nor the luxury to be a full-time "Cloud Architect".

There is a reason why Vercel, Render, Railway, and others are popular today despite mostly sitting on top of AWS.

On Vercel the same feature[2] would be quite simple. They use the exact solution you suggest, on top of the AWS NAT gateway, but the difference is that I don't have to know about or manage it; the large professional engineering team with networking experience at Vercel does.

There is no reason AWS could not have built Vercel-like features on top of their offerings, or could not do so now.

At some point, small to midsize developers will avoid using AWS directly, either by setting up Hetzner/OVH bare-metal machines, or, with a bit more budget, colo hardware from Oxide[3], or, more likely, by just sticking to Vercel- and Railway-style platforms.

I don't know how that will impact AWS; we will all still use them. However, a ton of small customers paying close to rack rate is definitely much, much higher margin than whatever Vercel is paying AWS for the same workloads.

--

[1] https://docs.aws.amazon.com/prescriptive-guidance/latest/pat...

[2] https://vercel.com/docs/connectivity/static-ips

[3] This would be rare, obviously, and only if they have the skills and experience to do so.



