You can fill the Customize box with all these instructions, threats, adjustments, and conditionals - most of it only affects formatting.

It’s still going to gaslight you when it gets a chance (e.g. when you’re not certain about something).


>> always email > standard user > administrator

Maybe it's the boomers who can't give up Outlook? Otherwise they could've migrated everybody to Google Workspace or some other web alternative.


What I don't get is why nobody questions how an OS that needs all this third-party shit to function and be compliant gets into critical paths in the first place??


Hahaha, well this time Jessy got paged so yeah... the summary got priority over turkey.


But this time it was a _different_ dependency. They just want to make sure all dependencies are ruled out before migrating to GCP :)


What do you mean by cross-regional dependencies? Isn't running in a multi-region setup by itself adding a dependency?

Speaking of multi-region services: what do you think about Google now offering all three major building pieces as multi-regional?

They have multi-regional buckets, an LB with a single anycast IP, and a document DB (Firebase). Pub/Sub can route automatically to the nearest region. Nothing like this is available in AWS - only DIY building blocks.


If your workload can run in region B even if there is a serious failure of a service in region A, in which your workload normally runs, then no, you have not created a cross-regional dependency.

When I talk about a cross-regional dependency, I mean an architectural decision that can lead to a cascading failure in region B - which is healthy by all accounts - when there is a failure in region A.

AWS has services that allow for regional replication and failover. DynamoDB, RDS, and S3 all offer cross-region replication. And Global Accelerator provides an anycast IP that can front regional services and fail over in the event of an incident.
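
For a rough idea, turning on S3 cross-region replication with boto3 looks something like this (the bucket names and role ARN are made up, and both buckets already need versioning enabled):

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket names and IAM role -- substitute your own.
    s3.put_bucket_replication(
        Bucket="my-primary-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter = the whole bucket
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::my-replica-bucket-us-west-2"
                    },
                }
            ],
        },
    )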


I haven't used Global Accelerator, but it doesn't look like the same thing. The landing page says: "Your traffic routing is managed manually, or in console with endpoint traffic dials and weights".


“Global Accelerator continuously monitors the health of all endpoints. When it determines that an active endpoint is unhealthy, Global Accelerator instantly begins directing traffic to another available endpoint. This allows you to create a high-availability architecture for your applications on AWS.”

https://docs.aws.amazon.com/global-accelerator/latest/dg/dis...

Alternatively, global load balancing with Route 53 remains a viable, mature option as well. Health checks and failover are fully supported.
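
For the curious, a primary/secondary failover pair in Route 53 is roughly this with boto3 (the zone ID, health check ID, and IPs are placeholders):

    import boto3

    r53 = boto3.client("route53")

    common = {"Name": "api.example.com.", "Type": "A", "TTL": 60}

    r53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={
            "Changes": [
                {   # primary: served while its health check passes
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        **common,
                        "SetIdentifier": "primary-us-east-1",
                        "Failover": "PRIMARY",
                        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                    },
                },
                {   # secondary: served only when the primary is unhealthy
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        **common,
                        "SetIdentifier": "secondary-us-west-2",
                        "Failover": "SECONDARY",
                        "ResourceRecords": [{"Value": "198.51.100.20"}],
                    },
                },
            ]
        },
    )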


>> A design proposal gets created and refined over a few weeks of presentations to the team and direct leadership.

Dat soundz like a bank and not a cloud provider.


> Dat soundz like a bank and not a cloud provider.

The first stage in making something reliable, sustainable, and as easy to run as possible is to understand the problem, and understand what you're trying to achieve. You shouldn't be writing any code until you've got that figured out, other than possibly to make sure you understand something you're going to propose.

It's good software engineering, following practices learned, overhauled, and refined over decades, with a solid track record of success. It's especially vital when you're working on something like AWS or Azure cloud services.

If you leap feet-first into solving a problem, you'll just end up with something that is unnecessarily painful down the road, even in the near term. It's often quicker to write the proposal, get it reviewed, and then build the product than it is to dive in and discover all the gotchas as you go along. The process doesn't take too long, either.

Every service in AWS will follow similar practices, and engineers do it often enough that whipping up a proposal becomes second nature and takes very little time. Writing the proposal is valuable in and of itself because it forces you to think through your plan carefully, and it's rare for engineers not to discover something that needs clarifying when they write their plan down. (Side note: all of this paperwork is also invaluable evidence for any promotion they may be after, arguably as much as actually releasing the thing to production.)

Writing a proposal shouldn't take even a day, and you'd only need a couple of meetings a few days apart for the initial review and final review. Depending on what came up in the initial review, the final review may be a quick rubber-stamp exercise or not even necessary at all.

Where I am now, we've got an additional cross-company group of experienced engineers who can also be called on to review these proposals. They're almost always interesting sessions, because they bring in engineers with a fresh perspective rather than preconceived notions based on how things currently are.

An anecdote I've shared in greater detail here before: years ago we had a service component that needed to be created from scratch and had to be done right. There was no margin for error; if we'd made a mistake, it would have been disastrous for the service. Given what it was, two engineers learned TLA+, wrote a formal proof, found bugs, and iterated until they were fixed. Producing the Java code from that TLA+ model proved fairly trivial because it almost became a fill-in-the-blanks exercise. Once it got to production, it just worked. It cut what was expected to be a six-month creation and careful rollout process down to just four months, even including time to run things in shadow mode worldwide for a while with very careful monitoring. That component never went wrong, and the operational work for it was just occasional tuning of parameters that had already been identified as needing to be tunable during design review.

In an ideal world, we'd be able to do something like what Lockheed Martin's on-board shuttle group did for the Space Shuttle: https://www.fastcompany.com/28121/they-write-right-stuff, but good enough is good enough, and there are ultimately diminishing returns on effort vs. value gained.


The deal was that it was all abstracted :) And now developers have to start all over with subnet masks?!


The abstraction was to file a ticket and let someone figure it out for you, or to script it if it happens often enough.


Good thing Kubernetes can restart pods fast and has persistent volumes!


Persistent volumes rely on NFS (or a flavor thereof), which is not great for database performance.

But that's a moot point anyway, since Vitess doesn't use persistent volumes - it reloads the individual DBs from backups and binlogs when a pod is moved or restarted.


> Persistent volumes rely on NFS (or a flavor thereof), which is not great for database performance.

NFS is an option, but it’s not the only option. If you need locally attached storage you can use local PVs, which went GA in Kubernetes 1.14, or any of the plethora of volume plugins that exist for various network storage solutions.
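
A local PV is essentially a PV with a "local" source pinned to one node via node affinity. A minimal sketch with the kubernetes Python client (the path and node name are made up):

    from kubernetes import client, config

    config.load_kube_config()

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="db-local-pv"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "100Gi"},
            access_modes=["ReadWriteOnce"],
            persistent_volume_reclaim_policy="Retain",
            storage_class_name="local-storage",
            local=client.V1LocalVolumeSource(path="/mnt/disks/ssd0"),
            # local PVs must declare which node the disk lives on
            node_affinity=client.V1VolumeNodeAffinity(
                required=client.V1NodeSelector(
                    node_selector_terms=[
                        client.V1NodeSelectorTerm(
                            match_expressions=[
                                client.V1NodeSelectorRequirement(
                                    key="kubernetes.io/hostname",
                                    operator="In",
                                    values=["worker-node-1"],
                                )
                            ]
                        )
                    ]
                )
            ),
        ),
    )

    client.CoreV1Api().create_persistent_volume(body=pv)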


Network storage, NFS or otherwise, is never optimal for DB storage. It can be and is used, but it’s never going to be as good as local storage.

I had forgotten about local storage; it’s not something we can use in our environment.

It’s a moot point, in either case. Vitess doesn’t rely on persistent storage, it relies on replicas and backups.


Are you saying that iSCSI is a “flavor of NFS”? What makes it not suitable for good DB performance?


I sincerely beg your forgiveness for forgetting about a feature that’s only about a year old, and a feature we can’t take advantage of in our own kubernetes environment, or with the Vitess cluster.

Forest, trees.


Couldn't they shovel it into DynamoDB?


Theoretically, sure. But Vitess is - as Kubernetes is - provider-agnostic.

But if you’re talking about the DB, sure to that too. Not my decision, though.


Yeah, I've seen these resume-driven, anti-vendor-lock-in, everything-is-a-nail-for-the-kubernetes-hammer decisions :)


I agree. It is impressive how much it can orchestrate. It is also largely useless in the real cloud, because developers there are dealing with higher-level abstractions to solve problems for the business.

The most simplistic task - executing some code in response to an event in a bucket - makes Kubernetes, with all its sophisticated convergence capabilities, completely useless. And even if somebody figures this out and puts an open-source project on GitHub to do this on Kubernetes, it's just going to break at the slightest load.

Not to mention all the work to run Kubernetes at any acceptable level of security, or to keep the cost down, do all the patching, scaling, logging, upgrades... Oh, the configuration management for Kubernetes itself? Ah sorry, I forgot, there are 17 great open-source projects for that :)


> The most simplistic task - executing some code in response to an event in a bucket - makes Kubernetes, with all its sophisticated convergence capabilities, completely useless.

That's because you're not thinking web^Wcloud scale. To execute some code in response to an event you need:

- several workers that will poll the source bucket for changes (of course you could've used an existing notification mechanism like AWS EventBridge, but that would couple your k8s to vendor-specific infra, so it kinda diminishes the point of k8s)

- a distributed message bus with a persistence layer. Kafka will work nicely because they say so on Medium, even though it's not designed for this use case

- a bunch of stateless consumers for the events

- don't forget that you'll need to write the processing code with concurrency in mind, because you're actually executing it in a truly distributed system at this point and you've made a poor choice for your messaging system


Wait, I can do all this with S3 and Lambda at any scale - for pennies :) It will probably take a few hours to set everything up with tools like stackery.io.
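
The whole thing is a few lines of Python (the function name, bucket, and account are made up; the Lambda also needs an invoke permission granted to S3):

    import boto3

    # wire the bucket to the function (one-time setup)
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket="my-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:on-upload",
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )

    # handler.py -- the actual "code in response to an event in a bucket"
    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"new object: s3://{bucket}/{key}")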

So once again, what do developers need Kubernetes for, if the simplest problem becomes an unholy mess? :)

