Jared: "We are sending out a proposal for Stacked Diffs on @GitHub to trusted design partners to gather initial feedback over the next few days. From there we'll iterate and share the gameplan."
Possibly? I haven't found any public documentation that says specifically what hypervisor is used.
Google built crosvm which was the initial inspiration for firecracker, but Cloud Run runs on top of Borg (this fact is publicly documented). Borg is closed source, so it's possible the specific hypervisor they're using is as well.
I believe that's an Intel project, not a Google project. I personally think it's more likely Cloud Run is on top of the same proprietary KVM-based code they use for their Compute Engine.
Interestingly, DynamoDB is cheaper than S3 when compared per request. DynamoDB costs $1.25 per million write request units and $0.25 per million read request units, while S3 charges $5.00 per million PUT requests and $0.40 per million GET requests.
That's a good point: only very small items end up cheaper in DynamoDB than in S3. Adding global secondary indexes also tends to increase cost, since each write is charged once per index.
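Using the prices quoted above, here's a quick back-of-the-envelope comparison (a minimal sketch; the prices come from this thread and aren't verified against current AWS pricing, and it ignores storage, item-size multipliers, and index costs):

```python
# Request prices quoted in the thread (USD per million requests).
DYNAMO_WRITE = 1.25   # per million write request units
DYNAMO_READ = 0.25    # per million read request units
S3_PUT = 5.00         # per million PUT requests
S3_GET = 0.40         # per million GET requests

def monthly_request_cost(writes_m, reads_m, write_price, read_price):
    """Request cost in USD for the given millions of writes and reads."""
    return writes_m * write_price + reads_m * read_price

# Example: 10M writes and 100M reads per month.
dynamo = monthly_request_cost(10, 100, DYNAMO_WRITE, DYNAMO_READ)
s3 = monthly_request_cost(10, 100, S3_PUT, S3_GET)
print(f"DynamoDB: ${dynamo:.2f}, S3: ${s3:.2f}")
# → DynamoDB: $37.50, S3: $90.00
```

As the reply notes, this only compares request charges: larger items consume multiple write/read request units in DynamoDB, which can flip the comparison.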
Like many developers, we've built our fair share of workflows that export data to 3rd-party services. They always start simple: pull data, hit an API, job done! Then the problems show up. We hit API limits, services go down, and those quick-and-dirty workflows become a major source of headaches.
The knee-jerk reaction is often to add a queue. Sure, it helps for a while. But queues introduce their own complexity: handling failures, managing retries, maintaining visibility... It's a band-aid, not a cure, and we've been wrestling with this problem for too long!
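To illustrate the kind of complexity that creeps in, here's a minimal sketch of the retry logic a queue consumer typically ends up carrying (the `export_to_api` callable and all parameter names here are hypothetical, not from any specific framework):

```python
import random
import time

def deliver_with_retries(export_to_api, payload, max_attempts=5, base_delay=1.0):
    """Call export_to_api(payload), retrying with exponential backoff and jitter.

    This is the boilerplate every queue consumer ends up re-implementing:
    failure handling, retry budgets, and backoff to respect API rate limits.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return export_to_api(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries: surface the failure (e.g. to a dead-letter queue)
            # Exponential backoff with jitter to avoid hammering a recovering service.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

And this still leaves out deduplication, ordering, and observability, which is exactly the complexity the post is getting at.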
In this blog post, we'll break down:
- Why queues fall short when building truly resilient integrations
- The core principles behind building scalable, fault-tolerant async workflows
- Practical techniques that go beyond the limitations of queues
If you're done with fragile systems and want to level up your integration game, this one's for you!
We've had enough of traditional orchestration frameworks. That's why we built dispatch.run, which aims to streamline development by integrating resilience more naturally into your code.
The core of our solution? Distributed Coroutines. These aren't your typical tools; they're designed to enhance flexibility and reduce complexity in distributed systems.
We've detailed our approach and the potential of Distributed Coroutines in a new blog post. It's about making development smoother and more intuitive.
Let's discuss the future of distributed computing.
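To make the concept concrete, here's a toy sketch of the *idea* behind distributed coroutines, using a plain Python generator. This is not dispatch.run's actual API; all names are hypothetical. The point is that each `yield` is a suspension point where a scheduler could checkpoint state and resume the workflow later, rather than restarting it from scratch after a failure:

```python
def export_workflow(records):
    """A generator-based coroutine: each yield is a suspension point.

    Conceptually, a distributed coroutine suspends at points like these,
    hands the side-effecting call to a scheduler, and can be resumed
    later (possibly elsewhere) instead of re-running the whole workflow.
    """
    sent = []
    for record in records:
        # Suspend here; the scheduler performs the (possibly failing) call.
        result = yield ("call_api", record)
        sent.append(result)
    return sent

def run_to_completion(coro, perform_call):
    """A toy scheduler: drives the coroutine, performing each yielded step."""
    try:
        step = next(coro)
        while True:
            op, arg = step
            step = coro.send(perform_call(arg))
    except StopIteration as done:
        return done.value

# Example: "call the API" is simulated by upper-casing each record.
print(run_to_completion(export_workflow(["a", "b"]), lambda r: r.upper()))
# → ['A', 'B']
```

A real scheduler would serialize the suspended state and retry individual steps, which is where the resilience comes from.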