pryz's comments | Hacker News

Jared: "We are sending out a proposal for Stacked Diffs on @GitHub to trusted design partners to gather initial feedback over the next few days. From there we’ll iterate and share the gameplan"



Possibly? I haven't found any public documentation that says specifically what hypervisor is used.

Google built crosvm, which was the initial inspiration for Firecracker, but Cloud Run runs on top of Borg (that much is publicly documented). Borg is closed source, so it's possible the specific hypervisor they're using is as well.


I believe that's an Intel project, not a Google project. I personally think it's more likely that Cloud Run runs on top of the same proprietary KVM-based code they use for Compute Engine.


Love Zed! Would love it even more with Helix/Kakoune-style bindings :)


This is great! Are you considering opening a proposal for GORANDSEED? Could be really useful to generalize the approach!


> By embracing DynamoDB as your metadata layer, systems stand to gain a lot.

Yes yes yes. However, DynamoDB can get expensive very quickly :]


Interestingly though, DynamoDB is cheaper than S3 when compared by number of requests: DynamoDB costs $1.25 per million write request units and $0.25 per million read request units, while S3 charges $5 per million PUT requests and $0.40 per million GET requests.


Keep in mind that a write capacity unit in DDB is capped at 1 KB. Writing a 10 KB item to DDB? That's 10 write capacity units.


That's a good point: only very small items are cheaper in DynamoDB than in S3. Adding global secondary indexes also tends to increase cost, since each write is charged once per index.
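
To make the math concrete, here's a rough back-of-the-envelope sketch in Python using the on-demand prices quoted above (the prices and the 1 KB write-unit granularity are the only real inputs; the function names and example numbers are just for illustration):

    import math

    # On-demand prices quoted upthread (USD per million requests/units).
    DDB_WRITE_PER_M = 1.25   # per million write request units (1 KB each)
    S3_PUT_PER_M = 5.00      # per million PUT requests, size-independent

    def ddb_write_cost(item_kb, num_writes, gsi_count=0):
        """Cost of writing num_writes items of item_kb KB each, with every
        write also propagated to gsi_count global secondary indexes."""
        units_per_write = math.ceil(item_kb)  # writes are billed in 1 KB increments
        total_units = num_writes * units_per_write * (1 + gsi_count)
        return total_units / 1_000_000 * DDB_WRITE_PER_M

    def s3_put_cost(num_puts):
        """S3 charges per PUT request regardless of object size."""
        return num_puts / 1_000_000 * S3_PUT_PER_M

    # One million 1 KB writes: DynamoDB $1.25 vs S3 $5.00 -- DynamoDB wins.
    print(ddb_write_cost(1, 1_000_000), s3_put_cost(1_000_000))

    # One million 10 KB writes with 2 GSIs: 10 units x 3 = $37.50 vs S3 $5.00.
    print(ddb_write_cost(10, 1_000_000, gsi_count=2), s3_put_cost(1_000_000))

(Storage and read pricing are left out entirely; this only compares write/PUT costs.)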


Like many developers, we've built our fair share of workflows that export data to 3rd-party services. They always start simple: pull data, hit an API, job done! Then the problems show up. We hit API limits, services go down, and those quick-and-dirty workflows become a major source of headaches.

The knee-jerk reaction is often to add a queue! Sure, it helps for a while. But queues introduce their own complexity: handling failures, managing retries, getting visibility into what's going on... It's a band-aid, not a cure, and we've been wrestling with this problem for too long!
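
To give a flavour of that boilerplate, here's a minimal sketch of the retry-with-backoff wrapper these integrations tend to accumulate (the endpoint and function names are made up; any HTTP client would do):

    import random
    import time

    import requests  # assumed HTTP client

    def call_with_retries(url, payload, max_attempts=5):
        """Naive retry loop with exponential backoff and jitter -- the kind
        of logic that gets copy-pasted around every third-party export."""
        for attempt in range(max_attempts):
            try:
                resp = requests.post(url, json=payload, timeout=10)
                if resp.status_code == 429 or resp.status_code >= 500:
                    raise RuntimeError(f"retryable status {resp.status_code}")
                resp.raise_for_status()
                return resp.json()
            except (requests.RequestException, RuntimeError):
                if attempt == max_attempts - 1:
                    raise
                # Back off exponentially, with jitter, before trying again.
                time.sleep(2 ** attempt + random.random())

    # Hypothetical usage: exporting a record to a third-party CRM.
    # call_with_retries("https://api.example-crm.com/v1/contacts", {"email": "a@b.co"})

And that still says nothing about persistence, deduplication, or what happens when the process dies mid-loop.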

In this blog post, we'll break down:

- Why queues fall short when building truly resilient integrations
- The core principles behind building scalable, fault-tolerant async workflows
- Practical techniques that go beyond the limitations of queues

If you're done with fragile systems and want to level up your integration game, this one's for you!


Distributed coroutines are a perfect fit for Python! We're excited to explain how they work and what you can do with them!

Also happy to answer any questions :)


Hi HN,

We've had enough of traditional orchestration frameworks. That's why we created dispatch.run, aiming to streamline coding by integrating resilience more naturally.

The core of our solution? Distributed Coroutines. These aren't your typical tools; they're designed to enhance flexibility and reduce complexity in distributed systems.

We've detailed our approach and the potential of Distributed Coroutines in a new blog post. It's about making development smoother and more intuitive.

Let's discuss the future of distributed computing.
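
To be clear about terminology: the snippet below is not the dispatch.run API, just a toy, single-process illustration of the durable-coroutine idea in plain Python. A workflow is written as an ordinary generator whose yields hand side effects to a runtime, and completed steps are journaled so execution can be replayed and resumed after a crash. All names here are made up:

    import json
    from pathlib import Path

    # A "workflow" written as an ordinary generator: each yield hands an
    # action to the runtime and suspends until the result comes back.
    def order_workflow(order_id):
        payment = yield ("charge_card", order_id)
        label = yield ("create_shipping_label", order_id)
        return f"order {order_id}: {payment}, {label}"

    def perform(action):
        name, arg = action
        return f"{name}({arg}) ok"   # stand-in for a real side effect

    def run_durably(workflow, journal_path, *args):
        """Drive the generator, journaling each completed step so a crashed
        (or relocated) process can replay the journal and pick up where it
        left off. Toy version; a real system ships this state elsewhere."""
        path = Path(journal_path)
        journal = json.loads(path.read_text()) if path.exists() else []

        gen = workflow(*args)
        result = None
        step = 0
        try:
            while True:
                action = gen.send(result) if step else next(gen)
                if step < len(journal):
                    result = journal[step]      # replay an already-completed step
                else:
                    result = perform(action)    # actually do the work
                    journal.append(result)
                    path.write_text(json.dumps(journal))
                step += 1
        except StopIteration as done:
            return done.value

    print(run_durably(order_workflow, "/tmp/order-journal.json", "A123"))

The point is just that suspend/resume falls out of the language's own coroutine machinery; serialization, scheduling, and distribution across machines are where the real work lives.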


Except that way, many, many AWS services can't do IPv6.


And since AWS/Amazon gets more money by not supporting IPv6 everywhere, why would they enable it?


