Hacker News | nickmonad's comments

Yeah definitely something that would've been posted as a joke in a "HN front-page 10 years from now" kind of thing.

You turn it on, and it scales right up.

Who cares what we store so long as we do it quickly?

I hear there's a great tool for this, DB null or something like that.

MySQL actually has a BLACKHOLE storage engine designed specifically for universe-scale data storage for those who don't care about persistence.

It does have its use cases :)

Unless you're doing OLTP. Then, TigerBeetle ;)

https://nickmonad.blog/

Trying to blog more frequently with shorter posts!


Narrowing in on background color is an extreme oversimplification of what Tailwind provides. I found it to be a great tool for working with CSS, especially for layout. Business viability can be debated, but the value is way beyond what you suggested.


I agree with the sentiment that companies should help fund open source they depend on, but I think it's a stretch to say those businesses succeeded "only" because of Tailwind. It's a great project, although I'm pretty sure they would have figured out a way to work with CSS without it.


Would love to read this, although I'm seeing some pretty horrific code formatting issues in both Firefox and Chrome.


weirdest part is, the initial load looks fine (try refreshing, scroll persists).

firefox's reader view helps too


I saw a blog about this yesterday: disable extensions (notably 1Password) to fix formatting inside code tags.


Looks decent on iPhone safari FWIW.


turn off 1password


More context for those who haven't heard about this: https://www.1password.community/discussions/developers/1pass...


Hey matklad! Thanks for hanging out here and commenting on the post. I was hoping you guys would see this and give some feedback based on your work in TigerBeetle.

You mentioned, "E.g., in OP, memory is leaked on allocation failures." - Can you clarify a bit more about what you mean there?


In

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
if the second `try` fails, then the memory allocated by the first one is leaked. Possible fixes:

A) clean up individual allocations on failure:

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    errdefer recv_buffers.deinit(gpa);

    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
    errdefer send_buffers.deinit(gpa);
B) ask the caller to pass in an arena instead of gpa to do bulk cleanup (types & code stays the same, but naming & contract changes):

    const recv_buffers = try ByteArrayPool.init(arena, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(arena, config.connections_max, send_size);
C) declare OOMs to be fatal errors

    const recv_buffers = ByteArrayPool.init(gpa, config.connections_max, recv_size) catch |err| oom(err);
    const send_buffers = ByteArrayPool.init(gpa, config.connections_max, send_size) catch |err| oom(err);

    fn oom(_: error{OutOfMemory}) noreturn { @panic("oom"); }
You might also be interested in https://matklad.github.io/2025/12/23/static-allocation-compi..., it's essentially a complementary article to what @MatthiasPortzel says here https://news.ycombinator.com/item?id=46423691


Gotcha. Thanks for clarifying! I guess I wasn't super concerned about the `try` failing here, since this code is squarely in the initialization path and I want the OOM to bubble up to main() and crash. Although, to be fair: 1. it's not a great experience to be given a stack trace; there could definitely be a nicer message. And 2. if the ConnectionPool init() is (re)used elsewhere outside this overall initialization path, we could run into that leak.

The allocation failure that could occur at runtime, post-init, would be here: https://github.com/nickmonad/kv/blob/53e953da752c7f49221c9c4... - and the OOM error kicks back an immediate close on the connection to the client.


This is the fundamental question which motivated the post. :)

I think there are a few different ways to approach the answer, and it kind of depends on what you mean by "draw the line between an allocation happening or not happening." At the surface level, Zig makes this relatively easy, since you can grep for all instances of `std.mem.Allocator` and see where those allocations are occurring throughout the codebase. This only gets you so far though, because some of those Allocator instances could be backed by something like a FixedBufferAllocator, which uses already allocated memory either from the stack or the heap. So the usage of the Allocator instance at the interface level doesn't actually tell you "this is for sure allocating memory from the OS." You have to consider it in the larger context of the system.
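To make that concrete, here's a minimal sketch (not code from kv) of what I mean: at the call site this looks like an ordinary heap allocation through the `std.mem.Allocator` interface, but the backing FixedBufferAllocator is just handing out slices of a stack buffer, so no request ever reaches the OS.

    const std = @import("std");

    pub fn main() !void {
        // A fixed buffer on the stack; no OS allocation happens here.
        var buffer: [1024]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&buffer);
        const allocator = fba.allocator();

        // Looks like a heap allocation at the interface level,
        // but it's actually carved out of `buffer` above.
        const slice = try allocator.alloc(u8, 128);
        defer allocator.free(slice);
    }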

And yes, we do still need to track vacant/occupied memory, we just do it at the application level. At that level, the OS sees it all as "occupied". For example, in kv, the connection buffer space is marked as vacant/occupied using a memory pool at runtime. But, that pool was allocated from the OS during initialization. As we use the pool we just have to do some very basic bookkeeping using a free-list. That determines if a new connection can actually be accepted or not.
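As an illustration of how simple that bookkeeping can be (the names here are hypothetical, not the actual kv types), the free-list is just a stack of indices into buffers that were allocated once at startup:

    const BufferPool = struct {
        buffers: [][]u8, // allocated from the OS once, during init
        free_list: []usize, // stack of indices into `buffers`
        free_count: usize,

        // Acquire a buffer, or null if the pool is exhausted.
        // (In kv terms: null means the new connection is rejected.)
        fn acquire(pool: *BufferPool) ?[]u8 {
            if (pool.free_count == 0) return null;
            pool.free_count -= 1;
            return pool.buffers[pool.free_list[pool.free_count]];
        }

        // Return a buffer (by index) to the pool when the connection closes.
        fn release(pool: *BufferPool, index: usize) void {
            pool.free_list[pool.free_count] = index;
            pool.free_count += 1;
        }
    };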

Hopefully that helps. Ultimately, we do allocate, it just happens right away during initialization and that allocated space is reused throughout program execution. But, it doesn't have to be nearly as complicated as "reinventing garbage collection" as I've seen some other comments mention.


Nice! Will definitely take a look :)

