AI-based tools are mostly about replacing the processor with something smarter, not the router.
Of course, sometimes it's an advantage not to have to explicitly write the router, but the big benefit is the better processor for request categorization, which with AI can even include clarification steps.
There may be firm-specific risk etc., but there is also the concept of double marginalization: monopolies stacked across the vertical layers of a production chain are less efficient than a single integrated monopoly, because with one monopolist you only get a single layer of deadweight loss rather than multiple.
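For readers who haven't seen it, the textbook linear-demand worked example makes the inefficiency concrete (a toy model, nothing firm-specific):

```latex
% Demand p = a - b q with constant marginal cost c.
% A single integrated monopolist maximizes (p - c) q:
q_I = \frac{a - c}{2b}, \qquad p_I = \frac{a + c}{2}
% Two stacked monopolists: the upstream firm sets a wholesale price w,
% the downstream firm best-responds with q = (a - w)/(2b), and the
% upstream firm then maximizes (w - c) q, which gives w = (a + c)/2, so:
q_D = \frac{a - c}{4b}, \qquad p_D = \frac{3a + c}{4} > p_I
```

Output halves relative to the integrated monopoly and the retail price rises, so each layer's markup adds its own wedge of deadweight loss.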
Well, several times faster, but not interesting enough to say "use this." For me personally it was an exploratory project to review litellm and its internals.
The LLM docgen (in this case Claude) has been over-enthusiastic due to my incessant prodding :D.
Neat, but obvious AI slop (coming from someone who vibe codes a lot). The misaligned diagrams in the README, and the AI-typical bold claims that haven't been edited, make me doubt how much a human has even reviewed or tested this software.
If the author hasn't reviewed or tested it why should anyone else bother?
I started this as a personal project to help with monitoring my personal projects. The eBPF monitoring works well - that part is solid.
The AI part is experimental, especially the idea of running inference on CPU (can't afford GPUs and didn't want to rely on OpenAI APIs, though that's where it started). It's hit-or-miss depending on the model.
Not production-tested at scale - just sharing in case it's useful to others who want to tinker with eBPF + Rust.
Full transparency: I did use AI to help write the documentation, because honestly, writing docs feels boring. I will review it thoroughly now based on your feedback.
Open sourcing something for the first time, so I'm trying and learning.
It does seem super cool! But if you aren't even editing the basic README.md - it's not that you used AI to help, it's that you didn't do even the most basic editing - then I don't know what to trust. If I can't trust the docs, why spend my time?
It made it very clear - virtualization builds, where memory can be dynamically added and removed by the emulator. I haven't done this with Android, but it can be quite useful for running lots of test emulators: they can adapt their memory to the workload so they don't overwhelm the host.
(and if it is not apparent to some readers, most modern x86-based systems use 64-byte cache lines, which are sort of analogous to disk block size - quite a few memory operations tend to happen in 64-byte chunks under the covers, and the ones that don't are "special")
RT signals do get queued... that is one of the major differences (and yes, the essay is not using them, so your point stands as it is written, but using RT signals is a mechanism to prevent it).
Postgres by default computes univariate stats for each column and uses those. If this is producing bad query plans, you can manually extend the statistics to be multivariate for select groups of columns. But to avoid combinatorial growth of stats-related storage and work, you have to pick the columns by hand.
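Concretely, the manual opt-in looks like this (table and column names are made up for illustration - any pair of correlated columns works):

```sql
-- Hypothetical table where city and zip are strongly correlated,
-- so independent per-column stats would overestimate the
-- selectivity of a combined WHERE city = ... AND zip = ... filter.
CREATE STATISTICS addr_city_zip (dependencies, ndistinct)
    ON city, zip FROM addresses;

ANALYZE addresses;  -- populate the new multivariate statistics
```

The planner then uses the functional-dependency and n-distinct estimates for predicates and GROUP BYs over exactly that column group, which is why you have to name the groups yourself.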
Oftentimes it is the ECU engine timings. See the many ECU tune kits available for many cars.
More aggressive tunings can also be harder on the engine. Factory tunes work a bit like binning and overclocking - the higher-clocked CPUs are often selected for better quality, and the lower-clocked CPUs are the same part, but with less rigorous quality control (or at least they don't have to pass at the same extremely high level).
So just like you can overclock a CPU, you can often overclock an engine... but if there are mild defects in the engine block the results can be disastrous.
This also affects fuel economy, which is regulated and also affects consumer buying decisions. So the ECU tunes for performance and for fuel economy are often somewhat different optimization points. For instance, BMWs have an "Eco/Comfort/Sport" switch, which, among other things, can change the engine timings from a button in the cabin.
Just like underclocking a GPU can get 80% of the performance for a fraction of the power, and overclocking by 10% can use 20% more power, the same is true of the ECU timings.