I think the point is that it's cheap to prevent. The unusual tip is getting a different test than the standard one, which costs little for typical HNers (though admittedly every dollar counts for many people, especially with current inflation, poverty, and bad governance), but it sounds on par with asking a dentist to do anything beyond a checkup.
It's a long read and I want to make time for it. A quick search shows that "calc" (for calcify, etc.) and "diet" appear a lot in the article, which isn't surprising given other things I've watched on the subject.
It's worrying that the machines many on HN rely on are a minority of that company's revenue, so they wouldn't even flinch financially if they messed up that product line. Thankfully Linux/x86/ARM exists as an alternative ecosystem that isn't controlled by one party.
There's a lot of stuff I should do, from making my own CPU out of a breadboard of NAND gates to building a CDN in Rust. But I ain't got time for all the things.
That said, I built an LLM following Karpathy's tutorial, so I think it pays to dabble a bit.
I built an 8-bit computer on breadboards once, then went down the rabbit hole of flight training for a PPL. Every time I think I’m "done," the finish line moves a few miles further.
Glad you’ve got all that time on your hands. I am still working on the fusion reactor portion of my supernova simulator, so that I can generate the silicon you so blithely refer to.
Seriously, I feel like it's self-sabotage sometimes at work. Just fixing the thing and getting tests to pass isn't enough; until I fully have a mental model of what's happening, I can't move on.
It's good to go through the exercise, but agents are easy until you build a whole application on an API endpoint that OpenAI or LangChain decides to yank, and you spend the next week on a mini migration project. I don't disagree with the claim that MCP is reinventing the wheel, but sometimes I'm happy plugging my tools and data into someone else's platform, because they're spending orders of magnitude more time than me doing the janitorial work to keep up with whatever's trendy.
I have been playing with OpenAI's, Anthropic's, and Groq's APIs in my spare time, and for anyone reading this who doesn't know: they are all doing the same thing, and they are so close in implementation that it's just dumb that they differ at all.
You pass a list of messages generated by the user, the LLM, or the developer to the API, and it generates part of the next message. That part may contain thinking blocks or tool calls (local function calls requested by the LLM). If so, you execute the tool calls and re-send the request. After the LLM has gathered all the info, it returns the full message and says it's done. Sometimes the messages contain content blocks that aren't text but things like images, audio, etc.
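That loop can be sketched in a few lines. This is a minimal illustration, not any provider's real SDK: `call_llm`, the message shapes, and the `fake_llm` stub are all hypothetical stand-ins for whatever OpenAI/Anthropic/Groq actually expose.

```python
# Minimal sketch of the message loop described above.
# call_llm stands in for any provider's chat endpoint;
# field names are illustrative, not a real API.

def run_agent(messages, call_llm, tools):
    """Send messages, execute any requested tool calls, repeat until done."""
    while True:
        reply = call_llm(messages)           # provider generates the next message
        messages.append(reply)
        calls = reply.get("tool_calls", [])
        if not calls:                        # no tool calls -> the model is done
            return reply["content"]
        for call in calls:                   # run each local function the LLM asked for
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool",
                             "name": call["name"],
                             "content": str(result)})

# Stub "LLM" that requests one tool call, then answers.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"role": "assistant", "content": "The sum is 5.", "tool_calls": []}

print(run_agent([{"role": "user", "content": "What is 2+3?"}],
                fake_llm, {"add": lambda a, b: a + b}))
# prints: The sum is 5.
```

The whole "agent" is just that while loop: generate, execute tools, re-send, stop when there are no more tool calls.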
That’s the API. That’s it. Now there are two improvements that are currently in the works:
1. Automatic local tool calling. This is seriously some sort of afterthought, and not how they did it originally, but OK, I guess this isn't obvious to everyone.
2. Not having to send the entire message history back. OpenAI released a feature where they store the history and you just send the ID of your last message. I can't find how long they keep the message history, but they still fully support managing the history yourself.
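The difference between the two history-management modes can be shown with plain request shapes. These are illustrative, loosely modeled on OpenAI's Responses API (`previous_response_id`); the exact field names vary by provider and the response ID here is made up.

```python
# Two ways to continue the same conversation, per point 2 above.
# Shapes are illustrative, not a real API contract.

# (a) Client-managed history: resend every prior message each turn.
full_history_request = {
    "model": "some-model",
    "messages": [
        {"role": "user", "content": "What is 2+3?"},
        {"role": "assistant", "content": "5."},
        {"role": "user", "content": "And times 4?"},
    ],
}

# (b) Server-stored history: send only the new turn plus the ID of
#     the previous response; the provider fills in the rest.
stored_history_request = {
    "model": "some-model",
    "previous_response_id": "resp_abc123",   # hypothetical ID
    "input": "And times 4?",
}
```

Mode (b) trades bandwidth and bookkeeping for a dependency on however long the provider retains your stored history.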
So we have an interface that does relatively few things, and that has basically a single sensible way to do it with some variations for flavor. And both OpenAI and Anthropic are engaged in a turf war over whose content block types are better. Just do the right thing and make your stuff compatible already.
If you are a software engineer, you are going to be expected to use AI in some form in the near future. A lot of AI in its current form is not intuitive. Ergo, spending a small effort on building an AI agent is a good way to develop the skills and intuition needed to be successful in some way.
Nobody is going to use a CPU you build, nor will you ever be expected to build one in the course of your work unless you seek out specific positions, nor is there much that's non-intuitive about commonly used CPU functionality. In fact, you don't even use the CPU directly; you use translation software which itself is fairly non-intuitive. But that's OK too: you're unlikely to be asked to build a compiler unless you seek out those sorts of jobs.
EVERYONE involved in writing applications and services is going to use AI in the near future, and in case you missed the last year, everyone IS building stuff with AI, mostly chat assistants that mostly suck, because much about building with AI is not intuitive.