For interest's sake, have a look at the Flutter engine. It does this kind of diff on each build (meaning each time the UI tree gets modified and triggers a rebuild); they split their objects into stateful and stateless, and then in your own code you have to make sure not to unnecessarily trigger rebuilds of expensive objects. So it kind of forces you to think about and separate cheap and expensive UI objects.
In the old days we had these kinds of wars with CPU instruction sets and extensions (SSE, MMX, x64, ...). In a way I feel that CUDA should be opened up and generalized so that other manufacturers can use it too, the same way CPUs equalled out on most instruction sets. That way the whole world won't be beholden to one manufacturer (Big Green), and it would calm down the scarcity effect we have now. I'm not an expert on GPU tech; would this be something that's possible? Is CUDA a driver feature or a hardware feature?
Yeah, that's exactly what I started to do with mine. It runs local Whisper on CUDA, on a graphics card. Whisper is actually better than any other model I've seen, even things like Parakeet. It can do language detection. It automatically removes all the ahs and umms unless I specifically enter them in my speech. I think this whole paragraph is going to take maybe half a second to process and paste without any issues.
(and it did it perfectly without any edits required for me at all.)
Spin up a mid-sized Linux VM (any machine with 8 or 12 cores, at least 16 GB RAM and an NVMe drive will do). Add 10 users. Install Claude 10 times (one per user). Clone the repo 10 times (one per user). Have a centralized place to get tasks from (DB, Trello, txt file, etc.) - this is the memory. Have a cron job wake up every 10 minutes and call your script. Your script calls Claude in non-interactive mode with auto-accept. It grabs a new task, takes a crack at it and creates a pull request. That's 6 tasks per hour per user, times 12 hours. Go from there and refine the harnesses/skills/scripts that the Claudes can use.
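The per-user loop above can be sketched roughly like this. The task-file format, paths, and prompt wording are my own assumptions, and you should verify the claude CLI flags against your installed version:

```python
#!/usr/bin/env python3
"""Minimal harness, meant to be called from cron every 10 minutes, e.g.:
*/10 * * * * /usr/bin/python3 /home/user1/harness.py
This is a hypothetical sketch -- adapt the task source and flags to your setup."""
import json
import subprocess
from pathlib import Path

TASKS = Path.home() / "tasks.jsonl"  # one JSON object per line: {"id": ..., "prompt": ..., "done": ...}
REPO = Path.home() / "repo"

def next_task(lines):
    """Return the first not-yet-done task from a list of JSONL strings."""
    for line in lines:
        task = json.loads(line)
        if not task.get("done"):
            return task
    return None

def run(task):
    # Non-interactive mode with auto-accept; check flag names against
    # the claude CLI version you have installed.
    prompt = (f"Work on task {task['id']}: {task['prompt']}. "
              "Build the code, run the tests, then open a pull request.")
    subprocess.run(["claude", "-p", prompt, "--dangerously-skip-permissions"],
                   cwd=REPO, check=True)

if __name__ == "__main__":
    if TASKS.exists():
        task = next_task(TASKS.read_text().splitlines())
        if task:
            run(task)
```

In practice you would also mark the task as taken (in the DB/Trello/file) before invoking Claude, so two users don't grab the same one.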
In my case, I built a small api that claude can call to get tasks. I update the tasks on my phone.
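A toy version of that kind of task API can be stood up with nothing but the standard library. The route name and task shape here are made up; the real thing can be anything Claude can curl:

```python
"""Tiny task API sketch: GET /task pops and returns the next task as JSON.
Purely illustrative -- no auth, no persistence."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TASKS = [{"id": 1, "prompt": "refactor logging"}]  # updated from the phone in reality

class TaskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/task" and TASKS:
            body = json.dumps(TASKS.pop(0)).encode()
            status = 200
        else:
            body = b'{"error": "no tasks"}'
            status = 404
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep cron logs quiet

def serve(port=8080):
    HTTPServer(("127.0.0.1", port), TaskHandler).serve_forever()
```

Claude then just needs one line in its prompt template: "fetch your task from http://host:8080/task".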
The assumption is that you have a semi-well-structured codebase already (ours is 1M LOC of C#). You have to use a language with strong typing and a strict compiler. You have to force Claude to frequently build the code (hence the CPU cores + RAM + NVMe requirement).
If you have multiple machines doing work, designate a single one as the master and give Claude SSH access to the others; it can configure them and invoke work on them directly. The use case for this is when you have a beefy Proxmox server with many smaller containers (think .NET + Debian). Give the main server access to all the "worker servers". Let Claude document this infrastructure too, and the different roles each machine plays. Soon you will have a small ranch of AIs doing different things, on different branches, making pull requests and putting feedback back into the task manager for you to upvote or downvote.
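The master-to-worker fan-out looks something like this. Host names, the per-task branch convention, and the prompt are all illustrative; in the setup described above, Claude itself would normally be the one running these SSH commands:

```python
"""Sketch: round-robin tasks over worker machines via ssh.
Hypothetical hosts and paths -- adjust to your own ranch."""
import subprocess

WORKERS = ["worker1", "worker2", "worker3"]  # e.g. proxmox containers

def ssh_cmd(host, task_id):
    """Build the ssh invocation that kicks off one non-interactive run."""
    remote = (f"cd ~/repo && git checkout -b task-{task_id} && "
              f"claude -p 'do task {task_id}' --dangerously-skip-permissions")
    return ["ssh", host, remote]

def dispatch(task_ids):
    """Fire-and-forget: one run per task, spread across the workers."""
    procs = []
    for i, task_id in enumerate(task_ids):
        host = WORKERS[i % len(WORKERS)]
        procs.append(subprocess.Popen(ssh_cmd(host, task_id)))
    return procs
```

Each worker ends up on its own branch, so the pull requests stay independent and reviewable one by one.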
Just try it. It works. Your mind will be blown what is possible.
At first we used Claude Max x5, but we are using the API now.
We only give it very targeted tasks, no broad strokes. We have a couple of "prompt" templates, which we select when creating tasks. The new Opus model one-shots about 90% of the tasks we throw at it. We're getting a ton of value from diagnostic tasks; it can troubleshoot really quickly (by ingesting logs, exceptions, some DB rows).
Thanks - in your example, are you saying that you had 10 Claude accounts, or that all 10 user accounts were able to work within the allotment of a single Claude subscription? I've only ever dealt with the API, and it got way too expensive quickly for the quality I was getting back.
I've used it recently to flesh out a fully fledged business plan, pricing models, capacity planning & logistics for a 10-year period for a transport company (daily bus route). I already had most of it in my mind and on spreadsheets (it was an old plan that I wanted to revive), but seeing it figure out all the smaller details that would make or break it was amazing! I think MBAs should be worried, as it did some things more comprehensively than an MBA would have done. It was like I had an MBA + actuarial scientist + statistician + domain expert + HR/accounting all in one. And the plan was put into a .md file with enough structure to flesh out a backend and an app.
Yeah, it's really impressed me on occasion, but often in the same prompt output it does something totally nonsensical. For my garage/shop, it generated an SVG of the proposed floor plan, taking care to place the sink away from moisture-sensitive material and put certain work stations close to each other for workflow, etc. It even routed plumbing and electrical. But it also arranged the work stations cramped together at the two narrow ends of the structure (such that they'd be impractical to actually work at) and ignored all the free wall space along the long axis, so that literally most of the space was unused. It was also concerned about non-issues like contamination between certain stations, and when I explicitly told it something about station placement, it just couldn't seem to internalize it and kept putting things in the wrong place.
All this being said, what I was throwing at it was really not what it was optimized for, and it still delivered some really good ideas.
It seems that it’s useful if it’s better than what you would have done yourself.
Although the poster had a bus company business plan that includes actuarial analysis in his head and some spreadsheets so that bar appears to be sufficiently high.
I wasn't arguing whether or not it's correct, I was pointing out that it's useful only because you know it's given you correct information.
Maybe we'll get to a point where we just trust everything we receive from these systems, but I'm yet to meet a person who would fund a business solely on an LLM-generated business plan without being able to have someone trusted cross-check it.
If your work doesn't revolve around a web browser, try closing it completely when you try to do work. It might actually feel unsettling to work without having the browser open, almost the same feeling as reaching for the phone to check emails even though you know there are no new ones.
I recently realised I can do 70% of my work with only the terminal open and nothing else. I can get it up to 95% with the terminal plus a single IDE at a time. The last 5% is browser-based, which can get distracting really fast - HN and YouTube rabbit holes.
If you really want to shine a light on the cockroach that is digital hoarding, try nuking your entire browsing history and tabs, and delete your whole movies/series/games collection. I 'cured' myself of being this kind of hoarder just before COVID started, and I haven't had the urge to store anything besides some private/precious data since. On one machine I've explicitly set Firefox to not remember tabs and to wipe history/cookies/tmp data on every close. It feels weird the first week, but then when you see someone else's browser with 100+ tabs, it's like looking at one of those pictures of a hoarder's car filled with trash. I think on some level the brain likes this kind of trash hoarding, some kind of rat behaviour. I jest, but I hope you get the picture.
Of the thousands, a handful will prevail. Most of it is vaporware, just like in any boom. Every industry has this problem: copy-cats, fakes & frauds.
"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."
If everything goes right, there won't be a coal mine needed.
Remember that games are just simulations. Physics, light, sound, object boundaries - it's not real, just a rough simulation of the real thing.
You can say that ML/AI/LLMs are also just very distilled simulations, except they simulate text, speech, images, and some other niche domains. It is still very rough around the edges - meaning that even though it seems intelligent, we know it doesn't really have intelligence, emotions or intentions.
Just as game simulations are 100% biased towards what the game developers, writers and artists had in mind, AI is also constrained to the datasets it was trained on.