kevinsync's comments | Hacker News

Parallel but unrelated: you can play these tones [0] to unlock shopping cart wheels that have locked up on you. Literally the only times I ever abandon a cart (not return it to the store or cart corral) are when the wheels lock and I can't move the goddamned thing -- and the rare times it has happened have been in the middle of the aisles where cars drive, with a FULLY LOADED CART, before I ever get to the car to unload.

[0] https://www.begaydocrime.com


I'm a bit late to the conversation, but I'm on month 4 (?) of building a (greenfield) desktop app with Claude Code + Codex. I've been coding since Pulp Fiction hit theaters, and I'm confident I could have just written this thing from scratch without LLMs and with a lot fewer headaches, but I really wanted to get my hands dirty with the new tools and see what they are and aren't capable of.

Some brief takeaways:

1. I'm on probably the 10th complete-restart iteration; I had a strong vision for what it was going to be, a very weak grasp of how to technically achieve it, and a tenuous-at-best grasp of some of what turned out to be the most difficult parts (clever memory management, optimizations for speed, wrangling huge datasets, algorithms, etc) -- I started with a CLI-only prototype thinking I could get it all together reasonably quickly and then move on to a hand-crafted visual UI that I'd go over with a fine-toothed comb.

I'm still working on the fundamentals LOL with a janky UI that I'll get to when the foundation is solid.

2. By iteration 4 or 5, I realized I wanted to implement stuff that was incompatible with the less-complicated foundations already laid; this becomes a big issue when you vibe code and have the LLM write docs, and then change your mind / discover a better way to do it. The amount of sprawl and "overgrowth" in the codebase becomes a second job when you need to pivot -- you become a glorified hedge trimmer trying to excise both code AND documentation that will very confidently poison the agents moving forward if you don't.

3. Speaking of overconfidence, I keep finding myself in situations where the LLMs (due to not being able to contextualize the entire codebase at any single time) offer solutions/approaches/algorithms that work (and work well!) until you push more data at them. For validation purposes, I started with very limited datasets so I could hand-check results and audit the database. By the time you're at a million rows, spot-checking becomes really hard, shit starts crashing because you didn't foresee architectural problems due to lack of domain experience, etc. You ask for alternative solutions and approaches, you get them, but the LLM (not incorrectly) also wants to preserve what's already there, so a whole new logic path gets carved and the codebase grows like a jungle. The docs get stale without getting pruned. There's conflicting context. Switch to a different LLM and sometimes naming conventions mysteriously shift, like it's speaking a different dialect. On and on.

Are the tools worth it? Depends. For me, for this one, on the whole, yes; it has taken an extremely long time (in comparison to the promises of 10x productivity) to get to where I've been able to try out a dozen approaches that I was unfamiliar with, see first-hand what works and what doesn't, and get a real working grasp of how off-the-rails agentic coding can take you if you're just exploring.

I am now left with some really good, relevant code to reference, a BUNCH of really misguided code to flush down the shitter, a strong mental map of how to achieve what I'm building + where things are supposed to go, and now I'm starting yet another fresh iteration where I can scaffold and piece together the whole thing with refactored / reformatted / readable code. And then actually implement the UI I've been designing lol.

I get the whole "just bully the LLM until it seems like it works, then ship it" mentality; objectively it's not much different from the "just bully the developer until it seems like it works, then ship it" mentality of a product manager. But as amazing as these tools are for conjuring something into existence from thin air, the devil really is in the details, and if you're making something you hope to ever build upon, expand, and maintain, you have to go far beyond "vibes" alone.


I left a comment [0] on the other thread, and this is irrelevant if you aren't using Photoshop, but there's a plugin called DITHERTONE Pro that gives you a lot of control over the dither algorithm used + color grade. For actual design, I tend to use this since I'm already in PS cobbling together an image, and you can tweak the results in realtime to dial it in how you want.

I've also used didder [1] a couple of times for dithering via CLI/script. Its results can be pretty good too; it's geared more toward repeatable batch operations, and you need to make sure your palettes and chosen algorithms will produce what you're actually looking for (for intuition about what those algorithms do, see the sketch after the links).

[0] https://news.ycombinator.com/item?id=45726845

[1] https://github.com/makew0rld/didder
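
If you've never poked at what an error-diffusion ditherer actually does, here's a minimal 1-bit grayscale Floyd-Steinberg sketch -- the textbook kernel, which didder also implements, though this is NOT didder's code and the function/buffer names are mine:

    // Textbook Floyd-Steinberg: quantize each pixel to black/white, then push
    // the quantization error onto unvisited neighbors (7/16, 3/16, 5/16, 1/16).
    // `pixels` is a row-major grayscale buffer, values 0..255; it gets mutated.
    function floydSteinberg(pixels: Float64Array, width: number, height: number): Uint8Array {
      const out = new Uint8Array(width * height); // 0 = black, 1 = white
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const i = y * width + x;
          const quantized = pixels[i] < 128 ? 0 : 255;
          out[i] = quantized === 255 ? 1 : 0;
          const err = pixels[i] - quantized;
          if (x + 1 < width) pixels[i + 1] += (err * 7) / 16;
          if (y + 1 < height) {
            if (x > 0) pixels[i + width - 1] += (err * 3) / 16;
            pixels[i + width] += (err * 5) / 16;
            if (x + 1 < width) pixels[i + width + 1] += (err * 1) / 16;
          }
        }
      }
      return out;
    }

Palette choice matters because the quantizer (the "< 128 ? 0 : 255" here) becomes a nearest-palette-color lookup in the general case, and a bad palette means the diffused error never has anywhere sensible to go.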


Cole (the author of didder) also has a GUI version called Dithertime: https://makew0rld.itch.io/dithertime


This looks like what I want - thank you!


I use a Photoshop plugin for complex dithering (DITHERTONE Pro [0] -- this is NOT AN AD lol, I'm not the creator, just a happy customer and visual nerd).

I'm only dropping it in here because the plugin's marketing site demonstrates a wide spectrum of really interesting, full-color use cases for different types of dithering, beyond what we normally assume dithering looks like.

[0] https://www.doronsupply.com/product/dithertone-pro


On iPhone Safari this page opens a modal popup that I cannot close, rendering it useless...


I was in the same boat on a side project (Electron, Claude Code) -- I considered Playwright but ended up building a simple, focused API instead that allows Claude to connect to the app to inspect logs (main console + browser console), query internal app data + state, and execute arbitrary JS.
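
The shape of it is roughly this -- a minimal sketch running in the Electron main process, with hypothetical names (DEBUG_PORT, mainLogs); the real thing wires up log capture and state queries on top:

    // Local-only debug endpoint for the agent; bind to 127.0.0.1 and keep it
    // out of production builds, since /eval will run arbitrary JS.
    import http from "node:http";
    import { webContents } from "electron";

    const DEBUG_PORT = 9223;        // hypothetical port
    const mainLogs: string[] = [];  // fill from your main-process logging hooks

    http.createServer((req, res) => {
      const json = (body: unknown) => {
        res.setHeader("content-type", "application/json");
        res.end(JSON.stringify(body));
      };
      if (req.url === "/logs") {
        json({ main: mainLogs }); // agent reads captured logs here
      } else if (req.url === "/eval" && req.method === "POST") {
        let src = "";
        req.on("data", (chunk) => (src += chunk));
        req.on("end", async () => {
          // Run the posted JS in the renderer (click buttons, dump state, etc)
          const wc = webContents.getAllWebContents()[0];
          json({ result: await wc.executeJavaScript(src) });
        });
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(DEBUG_PORT, "127.0.0.1");

From there, the agent can hit those endpoints (curl is enough) to click through the UI via executed JS, inspect state, and correlate with logs.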

It's sped up debugging a lot, since I can just give it instructions like "found a bug that does XYZ, I think it's a problem with functionABC(); connect to the app, click these four buttons in this order, examine the internal state, then trace through the code to figure out what's going wrong and present a solution".

I was pretty resistant at first to delegating debugging blindly like that, but it's made the workflow smooth enough that I can occasionally just open the app, run through it as a human user, take notes on bugs and flow issues that I find, log them with steps to reproduce, then give Claude a list of bugs to noodle on while I focus on stuff LLMs are terrible at (design, UI, frontend work, etc)


I grew up with stuff like Kai's Power Goo [0] so it doesn't bother me at all when developers step out of the box and bring a wacky UI!

[0] https://youtu.be/xt06OSIQ0PE?t=266


I cannot for the life of me find the article that explains this succinctly and with a hint of salience, but I recall reading sometime this year about how Klarna (a Swedish company) created the concept, in part, because Swedes (culturally) tend to pay back their loans. It was a hit in its motherland, with little fraud and generally responsible users. Then they went worldwide, in particular to America, where we are a pack of jokers and heathens who are happy to finance a Crave Case and a vape with no intention of ever paying it back.

It apparently never occurred to them that we are like this.


Cute story, but do you think a massive company that specializes in loans and credit would move into the largest credit market in the world -- one with 100 years of very detailed financial data and models about loans, delinquencies, defaults, bankruptcies, etc -- but “it never occurred to them” to check that?


I mean, that was the whole point of the article: that they were culturally caught off guard. It happens! I'm going to try to find it again and edit this with a link.


I’m not doubting the existence of such an article. I’m asking you to engage some critical thinking and not accept whatever cute, PR-driven, or otherwise “fun” narrative you read somewhere.

It’s “fun” to think that Starbucks’s struggles in Italy are because they’re dumb and didn’t realize that Italians have a long cultural tradition with coffee that’s very different from America’s. “Oh, dumb Americans and dumb Starbucks. How stupid are they to try to sell espresso in the land of espresso lol.” You need to engage just a little bit of critical thinking and realize that Starbucks knew all that well in advance and decided to give it a shot anyway. Maybe they could occupy a different cultural niche, just like they did in dozens of other countries with similarly rich cultural traditions around coffeeshops and coffee or tea, like the Middle East and Asia.


Critical thinking is long gone from most people, and this site is now popular enough, and CS mainstream enough, that a good majority of its users barely think beyond whatever fluff is put in front of them.


Let’s dig a little deeper. Why isn’t education around the basics of finance mandatory in school?


How does that account for the disgusting Klarna ads I see in NYC that glorify buying things you cannot afford?

Either they got over their shock fast and learned how to take advantage of us, or it was the plan all along.


My picture, which may be wrong, is that not paying your loans is easier to get away with in the US -- that you can just not pay, and people will sigh and eventually give up, while in Sweden people will hound you until they get their money.

Consider this:

> Klarna requires you to have a card filed with them and they will charge your card an exorbitant interest rate if you miss a payment. But on the other hand…if you cancel your card, or if you put in a temporary card from one of those temporary card services that auto-expires, what is Klarna going to do? They can’t, like, claw back the taco.

I think this is a case in point: the expectation that if you don't pay, that's the end of it. Even in the US it's possible to recover the money with a lawsuit, and I wouldn't be surprised if Klarna starts making use of that possibility.

If I'm right, this would also explain not using traditional credit scores: they are more a measure of whether you usually pay your debts on time, whereas what matters here is whether you have the assets/income such that you can be made to pay your debts.


As for branding, IMO you could go a bunch of directions:

Timelines

Tempor (temporal)

Chronos

Chronografik

Continuum

Momentum (moments, memory, momentum through time)

IdioSync (kinda hate this one tbh)

Who knows! Those are just the ones that fell out of my mouth while typing. It's just gotta have a memorable and easy-to-pronounce cadence. Even "Memorable" is a possibility LOL

-suggestions from some dude, not ChatGPT


Dateline (with the Dateline NBC theme song playing quietly in the background while you browse your history and achievements)


Momenta


Scribe? As in the person who writes the timeline.


Believe it lol [0]

"The acts are responsible for all of it. They set the ticket prices. They created the large fees. Ticketmaster is paid to take the heat. Because the acts don’t want their fans to hate them. Even worse, the fans don’t hate them even when they’re told the acts are at fault, they just don’t believe it!"

For more insight into ticket pricing, see [1]

That said, up until I read this Ars article, I understood what Ticketmaster was and why it exists, and had few problems with it or moral compunctions about it; the concert business is cut-throat, and it's very hard to turn a profit without some of these tactics in place. I find this new revelation -- that they protect revenue by actively supporting + enabling the corrupt reseller market -- quite fucking distasteful though!

[0] https://lefsetz.com/wordpress/2022/11/17/ticketmaster-swift-...

[1] https://lefsetz.com/wordpress/2024/03/01/the-truth-about-tic...


Immich is fantastic. I'm itching to reply but not 100% sure what I want to say; I've got a bunch of immediate, parallel thought processes about it.

1. I've got 25 years of photos and video that I originally organized by folder (date + title of contents), but that got very unwieldy once my wife and I both got smartphones in the mid-to-late '00s -- this archive has lived on external HDDs (spinning disk), copied to new ones as capacities increased. In the early-to-mid 2010s I got 2TB with Google and uploaded everything to Google Photos; it was great, but neither of us ever really utilized it, so I was just paying for cloud backup.

2. I am old-ish, have no concept of "home lab" (because back then, everybody who had one or more computers and messed around with them basically had what's now called a home lab), and tend to keep/repurpose tech that goes out of service -- I've always hosted wherever it was appropriate (home, colo, cloud, whatever), and have always run many devices in a closet. Given the ~1TB of personal media, it was inevitable that I'd want some kind of self-run solution, if only for speed + physical access.

3. I liked Google Photos' interface; getting out of it was impossible. Add in 15+ years of unorganized iPhone photo/video backups (pulled out via iExplorer and other apps) sitting in real folders on a real hard drive, and it really was a godsend to start with a "normal" (yet dysfunctional) archive of original content. Once I set up Immich, I was able to upload all of it and, at bare minimum, have a year/month-organized archive of everything, written to an enterprise HDD (while keeping the old source hard drive(s)). The iOS app is pretty good (way more performant now with the beta timeline), and the CLI + API are great.

4. I have an Ubuntu server in the dusty closet; a decent little piece-of-crap 8-core / 32GB box that runs some websites and services. That's where I installed a refurb 12TB drive and Immich. I had an HP Z420 with 128GB of RAM that was my workstation for a few years; I upgraded to a dual-Xeon Z640 with 256GB and just had the old stuff sitting around -- I installed TrueNAS on it, threw in a bunch of cheap IronWolf drives, set up a ZFS pool, run Immich on the Ubuntu box, and then Syncthing all of it to the NAS for duplication. I recognize that having a bunch of equipment around is a luxury and a privilege, but I'm also a cheap POS and buy everything used/off-lease/refurbished/eBay/etc, and reuse what I can as it gets upgraded. That said, you can get massive local storage and compute if you look around, are patient, and don't impulse-buy.

5. Since Immich predated the NAS and I'm still running it on the Ubuntu box rather than in a TrueNAS container, upgrades are less turnkey; for instance, I let mine sit idle from 1.29x to 1.41x, and there were three breaking upgrades in between. It took some fiddling and staggered upgrading on the command line to get from where I was all the way to latest; I experienced no data loss and everything moved over, but it wasn't one-click/one-command. Syncthing backups from machine A to B aren't exactly invisible either: if A shits the bed it would probably corrupt B, and even if it didn't, while I do have the files duplicated, I'd have to more or less replicate the original install and copy the files from B->A to get the interface running again.

6. The mobile app + features are very seamless and good at this point; my wife hates computers but can find what she needs on her phone super easily. And the beta timeline is very performant handling a quarter million photos and videos. I haven't fully vetted the latest app-initiated automated backups from the phone, but I noticed mine flowing from phone to server without my even realizing it. That helps a LOT with the inevitable couple-times-a-decade phone upgrade, since the main "backup bottleneck" is getting all that personal media off the device. The rest goes easily with iCloud backups and device-to-device restorations.

7. I don't back any of this up to the cloud; I thought about Backblaze or something, but haven't pulled the trigger on anything. Syncing to and restoring from it sounds like a nightmare. Since 1998 I think I've only had 2 or 3 drives actually die on me; one died recently (the NOT-important one), which prompted the TrueNAS box. I ended up with multiple 2-disk mirror vdevs striped together in the pool, which the Immich archive also syncs to, and I feel pretty OK about that for now. End of the day, anything is better than being imprisoned by Google Photos or iCloud Photos.

8. I also don't expose any of this stuff to the internet; outside of the home network, we have to VPN in to get access. We also don't have external contributors or anything. YMMV, because I know a lot of people like to share out to family, or set extended family up to archive with them.

End of the day, fiddly upgrade annoyances aside, it's the only Franken-solution I've found thus far that gives easy access to a giant archive and spreads itself out enough that I'm not terrified of losing everything. Really well-done stuff!


Thanks for those details! I came along a similar trajectory, starting when we had our first kid, which happened right around the time phone cameras got good -- which means many gigabytes of video from phones filling up monthly.

I actually ended up building something like Immich, just a bit more half-baked, for our home use: it could take the whole /MobileSync iOS backup and pull out anything that looked like a photo or video. That way our workflow was: back up the phone in iTunes, let the app process the MobileSync backup, confirm everything is visible, and hit the delete-all button on the phone. Rinse, repeat. The storage was a beige box with a couple of mirrored drives.

At this point the collection is about a terabyte, which is not extreme by modern home-lab standards. My main remaining concern is that the file system is just plain ext4. Even though I mirror the collection on a regular schedule, more and more I wonder about the chance of bitrot, since as far as I know none of it is checksummed. I would love a solution that periodically scans objects on the file system, and an easy way to tell which of the copies is the corrupted one, since with only two copies I can't take a majority vote.

p.s. And I guess one question for you: do you think I would benefit from switching to Immich for part of this workflow? I'm thinking I can't just throw raw iOS backups at it, so maybe I need a bit of preprocessing there, but otherwise I could let it take over the cataloging and just keep the underlying storage backed up.


If I understood correctly, it sounds like you could change very little of your current setup and just add an extra step to your script where you use their CLI to upload new content after you do iOS backups [0].

Files get hashed before upload to avoid duplication, so I would imagine this would also be fine to use in tandem with the mobile app: the app pushes stuff out as it's designed to, whenever it does, and if you also upload from a manual backup, only stuff that's not already in the archive gets ingested. (There's a rough sketch of the script step after the link.)

[0] https://immich.app/docs/features/command-line-interface/#qui...
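
Concretely, the extra step could be as small as this -- a hedged sketch where extractedDir is a hypothetical path, and the login/upload commands are the ones from the quick start at [0], so double-check them against your CLI version:

    // Hypothetical post-backup step in a Node script: shell out to the immich
    // CLI after pulling media out of the iOS backup. Assumes you've already
    // authenticated once (per [0]: immich login <serverUrl> <apiKey>).
    import { execFileSync } from "node:child_process";

    const extractedDir = "/mnt/backups/mobilesync-extracted"; // hypothetical path

    // Re-uploads are cheap: files are hashed and dedupe against the archive.
    execFileSync("immich", ["upload", "--recursive", extractedDir], {
      stdio: "inherit",
    });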

