Hacker News | syx's comments

Last month they had a rerun of the movie at a cinema in Dublin (IE), and I went to see it with a friend. It was such a surreal experience: after watching it on my laptop so many times, hearing the audience laugh at the cheesy hacking scenes was like watching the movie in 4D. I enjoyed it a lot!

I even brought my PowerBook Duo 280c along with me.


Watching with a big public group of people you mostly don't know but maybe should is a special experience. This may depend on region, but in the US there used to be frequent midnight openings for superfans like myself. People dress up in costumes, local shops hand out prizes and it's an event. Saw Phantom Menace this way, LOTR, Watchmen, and maybe others, but I haven't seen a midnight opening offered in years. Maybe the theater managers are swimming in the pool on the roof.


In San Francisco, DNA Lounge has an annual event where they decorate the whole place as Cyberdelia. I never miss it if I can help it.

A couple years ago my friend and I dressed as FBI agents. It was great fun.


Moved from Heroku to Fly.io three years ago and I don't regret it. Great platform; it occasionally goes down and requires a bit of attention, but the support forum is great.


I had an issue with one of my Sprites (Fly.io also runs sprites.dev) and the CEO responded to me personally in less than 10 minutes. They got it fixed quickly.

I was a free customer at the time. I pay for it happily now.


Fly.io are absolute G’s. The product is awesome and the tech blogs they write are fantastic.


It didn't seem quite as fire-and-forget as `heroku create` when I tried it 3-4 years ago, especially the database setup. Do you use their Postgres offering?


No, mine is a simple Ruby Sinatra app with no DB. Yeah, unfortunately it wasn't as reliable as Heroku, but they're getting better at keeping the instances up.


As someone with many animator friends, this sounds very bleak. Their work processes are very similar to software engineering, with the difference that their hiring process is much quicker (they just show a bunch of reels and shots they made for previous films or cartoons, and they're hired). The sad part is that they have almost no labor rights, and competition is incredibly high, which means pay is very low and turnover is high. All these years, I've been bracing myself for the possibility that one day my field becomes like that.


The difference is that animation is part of the entertainment industry, while software engineering is part of every industry. I don't really think it will ever be like animators have it today, but I wouldn't be surprised if wages fall.


Well, I think there's an oversupply of CS engineers, not of animators.


There's definitely an oversupply of animators, but probably not to the extent of CS at the moment.


I can't wait to get home and try this on my Pi. For the past few months, I've been building a fully local agent [0] that runs inference entirely on a Raspberry Pi, and I've been extensively testing a plethora of small, open models as part of my studies. This is an incredibly exciting field, and I hope it gains more attention as we shift away from massive, centralized AI platforms and toward improving the performance of local models.

For anyone interested in a comparative review of different models that can run on a Pi, here’s a great article [1] I came across while working on my project.

[0] https://github.com/syxanash/maxheadbox

[1] https://www.stratosphereips.org/blog/2025/6/5/how-well-do-ll...


I'm very curious about the monthly bill for such a creative project; surely some of these are pre-rendered?


Napkin math:

9 AIs × 43,200 minutes/month = 388,800 requests/month

388,800 requests × 200 tokens = 77,760,000 tokens/month ≈ 78M tokens

Cost varies from $0.10 to $1 per 1M tokens.

At a mid price of $0.50 per 1M, that's roughly $40/month (between ~$8 and ~$78 at the extremes).
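The same napkin math as a runnable sketch (the per-million-token prices are illustrative assumptions, not any provider's actual rates):

```python
# Napkin math: 9 AIs, each queried once per minute, ~200 tokens per reply.
ais = 9
minutes_per_month = 60 * 24 * 30        # 43,200
requests = ais * minutes_per_month      # 388,800 requests/month
tokens = requests * 200                 # 77,760,000 tokens/month (~78M)

# Illustrative prices per 1M tokens (assumptions, not real pricing).
for price in (0.10, 0.50, 1.00):
    print(f"${price:.2f}/1M tokens -> ${tokens / 1e6 * price:,.2f}/month")
```

At the $0.50 midpoint this lands near $40/month, which is in the same ballpark as the estimate above.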

---

Hopefully, the OP has this endpoint protected - https://clocks.brianmoore.com/api/clocks?time=11:19AM


It was limited to 2,000 tokens each, and I assume it usually hit that, so it could be closer to 777M tokens — assuming they didn't just cache the responses and start rotating them after a day or two.


I think it's cached at the minute level; the responses couldn't be that fast otherwise.
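If it really is cached per minute, the idea fits in a few lines. A sketch, where the `render` callback standing in for the actual model call is hypothetical:

```python
import time

# Minute-level caching: at most one model call per AI per minute;
# every other request within that minute is served from the cache.
_cache: dict[tuple[str, str], str] = {}

def cached_clock(ai: str, render, now=None) -> str:
    minute = time.strftime("%H:%M", time.localtime(now))  # key rolls over each minute
    key = (ai, minute)
    if key not in _cache:
        _cache[key] = render(minute)  # the only place the model is actually called
    return _cache[key]
```

Every request within the same minute then returns instantly from the dict, which would explain the fast responses.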


I'm sure someone will install OpenStep and recreate a NeXT computer 2.0


GNUstep is still going.


If a single GNU steps in the forest, does it make a sound?


If anyone was around to hear it... yes


It just won’t torch the same (1).

(1) https://simson.net/ref/1993/cubefire.html


Or stack eight of them and build a Connection Machine


Install Previous and boot into it, voilà ;)



Very interesting explanation! I've always wondered how it was built; I didn't know Paul Irish made a video about this. Thanks for sharing!


I really wish I'd had this tool six months ago when I was designing a GUI program for macOS 9. I wanted to import pictures from my modern laptop to a PowerBook Duo and vice versa. Converting all the assets was a pain, and I think this tool would've been incredibly convenient, even just for previewing the images.


The program existed six months ago. That said, it currently only renders old QuickDraw pictures on a modern Mac. A tool going the other direction would be either simple for bitmaps (just use PNG) or very complicated for vector data.

One thing I'm thinking about is making a tool that uses old codecs to produce compact images for 68K machines. Basically, use today's CPU power to make efficient road-pizza (RPZA) images.


Off-topic, but which tools and resources would you recommend to get started with Mac OS 9 GUI development?

I have experience with WinAPI/MFC, but the classic Macintosh API is quite different, and I'd often run into hangs when trying to run simple hello-world-ish C++ programs compiled with CodeWarrior.


For those wondering about the use case: this is very useful when enabling streaming for structured output in LLM responses, such as JSON responses. For my local Raspberry Pi agent I needed something performant; I've been using streaming-json-js [1], but development appears to have been a bit dormant over the past year. I'll definitely take a look at your jsonriver and see how it compares!

[1] https://github.com/karminski/streaming-json-js


For LLMs I recommend just doing NDJSON, that is, newline-delimited JSON. It's much simpler to implement.
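To illustrate why it's simpler: a complete NDJSON line always parses with a plain JSON parser, so an incremental reader only has to buffer up to each newline. A minimal sketch in Python (the chunk boundaries below just simulate how text might arrive from a token stream):

```python
import json

def iter_ndjson(chunks):
    """Yield one parsed object per newline-delimited JSON line, even when
    chunk boundaries fall mid-line (as they do in an LLM token stream)."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buffer.strip():  # last object may arrive without a trailing newline
        yield json.loads(buffer)

# Chunks split mid-object, as a streaming API might deliver them:
stream = ['{"step": 1}\n{"st', 'ep": 2}\n{"done": tr', 'ue}']
```

Because each line is independent, there's no need for a partial-JSON parser when the model emits one object per line.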


Do any LLMs support constrained generation of newline-delimited JSON? Or have you found that they're generally reliable enough that you don't need to do constrained sampling?


Not for the standard hosted APIs using structured output or function calling; the best you can get is an array.


I love NDJSON in general. I use it a lot for spatial data processing (GDAL calls it GeoJsonSeq).


Particularly for ReAct-style agents that use a "final" tool call to end the run.


The page referenced in the article, Frame of Preference [1], is such a great example of this feature. When I read it last week, interacting with a live Mac OS system while reading the post felt like magic.

[1] https://aresluna.org/frame-of-preference/

