
I got a handheld emulator console as a Christmas gift. Configuring shaders that emulate CRT TVs, I realized I had no mental model of how those TVs worked at all.

I’m used to “pixels are three little lights combining RGB colors”, which doesn’t work here, so I went down a rabbit hole, and let me tell you, analog TVs are extremely impressive tech.

Getting an electron beam to hit a glass screen, making the phosphor coating on it glow, sweeping it in a “reading motion” across hundreds of lines, and doing that 60 times a second! And the beam is steered just by careful use of magnets. It sounds super sci-fi for an already dead, 130-year-old technology.

I also learned that my childhood was a lie. Turns out that the logic in consoles of the time was tied to the speed of the beam, which in turn used alternating current’s frequency as a clock. This means that since European current changes 50 times per second rather than 60, our games played in slowmo (about 0.83x). American Sonic was so much faster! And the music was so much more upbeat!
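To make the coupling concrete, here's a minimal sketch (purely illustrative; the constants are made up and no real console was written like this) of game logic that advances a fixed step per displayed frame, so the identical code simply runs at ~83% speed on a 50 Hz display:

  // Illustrative only: a game loop whose speed is tied to the display refresh,
  // the way many 8/16-bit era games were written. SPEED_PER_FRAME is invented.
  const SPEED_PER_FRAME: f32 = 2.0; // pixels moved each vertical blank

  fn run(frames_per_second: u32) {
      let mut x = 0.0;
      // Pretend each iteration is one vertical blank interrupt.
      for _frame in 0..frames_per_second {
          x += SPEED_PER_FRAME; // no delta-time: one frame == one fixed step
      }
      println!("{frames_per_second} Hz display: moved {x} px in one second");
  }

  fn main() {
      run(60); // NTSC: 120 px per second
      run(50); // PAL: 100 px per second, i.e. ~0.83x the NTSC speed
  }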


Don't forget how they found a way to squish closed caption information into the analog broadcast. The electron beam traces a path on the CRT display by drawing the odd lines from top to bottom (which draws half the image) and then the even lines (the rest of the owl) from top to bottom. While the electron beam is repositioning itself from the bottom of the screen back to the top, there is a brief window (the vertical blanking interval) where other data can be transmitted. That's where closed caption data was shoehorned in.

Teletext used the vertical blanking interval too. It consisted of numbered pages of text, each number typed on the remote control's numpad as a form of hypertext. Pages were transmitted one after another in a repeating cycle. It was sometimes used for subtitles, with certain pages rendered on a transparent background. Better receivers cached pages so you wouldn't have to wait for the next transmission of a page when jumping to it...
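As a toy model of that caching trick (invented page numbers and text, nothing from any real teletext spec), it's essentially just a map from page number to the most recent copy of that page seen going by in the cycle:

  use std::collections::HashMap;

  // Toy teletext receiver: pages cycle past in the broadcast and a caching set
  // remembers each one, so jumping to an already-seen page is instant.
  struct Receiver {
      cache: HashMap<u16, String>, // page number (e.g. 100..=899) -> page text
  }

  impl Receiver {
      fn on_page_broadcast(&mut self, number: u16, text: String) {
          self.cache.insert(number, text); // keep the newest revision
      }

      fn view(&self, number: u16) -> Option<&str> {
          // Cached: show immediately. Missing: a real set would have to wait
          // for the page to come around again in the transmission cycle.
          self.cache.get(&number).map(String::as_str)
      }
  }

  fn main() {
      let mut rx = Receiver { cache: HashMap::new() };
      rx.on_page_broadcast(100, "News headlines...".to_string());
      rx.on_page_broadcast(377, "Subtitles (transparent background)".to_string());
      println!("{:?}", rx.view(100)); // Some("News headlines...")
      println!("{:?}", rx.view(550)); // None: not seen yet this cycle
  }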

Digital television formats adopted the framing from the analogue formats and send the same data in digital form within the vertical blanking interval. Many channels have stopped offering teletext. One network here in Sweden still uses it to deliver news, and I often prefer that format because the articles are concise and distraction-free.

BTW, I was once asked to hack together a system that used data in the vblank period to control relays at a remote site.


Technology Connections does a really good video on this on YouTube

> Turns out that the logic in consoles of the time was tied to the speed of the beam, which in turn used alternating current’s frequency as a clock. This means that since European current changes 50 times per second rather than 60, our games played in slowmo (about 0.83x). American Sonic was so much faster! And the music was so much more upbeat!

Wasn't this the reason behind different versions of the game for PAL and NTSC etc.? So I imagine the games would play quite similarly, just with a lower refresh rate in Europe?


>So I imagine the games would play quite similarly, just with a lower refresh rate in Europe?

That was my assumption as well! But nope, gameplay was coupled to framerate for a surprising number of years.

You can see comparisons on YouTube; compare the music of the PAL and NTSC versions of Sonic for the Genesis/Mega Drive.

Apparently it was still happening to some extent during the PSX era. I remember the turn meter bars in FF7 filling very slowly, and this explains it.


If you have an original copy of Grim Fandango, the elevator-and-forklift puzzle is impossible without a patch: the scene moves at (iirc) the processor's clock speed, so on modern CPUs it runs too quickly for the action needed to solve the puzzle to be possible.

This is obviously fixed in the remastered version, though
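The usual fix for that class of bug is to scale movement by elapsed wall-clock time instead of by loop iterations or CPU ticks. A rough sketch of the difference (the speed constant and the scenario are invented, nothing from the actual game):

  use std::time::Instant;

  // Illustrative sketch only; the speed constant is made up.
  const FORKLIFT_SPEED: f32 = 30.0; // world units per second

  fn main() {
      // Buggy pattern: advance a fixed amount per loop iteration, so the scene
      // speed scales with how fast the CPU can spin the loop.
      let mut coupled_pos = 0.0_f32;
      for _ in 0..1_000 {
          coupled_pos += 0.5; // faster machine => more iterations/sec => faster scene
      }

      // Fixed pattern: advance by speed * elapsed wall-clock time, so the scene
      // runs at the same pace on any CPU.
      let mut pos = 0.0_f32;
      let mut last = Instant::now();
      for _ in 0..1_000 {
          let now = Instant::now();
          pos += FORKLIFT_SPEED * now.duration_since(last).as_secs_f32();
          last = now;
      }

      println!("coupled: {coupled_pos}, delta-time: {pos}");
  }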


> Wasn't this the reason behind different versions of the game for PAL and NTSC etc.? So I imagine the games would play quite similarly, just with a lower refresh rate in Europe?

Yes and no. Some games play at a similar speed but some (most if I recall correctly) weren't modified for the PAL market so they play slow and the image is squashed down. Street Fighter II on the SNES (PAL) is a classic example of this.


Damn, Street Fighter 2 on the SNES is literally the first game I remember ever playing. I never knew I was playing an inferior version!

The vertical resolution was also different. Some games developed for NTSC got black bars, or a silly banner in the PAL version. Many PAL games were not ported for NTSC regions at all.

What other sci-fi technology is being lost on us now? I always thought that the complexity of the local-battery-powered copper-cable telephone exchange system was bonkers. It was the backbone for all our landline calls.

The telephone system also powered the phone, and often worked when the power grid did not.

Well, not quite with the 50 Hz thing. They slowed them down to run at 50 Hz, but they could've rewritten them to work at full speed by dropping frames.

Doesn’t rust have incremental builds to speed up debug compilation? How slow are we talking here?

Rust does have incremental rebuilds, yes.

Folks have worked tirelessly to improve the speed of the Rust compiler, and it's gotten significantly faster over time. However, there are also language-level reasons why it can take longer to compile than other languages, though the initial guess of "because of the safety checks" is not one of them; those checks are quite fast.

> How slow are we talking here?

It really depends on a large number of factors. I think saying "roughly like C++" isn't totally unfair, though again, it really depends.
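For what it's worth, when people tune debug iteration time the knobs usually live in the Cargo profile; a hedged sketch (the specific values are just illustrative, and the defaults vary by toolchain version):

  # Cargo.toml - illustrative dev-profile tweaks, not a recommendation.
  [profile.dev]
  incremental = true   # incremental compilation (already the default for dev)
  debug = 1            # less debuginfo than the default of 2; often a decent win
  opt-level = 0        # no optimization, minimizes rustc/LLVM work

  # Optimize only dependencies: your own crate still rebuilds fast, but heavy
  # deps aren't painfully slow to run under the dev profile.
  [profile.dev.package."*"]
  opt-level = 2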


My initial guess would be "because of the zero-cost abstractions", since I read "zero-cost" as "zero runtime cost" which implies shifting cost from runtime to compile time—as would happen with eg generics or any sort of global properties.

(Uh oh, there's an em-dash, I must be an AI. I don't think I am, but that's what an AI would think.)


I used em dashes before AI, and won't stop now :)

That's sort of part of it, but it's also specific language design choices that, had they been decided differently, might make things faster.


People do have cold Rust compiles that can push up into the hours. Large crates often make design choices that keep them in a more compile-time-friendly shape.

Note that C++ also has almost as large a problem with compile times in large build fanouts, including from templates, and it's not always realistic for incremental builds to solve it either, especially the time burnt on linking. E.g. I believe Chromium development often uses a mode with dynamic .dll linking instead of the all-statically-linked builds they release, exactly to speed up incremental development. The “fast” case is C, not C++.
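(For the curious: that Chromium mode is the "component build"; as far as I remember it's toggled with a GN arg roughly like the below, but treat this as an approximation from memory rather than gospel.)

  # args.gn (Chromium build configuration) - approximate, from memory
  is_component_build = true   # link most targets as shared libraries (.dll/.so)
  is_debug = true             # typical local development setup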


> I believe Chromium development often uses a mode with dynamic .dll linking instead of the all-statically-linked builds they release, exactly to speed up incremental development. The “fast” case is C, not C++.

Bevy, a Rust ECS framework for building games (among other things), has a similar solution by offering a build/rust "feature" that enables dynamic linking (called "dynamic_linking"). https://bevy.org/learn/quick-start/getting-started/setup/#dy...
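In practice that looks roughly like this (per Bevy's setup guide linked above; check the docs for your Bevy version):

  # Enable the feature for local development only; you generally don't want to
  # ship a binary built with dynamic linking this way.
  cargo run --features bevy/dynamic_linking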


There's no Rust codebase that takes hours to compile cold unless 1) you're compiling a massive codebase in release mode with LTO enabled, in which case, you've asked for it, 2) you've ported Doom to the type system, or 3) you're compiling on a netbook.

I'm curious if this is tracked or observed somewhere; crater runs are a huge source of information, and metrics about the compilation time of crates would be quite interesting.

I know some large orgs have this data for internal projects.

This page gives a very loose idea of how we're doing over time: https://perf.rust-lang.org/dashboard.html


Down and to the right is good, but the claim here is the average full release build is only 2 seconds?

Those are graphs of averages from across the benchmarking suite, which you can read much more information about here: https://kobzol.github.io/rust/rustc/2023/08/18/rustc-benchma...

>Because we could read/listen to/watch stuff without paying the people who created it?

I can tell you I wouldn’t be anywhere close to where I am without this, yes.

First because I (/my parents) didn’t have the money, second because of pure geographical access.

I saw movies and shows from countries that would never sell near me, read books that would never be in my country’s libraries, took courses straight from scientists and engineers rather than a thrice translated work…

The low barrier to entry also mattered: curiosity is much better fed when you can download a medicine textbook just to check something, rather than venturing into the library of a university you’re not part of.

That is the one thing the internet did right, spreading culture. It was over when they took boredom from us, that was the big evil.


Ditto here in Spain. If you were able to watch Northern Exposure or The X Files with the Spanish dub in the 90's on a normal TV schedule (read: with human schedules, you need to sleep), I would ask you for a unicorn just in case, because I wouldn't believe you. After 2003, with eMule and BT? Damn it, I got the whole NE series over torrent and I regret nil.

My public TV already paid the US TV producer with money from our taxes, so in the end it's a draw.

Programming books? Your elder brother/sister is doing CS at some uni, right? Then good luck paying $50 at the big bookstores in malls. Entry courses, you mean? Pay ~$50 a month for a private school and try enjoying Visual C++ 98. Linux? That was for CS engineers and PC freaks, right?

Nowadays you can learn damn Calculus on your own and install Maxima from any distro, with online guides and tutorials. I had to learn Calculus on my own (I was some HS dropout) from early Debian DVDs which had a PDF on mathematics, and from that I tried to understand every exercise and equation under Maxima. No upgrades, no updates, no tutoring. Hard mode by default for everything. Your TV tuner didn't work? Messing with Linux kernel modules like crazy and even editing the source code to fake the tuner and PLL and watch something in XawTV.


I don’t get why they think “professional” is a generic tier.

If I’m a music producer, what’s the value of being given a digital art drawing program? If I’m an illustrator, why do I need a cinema post production suite?

Some people might happen to do both, but the overlap is largely accidental, right? The fact that they think of all professions as a bundle is even insulting, as it signals the products are mostly toys/hobbyist stuff.


I think that's why they call it "Creator" studio. Creators - in the way the term is usually used today - indeed do use many of these tools. Maybe you produce music, create a video about you producing music and also need an engaging thumbnail for YouTube.

In a feature film production, these would certainly be separate roles. But apart from maybe Logic Pro for composers, Apple's tools are not really relevant at those levels of the entertainment business anymore. Post-pro would be Pro Tools for audio, something like Avid Media Composer for editing etc.

I think Apple has realized they are not playing on that level anymore and target their marketing to where they are still in the game. That's not necessarily a bad move.


Tons of professionals use Logic. Really, you will find money-making musicians using any of the major DAWs. Pro Tools might still be the standard for recording studios, but that's likely it.

My point was more that creators will often use more than one tool.

I know Logic is widespread amongst beat producers and songwriters, especially in the US. But you will also often see tracks getting produced on Logic but the final mix then happens on Pro Tools (by professional mixing engineers).

But that's why I explicitly mentioned Logic, I think it's the one pro app from Apple that still deserves the moniker, at least in regards to where it is used. The video stuff not so much anymore.


Most musicians I know use Ableton or Bitwig on macOS. Logic Pro is really a hassle for collaboration and touring from what I've heard.

A lot of people round-trip through various pieces of software to create things. As a film editor I use NLEs, DAWs, music production tools, various encoders (like Compressor), graphic design tools… I’d say it’s the norm, not the exception, to need 2-3 specialized pieces of software during a project.

> I don’t get why they think “professional” is a generic tier.

The target market is prosumer, not true professional.


I don't think there's that much of a distinction.

The real difference is that a "true professional" already has the software—purchased at full price by themselves or by their employer—and doesn't need a subscription in the first place.


The biggest distinction, in my experience, is that prosumers tend to be means-focused and professionals tend to be ends-focused, so there's less zealotry and evangelism in professional circles.

Also in professional circles, there's usually one or two industry standards and you just use what everyone else is using.

Many people that use professional tools are genuinely doing hobbyist stuff. Especially if they haven't already bought their tools outright.

But besides, this subscription works with Family Sharing and is only $12, so it looks easy to get your money's worth.


> If I’m a music producer, what’s the value of being given a digital art drawing program? If I’m an illustrator, why do I need a cinema post production suite

Are you talking about Adobe here?


Probably not, seeing as Creative Cloud has bundles focused on specific mediums.

Their default is the "All" plan which includes many of the same categories as the Apple bundle.

Ooh, this is a great idea. There’s probably a lot that can be detected by measuring usage drop. I wish the same analysis was attempted in my country.

Question: I see that the “actions hints” in the demo show messaging people as an option.

Is this a planned use case, for the user to hand over human communication in, say, Slack or similar? What are the current capabilities and limitations for that?


>I do wonder if there was a completely normal bloke in the middle east at the relevant time who suggested it might be good to stop being complete shits to each other...

As far as I know, what we know as established historical fact is that:

- there indeed was a bloke

- he splintered from being a follower of another more famous bloke at the time, who was executed by the Romans for becoming too popular with the masses

- he preached the world was about to end (as in, in their listener's lifetime)

- he also pissed off the Romans enough to be executed.

Everything else is left to guess!


AI has a lot of potential as a personal, always on teaching assistant.

It's also a 'skip intro' button for the friction that comes with learning.

You're getting a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss asks for a ticket? Better not to screw up; hand it to the machine just to be safe.

If those temptations are avoided you can progress, but I'm not sure that lots of people will succeed. Furthermore, will people be afforded that space to be slow, when their colleagues are going at 5x?

Modern life offers little hope. We're all using Uber Eats to avoid the friction of cooking, Tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.


And there's a ton of human incentives here to take shortcuts in the review part. The process almost pushes you to drop your guard: you spend less physical time observing the code as it's written, you get huge chunks of code dropped on you, iterations change so much at once that it's hard to keep a mental model, there's FOMO involved about the speed gain you're supposed to get... We're going to see worse review quality just as a matter of the UX and friction of the tool.

Yes! It depends on the company, of course, but I think plenty of people are going to fall for the perverse incentives while reviewing AI output for tech debt.

The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.

Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.

The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.

But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTOs are not taking this seriously.


  > enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle
and a new bottleneck appears...

(I don't disagree with this take though; QA should be done from start to finish and be integral every step of the way.)


The demand for software has increased, and the demand for software engineers has increased proportionally, because we were the only source of software. This correlation might no longer hold.

Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.

Not the Google kind of coding, but the “I need a website for my restaurant” kind, or the “I need to aggregate data from these Excel files in a certain way” kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.

Those are huge businesses, even if they are not present in the HN bubble.


> "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind

I am afraid that kind of job was already over by 2015. There have been no-code website makers available since then, and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-$50 at this point; it's so efficient even AI won't be more cost-effective than that. If you have $10k saved you can hire a competitive agency to maintain and build your website. This business has been completely taken over by low-cost Fiverr automators and agencies for high-budget projects. Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom & pop store.


Just to add to the point: no-code website makers have already incorporated AI to simplify marketing tasks like drafting copy, blog posts, and emails.

I wonder exactly what you do, because almost none of your comment jibes with my knowledge and experience.

Note that I own an agency that does a lot of what you say is “solved”, and I assure you that it’s not (at least in terms of being an efficient market).

SMBs with ARR up to $100m (or even many times more than that in ag) struggle to find anyone good to do technical work for them, either internally or externally, on a consistent basis.

> I am afraid that kind of job was already over by 2015.

Conceptually, maybe. In practice, definitely not.

> There have been no-code website makers available since then

… that mostly make shit websites.

> and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-$50 at this point,

Also almost certainly a shit website at that price point, probably using the no-code tools mentioned above.

These websites have so many things wrong with them that demonstrably decrease engagement or lose revenue.

> it's so efficient even AI won't be more cost-effective than that.

AI will be better very soon, as the best derivative AI tools will be trained on well-developed websites.

That said, AI will never have taste, and it will never have empathy for the end user. These things can only be emulated (at least for the time being).

> If you have $10k saved you can hire a competitive agency to maintain and build your website

You can get an OK “brochure” website built for that. Maintaining it, if you have an agency that actually stays in business, will be about $100 minimum for the lowest-effort touch, $200 for an actual one-line change (like business hours), and up from there for anything substantial.

If you work with a decent, reputable agency, a $10k customer is the lowest on the totem pole amongst the agency’s customer list. The work is usually delegated to the least experienced devs, and these clients are usually merely tolerated rather than embraced.

It sucks to be the smallest customer of an agency, but it’s a common phenomenon amongst certain classes of SMBs.

> This business has been completely taken over by low-cost Fiverr automators and agencies for high-budget projects.

This is actually true. Mainly because any decent small agency either turns into one that does larger contracts, or it gets absorbed by one.

That said, there is a growing market for mid-sized agencies (“lifestyle agencies”?).

> Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom & pop store

As mentioned above, you absolutely do not want to be a mom and pop store working with a web agency that works with any large, international brand like Adidas.

I appreciate your points from a conceptual level, but the human element of tech, software, and websites will continue to be a huge business for many decades, imho.

