
> AI will make this situation worse.

Being an AI skeptic more than not, I don't think the article's conclusion is true.

What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask an AI how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, that's already better than a human who has no clue how the telephone works, or where to even begin if they wanted to understand it.

Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of being, well, literally quite expensive and power hungry. But those are technical details.

LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.

LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.


A very important distinction you're missing here: they don't know things, they generate plausible things. The better the training, the more similar those are, but they never converge to identity. It's like if you asked me to explain the S3 API and I'm not allowed to say "I don't know": I'm going to get pretty close, but you won't know what I got wrong until you read the docs.

The ability of LLMs to search out the real docs on something and digest them is the fix for this, but don't start thinking you (and the LLM) don't need the real docs anymore.

That said, it’s always been a human engineer superpower to know just enough about everything to know what you need to look up, and LLMs are already pretty darn good at that, which I think is your real point.


Nice! Sent you a message via the contact form.

So the market is going to be flooded with this type of soulless book that has no distinct character or style, just pure dry facts?

In a sense, "I wrote a book about it" is disingenuous and I agree the author's bullet list would probably be more interesting and would save us a lot of time.


Good feedback, thanks! Will include a version of the original stream of consciousness and raw notes in a day or two.

I would take back my negative feedback in that case! Am reading the book, the content is interesting, but I am never sure what is actually your thought vs LLM filler!


Going to be? Already is!

Compare that to the C compiler in 100,000 lines written by Claude in two weeks for $20,000 (I think it was posted on HN just yesterday).

It's a fun comparison, but with the notable difference that that one can compile the Linux kernel and generate code for multiple architectures, while this one can only compile a small proportion of valid C. It's a great project, but it's not so much a C compiler as a compiler for a subset of C: every program it can compile can also be compiled by an actual C compiler, but not vice versa.

But can it compile the "Hello, World" example from its own README.md?

https://github.com/anthropics/claudes-c-compiler/issues/1


It's fascinating how few people read past the issue title

And this is exactly why coding with AI is not-so-slowly taking over.

Most people think they are more capable than they actually are.


Noticed the part where all it requires is to actually have the headers in the right location?

"The location of Standard C headers do not need to be supplied to a conformant compiler."

From https://news.ycombinator.com/item?id=46920922 discussion.


And it doesn't for the compiler in question either, as long as the headers exist in the places it looks for them. No compiler magically knows where the headers are if you haven't placed them in the right location.

stddef.h (et al.) should be shipped by the compiler itself, so it should know where they are. But this one relies on gcc for them, hence it doesn't always know where to look. Seems totally fine for a prototype.

Especially given they're not shipping anything. The GCC binaries can't find misplaced or not installed headers either.

Shipping GPL headers that explicitly state that they are part of GCC with a creative commons licensed compiler would probably make a lot of people rather unhappy, possibly even lawyers.

Would you accept the same quality of implementation from a human team?

I've certainly encountered clang & gcc not finding or just not having header files a good couple times. Mostly around cross-compilation, but there was a period of time for which clang++ just completely failed to find any C++ headers on my system.

Yes, clang is famously in this category.

If you copy the clang binary to a random place in your filesystem, it will fail to compile programs that include standard headers.


A compiler that can't magically know how to find headers that don't exist in the expected directory?

Yes, that is the case for pretty much every compiler. I suppose you could build the headers into the binary, but nobody does that.


Consider: content-addressed headers.

Then you might as well embed the headers, since in that case you can't update the compiler and headers separately anyway.

I guess you've heard of https://www.unison-lang.org/

Noticed the part where the exact instructions from the Readme were followed and it didn't work?

So we're down to a missing or unclear description of a dependency in the README (note that following the instructions worked for others), from the implication that the compiler didn't work.

Well I'm pretty sure the author can make a compliant C compiler in a few more sectors.

I mean we know it can be done in little space, given the many tiny C compilers. I think what is most interesting about this one is exactly the creative shortcuts. It's an interesting design space for e.g. bootstrapping to impose extra restrictions.

> Doing it badly is doing the thing.

No it's not. Sometimes (or maybe most of the time) doing it badly means maybe it's not your thing.

I used to have a neighbour who liked to play the piano and sing. He was doing it consistently badly and he didn't have anyone to tell him that he should probably stop trying.


People sometimes do things because they enjoy doing them, even if they aren’t particularly good at them.


I have two problems with that. One is that you can do what you like quietly, without disturbing anyone around you. The second is the Dunning–Kruger effect: witnessing it first hand is never fun.


And both of those problems are yours, not your neighbor’s.

To your neighbor, doing it badly is still doing the thing.


Who are you, to define what "the thing" is, for someone else?

Doing the thing isn't about judging other people. That doesn't contribute to your thing.

If someone is bothering you, making it hard to do your thing, then your thing involves talking to them about your problem. Without judging what they are doing.


Well you are pretty bad at comments. Hang up the keyboard bud


Oh... so you start doing something new and you're top 10% without practicing or being bad at it first? I'd love to test that to see if it's the case... Your logic of "you're not the best ever to do something, so you are not doing it" means you have probably never done a single thing in your entire life. Maybe you should just stop.

Yeah the dude should have stopped doing what he liked


All because this dude is the ultimate judge for all that is good and worth doing somehow..

No, there's a different thing here, which is that practice needs to be deliberate.

The answer isn't to stop practicing, it's to practice the right thing and not practice doing it wrong.

They're probably still better off playing badly and enjoying it vs. just staring at an unplayed piano, though.


Maybe people did tell him he sucked, but he was having fun


Just let people be.


Yes, but that doesn't explain why we aren't given a choice. Program code is boringly deterministic, but in many cases that's exactly what you need, while non-determinism becomes your dangerous enemy (as in the case of some Airbus jets being susceptible to bit flips under cosmic rays).


The current way to address this is through RAG applications, or Retrieval-Augmented Generation. This means using the LLM for the natural-language, non-deterministic portion and using traditional code, databases and files for the deterministic part.

A good example is bank software where you can ask what your balance is and get back the real number. A RAG app won't "make up" your balance or even consult its training data to find it. Instead, the traditional (deterministic) code operations are done separately from the LLM calls.
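A minimal sketch of that split (all the names below are hypothetical, not any real banking or model API): the balance comes from a deterministic lookup, and the model is only asked to phrase an answer around the number it is given.

    import Foundation

    // Deterministic part: a plain lookup, stubbed here; in a real app this
    // would be a SQL query or a core-banking API call.
    func fetchBalance(accountID: String) -> Decimal {
        Decimal(string: "1234.56")!
    }

    // Non-deterministic part: the model only phrases/explains the known value.
    // callLLM is a placeholder for whatever model API you actually use.
    func callLLM(_ prompt: String) async -> String { prompt }

    func askAssistant(_ question: String, accountID: String) async -> String {
        let balance = fetchBalance(accountID: accountID)   // ground truth
        let prompt = """
        The user asked: "\(question)"
        Their verified balance is \(balance).
        Answer conversationally, but do not change the number.
        """
        return await callLLM(prompt)
    }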


It didn't make pointers safer to use though. In Swift and some other modern languages you can't dereference an optional (nullable) pointer without unwrapping it first, either safely or by force-unwrapping it.
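A tiny Swift illustration of that (loadUser is a made-up stub): the compiler simply won't let you touch the value until it has been unwrapped, one way or another.

    struct User { let name: String }

    func loadUser() -> User? { User(name: "mojuba") }   // hypothetical stub

    let maybeUser: User? = loadUser()

    // print(maybeUser.name)            // compile error: the value is optional

    if let user = maybeUser {           // safe unwrap
        print(user.name)
    }

    let forced = maybeUser!             // force-unwrap: traps at runtime if nil
    print(forced.name)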


It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.

The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most which is not a crime but a shame really.

That's where it all starts: you want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or a misconception about structured concurrency, not "oh, I forgot @MainActor".
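As a rough sketch of what that buys you (renderTile below is a made-up stand-in for real CPU-bound work): a task group lets the runtime spread independent work items across however many cores are available, while the compiler checks whatever crosses those boundaries.

    // Fan independent, CPU-bound work out across cores with a task group.
    func renderTile(_ index: Int) -> [UInt8] {
        // stand-in for an expensive, self-contained computation
        [UInt8](repeating: UInt8(index % 256), count: 1024)
    }

    func renderAllTiles(count: Int) async -> [[UInt8]] {
        await withTaskGroup(of: (Int, [UInt8]).self) { group in
            for i in 0..<count {
                group.addTask { (i, renderTile(i)) }   // scheduled on the shared pool
            }
            var results = [[UInt8]](repeating: [], count: count)
            for await (i, tile) in group {             // results arrive in any order
                results[i] = tile
            }
            return results
        }
    }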

Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.

I wish it were better known outside of the Apple ecosystem, because it fully deserves to be a loved, general-purpose mainstream language alongside Python and others.


> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.

I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.

As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.

I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.


> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.

You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.

I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.


Got a list of those bloggers you like?


Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on youtube does great video updates and tutorials.


> The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer.

That's parallelism. Concurrency is mostly about hiding latency from I/O operations like network tasks.


Network operations are "asynchrony". Together with parallelism, they are both kinds of concurrency and Swift concurrency handles both.

Swift's "async let" is parallelism. As are Task groups.
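For instance (fetchProfile and fetchFeed are hypothetical stand-ins for real network calls), async let starts both child tasks immediately and the runtime is free to run them on separate cores:

    struct Profile { let name: String }
    struct Post { let title: String }

    // Hypothetical stand-ins for real async work.
    func fetchProfile() async throws -> Profile { Profile(name: "mojuba") }
    func fetchFeed() async throws -> [Post] { [] }

    func loadScreen() async throws -> (Profile, [Post]) {
        async let profile = fetchProfile()           // child task starts right away
        async let feed = fetchFeed()                 // so does this one
        return (try await profile, try await feed)   // suspend until both complete
    }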


Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.


Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.

It doesn’t mean there isn’t good support for true parallelism in swift concurrency, it’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks… allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI, but they simply have to “await” the calls to the UI actor in case it’s currently executing.

The model works well for both asynchronous tasks (you await the long IO operation, your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusivity, etc.)
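A rough sketch of that shape (the types below are made up): the UI-owned state is isolated to the main actor, and a CPU-heavy task "talks back" by awaiting calls into that isolation.

    // Main-actor-isolated state, updated from work running off the main actor.
    @MainActor
    final class ProgressModel {
        var completed = 0
        func advance() { completed += 1 }   // only ever touched on the main actor
    }

    func heavyWork(_ n: Int) -> Int {
        // stand-in for a CPU-heavy computation
        (0..<n).reduce(0, &+)
    }

    func crunch(_ items: [Int], model: ProgressModel) async {
        for item in items {
            _ = heavyWork(item)             // runs wherever the executor put us
            await model.advance()           // hop to the main actor to report progress
        }
    }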

There’s a lot of gripes I have with swift concurrency but my memory is about 2 years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call ObjC code which is using GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc) that you don’t even know the code you’re calling is using. But I digress…


Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.


async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.


> If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel

Isn't that always true for thread pool-backed parallelism? If only one core is available for whatever reason, then you may have concurrency, but not parallelism.


I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
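A small illustration of the "not just try…finally" part (the file-handling details are schematic): the cleanup sits right next to the acquisition and fires on every exit path, including early returns and thrown errors.

    import Foundation

    func firstLine(of url: URL) throws -> String? {
        let handle = try FileHandle(forReadingFrom: url)
        defer { try? handle.close() }        // runs on normal return *and* on throw

        guard let data = try handle.read(upToCount: 4096), !data.isEmpty else {
            return nil                       // early return: the handle still gets closed
        }
        let text = String(decoding: data, as: UTF8.self)
        return text.split(separator: "\n").first.map(String.init)
    }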


I think Swift’s defer (https://docs.swift.org/swift-book/documentation/the-swift-pr...) was inspired by/copied from go (https://go.dev/tour/flowcontrol/12), but they may have taken it from an even earlier language that I’m not aware of.

Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.

Secondly, if you write

       foo
       defer revert_foo
then, when scanning the code, it's easier to verify that you didn't forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.

A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.


The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the C/C++ Users Journal:

https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/a...

A similar macro later (2006) made its way into Boost as BOOST_SCOPE_EXIT:

https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/ht...

I can't say for sure whether Go's creators took inspiration from these, but it wouldn't be surprising if they did.


Yeah, it's especially handy in UI code where you can have asynchronous operations but want to have a clear start/end indication in the UI:

    busy = true
    Task {
        defer { busy = false }
        // do async stuff, possibly throwing exceptions and whatnot
    }


I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.


I can see your point, but that (https://book.pythontips.com/en/latest/context_managers.html) requires the object you’re using to implement __enter__ and __exit__ (or, in C#, implement IDisposable (https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...), in Java, implement AutoCloseable (https://docs.oracle.com/javase/tutorial/essential/exceptions...); there likely are other languages providing something similar).

Defer is more flexible and requires less boilerplate for adding callsite-specific handling. For an example, see https://news.ycombinator.com/item?id=46410610
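For instance (a schematic Swift sketch): the cleanup can be any ad-hoc statements written at the call site, combined freely, without the resource or type having to adopt any protocol.

    import Foundation

    final class Loader {
        var isBusy = false

        func refresh() async {
            isBusy = true
            let started = Date()
            defer {
                isBusy = false                               // restore the flag
                print("refresh took \(Date().timeIntervalSince(started))s")
            }
            // ... await network calls; every exit path runs the defer above ...
        }
    }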


I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.

https://docs.rs/defer-rs/latest/defer_rs/


I don't know Rust, but can this `defer` run after the `return` statement is evaluated, like in Swift? Because in Swift you can do this:

    func atomic_get_and_inc() -> Int {
        sem.wait()
        defer {
            value += 1
            sem.signal()
        }
        return value
    }


It's easy to demonstrate that destructors run after evaluating `return` in Rust:

    struct PrintOnDrop;
    
    impl Drop for PrintOnDrop {
        fn drop(&mut self) {
            println!("dropped");
        }
    }
    
    fn main() {
        let p = PrintOnDrop;
        return println!("returning");
    }
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.


EDIT: I don’t think you can actually put a return in a defer, I may have misremembered, it’s been several years. Disregard this comment chain.

It gets even better in swift, because you can put the return statement in the defer, creating a sort of named return value:

    func getInt() -> Int {
        let i: Int // declared but not
                   // defined yet!

        defer { return i }

        // all code paths must define i
        // exactly once, or it’s a compiler
        // error
        if foo() {
            i = 0
        } else {
            i = 1
        }

        doOtherStuff()
    }


This control flow is wacky. Please never do this.


Huh, I didn't know about `return` in `defer`, but is it really useful?


No, I actually misremembered… you can’t return in a defer.

The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:

  func callFoo() -> FooResult {
    let fooParam: Int // declared, not defined yet
    defer {
      // fooParam must get defined by the end of the function
      foo(fooParam)
      otherStuffAfterFoo() // …
    }

    // all code paths must assign fooParam
    if cond {
      fooParam = 0
    } else {
      fooParam = 1
      return // early return!
    }

    doOtherStuff()
  }
Blame it on it being years since I’ve coded in swift, my memory is fuzzy.


    #include <iostream>
    #define RemParens_(VA) RemParens__(VA)
    #define RemParens__(VA) RemParens___ VA
    #define RemParens___(...) __VA_ARGS__
    #define DoConcat_(A,B) DoConcat__(A,B)
    #define DoConcat__(A,B) A##B
    #define defer(BODY) struct DoConcat_(Defer,__LINE__) { ~DoConcat_(Defer,__LINE__)() { RemParens_(BODY) } } DoConcat_(_deferrer,__LINE__)

    int main() {
        {
            defer(( std::cout << "Hello World" << std::endl; ));
            std::cout << "This goes first" << std::endl;
        }
    }


Why would that be preferable to just using an RAII-style scope_exit with a lambda?


Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit is in C++23 which is still opt-in in most cases.


And here I thought we were trying to finally kill off pre-processor macros.


"We have syntax macros at home"


Skimmed through the article; some interesting numbers, but not a single statistic is per capita (or per million, whatever). How do I understand the scale of the phenomenon without per capita figures? Sorry, but it seems a bit useless.

