Being more of an AI skeptic than not, I don't think the article's conclusion is true.
What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, that's already better than a human who has no clue about how the telephone works, or where to even begin if said human wanted to understand it.
Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of, well, being literally quite expensive and power hungry. But those are technical details.
LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.
LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or, say, generating code, but where they excel so far is answering arbitrary questions about pretty much anything.
There's a very important distinction you're missing here: they don't know things, they generate plausible things. The better the training, the more similar those are, but they never converge to identity. It's as if you asked me to explain the S3 API and I weren't allowed to say "I don't know": I'd get pretty close, but you wouldn't know what I got wrong until you read the docs.
The ability of LLMs to search out the real docs on something and digest them is the fix for this, but don't start thinking you (and the LLM) don't need the real docs anymore.
That said, it’s always been a human engineer superpower to know just enough about everything to know what you need to look up, and LLMs are already pretty darn good at that, which I think is your real point.
So the market is going to be flooded with this type of soulless book that has no distinct character or style, just pure dry facts?
In a sense, "I wrote a book about it" is disingenuous and I agree the author's bullet list would probably be more interesting and would save us a lot of time.
I would take back my negative feedback in that case! I am reading the book and the content is interesting, but I am never sure what is actually your thought vs. LLM filler!
It's a fun comparison, but with the notable difference that that one can compile the Linux kernel and generate code for multiple different architectures, while this one can only compile a small proportion of valid C. It's a great project, but it's not so much a C compiler as a compiler for a subset of C: every program this compiler can compile can also be compiled by an actual C compiler, but not vice versa.
And it doesn't for the compiler in question either, as long as the headers exist in the places it looks for them. No compiler magically knows where the headers are if you haven't placed them in the right location.
stddef.h (et al.) should be shipped by the compiler itself, so the compiler should know where it is. But this one relies on gcc for it, hence it doesn't always know where to look. Seems totally fine for a prototype.
Shipping GPL headers that explicitly state they are part of GCC with a Creative Commons licensed compiler would probably make a lot of people rather unhappy, possibly even lawyers.
I've certainly encountered clang & gcc not finding or just not having header files a good couple of times. Mostly around cross-compilation, but there was a period of time during which clang++ just completely failed to find any C++ headers on my system.
So we're down from implications that the compiler didn't work to a missing or unclear description of a dependency in the README (note that following the instructions worked for others).
I mean, we know it can be done in little space, given the many tiny C compilers. I think what is most interesting about this one is exactly the creative shortcuts. Imposing extra restrictions, e.g. for bootstrapping, is an interesting design space.
No it's not. Sometimes (or maybe most of the time) doing it badly means it's simply not your thing.
I used to have a neighbour who liked to play the piano and sing. He was doing it consistently badly and he didn't have anyone to tell him that he should probably stop trying.
I have two problems with that. One is that you can do what you like quietly and without disturbing anyone around you. The second is the Dunning-Kruger effect: witnessing it first hand is never fun.
Who are you, to define what "the thing" is, for someone else?
Doing the thing isn't about judging other people. That doesn't contribute to your thing.
If someone is bothering you, making it hard to do your thing, then your thing involves talking to them about your problem. Without judging what they are doing.
Oh... So you start doing something new and you're top 10% without practicing or being bad at it first? I'd love to test that to see if it's the case... By your logic ("you're not the best ever to do something, so you're not really doing it"), you have probably never done a single thing in your entire life. Maybe you should just stop.
Yes, but that doesn't explain why we aren't given a choice. Program code is boringly deterministic, but in many cases that's exactly what you need, while non-determinism becomes your dangerous enemy (as in the case of some Airbus jets being susceptible to bit flips under cosmic rays).
The current way to address this is through RAG, or Retrieval-Augmented Generation. This means using the LLM for the natural-language, non-deterministic portion, and using traditional code, databases, and files for the deterministic part.
A good example is bank software where you can ask what your balance is and get back the real number. A RAG app won't "make up" your balance or even consult its training data to find it. Instead, the traditional (deterministic) code operations are done separately from the LLM calls.
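For what it's worth, here's a minimal Swift sketch of that split; fetchBalance, askLLM and answerBalanceQuestion are made-up names, and askLLM is just a local stub standing in for a real model API, but the shape is the point: the number comes from deterministic code, the LLM only phrases the reply.

import Foundation

// Deterministic part: stand-in for a real database query.
func fetchBalance(accountID: String) -> Decimal {
    let balances: [String: Decimal] = ["acct-42": 1234.56]
    return balances[accountID] ?? 0
}

// Stand-in for a call to an LLM API, kept local so the sketch is self-contained.
func askLLM(prompt: String) -> String {
    return "Assistant: \(prompt)"
}

func answerBalanceQuestion(accountID: String, question: String) -> String {
    let balance = fetchBalance(accountID: accountID)   // deterministic lookup
    let prompt = "The customer's balance is \(balance). Answer their question: \(question)"
    return askLLM(prompt: prompt)                      // non-deterministic phrasing only
}

print(answerBalanceQuestion(accountID: "acct-42", question: "What's my balance?"))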
It didn't make pointers safer to use though. In Swift and some other modern languages you can't dereference an optional (nullable) pointer without explicitly unwrapping it first, either safely or by force-unwrapping.
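A rough illustration of what that looks like in Swift (the variable name is made up, nothing here is tied to a particular API):

let name: String? = nil

// name.count would not compile: the optional must be unwrapped first.
if let unwrapped = name {        // safe unwrap: this branch runs only when non-nil
    print(unwrapped.count)
} else {
    print("no value")
}

// print(name!.count)            // force-unwrap: compiles, but crashes at runtime when nil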
It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime but a shame, really.
That's where it all starts: you want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it usually means a design flaw and/or a misconception about structured concurrency, not "oh, I forgot @MainActor".
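To make that concrete, here's a minimal sketch of my own (sumOfSquares is made up; withTaskGroup is the real API): the group fans the work out so child tasks can land on separate cores, and the structure guarantees they all finish, with their results collected safely, before the function returns.

func sumOfSquares(_ numbers: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for n in numbers {
            group.addTask { n * n }      // child tasks may run in parallel on separate cores
        }
        var total = 0
        for await partial in group {     // results come back without shared mutable state
            total += partial
        }
        return total
    }
}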
Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it were better known outside of the Apple ecosystem, because it fully deserves to be a loved, general-purpose mainstream language alongside Python and others.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough of it. I almost NEVER find articles targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on YouTube does great video updates and tutorials.
Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.
Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.
It doesn’t mean there isn’t good support for true parallelism in Swift concurrency. It’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks, allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI; they simply have to “await” the calls to the UI actor in case it’s currently executing.
The model works well for both asynchronous tasks (you await the long IO operation, your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusivity, etc.)
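A hypothetical sketch of that shape (ProgressModel, crunchNumbers and heavyWork are invented names): the heavy loop runs off the main actor and only hops over to it, via await, for the updates it wants to publish.

@MainActor
final class ProgressModel {
    var percent = 0
    func update(to value: Int) { percent = value }   // always runs on the main actor
}

func crunchNumbers(reportingTo model: ProgressModel) async {
    for step in 1...100 {
        heavyWork(step)                 // off the main actor, free to use another core
        await model.update(to: step)    // "talk back" to the UI actor; suspends if it's busy
    }
}

func heavyWork(_ step: Int) {
    _ = (0..<100_000).reduce(0, +) + step   // stand-in for an expensive computation
}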
There are a lot of gripes I have with Swift concurrency, but my memory is about 2 years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call ObjC code that is using GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc.) that you don’t even know the code you’re calling is using. But I digress…
Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.
async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.
> If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel
Isn't that always true for thread pool-backed parallelism? If only one core is available for whatever reason, then you may have concurrency, but not parallelism.
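For reference, a tiny sketch of the async let case being discussed (loadDashboard, fetchNews and fetchWeather are invented): the two child tasks are concurrent by construction, but whether they actually run in parallel depends on how many threads the runtime pool has free.

func loadDashboard() async -> (String, String) {
    async let news = fetchNews()         // first child task starts immediately
    async let weather = fetchWeather()   // second child task; may land on another core
    return await (news, weather)         // suspends until both children complete
}

func fetchNews() async -> String { "headlines…" }
func fetchWeather() async -> String { "sunny" }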
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
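A quick sketch of that ordering, since it surprises people (readValue is made up): the defer body runs after the return expression has been evaluated, but before control leaves the function.

func readValue() -> Int {
    var value = 1
    defer { print("cleanup, value is \(value)") }   // runs last, still sees value == 2
    value = 2
    return value   // the return value (2) is computed before the defer body runs
}

print(readValue())
// prints "cleanup, value is 2", then "2"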
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write `foo` immediately followed by `defer revert_foo`, then when scanning the code it’s easier to verify that you didn’t forget the `revert_foo` part than when there are many lines between `foo` and the finally block that calls `revert_foo`.
A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the C/C++ Users Journal:
I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.
It's easy to demonstrate that destructors run after evaluating `return` in Rust:
struct PrintOnDrop;

impl Drop for PrintOnDrop {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    let _p = PrintOnDrop; // underscore prefix just silences the unused-variable warning
    return println!("returning"); // prints "returning", then "dropped" as _p is destroyed
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.
EDIT: I don’t think you can actually put a return in a defer, I may have misremembered, it’s been several years. Disregard this comment chain.
It gets even better in Swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
    let i: Int  // declared but not defined yet!
    defer { return i }
    // all code paths must define i exactly once, or it’s a compiler error
    if foo() {
        i = 0
    } else {
        i = 1
    }
    doOtherStuff()
}
No, I actually misremembered… you can’t return in a defer.
The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:
func callFoo() -> FooResult {
    let fooParam: Int  // declared, not defined yet
    defer {
        // fooParam must get defined by the end of the function
        foo(fooParam)
        otherStuffAfterFoo() // …
    }
    // all code paths must assign fooParam
    if cond {
        fooParam = 0
    } else {
        fooParam = 1
        return // early return!
    }
    doOtherStuff()
}
Blame it on it being years since I’ve coded in Swift; my memory is fuzzy.
Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit is in C++23 which is still opt-in in most cases.
Skimmed through the article; some interesting numbers, but not a single statistic is per capita (or per million, whatever). How am I supposed to understand the scale of the phenomenon without per capita figures? Sorry, but it seems a bit useless.