> x86-64 is typically limited to about 5 instructions
Intel's Lion Cove decodes 8 instructions per cycle and can retire 12. Intel's Skymont, with its three decode clusters, can even do 9 instructions per cycle, and that's without a µop cache.
AMD's Zen 5, on the other hand, has a 6K-entry op cache for decoded instructions, allowing 8 instructions per cycle, but still only a 4-wide decoder per SMT thread.
And yet AMD is still ahead of Intel in both performance and performance per watt. So maybe this whole instruction-decode thing is not as important as people say.
> littered with too many special symbols and very verbose
This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems like it should be simpler. In other words, it's a lot of semantic complexity rather than an issue with syntax.
I think it's also that Rust needs you to be very explicit about things that are very incidental to the intent of your code. In a sense that's true of C, but in C worrying about those things isn't embedded in the syntax, it's in lines of code that are readable (but can also go unwritten or be written wrong). In the GCed languages Rust actually competes with (outside the kernel) — think more like C# or Kotlin, less like Python — you do not have to manage that incidental complexity, which makes Rust look 'janky'.
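A toy example of the kind of thing I mean (the classic lifetime case, not tied to any real codebase): the intent is just "return the longer string", but the signature also has to spell out how long the returned reference lives relative to the inputs, which C# or Kotlin never ask you to write down.

    // Pure intent: return the longer of two strings.
    // The <'a> and &'a are bookkeeping for the compiler: they say the
    // result borrows from both inputs and can't outlive them.
    fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
        if a.len() >= b.len() { a } else { b }
    }

    fn main() {
        let x = String::from("borrow");
        let y = String::from("checker");
        println!("{}", longer(&x, &y)); // prints "checker"
    }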
> Apple demonstrated to the world that it can be extremely fast and sip power.
Kinda. Apple silicon sips power when it isn't being used, but under a heavy gaming load it's pretty comparable to AMD. People report 2 hours of battery life playing Cyberpunk on Macs, which matches the Steam Deck. It's only in lighter games that Apple pulls ahead significantly, and that really has nothing to do with it being ARM.
Not for Linux they're not. IIRC audio and the camera don't work, and the firmware is non-redistributable, so you need to mooch it off a Windows partition. On top of that, performance on Linux hasn't been great either.
It changes nothing. If you get taxed 20% up to 90k and 30% above that, then donating 10k still only saves you 3k in taxes (the deduction comes off the part taxed at 30%): you're still out 7k and you're still paying 18k in taxes on the first 90k.
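To make the arithmetic concrete, a quick sketch with the made-up brackets above, assuming an income over 100k so the whole donation falls in the 30% band:

    // Hypothetical brackets from the comment: 20% up to 90k, 30% above.
    fn tax(income: f64) -> f64 {
        income.min(90_000.0) * 0.20 + (income - 90_000.0).max(0.0) * 0.30
    }

    fn main() {
        let income = 120_000.0; // assumed: high enough that the donation stays in the 30% band
        let donation = 10_000.0;
        let saved = tax(income) - tax(income - donation);
        println!("tax saved: {saved}");                         // 3000
        println!("net cost of donating: {}", donation - saved); // 7000
        println!("tax on the first 90k: {}", tax(90_000.0));    // 18000
    }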
That's a rule that might hold for applications and services. It does not hold for languages and libraries, where any and every aspect is going to be the bottleneck in someone else's code. It's a different 10% for each user.
Native code generally doesn't have undefined behaviour. C has undefined behaviour and that's a problem regardless of whether you're compiling to native or wasm.
I have the opposite experience: C++ is what I have the most experience with by a very wide margin, but I find reading other people's Rust code way easier than other people's C++ code. There are way more weird features, language extensions and other crazy stuff in C++ land.
I believe you; I haven't contributed a lot of C++ code, and it's quite possible the projects I have contributed to (e.g. the Godot engine) just happen to be written very legibly.
An M1 Max has 400GB/s of memory bandwidth, but the CPU is only capable of using about half of that (see https://tlkh.hashnode.dev/benchmarking-the-apple-m1-max#memo...). So a Framework 13 with DDR5-5600, at 86GB/s of memory bandwidth, has a bit less than half of that, nowhere near ⅕.
If we compare like for like, the RTX 5070 in a Framework 16 has 384GB/s on its own; add the 86 and the combined memory bandwidth is higher than an M1 Max's.
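Spelling out the ratios, a sketch using only the figures as quoted above (not re-measured):

    fn main() {
        // Numbers as quoted in this thread, not independently verified.
        let m1_max_package = 400.0;            // GB/s, M1 Max total
        let m1_max_cpu = m1_max_package / 2.0; // ~200 GB/s reachable by the CPU cores
        let fw13_ddr5_5600 = 86.0;             // GB/s, dual-channel DDR5-5600
        let fw16_dgpu = 384.0;                 // GB/s, the dGPU's own memory

        println!("{:.2}", fw13_ddr5_5600 / m1_max_cpu); // 0.43 -> a bit under half, not 1/5
        println!("{}", fw16_dgpu + fw13_ddr5_5600);     // 470 GB/s combined vs 400
    }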
I was being generous. Those are just the raw bandwidth numbers. For my workflow, where the CPU has to go back and forth with the RAM hundreds of times with unique queries, SoCs are hundreds of times faster, not 5 times.