I'm guessing it's aimed at game development, since Vulkan has a similar pattern in every function call (although optional; the driver does its own allocation if you pass null).
As another commenter wrote "how do you allocate memory without an allocator?"
Even `malloc` has overhead.
> Wouldn't dynamic scope be better?
Dynamic scope would likely be heavier than what Odin has, since it'd require the language itself to keep track of this - and to an extent Odin does do this already with `context.allocator`, it just provides an escape hatch when you need something allocated in a specific way.
Then again, Odin is not a high level scripting language like Python or JavaScript - even the most bloated abstractions in Odin will run like smooth butter compared to those languages. When comparing to C/Rust/Zig, yeah fair, we'll need to bring out the benchmarks.
> Dynamic scope would likely be heavier than what Odin has, since it'd require the language itself to keep track of this
Not necessarily, you can imagine that malloc() itself has an API that says "temporarily switch calls on this thread to a different allocator" and then you undo it later.
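Something like this, sketched in C; the names here are hypothetical, nothing from a real libc:

```c
#include <stdlib.h>

/* Hypothetical per-thread allocator override; all names are invented for
 * illustration, no real libc exposes this API. */
typedef void *(*alloc_fn)(size_t);

static _Thread_local alloc_fn thread_alloc = NULL;

void *alloc(size_t n) {
    /* Route through the override if one is set, else fall back to malloc. */
    return thread_alloc ? thread_alloc(n) : malloc(n);
}

alloc_fn swap_allocator(alloc_fn new_fn) {
    /* Return the previous allocator so the caller can restore it later,
     * which gives dynamic-scope behavior without language support. */
    alloc_fn old = thread_alloc;
    thread_alloc = new_fn;
    return old;
}
```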
> and to an extent Odin does do this already with `context.allocator`
It has a temporary allocator as well, which could track memory leaks. Not so much anymore though, IIRC.
> As another commenter wrote "how do you allocate memory without an allocator?"
I would like to point out that this is basic knowledge. At first I was wondering if I had gone insane and it really isn't the case anymore or something.
> As another commenter wrote "how do you allocate memory without an allocator?"
You call these things something other than "allocating" and "allocators". Seriously, few people would consider adding a value to a hashmap an intentional act of allocation, but it is. Same with adding an element to a vector, or any of the operations built on top of one.
For "adding an element to a vector" it's actually not necessarily what you meant here and in some contexts it makes sense to be explicit about whether to allocate.
Rust's Vec::push may grow the Vec, so it has amortized O(1) and might allocate.
However Vec::push_within_capacity never grows the Vec, it is unamortized O(1) and never allocates. If our value wouldn't fit we get the value back instead.
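To make the distinction concrete, here's the same idea sketched in C (hypothetical names, not from any real library):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical C analogue of push_within_capacity: appends only if there
 * is room, and never allocates. */
typedef struct { int *data; size_t len, cap; } Vec;

bool push_within_capacity(Vec *v, int x) {
    if (v->len == v->cap) return false; /* full: caller decides what to do */
    v->data[v->len++] = x;
    return true;
}
```

The failure case is handed back to the caller, which is exactly what makes the allocation explicit.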
Yep exactly, at its simplest all an allocator is doing is keeping track of which memory areas are owned and their addresses. In a sense even the stack pointer is a kind of allocator (you can assign the memory pointed to by the stack pointer and then increment it for the next use).
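A bump allocator is pretty much that idea verbatim; a minimal sketch in C, ignoring alignment for brevity:

```c
#include <stddef.h>

/* Minimal bump allocator: `used` plays the role of the stack pointer.
 * Alignment is ignored here to keep the sketch short. */
typedef struct { char *base; size_t used, cap; } Arena;

void *arena_alloc(Arena *a, size_t n) {
    if (a->cap - a->used < n) return NULL; /* out of space */
    void *p = a->base + a->used;
    a->used += n;                          /* "increment the pointer" */
    return p;
}
```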
There's a few comments speculating about fraud, but they're way off on the timeframe. I was approached about this project like 7-8 years ago. It's probably been in development the whole time.
$1k a day * 250 working days * 8 years * a team size of 20 is about $40M in salary for engineering, which could be low. Add on their $12M testing, $4.1M just for the design (vintage Deloitte), some cloud cost blowouts and a bunch of dickhead managers and scrumlords, plus the putrid enterprise-grade 3rd party map/data system they've gone with; I bet that wasn't cheap. All up it's in the right ballpark for a typical well-intentioned trainwreck consulting project.
Wouldn't be the first project to blow out because of a bunch of enterprise Typescript, Java and C# devs that can't deliver anything.
This is what I'm doing for my game, I didn't know it was actually a thing in some big titles too, that's reassuring. I landed on it because it was a huge code simplification compared to every other method of handling transparency, and it doesn't look completely shit in the era of high resolutions and frame rates.
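The core of it really is tiny; a sketch of the usual 4x4 Bayer-matrix test in C (in practice this lives in a fragment shader, but the logic is the same):

```c
#include <stdbool.h>

/* 4x4 Bayer thresholds, normalized to [0, 1). */
static const float bayer4[4][4] = {
    { 0.0f/16,  8.0f/16,  2.0f/16, 10.0f/16},
    {12.0f/16,  4.0f/16, 14.0f/16,  6.0f/16},
    { 3.0f/16, 11.0f/16,  1.0f/16,  9.0f/16},
    {15.0f/16,  7.0f/16, 13.0f/16,  5.0f/16},
};

/* Screen-door transparency: keep the fragment only if its alpha clears the
 * per-pixel threshold, otherwise discard it. No sorting, no blending. */
bool keep_fragment(int px, int py, float alpha) {
    return alpha > bayer4[py & 3][px & 3];
}
```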
I just implemented it for a VR app I’ve been working on where the semi-transparent objects can appear any which way, intersecting, etc. I didn’t realize how much of an issue that’d be…or how hard it’d be to come up with a shader for dithering in VR that doesn’t look awful. I’m still not super happy with what I have - it moves along with the player’s eyes - but every other solution I could come up with didn’t interact well with two screens, especially at far distances from the object. Moire for days.
Yeah, there was an entire season about ending the war on drugs and how it was the only thing that actually worked lol.
Also, they caught the drug kingpin at the end of the show by physically following his lieutenants to a warehouse full of drugs and arresting them all on the way out. The only thing the wiretaps were used for was to build a conspiracy charge against the leader, who had been standing outside for months/years doing face-to-face meetings with everyone who was arrested, clearly being the one in control of every conversation. If somehow that's not enough to charge someone with conspiracy, then it seems removing a small amount of freedom to change that would be far preferable to reading everyone's messages and banning encryption.
"The Wire proves the need for mass surveillance" is the dumbest take I've ever heard. It literally shows the complete opposite.
I might be reading parts of it wrong, but I think that's a different sort of thing to the research in the article.
Sugar is a very indirect cause of heart attacks; everyone knows that most heart attacks are the culmination of decades of diet and exercise habits. It's still worth researching everything to do with that, but it's pretty low-value research because it's hard to draw any actionable conclusions from it other than "eat healthier and exercise", which is already well known.
The research in the article is talking about a direct cause: bacteria exist on arterial plaque, a viral infection triggers the bacteria to multiply, and something about that process causes the plaque to detach and cause a heart attack. If that ends up being a rock-solid cause and effect, even for a subset of heart attacks, it could lead to things like direct prevention (antivirals before the heart attack happens) or changes in patient management (everyone with artery disease gets kept far away from sick patients) that could directly and immediately save a lot of lives.
The post you replied to was saying that the data from the study isn't as strong as the article and headline make it out to be, which is usually the case. For this one though I'm reading that less as "it's a nothingburger" and more as "it's a small interesting result that needs a lot of follow up".
While you're not technically wrong, I find this whole approach problematic.
And actually, if, as a lot of science is now suggesting, inflammation and damage from eating oxidation-prone lipids (aka refined oils) in combination with refined sugar is a big part of the cause of arterial damage and heart disease, that could easily be the biggest root cause in most of these cases. The bacteria, if they even play a causal role at any point, could be a result of previous damage due to diet (and lack of exercise).
The paper's idea of treating heart disease by giving patients antibiotics seems really problematic to me. Destroy your health with poor diet and lack of exercise, and then once you start to feel the effect of this, take antibiotics and destroy your gut health too.
While I do agree with the general premise of your comment, that is, correct the root cause, for some people "eat healthy and exercise" may not be an option, because they are already addicted and overweight. At least taking antibiotics could be the very first line of actionable treatment to prevent the bacterial buildup and save their life immediately.
I very strongly disagree. Antibiotics are very dangerous at the individual level in how they mess up the individual's gut bacteria which are crucial for health.
Furthermore, giving everyone antibiotics as a preventative measure for heart disease complications, given that most Americans are on the spectrum of heart disease (i.e. have hypertension), is a recipe for bacterial resistance and other population-level problems.
If you attempt that plan at scale, I would expect antibiotic-resistant bacteria to develop fast, and people would soon start dying younger of what we now think of as minor infections.
The mechanism by which refined linoleic acid, when heated, creates higher amounts of free radicals known to cause oxidative stress and inflammation is well understood.
I agree a large-scale RCT for this would be great, but I doubt anyone would fund it, and if it does get done I'd be surprised if it wasn't designed to meet the biases of whichever side funds it.
> In this process, deletion rather than expansion of the wording of the message is preferable, because if an ordinary message is paraphrased simply by expanding it along its original lines, an expert can easily reduce the paraphrased message to its lowest terms, and the resultant wording will be practically the original message.
This bit has me perplexed. If you had a single message that you wanted to send multiple times in different forms, wouldn't compressing the message exponentially limit possible variation whereas expanding it would exponentially increase it? If you had to send the same message more than a couple of times I'd expect to see accidental duplicates pretty quickly if everyone had been instructed to reduce the message size.
I guess the idea is that if the message has been reduced in two different ways then you have to have removed some information about the original, whereas that's not guaranteed with two different expansions. But what I don't understand is this: even if you have a pair of messages, decrypt one, and manage to reconstruct the original message, isn't the other, still-encrypted expansion still different from the original message? How does that help you decrypt the second one if you don't know which parts of the encrypted message represent the differences?
It's mostly talking about the case where someone receives an encrypted message which is intended to later be published openly. If it was padded by adding stuff, an attacker can try to reconstruct the original plaintext by removing the flowery adjectives, whereas if things were deleted the attacker doesn't know what to add.
In particular, the length of a message is not hidden by encrypting the text. So if the encrypted message is shorter than the published version, you know exactly how much to remove from the published text to get back the original, and then just need to guess what to delete. If the encrypted message is longer, it is much harder to guess whether to add flowery adjectives, add a new sentence, change a pronoun to a name, or make some other change.
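To see why the length leaks, consider that a stream-style cipher maps each plaintext byte to exactly one ciphertext byte; a toy illustration in C (not real cryptography):

```c
#include <stddef.h>

/* Toy XOR "cipher" just to show the length leak: the ciphertext is exactly
 * as long as the plaintext, so size differences between versions of a
 * message are visible even though the content is hidden. */
void toy_encrypt(const unsigned char *pt, const unsigned char *ks,
                 unsigned char *ct, size_t len) {
    for (size_t i = 0; i < len; i++)
        ct[i] = pt[i] ^ ks[i]; /* one output byte per input byte */
}
```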
The thread before with someone flogging off their educational book they wrote "with Claude in an afternoon", as if anyone would benefit from investing days or weeks of learning effort into consuming something the author couldn't be fucked spending even a single day on, that one was well crafted satire, right?
I wish. As far as I can tell the Venn diagram of people building piles of shit with NPM and people building piles of shit with LLMs seems pretty close to a circle.
I can give a bit more context as someone who started on WebGL, then WebGPU, and is now picking up Vulkan for the first time.
The problem is that GPU hardware is rapidly changing to enable easier development while still having low level control. With ReBAR for example you can just take a pointer into gigabytes of GPU memory and pump data into it as if it was plain old RAM with barely any performance loss. 100 lines of bullshit suddenly turn into a one line memcpy.
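In Vulkan terms it ends up looking roughly like this, assuming a buffer already bound to DEVICE_LOCAL | HOST_VISIBLE | HOST_COHERENT memory (setup of `device`, `memory`, `vertices`, and `size` omitted):

```c
#include <string.h>
#include <vulkan/vulkan.h>

/* With ReBAR, DEVICE_LOCAL | HOST_VISIBLE heaps are big enough to map a
 * whole buffer and write straight into VRAM. Assumes HOST_COHERENT memory,
 * so no explicit flush is needed. */
void upload(VkDevice device, VkDeviceMemory memory,
            const void *vertices, VkDeviceSize size) {
    void *ptr = NULL;
    if (vkMapMemory(device, memory, 0, size, 0, &ptr) != VK_SUCCESS)
        return;
    memcpy(ptr, vertices, size); /* the "one line" upload */
    vkUnmapMemory(device, memory);
}
```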
Vulkan is changing to support all this stuff, but the Vulkan API was (a) designed when it didn't exist and is (b) fucking awful. I know that might be a hot take, and I'm still going to use it for serious projects because there's nothing better right now. But the same extensibility that makes it possible for Vulkan to pivot huge parts of the API to support new stuff also makes it dogshit to use day to day: the code patterns are terrible, and it feels like you're constantly compromising on readability at every turn because there are simply no good options for how to format your code.
WebGPU doesn't have those problems, I quite liked it as an API. But it's based on a snapshot of these other APIs right at the moment before all this work has been done to simplify graphics programming as a whole. And trying to bolt new stuff onto WebGPU in the same way Vulkan is doing is going to end up turning WebGPU into a bloated pile of crap right alongside it.
If you're coming from WebGL, WebGPU is going to feel like an upgrade (or at least it did for me). But now that I've seen a taste of the future I'm pretty sure WebGPU is dead on arrival, it just had horrendous timing, took too long to develop, and now it's backed into a corner. And in the same vein, I don't think extending Vulkan is the way forward, it feels like a pretty big shift is happening right now and IMO that really should involve overhauls at the software/library level too. I don't have experience with DX12 or Metal but I wouldn't be surprised if all 3 go bye bye soon and get replaced with something new that is way simpler to develop with and reflects the current state of hardware and driver capabilities.
Historically, Microsoft didn't have a problem making breaking changes in new D3D APIs, so I think they'll be one of the first to make a clean API that leverages the new hardware features.
Both Rust and C are also much, much less error-prone than assembly. It is so, so easy to get things wrong in assembly in very subtle ways. That's one of the main reasons why people only write assembly today when they absolutely have to.