malloc() and friends may always return NULL. From the man page:
> If successful, calloc(), malloc(), realloc(), reallocf(), valloc(), and aligned_alloc() functions return a pointer to allocated memory. If there is an error, they return a NULL pointer and set errno to ENOMEM.
In practice, I find a lot of code that does not check for NULL, which is rather distressing.
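For reference, the check itself is one line; exiting on failure is just one possible policy (a minimal sketch):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t n = (size_t)1 << 20;   /* 1 MiB; arbitrary */
        char *buf = malloc(n);
        if (buf == NULL) {            /* the documented failure mode */
            perror("malloc");         /* errno is ENOMEM per the man page */
            return EXIT_FAILURE;
        }
        memset(buf, 0, n);
        free(buf);
        return EXIT_SUCCESS;
    }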
No non-embedded libc will actually return NULL. Very, very little practical C code relies only on what the spec guarantees and would work with literally any compliant C compiler on any architecture, so I don’t find this particularly concerning.
Usefully handling allocation errors is very hard to do well, since it infects literally every error-handling path in your codebase: any error handler that calls a function which might itself fail to allocate must not allocate. Even if you have a codepath that speculatively allocates and can fall back, the process is likely so close to ruin that some other allocating function will fail soon anyway.
It’s almost universally more effective (not to mention easier) to track your large/variable allocations proactively, and to maintain a reserve buffer for the little “normal” allocations that should have an approximately constant bound.
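A sketch of that idea, with made-up names and no thread safety or alignment handling:

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical reserve for small "normal" allocations: if the system
       allocator fails, fall back to a fixed bump arena instead of dying. */
    static unsigned char reserve[64 * 1024];
    static size_t reserve_used;

    void *small_alloc(size_t n) {
        void *p = malloc(n);
        if (p != NULL)
            return p;
        if (n <= sizeof reserve - reserve_used) {
            p = reserve + reserve_used;   /* bump-allocate from the reserve */
            reserve_used += n;
            return p;
        }
        return NULL;   /* reserve exhausted too */
    }
    /* Caveat: reserve-backed pointers must never reach free();
       a real version would tag or range-check them. */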
This is just a Linux ecosystem thing. Other full-size operating systems do memory accounting differently and are able to correctly communicate when more memory is not available.
There are functions in many C allocators that exist explicitly for non-trivial allocation scenarios, but what major operating system’s malloc implementation returns NULL? MSVC’s docs reserve the right to return NULL, but the actual code is not capable of doing so (because it would be a security nightmare).
> There are functions in many C allocators that exist explicitly for non-trivial allocation scenarios, but what major operating system’s malloc implementation returns NULL?
Solaris (and FreeBSD?) have overcommitting disabled by default.
Solaris, AIX, *BSD and others do not offer overcommit, which is a Linux construct, and they all require enough swap space to be available. Installation manuals provide explicit guidelines on swap partition sizing, with the rule of thumb being «at least double the RAM size», and in practice almost always more.
That is the conservative design used by several traditional UNIX systems for anonymous memory and MAP_PRIVATE mappings: the kernel accounts for, and may reserve, enough swap to back the potential private pages up front. Tools and docs in the Solaris and BSD family talk explicitly in those terms. An easy way to test this on a BSD is to disable the swap partition and try to launch a large process – it will get killed at startup, and it is not possible to modify this behaviour.
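A cruder probe from C, if you’d rather not launch processes (the leak is deliberate and the chunk size arbitrary):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t chunk = (size_t)1 << 30;   /* ask for 1 GiB at a time */
        size_t got = 0;
        while (malloc(chunk) != NULL)     /* deliberately leaked: this is a probe */
            got += chunk;
        printf("allocator handed out %zu GiB before failing\n", got >> 30);
        return 0;
    }

On a no-overcommit system this stops somewhere near RAM plus swap; on a default Linux box it can wander far past physical memory, since untouched pages cost nothing until they are faulted in.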
Linux’s default policy is the opposite end of that spectrum: optimistic memory allocation, where allocations and private mappings can succeed without guaranteeing backing store (i.e. swap), with failure deferred to fault time and handled by the OOM killer – that is what Linux calls overcommit.
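For completeness, the Linux policy is tunable at runtime; mode 2 is the strict, no-overcommit setting (documented in proc(5)):

    # as root: refuse allocations the kernel cannot back with RAM + swap
    sysctl vm.overcommit_memory=2
    sysctl vm.overcommit_ratio=100   # % of RAM counted toward the commit limit

With mode 2 in effect, malloc() on Linux really does start returning NULL once the commit limit is reached.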
I hack on various C projects on a linux/musl box, and I'm pretty sure I've seen musl's malloc() return 0, although possibly the only cases where I've triggered that fall into the 'unreasonably huge' category, where a typo made my enormous request fail some sanity check before even trying to allocate.
It's been a while, but while I agree the man page says that, my limited understanding is that the typical libc on Linux won't really return NULL under any sane scenario, even when the memory can't be backed.
I think you're right, but "typical" is the key word. Embedded systems, systems where overcommit is disabled, bumping into low ulimit -v settings, etc. can all trigger an immediate failure from malloc(). Those are edge cases, to be sure, but some of them could apply to a typical Linux system without me, as a coder, being aware of it.
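For example (myprog stands in for any binary that allocates without checking):

    $ ulimit -v 65536     # cap this shell's virtual memory at 64 MiB
    $ ./myprog            # mallocs past the cap now return NULL

so code that never checks the return value crashes on a machine configured like that, through no fault of the machine.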
As an aside: to me, checking malloc() for NULL is easier than verifying an allocation on first use, which is what you're effectively forced to do in the presence of overcommit.
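To spell out the "first use" dance: under overcommit you'd have to fault in every page yourself to learn whether the memory is really there, and even then the failure arrives as an OOM kill rather than a checkable error. A sketch (alloc_touched is a made-up name; 4096 is an assumed page size, with sysconf(_SC_PAGESIZE) being the portable route):

    #include <stdlib.h>

    /* Allocate and immediately write one byte per page, forcing the kernel
       to commit the memory now rather than at some arbitrary later use. */
    void *alloc_touched(size_t n) {
        const size_t page = 4096;   /* assumption; query sysconf in real code */
        unsigned char *p = malloc(n);
        if (p == NULL)
            return NULL;
        for (size_t i = 0; i < n; i += page)
            p[i] = 0;
        return p;
    }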
Even with overcommit enabled, malloc may fail if there is no contiguous address space available. That's not a problem on 64-bit, but it may occasionally happen on 32-bit.
But why would you want to violate the docs on something as fundamental as malloc? Why risk relying on implementation-specific quirks in the first place?
I looked at some older Zyxel products and came to the same conclusion a while back. There's a whole industry of labeling generic hardware as being part of someone else's ecosystem.
It's a stretch to call it generic hardware. All the cheap cameras use similar hardware, but every few months there is a new version of the chip which you need to adjust to. It's challenging to find an exact chip even if you want to, because they go out of date faster than JS frameworks.
When you consider the differing regulations and applications, it makes a great deal of sense. Just making a window in the US can cost less than $10 if you hand-assemble it. Making a window that conforms to all building regulations in your particular area is a huge undertaking that involves highly specialized equipment.
This is pretty funny and reminds me of when some company in the US tried to sue someone for copyright infringement. The evidence they offered up was just screenshots of IP addresses, not even a packet log of the traffic in question.
That was almost 15 years ago and the support is evidently not as useful.
Also, it's entirely contained within a program that creates systemd .service files. It would be super easy to extract it into a separate project; I bet someone will do it quickly if there's a need.
I asked an employee for something by part number and described it. The answer he gave was, "Why the hell would you want that, anyway? I've worked here 13 years and never seen one." I found it on a shelf a few levels up and used a grounding rod from the electrical section to spear it and bring it down to ground level.