free(-1) decremented the counter.
This way you could check for leaks :p
On most platforms an implementation could just return adjacent addresses from the top half of the address space. On 32-bit platforms, however, it doesn't take long to run out of such address space, and you don't want to waste the space for a bitmap allocator. I suppose you could just use a counter for each 64K region or something, so you can reuse it once the right number of elements has been freed ...
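A minimal sketch of that per-region counter idea, assuming a reserved, never-mapped address range; all names and the base address here are invented for illustration:

```c
#include <stdint.h>
#include <stddef.h>

#define REGION_SHIFT 16              /* 64 KiB regions */
#define NUM_REGIONS  16

static uint32_t live_count[NUM_REGIONS]; /* outstanding allocations per region */
static uintptr_t next_off;               /* bump offset within the range */
static const uintptr_t region_base = 0x40000000u; /* assumed reserved, never mapped */

/* Hand out a unique address for a zero-size allocation. */
uintptr_t zalloc_unique(void) {
    if ((next_off >> REGION_SHIFT) >= NUM_REGIONS)
        return 0;                    /* range exhausted */
    uintptr_t a = region_base + next_off;
    live_count[next_off >> REGION_SHIFT]++;
    next_off++;                      /* each "object" needs only one byte of space */
    return a;
}

/* Free one; if the newest region drains to zero, recycle it. */
void zfree_unique(uintptr_t a) {
    size_t r = (a - region_base) >> REGION_SHIFT;
    if (--live_count[r] == 0 && r == ((next_off - 1) >> REGION_SHIFT))
        next_off = (uintptr_t)r << REGION_SHIFT;
}
```

The counter avoids a full bitmap: a region becomes reusable as soon as its live count reaches zero, at the cost of only recycling the most recent region in this simplified version.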
How about we call it "Maybe a Number" and since equality can't work for it we still need a separate way to ask like: Math.whoIsTheMaN(me)
I know I've seen that somewhere, but may I ask what standard you're referring to?
If I recall correctly, this was an archaic stackless microcontroller. The heap support was mostly a marketing claim.
If the size of the space requested is zero, the behavior is implementation-defined; the value returned shall be either a null pointer or a unique pointer.
Being tasked to implement a specification typically means having to pass extensive conformance tests and having to answer for instances of noncompliance. You soon learn to follow the spec to the letter, to the best of your abilities, unless you can make a strong case to your management for each specific deviation.
The C99 standard[1] seems to have worded it more precisely:
If the size of the space requested is zero, the behavior is implementation-defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.
[1]: https://rgambord.github.io/c99-doc/sections/7/20/3/index.htm...
-1 seems to be disjoint from all objects.
Writing a specification is hard...
I have two questions:
Could you please specify what you meant by Fixed Function blocks on Adreno 300+? Did you mean the regular rasterizer, DS, and vertex assembler, or even portions of the old FFP functionality from ES 1.x, like lighting etc.?
Because I think there is no need for HW 1.x support, since even Samsung in their FIMG emulated most of the classic FFP functionality, like combiners, through shaders.
I haven't investigated the SGX53x drivers yet, but I think it was much the same, since the PVR MBX architecture was mostly abandoned in 2010.
And second question:
Why did so many early Qualcomm Android smartphones lack GPU drivers entirely? For example, the HTC Dream wasn't shipped with GPU drivers, and neither were the Magic and many Huawei models, while the Moto Droid had GPU drivers from the start.
This happened almost 20 years ago and I can't even remember what I had for breakfast yesterday, so take everything I say with a grain of salt.
Re. the switch from Adreno 200 to 300: both had a unified (vertex + pixel) programmable shader unit, but they were completely different, with the 200 sporting the VLIW 4+1 vector unit that AMD used at the time, while the 300 was scalar and based on Qualcomm's unreleased QShader core. This made things a lot easier for the poor compiler team.
Sorry, I don't want to go into details about what features were implemented in hardware vs shaders. Every GPU does things a little differently in that regard and there's always some secret sauce. Sometimes features are fully implemented in hardware, sometimes they are fully based on shaders, and often there's a bit of both.
As for Imagination Technologies' SGX 530, which was the successor to PVR MBX, it also had a scalar unified shader unit. I never worked on the MBX, but I was involved in the development of the SGX family back in 2006-2007. Once again, the details of what features were implemented in hardware vs shaders were complicated and I won't get into the secret sauce.
> Why so many of early qcom android smartphones lacked GPU drivers at all? For example HTC Dream wasn't shipped with gpu drivers, as well as Magic and many Huawei models
The question is puzzling to me. The drivers may not be easy to find, but OpenGL would not work without drivers. The HTC Dream shipped with Adreno 130, which I had the unfortunate experience of having to support circa 2008 even though I had not been involved in its development. It felt ancient even at the time, with a super basic vertex shader that we had to manually program in the most rudimentary assembly. Did it even have a pixel shader? God, I can't remember a thing. It did support user clipping planes and stencils. Oh, the nightmares, they haunt me.
Sorry, going back on topic. Yeah, there were drivers for that clunker. They must be hidden somewhere. Please spare a prayer for the souls of the poor bastards who had to work on that abomination. I can't understand how they were able to hold on to their sanity after going through that experience.
Thanks for refreshing my memory. It was Type II fun, for sure.
> The drivers may not be easy to find, but OpenGL would not work without drivers
It would, via software. Android previously had (and still has) the so-called PixelFlinger, a very fast software GLES 1.1 implementation. Android also lacked 2D GPU acceleration prior to 3.0 (partial) and 4.2 (full). Due to the lack of GPU drivers, the T-Mobile G1 shipped without any GPU acceleration despite a driver existing for some other device. What is much more interesting is that the first versions of CyanogenMod had a proper driver, i.e. libGLES_qcom.so with proper Q3D functions and OpenGL wrappers around them ;)
I also remember Windows Mobile smartphones from HTC on Qualcomm chipsets which lacked not only d3dm drivers (native for WM) but GLES too, for example the HTC Kaiser aka TyTN II. A year later some other model on the same chipset was released with a driver, so the community just put the Q3D DLLs into the Windows folder and got fully working 3D ;)
Yet again, thanks for your reply! It's always interesting to learn something new ;)
Now that you mention it, that rings a bell. I was very much a product of the fancy new programmable shader era, so much so that I jumped into OpenCL as soon as Apple reached out to GPU vendors.
I have no idea whether any Adreno 130 devices shipped without GPU acceleration enabled. The release date of the HTC Dream around late 2008 is significant, because that means it may have been affected by Qualcomm's acquisition of AMD's handheld IP group, which is where all the Adreno folks were working (including me). It was a chaotic time, with different members of the team joining Qualcomm at different dates, months apart. It's very possible that some releases were "suboptimal" around that time.
> Also I remember Windows Mobile smartphones from HTC on qcom chipsets which lacked not only d3dm drivers (native for WM), but gles too.
IIRC that work was done by one or two ATI old-timers out in Orlando, based on code inherited from ATI. Smart guys, but way understaffed.
Those early years were chaotic in the mobile GPU space. Lots of buggy crappy drivers, lots of buggy crappy hardware. It got better once the teams got bigger and we hired more experienced engineers and managers from desktop GPUs.
Thanks for the questions. Rarely get a chance to talk about that stuff.
Personally I'd be less worried about uniqueness and more worried about alignment. An address of -1 would violate alignment requirements on most modern targets. But that may have been fine on the target in question; older microcontrollers tended to not have any hardware-level alignment requirements, and the C implementation could have treated all types as having alignment 1.
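A quick way to see the alignment problem: on typical modern targets the all-ones address fails the alignment guarantee that `malloc` results must satisfy. A small check (standard C11, the function name is invented):

```c
#include <stdint.h>
#include <stddef.h>

/* Does this address satisfy malloc's "suitably aligned for any
 * fundamental type" guarantee?  max_align_t's alignment is the
 * strictest fundamental alignment on the implementation. */
int addr_is_suitably_aligned(uintptr_t a) {
    return a % _Alignof(max_align_t) == 0;
}
```

On a target with, say, 16-byte maximum fundamental alignment, the address `(uintptr_t)-1` ends in all one bits and can never be a multiple of 16, whereas a target with no alignment requirements (alignment 1 for everything) would accept it.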
It returns multiple types of null pointer
From C89, §7.10.3 "Memory management functions":
> If the size of the space requested is zero, the behavior is implementation-defined; the value returned shall be either a null pointer or a unique pointer.
The wording is different for C99 and POSIX, but I went back as far as possible (despite the poor source material; unlike later standards C89 is only accessible in scans and bad OCR, and also has catastrophic numbering differences). K&R C specifies nothing (it's often quite useless; people didn't actually write against K&R C but against the common subset of extensions of platforms they cared about), but its example implementation adds a block header without checking for 0 so it ends up doing the "unique non-NULL pointer" thing.
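A simplified sketch of that header-prefix scheme (not the actual K&R code; the toy arena and names are invented) shows why a zero-byte request still yields a unique non-NULL pointer:

```c
#include <stddef.h>

/* Every allocation is prefixed with a one-unit header, and the request
 * is rounded up in header-sized units.  A request of 0 bytes still
 * consumes the header unit, so successive calls return distinct
 * non-NULL pointers without any special case for zero. */

typedef union header {
    struct { union header *next; size_t size; } s;  /* free-list bookkeeping */
    long double align;                              /* force worst-case alignment */
} Header;

static Header pool[1024];   /* toy arena standing in for sbrk() */
static size_t pool_used;

void *kr_malloc(size_t nbytes) {
    /* header + payload rounded up; nbytes == 0 still gives 1 unit */
    size_t nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
    if (pool_used + nunits > 1024)
        return NULL;
    Header *h = &pool[pool_used];
    h->s.size = nunits;
    pool_used += nunits;
    return (void *)(h + 1);  /* payload starts just past the header */
}
```

Since the header is unconditional, `kr_malloc(0)` advances the arena by one unit each time, which is exactly the "unique non-NULL pointer" behavior described above.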
I wouldn't be surprised if the Win3.1-looking toolchain hadn't implemented C89, though I'm certain it wasn't C99-compliant, since C99 only requires that the pointer be disjoint from any object, not unique.
It's strange that the standard has relaxed over time.
Of course, microcontrollers and the like can have somewhat eccentric language implementations and perhaps aren't strictly compliant, and frankly even standard-compliant stuff like "int can be 16 bits" might surprise some code that doesn't expect it.
I know I've seen that somewhere, but may I ask what standard you're referring to?
> Each [...] allocation shall yield a pointer to an object disjoint from any other object. The pointer returned points to the start (lowest byte address) of the allocated space. If the space cannot be allocated, a null pointer shall be returned. If the size of the space requested is 0, the behavior is implementation-defined: either a null pointer shall be returned, or the behavior shall be as if the size were some non-zero value, except that the behavior is undefined if the returned pointer is used to access an object.
https://pubs.opengroup.org/onlinepubs/9799919799/functions/m...
The current C standard text says:
> The order and contiguity of storage allocated by successive calls to the aligned_alloc, calloc, malloc, and realloc functions is unspecified. The pointer returned if the allocation succeeds is suitably aligned so that it can be assigned to a pointer to any type of object with a fundamental alignment requirement and size less than or equal to the size requested. It can then be used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated). The lifetime of an allocated object extends from the allocation until the deallocation. Each such allocation shall yield a pointer to an object disjoint from any other object. The pointer returned points to the start (lowest byte address) of the allocated space. If the space cannot be allocated, a null pointer is returned. If the size of the space requested is zero, the behavior is implementation-defined: either a null pointer is returned to indicate an error, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.
So yeah, the allocations are required to be unique (at least until they're freed).
The phrasing could be slightly clearer, to prevent someone from arguing that -1 is disjoint from all objects because it does not point to an object.
So yeah, C implementations have to reserve at least two addresses, not just one. By the way, the standard to this day allows NULL, when cast to a pointer type, to be something else than all-bits-zero pattern (and some implementations indeed took this opportunity).
--
While it's rare nowadays to find a platform that uses something other than an all-zero bit pattern for NULL as an ordinary pointer, it's extremely common in C++ for pointer-to-member types: 0 is a valid offset (the first field at the start of a struct), so the null pointer-to-member is instead represented as -1.
Well, yes and no. A 4-byte int cannot reside at -4, but a char could; and no object can reside at -1. So implementations need to take care that one-past-the-end addresses never equal whatever happens to serve as the null pointer, but this requirement only makes address -1 completely unavailable for C-native objects.
This works if you are only interested in the overall memory balance. However, if you want to make sure that all malloc() calls are matched by a free() call, you need to distinguish between NULL and a successful zero-sized allocation, otherwise you run into trouble when you call free on an "actual" NULL pointer (which the standard defines as a no-op).
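A sketch of the distinction in a leak-counting wrapper (names invented): only pointers that malloc() actually returned are counted, so free(NULL) stays a no-op while a non-NULL result of malloc(0) still has to be matched.

```c
#include <stdlib.h>

static long outstanding;    /* allocations not yet freed */

void *dbg_malloc(size_t n) {
    void *p = malloc(n);
    if (p != NULL)          /* only successful allocations count;      */
        outstanding++;      /* works whether malloc(0) is NULL or not  */
    return p;
}

void dbg_free(void *p) {
    if (p != NULL) {        /* free(NULL) is a defined no-op: don't count it */
        outstanding--;
        free(p);
    }
}
```

With this, the counter balances on every conforming implementation, regardless of which malloc(0) behavior the platform picked.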
If malloc() had returned a real pointer, you'd have to free that too.
> wouldn’t it be better just to return NULL and guarantee that 0-sized allocations never use any memory at all?
Better: takes less memory. Worse: blinds you to this portability issue.
A mismatch between malloc(0) and free(-1).
You’d know nothing about calls to malloc with non-zero sizes.
Even hacking the compiler to treat -1 as equal to NULL as well wouldn't work since lots of software won't free NULL-like pointers.
Oddly, this is bog-standard implementation-specific behavior for standard C: the caller accessing any result of malloc(0) is undefined behavior, and malloc(0) isn't required to return NULL. The reference heap didn't, and some probably still don't.
If I would allocate 0 bytes of memory and get a pointer to it, I wouldn't care what the value of the pointer is since I am not allowed to dereference it anyways.
But then again, why would I allocate 0 bytes of memory?
Ex: a vector using only a counter and pointer - you can use realloc() with fewer pointer validity checks.
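A sketch of that counter-plus-pointer vector (names invented): because realloc(NULL, n) behaves like malloc(n), push never needs a special case for the empty vector, and by never asking to shrink to zero you sidestep the malloc(0)/realloc(p, 0) ambiguity entirely.

```c
#include <stdlib.h>

typedef struct {
    int    *data;   /* NULL when empty */
    size_t  len;    /* number of elements */
} IntVec;

/* Append one element; returns 0 on success, -1 on allocation failure. */
int vec_push(IntVec *v, int x) {
    int *p = realloc(v->data, (v->len + 1) * sizeof *p);
    if (p == NULL)
        return -1;      /* old block is still valid on failure */
    p[v->len++] = x;
    v->data = p;
    return 0;
}
```

Starting from `IntVec v = {0}` the first push goes straight through realloc with no validity check on `data`, which is the "fewer pointer validity checks" point above. (Growing by one element per push is kept for brevity; a real vector would grow geometrically.)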
I think it would be much better if malloc(0) just returned 1 or -1 or something constant. If the programmer needs the allocation to have a unique address, they can call malloc(1) instead.
It means you don't need a bunch of special-case handling if one out of 27 types ends up with zero size in some situation. It just all works the same way. Especially the unique address part because that would be an annoying source of difficult to track bugs.
GNU C has an Elvis operator, so you can write "malloc(x?:1)" if you really need to do that.
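In portable C the same "never ask for zero bytes" trick is just the explicit conditional; a trivial wrapper (name invented):

```c
#include <stdlib.h>

/* Portable equivalent of the GNU malloc(x ?: 1) idiom:
 * request at least one byte so the result is never the
 * implementation-defined malloc(0) case. */
void *malloc_nonzero(size_t n) {
    return malloc(n ? n : 1);
}
```

This guarantees a unique, freeable pointer on success on any conforming implementation, at the cost of one byte for the zero-size case.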
If only a unique reference is needed (if it won't be used where other kind of pointers would also be used), and keeping track of allocation is wanted, one thing that may be possible is to just use a number, and then use a bit array to keep track of which numbers are in use.
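A sketch of that number-plus-bit-array scheme (names and capacity invented): IDs are handed out instead of pointers, and a bit per ID tracks which are in use.

```c
#include <stdint.h>

#define MAX_IDS 256
static uint8_t used[MAX_IDS / 8];   /* one bit per ID */

/* Return the lowest free ID, or -1 if all are in use. */
int id_alloc(void) {
    for (int i = 0; i < MAX_IDS; i++) {
        uint8_t mask = (uint8_t)(1u << (i % 8));
        if (!(used[i / 8] & mask)) {
            used[i / 8] |= mask;
            return i;
        }
    }
    return -1;
}

/* Mark an ID as free again so it can be reused. */
void id_free(int i) {
    used[i / 8] &= (uint8_t)~(1u << (i % 8));
}
```

Since the IDs are never dereferenced, this gives unique "references" without consuming any real address space, which is exactly the situation a zero-size allocation is usually standing in for.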
It's been 30 years so I've forgotten the details. My solution was to always allocate size+1 since memory use was far from critical.
Additionally, the actual amount of memory malloc allocates is implementation-defined so long as it is not less than the amount requested, but accessing this extra memory is undefined behavior since processes don't know whether it exists. A non-NULL return could be interpreted as malloc(0) allocating more than zero bytes.
Some implementations don't actually perform the allocation until there's a page fault from the process writing to or reading from that memory, so in that sense a non-NULL return is valid too.
I'd argue that malloc(0)==NULL makes less sense because there's no distinction between failure and success.
The only real problem is specifying two alternate behaviors and declaring them both to be equally valid.
Another use was to use it to mint unique cookies/addresses, but malloc(1) works for this just as well.
The point of the original C Standard was to make rules about these things AND not break existing implementations. They recognized that (a) and (b) were in existing implementations and were reasonable, and they chose not to break the existing implementations when writing the standard.
This is similar to the extremely unfortunate definition of the NULL macro. There were two existing styles of implementation (bare literal 0 and (void *) 0) and the Standard allows either style. Which means the NULL macro is not entirely safe to use in portable code.
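A sketch of where a bare-0 NULL can bite: in a variadic call there is no parameter type to convert the argument against, so if NULL expands to a plain `0` the callee may read an int where it expects a pointer. Casting the terminator explicitly is the portable fix (the function here is invented for illustration):

```c
#include <stdarg.h>
#include <stddef.h>

/* Count string arguments up to a terminating null pointer,
 * execl()-style. */
int count_until_null(const char *first, ...) {
    int n = 0;
    va_list ap;
    va_start(ap, first);
    for (const char *p = first; p != NULL; p = va_arg(ap, const char *))
        n++;
    va_end(ap);
    return n;
}

/* Portable call: the terminator is explicitly a pointer, so it is
 * passed with pointer width regardless of how NULL is defined:
 *     count_until_null("a", "b", (const char *)NULL);
 */
```

On an ABI where int and pointer have different sizes, passing a bare `0` as the terminator is undefined; the explicit cast makes the call correct under either style of NULL definition.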
A pointer to what, though? If the requester asked for 0 bytes of memory, you'd either be pointing to memory allocated for another purpose (!) or allocating a few bytes that weren't asked for.
> This makes people unhappy for various reasons
I read through all the links trying to figure out what those reasons might be and came up empty, I'm still curious why anybody would expect or rely on anything except a null pointer in this instance.
FWIW the alignment guarantees of `malloc()` mean it often will have to allocate more than you ask for (before C23 anyway). You can't 'legally' use this space, but `malloc()` also can't repurpose it for other allocations because it's not suitably aligned.
That said I still agree it's a hack compared to just using `malloc(1)` for this purpose, it's well-defined and functionally equivalent if you're looking for a unique address. The fact that you don't know what `malloc(0)` is going to do makes it pretty useless anyway.
Did they change "suitably aligned for any object type" to "suitably aligned for any object type with size less than or equal to what was requested" or something like in C23?
If you’re pointing to a zero sized data it shouldn’t matter what it’s pointing to. Even outside valid address space. Because you shouldn’t be reading or writing more than 0 bytes anyway.
You are always allocating bytes you weren't asked for: the allocation metadata and some extra bytes to satisfy the alignment requirement. If you absolutely don't want to allocate memory, you probably shouldn't have called malloc() in the first place :)
By extension, if x == 0, doesn't it make sense for the system to either return NULL or to return a pointer to 0 bytes of memory which the program can use? So the standard promises exactly that: to return either NULL or else a unique pointer at which the program has permission to use zero bytes.
malloc(0) is allowed to return non-NULL because the standard decrees it.
One way of thinking about it is that all mallocated pointers must always be freed exactly once. Then you're portable.