You did, but you didn't understand virtual memory. On a 32-bit system you can allocate 3 GB of virtual memory and it will work, even if you only have 64 MB of RAM. There is a 3/1 split where the upper gig of the address space is reserved for the kernel, so you effectively have a 3 GB address space starting at 0x0.
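
If you're curious, here's a quick sketch of that split (assuming a 32-bit Linux build with the classic 3/1 layout): both stack and heap addresses come out below 0xC0000000, because the top gig belongs to the kernel.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack = 0;
    void *on_heap = malloc(16);

    /* On a 32-bit Linux build with the 3/1 split, both of these print
     * addresses below 0xC0000000 -- the top gig is kernel territory. */
    printf("stack: %p\n", (void *)&on_stack);
    printf("heap:  %p\n", on_heap);

    free(on_heap);
    return 0;
}
```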

You can allocate the whole address space. You could probably even use malloc(), as memory managers are lazy, but more on that later. The important limiting factor is physical memory. When you allocate virtual memory, it's not mapped to any physical address, so it doesn't actually use up any RAM. Of course that's no fun, so once you want to use the virtual memory, it has to start getting backed by physical memory, which happens in page-sized chunks. One page is 4096 bytes.
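
A minimal sketch of that, assuming a Linux system with overcommit at its default setting: the malloc succeeds even on a machine with far less physical RAM, because nothing has been mapped yet.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Reserve 1 GB of virtual address space. With overcommit (the Linux
     * default) this succeeds regardless of how much RAM is free, because
     * no physical pages are mapped until the memory is touched. */
    size_t size = 1024UL * 1024UL * 1024UL;
    void *p = malloc(size);
    printf("malloc(1 GB) %s\n", p ? "succeeded" : "failed");

    /* Pause so you can check RSS in top/htop: it has barely moved. */
    puts("Check RSS now, then press enter...");
    getchar();

    free(p);
    return 0;
}
```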

The kernel is very optimistic about memory: if you allocate a gig worth of memory via malloc, you get a gig worth of unmapped virtual memory. That's why, especially on 64-bit systems, calls to malloc rarely ever fail. Once you start using the allocated memory, the CPU raises a page fault exception for each accessed page, and the kernel fetches a free physical page, or tries to evict something that's in memory but not needed right now, and uses that as backing storage for your virtual page. This can fail if there is no way to get the memory for your application in a meaningful amount of time.
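
Here's a sketch of that demand paging, assuming Linux (it reads the VmRSS line out of /proc/self/status): the resident set stays tiny right after the malloc and only grows once the pages actually get touched.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmRSS (resident set size) line from /proc/self/status. */
static void print_rss(const char *label) {
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    while (f && fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("%s %s", label, line);
            break;
        }
    }
    if (f) fclose(f);
}

int main(void) {
    size_t size = 256UL * 1024UL * 1024UL;   /* 256 MB */
    char *buf = malloc(size);
    if (!buf) return 1;

    print_rss("before touching:");   /* still small: pages are unmapped */

    /* Writing one byte per 4096-byte page raises a page fault per page,
     * and the kernel maps in physical memory on demand. */
    for (size_t i = 0; i < size; i += 4096)
        buf[i] = 1;

    print_rss("after touching: ");   /* now roughly 256 MB bigger */
    free(buf);
    return 0;
}
```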

On your system you have the issue that physical memory is a very scarce resource, but virtual memory never is (well, it is on 32-bit systems/applications, since 4 gigs isn't really all that much anymore). Every application has its own virtual address space, and all 4 gigs of it belong to that application. Well, 3 gigs, because the kernel lives in the upper gig.

Now, the kernel can still deal with demands for more physical memory than the system actually has. Let's say you have two applications that each need 300 MB and one that takes up 500 MB. That's more than your system can provide, but if the memory pressure gets high enough, the kernel will start moving memory that isn't needed right now out to the hard drive (for example, because an application hasn't accessed it in quite some time, or because it only has background priority for resources). Once the application needs to access that memory area again, the kernel will fault it back in from disk (potentially moving something else out to disk in the process).

This works until your application starts fighting itself for memory, at which point you'll see slowdowns because the kernel has to make frequent round trips to the hard drive. This gets worse if your memory accesses are scattered randomly all over the allocated memory, because then the kernel has a hard time predicting how the memory will be accessed and what to keep around.
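
If you want to see that effect, here's a rough sketch comparing a sequential pass over the pages against a randomly shuffled one. One big assumption: for the random pass to actually thrash, the buffer has to be larger than your physical RAM; with enough RAM both passes just demand-fault once and come out about the same.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE 4096

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    /* Bump this past your physical RAM to actually provoke swapping. */
    size_t size = 512UL * 1024UL * 1024UL;
    size_t npages = size / PAGE;
    char *buf = malloc(size);
    size_t *order = malloc(npages * sizeof *order);
    if (!buf || !order) return 1;

    /* Fisher-Yates shuffle to build a random page order. */
    for (size_t i = 0; i < npages; i++) order[i] = i;
    srand((unsigned)time(NULL));
    for (size_t i = npages - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }

    /* Sequential pass: predictable, readahead-friendly. */
    double t0 = now_sec();
    for (size_t i = 0; i < npages; i++) buf[i * PAGE] = 1;
    printf("sequential: %.2f s\n", now_sec() - t0);

    /* Random pass: under memory pressure, every touch may land on an
     * evicted page, i.e. a disk round trip. */
    t0 = now_sec();
    for (size_t i = 0; i < npages; i++) buf[order[i] * PAGE] = 2;
    printf("random:     %.2f s\n", now_sec() - t0);

    free(order);
    free(buf);
    return 0;
}
```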

However... your application is still alive at that point. Even though it may well exceed what's physically available to the system, the kernel can keep it around and maybe, just maybe, even get things done in a meaningful way.

The easiest solution really is to become optimistic about memory as well and just roll with it. If memory runs out, the kernel will kill the application and that's it. There is no meaningful way to warn the user at runtime; you'd have to constantly poll the system and its resources (physical and virtual) to make predictions about insufficient memory. So the solution is to just say "Hey, the game needs at least X amount of RAM."
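
For completeness: if you did want to poll, Linux has sysinfo() for a snapshot of physical memory. But it's exactly that, a snapshot; the numbers can be stale by the time you act on them, which is why this kind of prediction is so unreliable.

```c
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void) {
    /* Linux-specific snapshot of physical memory at one instant. */
    struct sysinfo si;
    if (sysinfo(&si) != 0) return 1;

    unsigned long long total = (unsigned long long)si.totalram * si.mem_unit;
    unsigned long long freem = (unsigned long long)si.freeram * si.mem_unit;
    printf("total RAM: %llu MB, free: %llu MB\n",
           total / (1024 * 1024), freem / (1024 * 1024));
    return 0;
}
```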

