stable amount of free memory? #446029
10/02/14 08:17
Wjbender (User, OP), Joined: Mar 2012, Posts: 927, cyberspace
I was just thinking: I want to write a manager that will stop me from creating entities (or whatever) if memory consumption steps past a limit.

I need some advice on what would be a good limit of free memory for a game to keep, and on how I could effectively calculate the memory required for whatever I need to create. Would I just use sizeof and a check against the available memory?

Bit of a vague question, I guess, but this is something I have never done before. I think it would be called a memory pool and a pool manager (I just made that up, by the way).

jb


Compulsive compiler
Re: stable amount of free memory? [Re: Wjbender] #446033
10/02/14 12:22
WretchedSid (Expert), Joined: Apr 2007, Posts: 3,751, Canada
Free memory is wasted memory. You don't want free memory lying around unused, unless you know for certain that you are going to use it in the future and there is no way to evict existing things from memory in the meantime. In that case, you pre-allocate the memory to keep it available to you.

But yeah, if you intentionally keep free memory around, you are doing it wrong. It sits there doing nothing while it could be put to work on more useful things.


Shitlord by trade and passion. Graphics programmer at Laminar Research.
I write blog posts at feresignum.com
Re: stable amount of free memory? [Re: WretchedSid] #446038
10/02/14 14:44
Wjbender (User, OP), Joined: Mar 2012, Posts: 927, cyberspace
Yes Sid, you're correct, but I was thinking that if I kept a limit just below what is physically and virtually available (by checking somehow), I could perhaps manage things better. The free memory I am referring to would be just above this limit, so I don't run into something similar to an OOM situation.

Made sense to me somehow, but anyway...

Thanks man

jb


Re: stable amount of free memory? [Re: Wjbender] #446042
10/02/14 15:37
WretchedSid (Expert), Joined: Apr 2007, Posts: 3,751, Canada
What's physically available changes over time as the OS moves pages between disk and memory. Your virtual address space is limited to 4 GB for 32-bit applications, and usually around 256 TB for 64-bit applications (one large hole in the middle of the address space, with 128 TB chunks at the top and bottom).

The point I'm trying to make is that you shouldn't make any assumptions about the available memory on the system. Have a minimum requirement and let the OS do what it's good at: managing resources. If you start allocating and using a lot of memory, the OS will first deal with the memory pressure by paging out unused memory pages so your application can use them. If your application has to start dealing with the memory pressure itself, you have almost always lost already.


Re: stable amount of free memory? [Re: WretchedSid] #446048
10/02/14 17:59
Wjbender (User, OP), Joined: Mar 2012, Posts: 927, cyberspace
Okay, cool, got it. Thanks Sid.


Re: stable amount of free memory? [Re: Wjbender] #446065
10/03/14 14:30
WretchedSid (Expert), Joined: Apr 2007, Posts: 3,751, Canada
To really drive this point home (because it's important, and maybe not everyone has gotten it yet), here is my system under normal workload:

[screenshot: memory usage under normal workload]

As you can see, it uses 15.1 GB of the available 16 GB. App memory is application memory; file cache is, well, memory used by the filesystem to speed up disk accesses; and wired memory is memory allocated inside the kernel that can't be paged out.

Now, I wrote a test application that allocates 2 GB of memory and writes to it (to ensure that the kernel's memory manager actually maps the virtual pages to physical ones). If you just looked at the free memory, you'd assume that your budget is 900 MB, and 2 GB is far more than that. Here is how the system deals with the increase in memory pressure (note the small bump):

[screenshot: memory usage after allocating and writing 2 GB]

As you can see, the system started to shuffle memory around. It compressed some of the memory and took some away from the file cache. But the swap file is still 0 KB in size, so everything is still in main memory; nothing has been written to disk.

Alright, here is what happens when the memory pressure is increased further (allocating and writing 4 GB of memory):

[screenshot: memory usage after allocating and writing 4 GB]

Still NO paging! Some disk accesses will become slower as more got evicted from the filesystem cache, but oh well.


So, long story short: a modern kernel is excellent at managing resources! You want your system to use all of your memory while idling, because it can put it to good use. The system will reclaim resources when there is pressure for them, and it's really good at keeping up with that pressure.

Of course, that 4 GB experiment doesn't work well when you only have 4 GB of RAM; after all, these are all hot pages, and the kernel itself also needs a place to reside. That's why you need a minimum memory requirement. If you constantly need 4 GB of memory, a minimum requirement of 6 GB is what you are looking for.

Trust yo kernel, trust yo compiler. Hide yo kids, hide yo wives, hide yo husbands!


Re: stable amount of free memory? [Re: WretchedSid] #446089
10/04/14 08:12
Wjbender (User, OP), Joined: Mar 2012, Posts: 927, cyberspace
Sadly, this is what I am dealing with:

[screenshot omitted]

Anyway, moving on to the following in the manual:

-nx number
Size of the nexus in megabytes. The nexus is a contiguous virtual memory area that the engine allocates at startup for caching entity files and level textures and geometry. It speeds up level loading and prevents the game from suddenly aborting at runtime when memory runs low. The nexus size depends on the size of the biggest level of the game. The bigger the nexus, the bigger the levels that can be rendered, but the more virtual memory is allocated at startup.

When you set the nexus in Map Properties, WED uses the -nx command line option to transfer the nexus size to the engine. The default value for the nexus is 40 megabytes. The maximum value is limited by the Virtual Memory setting of the Windows OS, minus around 500...1000 MB that should remain free for the operating system. The current nexus requirement is indicated in the statistics panel and can be read from the nexus variable. The recommended maximum nexus value for commercial games is 200. When setting higher values, be aware that several Windows subsystems, including DirectX, tend to crash without an error message when virtual memory is running low.

If the nexus size is exceeded, the engine will allocate additional memory from the PC's virtual memory pool. If the virtual memory is also used up, the application will issue an error message and terminate. The level_mark and level_free functions only work when the nexus is not exceeded. In engines older than A7, exceeding the nexus size will produce a "Nexus too small" error message. The engine must then be restarted with a nexus size higher than the one you've used before (e.g. -nx 80 for an 80 MB nexus).

.............


I should determine my requirements and issue warnings for systems that do not meet them. Would it be a good idea to raise the nexus and issue a warning when the application's requirements exceed the virtual memory setting of the system, so the user is warned about an impending failure beforehand?

I think that is a good idea, if I understood the nexus correctly?

Last edited by Wjbender; 10/04/14 09:20.

Re: stable amount of free memory? [Re: Wjbender] #446096
10/04/14 14:56
WretchedSid (Expert), Joined: Apr 2007, Posts: 3,751, Canada
You did, but you didn't understand virtual memory. On a 32-bit system you can allocate 3 GB of virtual memory and it will work, even if you have 64 MB of RAM. There is a 3/1 split where the upper gigabyte of the address space is reserved for the kernel, so you effectively have a 3 GB address space starting at 0x0.

You can allocate the whole address space. You could probably even use malloc(), since memory managers are lazy, but more on that later. The important limiting factor is physical memory. If you allocate virtual memory, it's not mapped to any physical address, so it doesn't actually use up any RAM. Of course that's no fun, so once you want to use the virtual memory, it has to be backed by physical memory, which happens at page granularity. One page is 4096 bytes.

The kernel is very optimistic about memory: if you allocate a gigabyte worth of memory via malloc, you get a gigabyte worth of unmapped virtual memory. That's why, especially on 64-bit systems, calls to malloc rarely ever fail. Once you start using the allocated memory, the CPU raises a page-fault exception for each accessed memory page, and the kernel fetches a free physical page, or tries to evict something that is in memory but not needed right now, and uses that as backing storage for your virtual memory page. This can fail if there is no way to get the memory for your application in a meaningful amount of time.

On your system, the issue is that physical memory is a very scarce resource, but virtual memory never is (well, it is on 32-bit systems/applications, since 4 GB isn't really all that much anymore). Every application has its own virtual address space, and all 4 GB of it belong to that application. Well, 3 GB, because the kernel lives in the upper gigabyte.

Now, the kernel can still deal with demands for more physical memory than what is actually available to the system. Let's say you have two applications that each need 300 MB and one that takes up 500 MB. That's more than your system can provide, but if the memory pressure becomes high enough, the kernel will start moving memory that is not needed right now to the hard drive (for example, because an application hasn't accessed it in quite some time, or it only has background priority for resources). Once the application needs to access that memory again, the kernel will page it back in from disk (potentially moving something else to disk in the process).

This works until you get to the point where your application is fighting itself for memory, at which point you will encounter slowdowns due to the kernel having to make frequent round trips to the hard drive. This gets worse if your memory accesses are randomly scattered all over the allocated memory, as the kernel then has a hard time predicting how the memory will be accessed and what to keep around.

However... your application is still alive at that point. Even though it may well exceed what is physically available to the system, the kernel can keep it around and maybe, just maybe, even get things done in a meaningful way.

The easiest solution really is to become optimistic about memory as well and just roll with it. If memory is exhausted, the kernel will kill the application, and that's it. There is no meaningful way to warn the user at runtime; you'd have to constantly monitor the system and its resources (physical and virtual) to make predictions about insufficient memory. So the solution is to just say: "Hey, the game needs at least X amount of RAM."


Re: stable amount of free memory? [Re: WretchedSid] #446097
10/04/14 15:56
Wjbender (User, OP), Joined: Mar 2012, Posts: 927, cyberspace
I know what virtual memory and physical memory are, but I haven't ever worried about them, because I used to have 12 GB of RAM on my previous PC. Since switching to a crappy laptop, I have been thinking more than usual about how virtual and physical memory work with my application. So basically I am worried about nothing: just let the OS do what it was designed to do, and worry about my minimum requirement.

That is what I take away from this. Thanks Sid.


