After much internal debate and struggle, I’ve decided to finally write an article on the pagefile.  Why the struggle?  Because the topic is the Windows paging file…a pretty old concept that I think most folks understand, not to mention virtual memory is a pretty abstract and boring topic so I’m sure to lose a few people along the way.  Then why am I finally writing about it?  As it turns out, I still get questions about this all the time…and I still see ridiculous advice being handed out (even by our own folks) all the time…and I still see our customers configuring the pagefile wrong all the time.  And instead of repeating myself over and over again and pointing people to the same resources, I figured I could point people to my own article which consolidates a lot of the good information out there, while debunking some of the myths about the pagefile in the process.  I’ll end the article with a few real-world examples so you can see what I’ve recommended setting the pagefile to in the past in certain scenarios.

Let’s start with the basics – paging is a memory-management technique by which a system can store and retrieve data from secondary storage for use in main memory.  The operating system moves data to and from this secondary storage in blocks called pages.  So the paging file is essentially a collection of “pages” and these pages are stored on disk (that’s the secondary storage I was referring to earlier).  This extension of virtual memory is important because it allows an operating system to leverage disk storage for data that won’t fit into physical memory, which keeps the system and its applications running when memory demand exceeds RAM.  Paging also allows the physical memory backing a process’s virtual address space to be non-contiguous (preventing things like fragmentation and other problems), but that’s about as much theory as I want to get into in this article about paging and memory segmentation.  If I kept rambling about what my college professor taught me in my Operating Systems class, I’m sure to lose even more readers. 😉

So now that we have a basic understanding of what the pagefile is (an extension of virtual memory on disk) and why it’s important (it lets the system commit more memory than physical RAM alone would allow), how should we go about configuring this thing?  And that’s really what I want to talk about in this article – how to size the Windows paging file.

Now that I’ve bashed people for misconfiguring this thing, I’m going to give everyone a break.  Because the reason most people configure the pagefile incorrectly is that the “authorities” on this subject are providing improper guidance and don’t seem to understand how paging works either!  So it’s not your fault…articles like this and this are still being referenced and that’s probably why I still see pagefiles that are blindly set to 1 GB or 1.5-3x RAM!  It also doesn’t help that the default setting is to allow for a “system managed” pagefile, which is almost never what we want.  Perhaps this is even more telling…this is a quote from a comment posted on one of my favorite references in the world:

“I was involved in choosing the default min/max sizes for system managed pagefiles in Vista, and I’m pretty sure those numbers were not just copied from some magazine 🙂  The 1 GB minimum was chosen based on the actual commit charge observed on small machines (512 MB of RAM). The 3*RAM maximum might seem excessive on machines with lots of RAM, but remember that pagefile will only grow this large if there is actual demand. Also, running out of commit (for example, because of a leak in some app) can bring the entire system to a halt, and a higher maximum size can make the difference between a system that does not respond and has to be rebooted and a system that can be recovered by restarting a process.  I will admit that scaling the maximum size linearly with the size of RAM is somewhat arbitrary. Perhaps it should have been a fixed constant instead.”

Pretty telling when even the guy (or gal) at MSFT fesses up, eh?  It also shows us that the default settings are essentially designed for desktops…or that we bought all of our servers in the year 2000 when 2 GB RAM boxes were the norm.  The reality is these defaults aren’t good and they haven’t been for a really long time.  That’s especially true in the server world that we Citrix folks live in, and especially in 2011, when servers with 100 GB+ of memory are quite common.

So what does Citrix recommend then for the pagefile?  That’s where I turn to one of the smartest people in the world, Mark Russinovich.  Name sound familiar?  For starters, he’s the author of one of my favorite IT books of all time called “Windows Internals”.  But he’s also the co-founder of Winternals and Sysinternals.com (procmon anyone?).  After Microsoft gobbled up everything he ever wrote and invented, he’s now a Technical Fellow at MSFT.  We also happen to have studied the same thing in college – Computer Engineering.  The only difference?  He has a Ph.D. from Carnegie Mellon and I went to Tulane…let’s just say there were “other” things to focus on in New Orleans.  😉

So now that everyone knows Mark, let me point you to one of my favorite references on virtual memory in general, and on the pagefile specifically (covered towards the end of the article):

Please, I beg you, take 15 minutes to read through that article.  Because after you read through it, you’ll have a much better understanding of how virtual memory and the pagefile work.  And once you understand how the pagefile is used by the OS, then you can correctly size it!  But it all comes down to the peak commit charge or maximum commit.  I’ll let Mark finish off the “Citrix best practice” related to sizing pagefiles for me:

To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.

And that’s really it!  What a novel concept…do some actual performance/load testing (which seems to be a forgotten art these days) and set the pagefile appropriately based on peak commit.  So the optimal size for the pagefile actually has very little to do with how much memory is in the system or some multiple of RAM.  And it has everything to do with your unique workload and how much memory your apps actually commit!
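If it helps to see that math written down, here’s a quick back-of-the-napkin sketch in Python (the RAM, peak commit and dump-size numbers are made-up example values, so plug in whatever your own testing shows):

```python
# Back-of-the-napkin sketch of Mark's sizing advice. All numbers below are
# made-up example values -- substitute the peak commit charge YOU observed
# under real load, your actual RAM, and the space your dump type needs.
GB = 1024 ** 3

ram          = 16 * GB   # physical memory in the box (example)
peak_commit  = 22 * GB   # peak commit charge seen during load testing (example)
dump_minimum = 2 * GB    # floor needed for the crash dump type you configured (example)

# Minimum pagefile = peak commit minus RAM; if that comes out negative,
# fall back to whatever your crash dump configuration requires.
pagefile_min = peak_commit - ram
if pagefile_min <= 0:
    pagefile_min = dump_minimum

# Maximum pagefile = double the minimum, for breathing room on large commit demands.
pagefile_max = 2 * pagefile_min

print(f"Pagefile min ~{pagefile_min / GB:.0f} GB, max ~{pagefile_max / GB:.0f} GB")
```

With those example numbers you’d end up with a 6 GB minimum and a 12 GB maximum…nowhere near 1.5x RAM.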

So why might you still hear some Citrix or Microsoft Consultants say to set it to the “size of RAM” plus maybe 1% or 12 MB?  That’s just a RULE OF THUMB when we don’t know anything about the workload or can’t determine peak commit through proper testing…and setting it to the size of RAM plus a bit extra simply allows for a full memory dump to be taken.  So that’s why you might hear that advice still…it’s better than saying something from 10 years ago like “1.5x RAM” or “2-3x RAM”, but it’s important to remember that it’s just a rule of thumb…and the only way to optimally size your pagefile for the best performance (and cost, which we’ll talk about in a minute) is to follow Mark’s advice and look at the peak commit charge.  We even have this nifty thing called ESLT (EdgeSight for Load Testing) that allows you to simulate load so you can determine peak commit and set the pagefile properly.  Why aren’t more people doing this?  I have no idea.
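And if you want to peek at the commit counters yourself after a load test (Process Explorer’s System Information view shows the same numbers), here’s a little Python sketch against the documented Win32 GetPerformanceInfo() API via ctypes.  Treat it as a convenience hack, not an official tool from Citrix or Microsoft:

```python
# Read the system's commit charge counters on Windows via the Win32
# GetPerformanceInfo() API (psapi.dll) using ctypes. CommitPeak is the peak
# commit charge since boot -- the number Mark's sizing advice is based on.
# Windows-only; no extra packages required.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),        # current commit charge, in pages
        ("CommitLimit", ctypes.c_size_t),        # commit limit (RAM + pagefile), in pages
        ("CommitPeak", ctypes.c_size_t),         # peak commit charge since boot, in pages
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),           # page size, in bytes
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
if not ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    raise ctypes.WinError()

GB = 1024 ** 3
print(f"Current commit: {pi.CommitTotal   * pi.PageSize / GB:.1f} GB")
print(f"Peak commit:    {pi.CommitPeak    * pi.PageSize / GB:.1f} GB")
print(f"Physical RAM:   {pi.PhysicalTotal * pi.PageSize / GB:.1f} GB")
```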

Now let’s talk about the cost aspect and memory dumps a bit more.  Because sizing the pagefile according to Mark’s guidance can also save you a lot of money, especially in certain XA scenarios and PVS-based XD deployments.  Allow me to explain.  Let’s say you are deploying XA 6.5 (64-bit) on bare metal and the server has 256 GB RAM.  If you followed the logic of the size of RAM or 1.5x RAM, you would have to order those boxes with pretty sizable disks just to hold the pagefile (assuming min = max and the file isn’t growing on demand)!  And do you even need a full or complete memory dump?  Maybe…maybe not.  Will a minidump or kernel memory dump suffice?  Maybe…maybe not.  Sure, you might need a full dump if you are getting blue screens on your boxes and MSFT Support gets engaged and requests one.  But that’s a pretty hefty price to pay for a full dump, considering I’ve only seen MSFT request a full dump on 2 or 3 of the ~200 projects I’ve been a part of over the last 8 years.  And I’ve already started seeing customers take this “chance”…they are ordering boxes with, say, 256 GB of RAM and 128 GB SSDs!  So whether we are going bare metal or cutting up these giant boxes with a server virtualization product like XenServer, we still aren’t going to have enough disk space for a pagefile that’s the size of RAM so we can take a full dump.  So it seems customers are already taking this calculated risk for some XA workloads which may not be deemed mission-critical by the business.

By the way, I recently ran into this situation at a customer and we used storage vMotion to move some resources around and take a full dump.  Of course, the dump proved useless to MSFT, but we still found a way to get it done.  I’ve also used the relatively new “Dedicated Dump File” feature on an XA6/R2 system to take a full dump “outside” of the typical pagefile used to back virtual memory…this little feature can be extremely valuable and is yet another reason why the pagefile does not need to be the size of physical memory!  That’s why I want everyone to ask their customers (or themselves, depending on who you are) if taking a full memory dump is really required or not.  Or might it be better to save some cash on disk and possibly move things around if and when that rare event comes and you actually need a full dump?
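And for anyone curious about that “Dedicated Dump File” feature, it’s driven by a couple of registry values under the CrashControl key on newer operating systems (2008 R2 in my case).  Here’s a minimal Python sketch just to show the moving parts; the path and size below are example values, you could just as easily set the same values with regedit or Group Policy Preferences, and a reboot is required either way:

```python
# Point Windows at a dedicated dump file via the CrashControl registry values.
# Run elevated; a reboot is required for the change to take effect. The path
# and size below are example values -- adjust them for your environment.
import winreg

CRASH_CONTROL = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CRASH_CONTROL, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Where the dedicated dump file should live (a volume with enough free
    # space for the dump type you've configured).
    winreg.SetValueEx(key, "DedicatedDumpFile", 0, winreg.REG_SZ,
                      r"D:\DedicatedDumpFile.sys")
    # Optional cap on the dedicated dump file size, in MB (16 GB here).
    winreg.SetValueEx(key, "DumpFileSize", 0, winreg.REG_DWORD, 16384)
```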

Let’s do another example…say you are deploying Win7 desktops via XD and PVS.  You are using the target device’s hard disk for the PVS write cache (wC), with a 5 GB persistent drive, and your Win7 VM spec is 2 vCPUs and 4 GB RAM.  Again, if you followed the rule of thumb and used the size of RAM, your 5 GB secondary drive would get gobbled up by a 4 GB pagefile almost immediately.  That would only leave 1 GB for the wC itself, event logs, ES data, etc.  Not good.  So you have a couple of options – make that secondary drive even bigger or correctly size the pagefile!  Making the secondary drive bigger is a HUGE cost hit because that is a per-VM “storage hit”, as I like to say.  So that adds up and gets expensive quickly.  So I’d recommend the latter…the last time I did this at a customer we saw that the peak commit was a little below 2 GB, so we set the min and max equal to 2 GB.  That left us with 3 GB for everything else, which was a safe bet in my mind with nightly reboots (and subsequent flushing of the wC).  And since these are desktops, having a pagefile smaller than the amount of memory was also OK because we really didn’t care about taking a full dump.  As a baseline, I’ll typically configure a desktop for a minidump since it only requires a couple of MB…and I might configure my servers for a minidump or kernel memory dump as opposed to a complete memory dump.  But if I have the disk space, cost is not an issue, and the XA workload is deemed absolutely mission-critical by the business, then I’ll configure my boxes for a full dump to be safe.  So it depends, but these are the factors it depends on, and I want everyone to start asking these questions so we can be a little smarter about configuring the pagefile and probably save some cash in the process.
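If you want to sanity-check the disk math from that Win7/PVS example against your own VM spec, it’s a one-minute exercise (the numbers below are the example values from this scenario):

```python
# Quick disk-budget check for the Win7/PVS example above. These are the
# example values from this scenario -- swap in your own VM spec.
GB = 1024 ** 3

persistent_drive = 5 * GB   # secondary (persistent) drive attached to each target
pagefile         = 2 * GB   # peak commit (just under 2 GB here) rounded up; min = max
ram              = 4 * GB   # VM memory, i.e. what the rule of thumb would use

print(f"2 GB pagefile leaves:           {(persistent_drive - pagefile) / GB:.1f} GB for the wC, logs, etc.")
print(f"'Size of RAM' rule would leave: {(persistent_drive - ram) / GB:.1f} GB")
```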

There are some other things I could continue to go on about, most notably whether having a pagefile is required at all, whether setting min = max is a best practice, whether having multiple pagefiles makes sense, and whether splitting the pagefile across multiple disks makes sense, but I think most of the industry agrees on that stuff and we already know the answers (yes, almost always, probably not, and only if they are truly separate disks and not partitions, respectively!).
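And since the min = max point comes up constantly, here’s one last hedged sketch of how you might script it, assuming the pywin32 package and the documented Win32_PageFileSetting WMI class (the 6144 MB figure is purely an example…personally I usually just set this via the GUI or Group Policy):

```python
# Hedged sketch (assumes the pywin32 package): pin the pagefile to min = max
# via the Win32_PageFileSetting WMI class. This only applies once "automatically
# manage paging file size for all drives" is turned off, sizes are in MB, and a
# reboot is needed for the change to take effect.
import win32com.client

svc = win32com.client.GetObject(r"winmgmts:\\.\root\cimv2")
for pf in svc.ExecQuery("SELECT * FROM Win32_PageFileSetting"):
    pf.InitialSize = 6144   # MB -- example value; use your tested peak-commit-based size
    pf.MaximumSize = 6144   # MB -- min = max, per the discussion above
    pf.Put_()
```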

To wrap this article up, let’s quickly recap some of the important items:

  • The pagefile is an extension of virtual memory on disk
  • The default settings involved with a “system managed” pagefile should not be trusted or used
  • Setting the pagefile equal to the size of RAM (plus some small amount of overhead to take a dump) is just a rule of thumb to follow when no testing can be done
  • MarkR’s advice should be followed to properly size a pagefile (based on proper testing and peak commit – not some multiple of RAM!)
  • There are a variety of memory dumps that can be configured…and a full or complete dump may only be required in certain situations
  • The “Dedicated Dump File” feature can prove very handy on newer operating systems

I really hope this article proves useful in your travels.  Please drop me a note in the comments section if you have any feedback or questions.  Thanks for reading.

-Nick

Nick Rintalan, Senior Architect, Citrix Consulting