LowEndBox - Cheap VPS, Hosting and Dedicated Server Deals

Why I Don't Like OpenVZ VPS Plans with Large Burstable Memory

“superblade” asked on WebhostingTalk — Can somebody explain Burst Ram?. It must be the zillionth time “burstable memory” has been explained on WHT, and about the same number of times that even web hosting providers have shown they have no idea what they are talking about. All that crap about “you should rely on guaranteed memory; burstable is only for sudden spikes in traffic”. Only Chris got it right, and he is not even a hosting provider!

And this is the VPS hosting industry we are talking about here!

Here is one article that discusses guaranteed vs. burstable memory in OpenVZ, and why you should not care about the so-called “guaranteed memory”. Basically, the real guarantee parameter, oomguarpages, only comes into play when the host server hits an OOM (out of memory) condition. A VPS that exceeds its oomguarpages by too much will have its processes killed. But how often does a Linux box actually hit an OOM? Only when both physical memory and swap are exhausted.
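If you want to check whether any limit has actually been enforced on your VE, the beancounters are the place to look. Here is a minimal Python sketch assuming a simplified /proc/user_beancounters-style layout (the sample text below is made up for illustration; the real file also carries a version header, but the columns are similar). It reports every resource with a non-zero failcnt:

```python
# Sketch: parse /proc/user_beancounters-style output and report which
# resources have a non-zero failcnt -- the only reliable sign that a
# limit (burst or otherwise) was actually hit. SAMPLE is invented data.
SAMPLE = """\
       uid  resource      held  maxheld  barrier    limit  failcnt
       101: privvmpages  120000  130000   131072   139264        3
            oomguarpages  60000   61000    65536  2147483647     0
"""

def failed_counters(text):
    failures = {}
    for line in text.splitlines()[1:]:          # skip the header row
        fields = line.split()
        if fields and fields[0].endswith(":"):  # first data row carries the uid
            fields = fields[1:]
        if len(fields) == 6 and int(fields[-1]) > 0:
            failures[fields[0]] = int(fields[-1])
    return failures

print(failed_counters(SAMPLE))  # {'privvmpages': 3}
```

A non-zero failcnt on privvmpages means allocations were actually refused at your burst barrier; failures against oomguarpages are what you would see in the rare host-wide OOM scenario.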

Not sure whether you have ever tried to admin a Linux box with all its swap space exhausted (usually set at 2x the physical memory). It is painfully slow. And if that box is a physical node for OpenVZ VEs, the customers would all have fled the server, posting “over-sold! over-loaded!” comments all over the place. In practice it rarely happens, which implies burstable memory is all you need to care about.

That would certainly wreck the way OpenVZ VPS providers plan their packages — they probably shouldn’t be using guaranteed memory to calculate how many VPSes fit onto a box. It also means providers that give significantly more burstable memory might have their servers more heavily oversold, if they did their planning based on guaranteed memory. For example AlienVPS (512MB/2048MB), Virpus (512MB/2048MB), Vixile (256MB/2048MB), Newwebsite (512MB/2048MB), etc.

For example, you can allocate almost 2GB of memory on Virpus constantly, and no ill effect will befall your VPS unless the actual server runs out of both physical memory and swap (unlikely).
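If you want to verify that on your own plan, a crude probe is simply to keep allocating (and touching) memory until the container says no. A hypothetical Python sketch; step_mb and limit_mb are arbitrary knobs I made up, nothing OpenVZ-specific:

```python
# Crude burst-memory probe: allocate memory in fixed-size steps until
# the container refuses (MemoryError) or we reach our own cap.
def probe(step_mb=64, limit_mb=2048):
    chunks, allocated = [], 0
    try:
        while allocated < limit_mb:
            # bytearray zero-fills, so the pages are actually touched
            chunks.append(bytearray(step_mb * 1024 * 1024))
            allocated += step_mb
    except MemoryError:
        pass
    finally:
        del chunks  # release everything again
    return allocated

print(probe(step_mb=16, limit_mb=64), "MB allocated")
```

On a 512MB/2048MB plan like the ones above, you would expect this to climb well past the 512MB “guarantee” before a MemoryError shows up.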

Anyway. Maybe it’s not as bad as I think, if the provider constantly monitors their own servers to ensure no overloading is happening. Virpus is a larger provider so I picked on them (and I think they are in a better position to handle these issues). But what about all these one- or two-man teams?

Or use Xen. Pricey, but you get what you pay for, and memory accounting is much easier to understand.

  1. AZGuy:

    > Virpus is a larger provider so I picked on them (and I think they are in a better position to handle these issues). But what about all these one- or two-man teams?

    Isn’t Virpus run by only 1 guy too (Ken)? Lol.

    August 3, 2010 @ 5:33 am | Reply
  2. Could very well be. They have indeed been around for a while though (compared to those on our monthly deadpool list :)

    August 3, 2010 @ 5:36 am | Reply
  3. earl:

    I see what you’re saying LEA.. but in the current state, I think the system works because people are under the impression that they should not rely on burst memory, so most users will stay well below the guaranteed memory just to be safe.

    August 3, 2010 @ 6:56 am | Reply
  4. Agree with earl.
    From the first day using OpenVZ from pingvps, I couldn’t rely on burst RAM at all.
    But that is my one and only OpenVZ VPS.

    August 3, 2010 @ 8:30 am | Reply
  5. My point is — why can’t you rely on it? Have your processes actually been killed through OpenVZ’s OOM (which `dmesg` would tell you)? Or is it just general slowness? Slowness is pretty much the result of an overloaded server, rather than of your specific VPS using its burstable memory.

    Yes, Xen, VMWare and KVM are definitely easier to get your head around, especially when you have been running Linux boxes for a while and are familiar with the memory model. But I guess for a simple LAMP/LNMP stack OpenVZ is not too bad, and it brings down the cost.
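    To make that check concrete, here is a small sketch that scans dmesg-style output for OOM-killer messages (the sample lines are invented for illustration; on a real box, feed it the actual output of `dmesg` instead):

```python
# Look for OOM-killer messages in kernel log output. SAMPLE_DMESG is
# made-up illustrative data; exact wording varies by kernel version.
SAMPLE_DMESG = """\
[ 1234.5678] Out of memory in UB 101: OOM killed process 4242 (mysqld)
[ 1300.1234] eth0: link up
"""

def oom_kills(dmesg_text):
    return [line for line in dmesg_text.splitlines()
            if "Out of memory" in line or "oom-killer" in line]

for line in oom_kills(SAMPLE_DMESG):
    print(line)
```

    If this returns nothing for your VE, your processes were not OOM-killed, and the slowness has some other cause.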

    August 3, 2010 @ 8:59 am | Reply
  6. LEA: “My point is — why can’t you rely on it? Have your processes actually been killed through OpenVZ’s OOM (which `dmesg` would tell you)?”

    Killed; I didn’t check dmesg at the time. And reducing RAM usage by +-10MB solved the problem.

    August 3, 2010 @ 9:10 am | Reply
  7. Sorry, not ‘+-10MB’.
    mysqld got killed when InnoDB and BDB were still enabled.
    Maybe 10-20MB?

    August 3, 2010 @ 9:12 am | Reply
  8. That’s because InnoDB uses its own memory allocator and allocates more than it actually uses, thus consuming more burstable memory than real memory. Check your beancounters and I think you might find you actually hit your burstable limit with InnoDB.

    The same goes for Java and many other apps with their own malloc.
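    The committed-versus-touched gap is easy to reproduce with a plain anonymous mapping. A sketch (the 64MB size is arbitrary):

```python
# An allocator like InnoDB's can reserve far more address space than it
# ever writes to. Reserved-but-untouched memory is still charged against
# privvmpages (the burst limit) even though it occupies no physical RAM.
import mmap

SIZE = 64 * 1024 * 1024  # 64 MB of address space

region = mmap.mmap(-1, SIZE)   # reserved immediately; counts as committed
reserved = len(region)

# Uncomment to actually touch the pages and consume physical memory:
# for off in range(0, SIZE, 4096):
#     region[off] = 1

region.close()
```

    An allocator that reserves generously up front burns through the burstable allowance in exactly this way.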

    August 3, 2010 @ 9:18 am | Reply
  9. Just got a free 512MB OpenVZ VPS for testing purposes.
    Any suggestions on test methods?

    August 3, 2010 @ 9:38 am | Reply
  10. earl:

    Does dmesg work on OpenVZ? I tried but only got a blank; it worked when I had a dedi..

    As a test I did try using the burst RAM on a VPS I had.. it seemed to work alright, but it was only a test site so no real traffic; WP always seemed to load OK and there was no failcnt. What I was saying is: if everyone felt entitled to use the burst, would we not contribute to an OOM condition? Say with Virpus: if everyone started to use the 2GB burst instead of the allotted 512MB, would the server not run out of memory?

    August 3, 2010 @ 9:41 am | Reply
  11. earl:

    I used a 256/512 VPS for a test.. what I did was install Kloxo and enable SpamAssassin, ClamAV, XCache etc.. I suppose you could also install Webmin alongside Kloxo to raise the memory usage a bit more.

    August 3, 2010 @ 9:58 am | Reply
  12. Well, I asked about burst from one provider and he told me that it’s for temporary use, and that using it more often can get the processes killed and my VPS suspended :(

    +1 for “why can’t you rely on [burst memory]?” as LEA said

    August 3, 2010 @ 11:36 am | Reply
  13. InsDel:

    If you set barrier(oomguarpages)=barrier(vmguarpages)=512MB and limit(vmguarpages)=barrier(privvmpages)=limit(privvmpages)=some large number and change meminfo so that it shows oomguarpages instead of privvmpages, would that be more like a 512MB Xen VPS without swap? (memory-accounting wise, all other virtualization differences aside)

    If only physpages limiting was implemented.
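    For reference, the parameters InsDel describes would look something like this in a container config (a hypothetical /etc/vz/conf/101.conf fragment; values are in 4KB pages using barrier:limit syntax, so 131072 pages = 512MB, with 2147483647 standing in for “unlimited”):

```shell
# Hypothetical UBC settings sketching InsDel's proposal (barrier:limit,
# counted in 4 KB pages; 131072 pages = 512 MB)
VMGUARPAGES="131072:2147483647"
OOMGUARPAGES="131072:2147483647"
PRIVVMPAGES="2147483647:2147483647"
```

    The meminfo change he mentions (reporting oomguarpages instead of privvmpages inside the VE) would be a separate per-VE setting.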

    August 3, 2010 @ 1:24 pm | Reply
  14. As far as I know, vmguarpages just guarantees how much you can allocate — which I found a bit pointless. According to the OpenVZ documentation:

    > If the current amount of allocated memory space exceeds the guarantee but is below the barrier of privvmpages, allocations may or may not succeed, depending on the total amount of available memory in the system.

    My main issue is that the total amount of available memory always includes swap pages. So an allocation may fail only when swap is exhausted, i.e. when the physical server is probably already in its death spiral.

    Virtuozzo’s SLM is supposed to fix this by accounting for just physpages + kmemsize. It’s still not Xen/KVM-like, nor is it available in OpenVZ.

    August 3, 2010 @ 1:45 pm | Reply
  15. @LEA
    Principally, I agree with “why can’t you rely on [burst memory]?”.
    I’m just wondering whether this rule is still valid on an oversold VPS?

    August 3, 2010 @ 2:01 pm | Reply
  16. In our VPS plans, this is exactly why we advertise burst instead of guaranteed. It’s also why providers should ensure that burst is always double a plan’s “guarantee”… Java processes have the most fun with RAM…

    In evaluating other providers, it seems anything less… or more… and the provider is typically overcommitting/oversubscribing their equipment.

    August 3, 2010 @ 4:49 pm | Reply
  17. > And this is the VPS hosting industry we are talking about here!

    Actually this is WHT we’re talking about. I don’t think WHT should be seen as a snapshot of the hosting industry, VPS or otherwise.

    August 8, 2010 @ 5:07 pm | Reply
  18. @drmike — well, true. Maybe there are just more hosting-provider wannabes on WHT who have no idea what they are talking about.

    August 8, 2010 @ 10:58 pm | Reply
  19. id:

    admin & all,

    Is there any benefit (other than lower cost) in choosing an OpenVZ VPS over Xen, from a customer’s point of view?

    September 11, 2010 @ 2:58 pm | Reply
  20. rm:

    I like the fact that with OpenVZ, the RAM used for the kernel, the file cache, various system housekeeping buffers etc. is all “external” to your node, and not counted towards your memory allowance. Only your useful running programs are counted. So comparing e.g. Xen 64MB and OpenVZ 64MB, in the latter case you can in fact fit a lot more stuff into your VPS.

    September 11, 2010 @ 4:08 pm | Reply
  21. @id yep:
    * Memory & Disk space upgrades are applied with zero downtime. The same goes when expanding a host server’s storage capacity; there’s zero negative impact to running VMs.
    * Kernel RAM, filesystem caches, and various other kernel-level things do not count against a VM’s quota; this enables more effective use of low-resource VMs (i.e. our VMs using less than ~256MB of RAM benefit most).
    * The virtualization overhead in OpenVZ is negligible in most scenarios; other virtualization technologies such as VMWare and XEN cannot claim this.
    * VMWare for example still has a horrible record of ensuring accurate timesync (an issue that has persisted since their 1998 alpha/beta releases), and XEN encounters similar issues. Poor timesync can easily break database dependent and other time-sensitive applications.
    * The additional CPU, Networking and Disk/storage virtualization overheads in XEN and VMWare can really impact performance-sensitive applications…some argue that effects can be minimized, but this is only true when using features found in some of the newest CPUs…that’s more cost that a provider will pass to the customer.
    * More effective CPU, RAM & disk quota allocations ensure that customers receive the services they purchased, even on increasingly loaded host machines.
    * Migration of an OpenVZ virtual machine, between hardware nodes, is a built-in & mature feature that was introduced back in 2006. This enables zero-downtime migrations between hardware nodes in most cases – other virtualization technologies either simply cannot do this, their techniques and tools are still maturing, or the feature typically costs extra.
    * Cloning of VMs is quite easy; simply copy the files from one container to another. Whereas XEN & VMWare also require mucking around with bootstrapping and hardware-identification stuff that can break a new container.
    * Mass-management; an administrator can see all the running processes and files of all the containers on an OpenVZ system. This enables providers to more effectively detect and shut down customers that are violating the terms of service.

    September 11, 2010 @ 4:15 pm | Reply
  22. Do note that, in the reply from TOCICI, many of the benefits go to the hosting or service provider, as OpenVZ gives a lot more flexibility when you need to do admin on the containers. I’d probably pick OpenVZ if I were trying to virtualize my own servers. It’s just easier.

    However, when you are the actual end user with unknown neighbours, I prefer Xen.

    September 12, 2010 @ 9:05 am | Reply
  23. @LowEndAdmin I guess it depends on how aggressively the provider works to ensure a high quality of service.

    We are very aggressive about ensuring high performance; along with our excellent internal business processes, TOCICI’s BuildYourVPS systems were built for high reliability and to ensure excellent performance. We believe that OpenVZ is an excellent virtualization tool that fairly balances the shared-needs of both providers and customers.

    For example, when a provider can quickly detect and shut down those that are violating the terms of service, this means lower costs to the customer and a generally higher quality of service for everyone; lower chances of seeing your IPs blacklisted (or emails spam-tagged) because a neighbor ends up being a spammer or hacker, and a far lower risk of having shared resources saturated (such as networking).

    When a provider can easily clone and/or move VPSes between physical machines, they’re able to ensure a higher level of uptime; when hardware issues arise, your VPS can be migrated between physical machines, without downtime.

    When the virtualization overhead is lower, then the customer can get more bang for the same buck. OpenVZ disk & network I/O are simply faster than other virtualization technologies, and it takes less-expensive hardware to accomplish the same tasks – this means higher performance for customers, at a lower price-point.

    The various things that do not count against an OpenVZ VM’s quota…well, you basically get more resources for the same price-point.

    Zero upgrade downtime and accurate timesync benefits everyone.

    …Take, for example, one of our larger consulting clients. They’ve been giving TerreMark approximately $125,000/month (yep, that’s $1.5Mil/year) for VMWare-based virtualization services on some pretty impressive hardware (256GB of RAM, 48-core systems, etc…) With their relationship to VMWare, TerreMark is supposed to be the “creme de la creme” of virtualized hosting providers. Yet, TerreMark’s inability to consistently deliver reliable and accurate timesync keeps breaking this client’s web application (this is a pre-production issue – the app is not even under a production load yet!). They’re also having issues with poor disk I/O, and unusual networking bottlenecks that keep choking the database servers. I don’t know about you, but if I were giving a provider that much money every month, and really basic core competencies kept breaking like this, I’d be pissed. TOCICI’s BuildYourVPS architecture does not suffer the same issues found with VMWare and Xen-based virtualization.

    Here’s an interesting discussion on Xen vs VMWare:

    September 12, 2010 @ 2:59 pm | Reply
  24. No matter how “aggressive” a provider is, it is still reacting to load changes. How quickly can you identify and shut down an offending VE? 30 minutes? 10 minutes? A rogue neighbour can steal all the IO for 10 minutes, because OpenVZ only has IO prioritization and no suitable throttling. And I will be pissed if my VE gets shut down because of a run-away script caused by sloppy programming. I expect my IO to get throttled instead.

    As for time-sync: yes, I do experience some drift in my Xen VPS, but it’s nothing ntpd can’t fix. Every computer has drift anyway. You can’t run ntpd on OpenVZ unless your provider explicitly enables it, which actually makes OpenVZ worse in that comparison.

    September 12, 2010 @ 8:51 pm | Reply
  25. @LowEndAdmin You are correct about a delayed response to rogue VPSes, although our responses are often automated, and in practice it takes us an average of five minutes to detect and shut down a rogue VPS. Shutdowns are very rare though – typically the VPS is deprioritized, and an automated system contacts the customer so that they can log in and resolve the situation. If a provider does not oversubscribe, and partitions their host servers accurately… our experience has been that other VPSes are unaffected.

    Running NTP inside VMWare VMs typically causes more grief than it’s worth. Our experience with Xen has been similar… maybe things are getting better recently? As for OpenVZ, the good providers :-) run NTP on the host, so there’s no need for each individual VPS to run it as well.

    September 13, 2010 @ 6:23 am | Reply
  26. Craig:

    So how should VPS hosts sell their servers to avoid complete overselling? From what I have been reading, burst memory isn’t physical memory, right? So on a server with 16GB you can still sell 16GB of “guaranteed” memory?

    October 13, 2012 @ 1:25 pm | Reply
