LowEndBox - Cheap VPS, Hosting and Dedicated Server Deals

Thrust::VPS - $6.90 1GB OpenVZ VPS Exclusive Offer

Rus from Thrust::VPS has provided us with a new exclusive coupon code for LowEndBox readers. Use code LOWENDVZ to get a 37% recurring discount. That brings their Mystic OpenVZ plans down to $6.90/month, and you get

  • 1024MB guaranteed/2048MB burstable memory
  • 30GB storage
  • 1000GB/month data transfer
  • OpenVZ/SolusVM

Yes, it’s another 1GB memory offer :) Their plans are available in New Jersey, Dallas or Los Angeles (those are direct order links).

I have had their OpenVZ VPS in LA for a while. Uptime is good (no reboots for two months). Performance is passable — I am running PHP and RabbitMQ on it, using just ~130MB of memory. However, the node I am on has pretty bad IO wait.



  1. rean22:

    Can you explain the “pretty bad IO wait” statement?

    August 18, 2010 @ 6:55 am | Reply
  2. Do something like `dd if=/dev/zero of=test bs=64k count=16k; sync`, and watch “wa” climb in top. I just tested — writing a 1GB file and syncing took almost 2 minutes.

    Could be a bad node.

    August 18, 2010 @ 7:13 am | Reply
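For readers wanting to reproduce that test, here is a slightly fuller sketch. The filename and the 1GB size are just examples; `conv=fdatasync` folds the flush into dd's own timing, replacing the trailing `; sync`.

```shell
# Write a 1GB file in 64KB blocks and time it, including the flush to disk.
# On a healthy node this takes seconds; minutes suggest heavy IO wait.
time sh -c 'dd if=/dev/zero of=ddtest.bin bs=64k count=16k conv=fdatasync 2>/dev/null'
# While it runs, the "wa" column in `top` (or `vmstat 1`) shows CPU time
# stalled waiting on IO.
rm -f ddtest.bin
```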
  3. lsylsy2:

    have one on node 1(Openvz)
    100% uptime for two months
    Very Very good.

    August 18, 2010 @ 7:26 am | Reply
  4. I have not tried their OpenVZ, but their Xen HVM is great and pretty stable, apart from the hiccups that everyone experienced on the 4XNLA and 6XNLA nodes in LA.

    I will definitely try their OpenVZ

    August 18, 2010 @ 7:44 am | Reply
  5. lsylsy2:

    The nodes are Xeon E5620s with 48GB of RAM

    August 18, 2010 @ 7:47 am | Reply
  6. “The promotion code you entered has been applied to your cart but no items qualify for the discount yet – please check the promotion terms”


    August 18, 2010 @ 7:50 am | Reply
  7. Ok, I get it. This is for NEW customer accounts ONLY, not current customers.

    August 18, 2010 @ 7:51 am | Reply
  8. Anonymous:

    The reason for the heavy IO Wait is they pile way too many VPS’s on a single machine. Whilst they may dimension the RAM and diskspace sufficiently to not oversell too badly, the IO is shot.

    August 18, 2010 @ 7:53 am | Reply
  9. @lowendadmin. I’m guessing you are on 1.vz.la or 2.vz.la, in which case you should have got emails from us about a complimentary move to a new node which has much faster I/O

    This order is for just new signups rather than existing customers

    August 18, 2010 @ 8:02 am | Reply
  10. @Rus — may very well be. Need to dig out that email from the trashbin as it can’t be found in WHMCS.

    August 18, 2010 @ 8:11 am | Reply
    • I just bought a 1 GB openvz with their DEC2010 special to test them out.

      I’m on 3.vz.la and there is definitely something wrong. I opened my fresh VM with nothing installed and it froze for 20 seconds when I typed “ls”.

      The usual 1GB dd test showed around 50 MB/s.

      December 1, 2010 @ 6:44 am | Reply
      • Update:

        I just had a talk with Rus and he confirms it was an iowait problem. He is moving the vps to another node.

        I also ordered a xen from them for testing and so far it doesn’t seem to suffer the same iowait problem.

        December 1, 2010 @ 10:37 am | Reply
  11. > This order is for just new signups rather than existing customers


    August 18, 2010 @ 8:37 am | Reply
  12. So, existing customers cannot use this offer if they want to sign up for new VPSes??? If so, too bad :(

    August 18, 2010 @ 8:40 am | Reply
  13. @Wing Yes, this is right. This offer is for NEW CUSTOMERS only. This is disappointing for me too

    August 18, 2010 @ 8:44 am | Reply
  14. We made a conscious choice for this to be new signups only, as we already offer some of the best value for money in the industry, especially on our Xen HVM plans. Unfortunately, a small segment of people have abused coupons in the past, so we have to limit this to new customers.

    August 18, 2010 @ 9:08 am | Reply
  15. @Rus Thanks for the clarification. I wish it was the other way around

    @LEA you forgot to tag this as EXCLUSIVE

    August 18, 2010 @ 9:27 am | Reply
  16. Pascal:

    Lol, nodes with 48G RAM definitely mean they pile up a huge number of VPSes per node, resulting in very slow disk I/O.

    August 18, 2010 @ 10:06 am | Reply
  17. Anonymous:

    @Pascal: Precisely, it’s insane

    August 18, 2010 @ 11:00 am | Reply
  18. Jamie:

    I/O wait is shot on 1.vz.la – I got moved to 3.vz.la and everything is perfect. I can highly recommend Thrust::VPS.

    August 18, 2010 @ 11:39 am | Reply
  19. Pascal:

    Well, I wouldn’t ever trust a provider who advertises nodes with a minimum of 48G of RAM. That might sound cool to newbies or something, but in reality it just means bad disk I/O.

    If your disk I/O is good now, it probably means the node isn’t fully loaded yet.

    August 18, 2010 @ 11:50 am | Reply
  20. But also take into consideration that you can scale IO in a similar fashion to the way you scale CPU: RAID with multiple drives and you get instant results. 48GB of RAM onboard certainly means server-grade boxes, and I do hope they are also putting server-grade disk IO systems in there.

    August 18, 2010 @ 12:38 pm | Reply
  21. Anonymous:

    @LEB: That’s not 100% true; each RAID level has different performance characteristics. If you are going to pile hundreds of VPSes on a single box then you need multiple disk controllers. But then the next problem is bus contention.

    Here’s some maths for you:

    According to Thrust’s website you get 1.2GHz of CPU on their Mystic product.
    They also state: Dual 4 Core Xeon E55xx/E56xx.

    Let’s assume an E55xx at 3.2GHz – I really doubt they use that, but I am being generous.

    Two quad-core processors at 3.2GHz per core = 25.6GHz total.

    If you have, say, 50 of the Mystic package, that’s 60GHz sold; with 100 VPSes on the box, 120GHz sold – but there is only 25.6GHz available.

    The issue is not just about overselling disk space or RAM – it’s about CPU overselling, disk I/O overselling and bus overselling.

    You have to look at all metrics.

    August 18, 2010 @ 1:04 pm | Reply
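The arithmetic above can be checked mechanically; the socket, core and clock figures below are the commenter's assumptions, not confirmed specs for Thrust::VPS's hardware.

```shell
# CPU oversell check using the commenter's assumed figures:
# 2 sockets x 4 cores x 3.2GHz capacity vs. 100 Mystic VPSes at 1.2GHz each.
awk 'BEGIN {
  capacity = 2 * 4 * 3.2    # 25.6 GHz of real clock on the node
  sold     = 100 * 1.2      # 120 GHz promised across 100 containers
  printf "capacity: %.1f GHz, sold: %.1f GHz, oversell: %.1fx\n",
         capacity, sold, sold / capacity
}'
# -> capacity: 25.6 GHz, sold: 120.0 GHz, oversell: 4.7x
```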
  22. Anonymous:

    @Jamie – and when that box fills up – then what? Migrate again?

    August 18, 2010 @ 1:05 pm | Reply
  23. Anonymous:

    @Pascal – you are 100% right

    August 18, 2010 @ 1:05 pm | Reply
  24. PiotrGr:

    Crap, and I just signed up with them yesterday….. :/

    August 18, 2010 @ 1:10 pm | Reply
  25. Mike:

    The problem with disk i/o is that you can take measures to ensure that what’s available is reasonably fast – fast disks on a good RAID card, properly configured – but there are still no effective ways of ensuring *consistent* disk performance for the end-user. I’d rather have a consistent, say, 30MB/s read/write speed (not spectacular, but pretty average for older 80GB drives, and perfectly adequate for most circumstances) than 100MB/s one moment and 3MB/s the next. You can prioritize disk access to a degree, but it doesn’t work nearly as well as one might hope.

    (I suspect that if you’re going to put, say, fifty VPS customers on one server with six or eight drives, you might actually be better served by using them as bare disks than one giant RAID array, with 7-10 customers on each drive.)

    I was discussing poor i/o with a (non-low-end) VPS provider recently, and their “solution” was to offer to migrate me to a new node, after they reluctantly admitted that they’d observed some horrible i/o performance on the one I was on. Couldn’t they just leave me where I was and *fix* the i/o issue? Not really, they said; it’s much easier to move people who complain. Go figure.

    August 18, 2010 @ 1:39 pm | Reply
  26. @Anonymous — what I said was that having 48GB on a server box does not necessarily mean IO is going to be an issue; it was not specifically about Thrust::VPS’s case — I have no idea what kind of IO system their servers have.

    As for the CPU guarantee calculation, the node I am on has an E5620. I suspect they put hyperthreading into the calculation (cheating, I know). So 2.66GHz x 16 = 42.56GHz. That still falls short of 48 x 1.2GHz = 57.6GHz (assuming they have 48x 1GB Mystic plans to fill that 48GB box), but not that far off. Of course, this assumes Thrust::VPS does not oversell on memory, which I find hard to believe myself.

    At the end of the day, for budget VPSes CPU is rarely the issue. Most out-of-the-box software assumes a minimum RAM configuration, which puts pressure on disk IO (default MySQL, for example). People also use these boxes as though they are local, issuing heavy commands without nice/ionice.

    August 18, 2010 @ 1:53 pm | Reply
  27. @Mike — I think OpenVZ by nature is just not very good at IO accounting, so it cannot give containers consistent IO. One rogue VE can probably bring the entire box down. You can manually set IO priority, but I am not sure whether it has a credit-based scheduler like Xen’s.

    There are also other tricks with Xen-based VPSes — putting swap and the root partition on different physical RAID arrays, for example. But for VZ, I am not sure what can be done other than active monitoring and booting off rogue VEs.

    August 18, 2010 @ 2:01 pm | Reply
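For the manual-priority route mentioned above, a sketch of demoting a heavy job on a Linux box. The filename and PID are hypothetical examples; note that `ionice` only has an effect with an IO scheduler that honours priorities (CFQ), and inside an OpenVZ container it may be a no-op.

```shell
# Run a bulk copy at lowest CPU and IO priority so interactive use stays snappy.
# "large-backup.tar.gz" is a placeholder file.
nice -n 19 ionice -c 3 cp large-backup.tar.gz /backup/   # -c 3 = "idle" IO class

# Or demote an already-running process (PID 1234 is a placeholder):
ionice -c 2 -n 7 -p 1234   # best-effort class, lowest priority within it
renice -n 19 -p 1234       # and minimum CPU priority
```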
  28. Anonymous:

    @LEB – Hyperthreads must NEVER be included in the calculation.

    The I/O contention is always the problem – be it disk, network interface, bus performance, memory I/O performance, etc

    Big boxes in the VPS business are bad news. Works great for the operator in terms of being able to line their pockets but for the customer, it just sucks badly.

    I have been involved with extensive lab testing of building VPS nodes for OpenVZ, Xen, VMware and containers; when you create random load across the VPSes you see a lot of randomised I/O, so the disk heads are jumping all over the place. Caching the disks helps to a degree, but the more VPSes you have on a single server, the lower the cache hit ratio – and in turn, the more disk thrashing.

    August 18, 2010 @ 3:43 pm | Reply
  29. Anonymous:

    @Mike – fifty customers on a single box – OMG. But then when I look at many providers, they have hundreds on a single box.

    Most operators run on SATA disks – you can do the maths on SATA I/O performance, and from what I am seeing most of them are running their disks off a single SATA channel.

    If you are lucky, they run software RAID1 – so you have a further I/O penalty which congests the bus; if you are really lucky, true hardware RAID1, but I don’t know many that do that. At least with hardware RAID1 you offload the RAID overhead to the RAID controller. But performance still plummets when you hit 20-30 VPSes on a box; by the time you hit 50, it’s even worse.

    Looking at the bottom of this page, there is an advert for Quickweb – 3Ware RAID10 – and at the price they are charging ($3.75/mo) they are clearly piling the customers onto the box. So despite having offloaded the RAID overhead, their disk performance is still going to be pathetic under load.

    The way to test performance is to order 2-3 VPSes from a provider, get them on the same node, then run a heavy dd on one or two of them and measure the performance hit on the remaining VPS. You’ll find that in most cases one user can bring down the whole node.

    August 18, 2010 @ 3:49 pm | Reply
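That test can be sketched in shell. This assumes you have shell access on two VPSes known to sit on the same node; the filenames are throwaway.

```shell
# On the second VPS (same node): generate sustained write load, e.g.
#   dd if=/dev/zero of=stress bs=64k count=64k conv=fdatasync
# Meanwhile on the first VPS, time a small probe write before and during
# that load. A large jump in elapsed time means the node isolates IO poorly.
time dd if=/dev/zero of=probe bs=4k count=256 conv=fdatasync 2>/dev/null
rm -f probe
```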
  30. Pascal:

    @LowEndAdmin disk scaling is true to some extent, but with SATA disks you’ll hit a wall pretty soon. And I don’t think you’ll often see SAS disks in this price range.

    August 18, 2010 @ 5:09 pm | Reply
  31. I have a custom Xen VPS with them. They have been really great — one of the best I have tried so far. Support is really good and their uptime is great. My VPS is on the NJ node.

    August 18, 2010 @ 5:19 pm | Reply
  32. kprice:

    I have two nodes with them. A 1GB Xen PV in Dallas on 3xntx and a 512MB Xen PV in LA on 3xnla. The LA box has had some minor hiccups lately, but all in all both have performed extremely well.

    dd if=/dev/zero of=test bs=64k count=16k; sync

    Gives me:

    Dallas running PowerDNS/MySQL and (Zenoss || OpenNMS) with 20+ targets
    1073741824 bytes (1.1 GB) copied, 7.32196 seconds, 147 MB/s

    LA running PowerDNS/MySQL for 1900+ domains
    1073741824 bytes (1.1 GB) copied, 10.9587 seconds, 98.0 MB/s

    Compared to my 2Host 1.5GB Xen PV on SAS drives
    1073741824 bytes (1.1 GB) copied, 5.33685 seconds, 201 MB/s

    The ThrustVPS VPS’s are performing great and their support is responsive and friendly. It appears some hardware kinks are being worked through but I’ve actually received *PROACTIVE* notifications of issues and upcoming maintenance windows! In my experience that’s nearly unheard of in the VPS world. After using so many VPS, dedicated server, and colocation providers who not only fail to notify you of issues, but even refuse to take ownership of them, I have to give these guys a few extra bonus points.

    August 18, 2010 @ 6:42 pm | Reply
  33. kprice:

    … although I do hope performance stays acceptable and I don’t end up eating my words. :-P

    August 18, 2010 @ 6:44 pm | Reply
  34. In my opinion, unless you are using Linode, your node is likely to be oversold and overloaded to some extent even if it’s Xen (they oversell bandwidth with 10TB per VPS). I buy small, affordable VPSes because I can use them for as long as they are OK; when performance starts to fluctuate, I trash them and go for another. I save more than $400/yr using smaller, cheaper VPSes. I keep Linode for more stable applications and uses.

    August 18, 2010 @ 7:18 pm | Reply
  35. Kprice : “I’ve actually received *PROACTIVE* notifications of issues and upcoming maintenance windows! In my experience that’s nearly unheard of in the VPS world”

    Receiving proactive notifications of issues and maintenance is good, but it also suggests they have a lot of trouble.
    Having no issues, or just a few, would be better for me.

    August 19, 2010 @ 2:55 am | Reply
  36. Jamie:


    No comment really. All I can say is that I/O has not been an issue since I was moved, and according to their monitoring system page, they’ve moved on to other nodes.

    August 19, 2010 @ 3:32 am | Reply
  37. “The promotion code entered has already been used” :/

    August 21, 2010 @ 9:12 pm | Reply
  38. scyllar:

    Not sure if it is the right place to ask, but what is the best deal for Thrust::VPS XEN at the moment? Thanks.

    August 22, 2010 @ 3:21 am | Reply
  39. promo code not working !!

    “The promotion code entered has already been used”

    August 22, 2010 @ 3:43 am | Reply
  40. me:

    yes, I cannot use the promo code either

    promo code not working !!

    “The promotion code entered has already been used”

    August 22, 2010 @ 6:09 am | Reply
  41. usman:

    Their Promo code is not working for me either. The same message : “The promotion code entered has already been used”

    August 22, 2010 @ 11:27 am | Reply
  42. scyllar:

    Don’t be that serious about coupons, I’d say. If you can’t use it, forget it.

    August 22, 2010 @ 11:37 am | Reply
  43. usman:

    @scyllar … I don’t see any other way of using that coupon than through the validate code button. If the webhost is playing a game by turning it into a puzzle, then they win.

    August 22, 2010 @ 11:43 am | Reply
  44. I’ll say maybe Thrust::VPS was running a very short promotion for LowEndBox readers. Not a biggie — there’s always another provider to move to later :)

    August 22, 2010 @ 11:48 am | Reply
  45. Rus:

    We allocated 100 uses of this voucher, and they all went in the space of a few days, which filled all the servers we allocated for this.

    August 22, 2010 @ 8:24 pm | Reply
  46. scyllar:

    @Rus, waiting for your next XEN promo ;)

    August 23, 2010 @ 2:10 am | Reply
  47. me:

    Running on NJ node 4xnnj and getting this:
    1073741824 bytes (1.1 GB) copied, 2.54956 seconds, 421 MB/s

    Great speeds so far… No issues so far (fingers crossed) with them…

    August 24, 2010 @ 12:02 am | Reply
  48. @rus and others,
    One way of getting good IO on the host machine is to have several hard drives – say, 4x 2TB drives set up as two RAID1 pairs, giving two independent 2TB arrays.

    That will give you twice the performance.

    Then split the VPSes between the two arrays: if you have 80 VPS accounts, put 40 accounts on each array.
    Users will have great IO.

    Most 2U chassis can fit 4 drives, or even 6 on some 3U models.

    Or pay slightly more for a 3U chassis.

    August 24, 2010 @ 9:06 am | Reply
  49. Good notes for when I get to testing my hardware. I’m doing 12x 1.5TB hardware RAID10 on mine and keeping it under 24 users. The majority of the space will be for snapshots. I am also going to include a small high-IOPS partition off mirrored SSD drives.

    September 8, 2010 @ 10:56 pm | Reply
  50. Two days ago all the servers had a kernel upgrade,
    and after that all the OpenVZ VPSes in LA keep going DOWN !!
    In two days, all my VPSes have gone down twice…

    Today they are down again…

    Support has not responded yet ???

    Please Rus, FIX THIS PROBLEM !!!

    September 26, 2010 @ 2:22 am | Reply
  51. Hello Rus,


    PING cp.thrustvps.com ( 56 data bytes
    Request timeout for icmp_seq 0
    Request timeout for icmp_seq 1
    Request timeout for icmp_seq 2
    148 bytes from if-12-0-0.core2.s9r-singapore.as6453.net ( Time to live exceeded
    Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
    4 5 00 5400 89d7 0 0000 39 01 c868

    Request timeout for icmp_seq 3
    Request timeout for icmp_seq 4
    148 bytes from if-12-0-0.core2.s9r-singapore.as6453.net ( Time to live exceeded
    Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
    4 5 00 5400 8484 0 0000 39 01 cdbb

    — cp.thrustvps.com ping statistics —
    6 packets transmitted, 0 packets received, 100.0% packet loss
    hostmaster:~ hacker$

    December 2, 2010 @ 6:19 am | Reply
