Thrust::VPS - $6.90 1GB OpenVZ VPS Exclusive Offer
Aug 18, 2010 @ 6:45 am
Rus from Thrust::VPS has provided us with a new exclusive coupon code for LowEndBox readers. Use code LOWENDVZ to get a 37% recurring discount, which brings their Mystic OpenVZ plan down to $6.90/month. You get:
- 1024MB guaranteed/2048MB burstable memory
- 30GB storage
- 1000GB/month data transfer
- OpenVZ/SolusVM
Yes it’s another 1GB memory offer :) Their plans are available in either New Jersey, Dallas or Los Angeles (those are direct order links).
I have had one of their OpenVZ VPSes in LA for a while. Uptime is good (no reboots for 2 months). Performance is passable; I am running PHP and RabbitMQ on it, using just ~130MB of memory. However, the node I am on has pretty bad IO wait.
Can you explain the "pretty bad IO wait" statement?
Thanks.
Do something like `dd if=/dev/zero of=test bs=64k count=16k; sync` and watch "wa" climb in top. I just tested, and writing a 1GB file plus sync took almost 2 minutes.
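For anyone who wants to reproduce this, here is a minimal sketch of the same test with a way to watch iowait while it runs (the test filename is arbitrary; run `vmstat` from a second shell):

```
# time the 1GB write-and-sync test described above
time sh -c 'dd if=/dev/zero of=test bs=64k count=16k; sync'

# in a second terminal, watch the "wa" (iowait) column climb
vmstat 1
# or run top and watch the %wa field

rm test   # clean up the 1GB test file afterwards
```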
Could be a bad node.
I have one on node 1 (OpenVZ).
100% uptime for two months.
Very, very good.
I have not tried their OpenVZ, but their Xen HVM is great and pretty stable, apart from the hiccups that everyone experienced on the 4XNLA and 6XNLA nodes in LA.
I will definitely try their OpenVZ
Note:
The nodes are Xeon 5620s with 48GB of RAM.
“The promotion code you entered has been applied to your cart but no items qualify for the discount yet – please check the promotion terms”
:(
OK, I get it. This is for NEW customer accounts ONLY, not current customers.
The reason for the heavy IO wait is that they pile way too many VPSes onto a single machine. Whilst they may dimension the RAM and disk space sufficiently to not oversell too badly, the IO is shot.
@lowendadmin. I'm guessing you are on 1.vz.la or 2.vz.la, in which case you should have gotten emails from us about a complimentary move to a new node which has much faster I/O.
This order is for just new signups rather than existing customers
@Rus — may very well be. Need to dig out that email from the trashbin as it can’t be found in WHMCS.
I just bought a 1GB OpenVZ VPS with their DEC2010 special to test them out.
I'm on 3.vz.la and there is definitely something wrong. I opened my fresh VM with nothing installed and it froze for 20 seconds when I typed "ls".
The usual 1GB dd test showed around 50MB/s.
Update:
I just had a talk with Rus and he confirmed it was an iowait problem. He is moving the VPS to another node.
I also ordered a Xen VPS from them for testing, and so far it doesn't seem to suffer from the same iowait problem.
> This order is for just new signups rather than existing customers
:(
So existing customers will not get this offer if they want to sign up for new VPSes? If so, too bad :(
@Wing Yes, that's right. This offer is for NEW CUSTOMERS only. This is disappointing for me too.
We made a conscious choice for this to be new signups only, as we already offer some of the best value for money in the industry, especially on our Xen HVM plans. Unfortunately, a small segment of people have abused coupons in the past, so we have to limit this to new customers to avoid that.
@Rus Thanks for the clarification. I wish it was the other way around
@LEA you forgot to tag this as EXCLUSIVE
Lol, nodes with 48GB of RAM definitely mean they pile a huge number of VPSes onto each node, resulting in very slow disk I/O.
@Pascal: Precisely, it's insane.
I/O wait was shot on 1.vz.la. I got moved to 3.vz.la and everything is perfect. I can highly recommend Thrust::VPS.
Well, I wouldn't ever trust a provider who advertises nodes with a minimum of 48GB of RAM. That might sound cool to newbies, but in reality it just means bad disk I/O.
If your disk I/O is good now, it probably means the node isn't fully loaded yet.
But also take into consideration that you can scale IO in a similar fashion to the way you scale CPU: RAID it across multiple drives and you get instant results. 48GB of RAM onboard certainly means server-grade boxes, and I do hope they are also putting server-grade disk IO systems in there.
@LEB: That's not 100% true; each RAID level has different performance characteristics. If you are going to pile hundreds of VPSes onto a single box then you need multiple disk controllers. But then the next problem appears, which is bus contention.
Here's some maths for you:
According to Thrust's website you get 1.2GHz of CPU on their Mystic product.
They also state: Dual 4 Core Xeon E55xx/E56xx.
Let's assume an E55xx at 3.2GHz. I really doubt they use that, but I am being generous.
Eight cores at 3.2GHz each = 25.6GHz of total capacity.
If you have, say, 50 of the Mystic package, that's 60GHz sold; with 100 VPSes on the box, 120GHz sold. But there is only 25.6GHz available.
The issue is not just about overselling disk space or RAM. It's about CPU overselling, disk I/O overselling and bus overselling.
You have to look at all metrics.
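For reference, the same arithmetic as a quick shell sketch, using the figures from the comment above (the 3.2GHz clock and the VPS counts are the commenter's assumptions, not published specs):

```
# back-of-the-envelope CPU oversell check
cores=8; ghz_per_core=3.2     # dual quad-core at an assumed 3.2GHz
sold_per_vps=1.2              # advertised Mystic CPU share in GHz
vps_count=100

capacity=$(echo "$cores * $ghz_per_core" | bc)   # 25.6 GHz available
sold=$(echo "$vps_count * $sold_per_vps" | bc)   # 120.0 GHz sold
echo "capacity: ${capacity} GHz, sold: ${sold} GHz"
echo "oversell ratio: $(echo "scale=2; $sold / $capacity" | bc)x"   # ~4.68x
```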
@Jamie – and when that box gets full up – then what? Migrate again?
@Pascal – you are 100% right
Crap, and I just signed up with them yesterday….. :/
The problem with disk i/o is that you can take measures to ensure that what's available is reasonably fast – fast disks on a good RAID card, properly configured – but there are still no effective ways of ensuring *consistent* disk performance for the end-user. I'd rather have a consistent, say, 30MBps read/write speed (not spectacular, but pretty average for older 80GB drives, and perfectly adequate for most circumstances) than 100MBps read/write one moment and 3MBps the next. You can prioritize disk access to a degree, but it doesn't work nearly as well as one might hope.
(I suspect that if you’re going to put, say, fifty VPS customers on one server with six or eight drives, you might actually be better served by using them as bare disks than one giant RAID array, with 7-10 customers on each drive.)
I was discussing poor i/o with a (non-low-end) VPS provider recently, and their “solution” was to offer to migrate me to a new node, after they reluctantly admitted that they’d observed some horrible i/o performance on the one I was on. Couldn’t they just leave me where I was and *fix* the i/o issue? Not really, they said; it’s much easier to move people who complain. Go figure.
@Anonymous — what I said was that having 48GB on a server box does not necessarily mean the IO is going to be an issue, and not specific in Thrust::VPS’s case — I don’t have an idea what kind of IO system their servers have.
As for the CPU guarantee calculation, the node I am on has an E5620. I suspect they put hyperthreading into the calculation (cheating, I know). So 2.66GHz x 16 = 42.56GHz. That still falls short of 48 x 1.2GHz = 57.6GHz (assuming they have 48 of the 1GB Mystic to fill that 48GB box), but it is not that far off. Of course, this assumes Thrust::VPS does not oversell on memory, which I find hard to believe myself.
At the end of the day, CPU is rarely the issue for a budget VPS. Most out-of-the-box software is configured for minimal RAM use, which shifts the pressure onto disk IO (default MySQL, for example). People also use these boxes as though they are local machines and issue heavy commands without nice/ionice-ing them (see the sketch below).
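A minimal sketch of the nice/ionice idea (the backup command and paths are made-up examples; ionice's idle class only takes effect with the CFQ disk scheduler):

```
# run a heavy job politely on a shared box:
# nice -n 19 lowers CPU priority, ionice -c3 puts disk access in the idle class
ionice -c3 nice -n 19 tar czf /root/backup.tar.gz /var/www
```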
@Mike — I think OpenVZ by nature is just not very good with IO accounting to give containers consistent IO. One rogue VE can probably cause the entire box to fall. You can manually set IO priority, but I am not sure whether it has credit based scheduler like Xen.
There are also other tricks with Xen based VPS. Putting swap and root partition into different physical RAID for example. But for VZ, hmm I am not sure what can be done other than active monitoring and boot off rogue VEs.
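For the per-container IO priority mentioned above, OpenVZ's vzctl does expose an ioprio setting (values 0-7, default 4, best-effort via the host's CFQ scheduler; container ID 101 here is hypothetical):

```
# give container 101 a below-average disk priority and persist it
vzctl set 101 --ioprio 2 --save
```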
@LEB – Hyperthreads must NEVER be included in the calculation.
The I/O contention is always the problem – be it disk, network interface, bus performance, memory I/O performance, etc
Big boxes in the VPS business are bad news. Works great for the operator in terms of being able to line their pockets but for the customer, it just sucks badly.
I have been involved with extensive lab testing of building VPS nodes for OpenVZ, Xen, VMware and containers, and when you create random load across the VPSes you see a lot of randomised I/O, with the disk heads jumping all over the place. Caching the disks helps to a degree, but the more VPSes you have on a single server, the lower the cache hit ratio, and in turn the more disk thrashing.
@Mike – fifty customers on a single box – OMG. But then when I look at many providers, they have hundreds on a single box.
Most operators run on SATA disks. You can do the maths on SATA I/O performance, and from what I am seeing, most of them are using a single SATA channel to hang their disks off.
If you are lucky, they run software RAID1, so you have a further I/O penalty which congests the bus even more; if you are really lucky, true hardware RAID1, but I don't know many that do that. At least with hardware RAID1 you offload the RAID overhead to the RAID controller. But performance still plummets when you hit 20-30 VPSes on a box, and by the time you hit 50, it's even worse.
Looking at the bottom of this page, there is an advert for Quickweb with 3Ware RAID10. At the price they are charging, $3.75/mo, they are clearly piling the customers onto the box, so despite having offloaded the RAID overhead, their disk performance is still going to be pathetic under load.
The way to test performance is to order 2-3 VPSes from a provider, get them on the same node, then run a heavy dd on one or two of the VPSes and measure the performance hit on the remaining VPS. You'll find that in most cases one user can bring down the whole node.
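A rough sketch of the two-VPS test Jamie describes (filenames are arbitrary; time the probe on a quiet node first to get a baseline):

```
# on VPS A (the noisy neighbour): generate sustained write load
while true; do dd if=/dev/zero of=noise bs=1M count=1024; sync; done

# on VPS B (same node): time the usual probe and compare against the baseline
time sh -c 'dd if=/dev/zero of=probe bs=64k count=16k; sync'
```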
@LowEndAdmin disk scaling is true to some extent, but with SATA disks you’ll reach a wall pretty soon. And I don’t think you’ll often see SAS disks in this price range.
I have a custom VPS on Xen with them. They have been really great! One of the best I have tried so far. Support is really good and their uptime is great. My VPN is on the NJ node.
I have two nodes with them. A 1GB Xen PV in Dallas on 3xntx and a 512MB Xen PV in LA on 3xnla. The LA box has had some minor hiccups lately, but all in all both have performed extremely well.
dd if=/dev/zero of=test bs=64k count=16k; sync
Gives me:
Dallas running PowerDNS/MySQL and (Zenoss || OpenNMS) with 20+ targets
———————————————————————-
1073741824 bytes (1.1 GB) copied, 7.32196 seconds, 147 MB/s
LA running PowerDNS/MySQL for 1900+ domains
——————————————-
1073741824 bytes (1.1 GB) copied, 10.9587 seconds, 98.0 MB/s
Compared to my 2Host 1.5GB Xen PV on SAS drives
———————————————–
1073741824 bytes (1.1 GB) copied, 5.33685 seconds, 201 MB/s
The ThrustVPS VPS’s are performing great and their support is responsive and friendly. It appears some hardware kinks are being worked through but I’ve actually received *PROACTIVE* notifications of issues and upcoming maintenance windows! In my experience that’s nearly unheard of in the VPS world. After using so many VPS, dedicated server, and colocation providers who not only fail to notify you of issues, but even refuse to take ownership of them, I have to give these guys a few extra bonus points.
… although I do hope performance stays acceptable and I don’t end up eating my words. :-P
In my opinion, unless you are using Linode, your node is likely to be oversold and overloaded to some extent even if it's Xen (they oversell bandwidth at 10TB per VPS). I buy small/affordable VPSes because I can use them for as long as they are OK; when their performance starts jumping around, I trash them and go for another. I save more than $400/yr using smaller/cheaper VPSes. I keep Linode for more stable applications and uses.
Kprice : “I’ve actually received *PROACTIVE* notifications of issues and upcoming maintenance windows! In my experience that’s nearly unheard of in the VPS world”
Receiving proactive notifications of issues & maintenance is good, but it also sounds like they have a lot of trouble.
Having no issues, or just a few, would be better for me.
@Anonymous
No comment really. All I can say is that I/O has not been an issue since I was moved, and according to their monitoring system page, they've moved on to other nodes.
“The promotion code entered has already been used” :/’
Not sure if it is the right place to ask, but what is the best deal for Thrust::VPS XEN at the moment? Thanks.
promo code not working !!
“The promotion code entered has already been used”
Yes, I cannot use the promo code either.
promo code not working !!
“The promotion code entered has already been used”
Their promo code is not working for me either. Same message: "The promotion code entered has already been used".
Don’t be that serious about coupons, I’d say. If you can’t use it, forget it.
@scyllar … I don't see any other method of using that coupon than the validate code button. If the webhost has played a game by turning it into a puzzle, then they win.
I'd say maybe Thrust::VPS is running a very short promotion for LowEndBox readers. Not a biggie; there's always another provider to move to later :)
We allocated 100 uses of this voucher and they all went in the space of a few days, which filled all the servers we had allocated for this.
@Rus, waiting for your next XEN promo ;)
Running on NJ node 4xnnj and getting this:
1073741824 bytes (1.1 GB) copied, 2.54956 seconds, 421 MB/s
Great speed so far… no issues yet (fingers crossed) with them…
@rus and others,
One way of having good IO on the host machine is to use several hard drives: say 4x 2TB drives, built as two RAID1 pairs of 2TB each.
That will give you twice the performance in aggregate.
Then split the VPSes between the 2 arrays: if you have 80 VPS accounts, put 40 accounts on each array.
Users will have great IO, as in the sketch below.
Most 2U chassis can fit 4 drives, and some 3U chassis even 6, so you can also pay slightly more for a 3U.
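A hypothetical sketch of that layout using Linux software RAID (device names are assumptions; a hardware controller achieves the same split without mdadm):

```
# build two independent RAID1 pairs from four drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# give each array its own filesystem, then place ~40 containers on each
mkfs.ext4 /dev/md0
mkfs.ext4 /dev/md1
```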
Good notes for when I get to testing my hardware. I'm doing 12x 1.5TB hardware RAID10 on mine and keeping it under 24 users. The majority of the space will be for snapshots. I'm also going to include a small high-IOPS partition off mirrored SSD drives.
Two days ago, all the servers got a kernel upgrade.
Since then, all the VPSes in LA (OpenVZ) keep going DOWN!!
In 2 days, every one of my VPSes has gone down twice…
WHY IS THAT???
They are down again today…
Has support still not responded???
Please RUS, FIX THIS PROBLEM!!!
Hello Rus,
Is SolusVM DOWN?
PING cp.thrustvps.com (78.129.220.17): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
148 bytes from if-12-0-0.core2.s9r-singapore.as6453.net (116.0.84.50): Time to live exceeded
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 5400 89d7 0 0000 39 01 c868 183.91.77.123 78.129.220.17
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
148 bytes from if-12-0-0.core2.s9r-singapore.as6453.net (116.0.84.50): Time to live exceeded
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 5400 8484 0 0000 39 01 cdbb 183.91.77.123 78.129.220.17
^C
— cp.thrustvps.com ping statistics —
6 packets transmitted, 0 packets received, 100.0% packet loss
hostmaster:~ hacker$
It works fine here.