Starting a few months back, I've had a VPS from Slicehost. It's the cheapest one they've got, with only 256MB of RAM.
I had never worked on a VPS before; I had only used either dedicated physical servers in the company datacenter (at my previous job) or CPanel-based hosted accounts (for some other clients).
All in all, a VPS is just as one might expect: almost like a normal server only slower.
And the slowness is starting to bug me a bit, specifically the fact that I don't know how slow it is supposed to be.
The fixed technical details from Slicehost are that you get 256MB of RAM, 10GB of disk storage and 100GB of bandwidth.
Now there are two issues here: one that seems quite obvious and another I'll introduce later.
OK, the first problem is that you don't know how many CPU cycles you are going to get. Being a VPS means it runs on some beefy server (Slicehost says it's a quad-core server with 16GB of RAM).
According to Slicehost's FAQ:
Each Slice is assigned a fixed weight based on the memory size (256, 512 and 1024 megabytes). So a 1024 has 4x the cycles as a 256 under load. However, if there are free cycles on a machine, all Slices can consume CPU time.
This basically means that under load, each slice gets CPU cycles in proportion to the RAM it has (i.e. the price you pay): if a 256MB slice gets 1 share, the 512MB slice gets 2 shares, the 1GB slice gets 4 shares and so on.
The problem here is, of course, that one cannot be certain the server only hosts a maximum number of slices; Slicehost is clearly overselling, as top usually displays a "steal time" of around 20%.
So, assuming a machine is filled 100% with slices and there is no overselling, a 256MB slice gets 6.25% of a single CPU under load.
6.25% isn't much at all, but considering that the machine isn't always under load, the slice seems to get a decent amount of CPU nonetheless.
If we factor in the overselling issue, with 20% stolen by Xen and given to other VPSes, we get down to an even 5%.
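The back-of-the-envelope arithmetic behind those two percentages looks like this (a sketch using the host specs quoted above; the real Xen scheduler is more complex than a straight division):

```python
# CPU share estimate for a 256MB slice on a quad-core, 16GB host.
host_ram_mb = 16 * 1024   # 16GB host RAM
host_cores = 4            # quad-core server
slice_ram_mb = 256

slices_per_host = host_ram_mb // slice_ram_mb     # 64 slices fill the RAM
share_of_one_cpu = host_cores / slices_per_host   # fraction of a single core
print(share_of_one_cpu * 100)                     # 6.25 (%)

steal = 0.20                                      # ~20% steal time observed in top
print(share_of_one_cpu * (1 - steal) * 100)       # 5.0 (%)
```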
Now, this might not be as bad as it sounds CPU-wise: I've noticed Xen stealing time when my CPU share is basically idle anyhow, so maybe it doesn't affect my overall performance.
For example: ./pi_css5 1048576 takes about 10 seconds which is more than decent.
The bigger problem with a VPS seems to be the fact that hard drives aren't nearly as fast as RAM. And when you have a lot of processes competing for the same disk, it's bound to be slow.
What Slicehost doesn't mention is whether the "fixed weight" sharing rule they use for CPU cycles applies to disk access too. My impression is that it does.
After trying to use my VPS as a build server, I've watched it grind to a halt.
top shows something like this:
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 62.2%id, 20.9%wa, 0.0%hi, 0.0%si, 16.9%st
but the load average for a small build is something like
load average: 1.73, 2.06, 1.93
and it easily goes to 3, 4 and even 9! when I also try to do something else there.
So, looking at the information above, we can see that 62.2% of the time the CPU is just idle, while the actually "working" tasks (the 20.9%) are waiting for IO. The remaining 16.9% of CPU time is stolen by Xen and given to other virtual machines, and I don't think it really matters given that the load is clearly IO-bound.
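Those percentages can be pulled out of top's summary line programmatically; here is a quick sketch (it assumes the procps-style `Cpu(s):` format shown above):

```python
import re

def parse_top_cpu(line):
    """Parse top's 'Cpu(s):' summary line into a {field: percent} dict."""
    return {label: float(value)
            for value, label in re.findall(r"([\d.]+)%(\w+)", line)}

stats = parse_top_cpu(
    "Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 62.2%id, 20.9%wa,  0.0%hi,  0.0%si, 16.9%st"
)
print(stats["wa"], stats["st"])   # 20.9 16.9 -- IO-wait and steal time
```

A quick way to watch whether a build is IO-bound or CPU-starved without staring at top.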
And here lies the problem: just how fast might Slicehost's hard drives be? And how many per slice? Actually, more like: how many slices per drive?
From a simple test I made: a build that takes 30 seconds on my MacBook Pro (2.4GHz / 2GB RAM / 5400rpm laptop hard drive) takes about 20 minutes on the slice. This means the VPS is 40 times slower on IO-bound tasks.
Another large build that takes around 40 minutes on my laptop took 28 hours on the server, which respects the roughly 40-times-slower rule.
Now, considering the above numbers and a 20% steal time, I'd expect about 20% overselling of slices on a physical machine. Meaning, at 16GB per machine, roughly 76 slices of 256MB on one machine. Taking into account the 1:40 rule above for IO speed, this suggests they have about 2 hard drives in a server.
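The numbers above check out as rough arithmetic (the 20% overselling factor is my own extrapolation from the observed steal time, not anything Slicehost has confirmed):

```python
# Slowdown ratios from the two build timings above
small_build = (20 * 60) / 30     # 20 min on the slice vs 30 s locally
large_build = (28 * 60) / 40     # 28 h on the slice vs 40 min locally
print(small_build, large_build)  # 40.0 42.0 -- both close to 1:40

# Slice count on an oversold host: 16GB of RAM, ~20% oversold (assumed)
slices = (16 * 1024 / 256) * 1.2
print(int(slices))               # 76 -- the figure mentioned above
```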
It's certainly liberating to have complete control over a server. CPanel solutions just don't cut it when you need to run various applications on strange ports. Of course, the downside is that you also have to do all the administration tasks, secure it, etc.
Slicehost's services are very decent price-wise, and the "administrator" panel provides you with everything you need, even a virtual terminal that goes to tty1 of the machine (very handy if for some reason SSH doesn't work).
Even the smallest slice, which I'm using right now, has enough space, RAM and bandwidth for small tasks. If you just use it sparingly during business hours, the "fixed weight" sharing rule gives you enough CPU / IO for most tasks.
But for heavy usage, I think the solution is either to get a more expensive slice or start building your own machine.
IO-bound tasks are almost impossible to run due to the 1:40 slowdown noticed above. This means you'd need at least the 4GB slice to have them run decently. Of course, that's $250, compared to the $20 slice I have right now.
CPU doesn't seem to be a problem, at least for my kind of usage. It seems responsive enough during normal load and mostly idle under heavy load (so idle that Xen gives my CPU cycles to other virtual machines). Initially I was expecting the CPU to be a major problem when moving my build server there, but boy, was I wrong: the CPU limitations don't even register next to the IO limitations.
Getting 5% or more of a fast CPU is nothing like getting 2.5% of a much slower resource like the hard drive when you are compiling.
During the time I was considering the CPU to be my future bottleneck, I was thinking which option would be better: 2 x 256MB slices or a bigger 512MB slice.
According to their rules and offering, the two configurations are totally comparable. Even more, using their sharing rule, 2 x 256MB slices should get at least the same CPU cycles under load as the 512MB one. (Further emails from Slicehost's support led me to believe the rule might be oversimplified, but they didn't tell me in what way; I suspect the weight of the smallest slices might be even smaller, with the bigger slices getting more than their proportional share.)
So, if under load they get the same CPU cycles, it means that when the machine has CPU cycles to spare, I have 2 candidate slices to get those spares.
So the question was: for the ~5% price increase I would pay for 2 x 256MB slices compared to 1 x 512MB slice, will I get at least 5% more CPU cycles?
With the data I've computed, I'm still not certain it would happen. And the new question now becomes: would I get at least 5% more IO operations?
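For reference, the price comparison works out like this (assuming the 256MB slice is $20/month and the 512MB slice is $38/month; the $38 figure is my assumption, chosen to match the ~5% difference mentioned above):

```python
price_256 = 20   # $/month, from the post
price_512 = 38   # $/month -- an assumption, not stated in the post
increase = (2 * price_256 - price_512) / price_512 * 100
print(round(increase, 1))   # 5.3 -- the ~5% premium for 2 x 256MB
```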
The above post isn't a rant against Slicehost: I think they provide a decent service for the price. It is interesting, though, to see what kind of usage one can put on a VPS and what is better run on the server in the basement.
Update: well, isn't this interesting. A vertical upgrade to 512MB of RAM is another world entirely. Maybe the new VPS is on a less-loaded machine, but at first sight it's looking way better: the build that previously took 28 hours (fresh) now takes only 40 minutes for a small update. I'll try a clean build later this week and see how fast it is.
So it seems it wasn't only a problem of slow IO; it was also a big problem of not enough RAM leading to swap thrashing.
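A quick way to confirm swap thrashing after the fact is to look at swap usage in /proc/meminfo. Here's a sketch of the kind of check I could have run (the field names are the standard Linux ones; the sample numbers are made up to resemble a thrashing 256MB slice):

```python
def swap_used_mb(meminfo_text):
    """Return used swap in MB from /proc/meminfo-style text."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = int(rest.split()[0])   # values are in kB
    return (fields["SwapTotal"] - fields["SwapFree"]) // 1024

# Made-up sample: 256MB of RAM, 384MB of swap in use -- heavy thrashing
sample = "MemTotal: 262144 kB\nSwapTotal: 524288 kB\nSwapFree: 131072 kB"
print(swap_used_mb(sample))   # 384

# On the slice itself:
# print(swap_used_mb(open("/proc/meminfo").read()))
```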