VPS.NET announced the beta on Twitter (https://twitter.com/vpsnet/status/113858818764570624):
We are pleased to announce the new beta of our Japan cloud (Tokyo). We are offering old and new customers up to 9 beta nodes for free to test out the location.
Highlights include:
– The fastest, most technically advanced enterprise class SAN within all of our clouds
– Full support for Windows and Linux
– Full OnApp 2.2 support will be enabled during the beta
– It's free!
VPS.NET just launched their Tokyo cloud today in beta. According to them, this new cloud features an all-new SAN deployment intended to prevent incidents like their recent 50-hour downtime with customer data loss (and that wasn't the only one). SAN failures have been a major issue at VPS.NET during the last year, and many customers who didn't have their own backup strategy lost all their data to file corruption after a SAN crash (hint: don't rely on VPS.NET backups, unless they're R1Soft ones, and always have your own backup strategy!)
I just ordered a 6-node VPS with Debian 6.0 x64 to test things out. After removing grub-legacy and grub-common I did an apt-get update && apt-get upgrade to get my VPS up to date. The default /etc/apt/sources.list file points to mirrors.kernel.org, which is currently down, so change it to something like this:
deb http://ftp.dti.ad.jp/pub/Linux/debian/ squeeze main contrib
deb-src http://ftp.dti.ad.jp/pub/Linux/debian/ squeeze main contrib
(How did I come up with ftp.dti.ad.jp? Read: Using netselect-apt to find the fastest Debian mirror)
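In short, something along these lines should do it (a minimal sketch, not the exact commands from that post; netselect-apt probes the Debian mirror list and writes a sources.list into the current directory, and the squeeze argument plus the copy step are my assumptions):

root@dev:~# apt-get install netselect-apt
root@dev:~# netselect-apt squeeze    # probes mirrors, writes ./sources.list with the fastest one
root@dev:~# cp sources.list /etc/apt/sources.list
root@dev:~# apt-get update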
During apt-get upgrade I was getting all those nasty
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
kind of errors, which can be easily fixed by issuing a
root@dev:~# locale-gen en_US.UTF-8
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
and adding
LANG="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
to your /etc/default/locale file.
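If you'd rather not edit the file by hand, Debian's update-locale helper writes the same settings for you; a one-liner sketch (the end result in /etc/default/locale should be identical):

root@dev:~# update-locale LANG="en_US.UTF-8" LC_ALL="en_US.UTF-8"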
Time for a reboot and some testing! My 6-node VPS has 3 CPU cores assigned (I’m pasting only the output of the first one)
root@dev:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping        : 2
cpu MHz         : 2400.084
cache size      : 12288 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nonstop_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm
bogomips        : 4800.16
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
Then I used hdparm -tT to test cache/memory and disk read throughput (3 times in a row; -T times cached reads from memory while -t times buffered reads from the disk itself)
root@dev:~# for i in 1 2 3; do hdparm -tT /dev/xvda1; done

/dev/xvda1:
 Timing cached reads:   15204 MB in  1.99 seconds = 7646.24 MB/sec
 Timing buffered disk reads:   60 MB in  3.66 seconds = 16.41 MB/sec

/dev/xvda1:
 Timing cached reads:   15172 MB in  1.99 seconds = 7630.04 MB/sec
 Timing buffered disk reads:   42 MB in  3.00 seconds = 13.98 MB/sec

/dev/xvda1:
 Timing cached reads:   15196 MB in  1.99 seconds = 7641.91 MB/sec
 Timing buffered disk reads:   52 MB in  3.12 seconds = 16.68 MB/sec
And finally a typical hard disk write bench (also run 3 times in a row; conv=fdatasync forces dd to flush the data to disk before reporting, so the page cache doesn't inflate the numbers)
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 18.6979 s, 57.4 MB/s
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 11.5814 s, 92.7 MB/s
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.4096 s, 86.5 MB/s
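If you want to repeat the write test in one go like the hdparm loop above, a small sketch of my own (dd prints its stats to stderr, so the redirect plus tail keeps only the summary line, and the test file is removed between runs):

root@dev:~# for i in 1 2 3; do dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 2>&1 | tail -1; rm -f test; done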
Hard disk speed test results are not what I expected from "the fastest, most technically advanced enterprise class SAN" to be honest, but I have to give them credit that today is launch day (+ it's free for a month!) and maybe the HVs & SANs are overloaded creating new VPSs all the time (although iostat -d -x doesn't think so…)
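For reference, this is roughly how you can sample it (iostat ships with the sysstat package; -d reports per-device stats, -x adds extended columns like %util, and the trailing numbers below mean three reports at 5-second intervals, an interval of my choosing):

root@dev:~# apt-get install sysstat
root@dev:~# iostat -d -x 5 3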
I will be monitoring the VPS with sar for a few days and will update this post in case things change.
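One Debian-specific note: sar only has history to show if the sysstat collector cron job is running, and it ships disabled. A quick sketch, assuming the stock squeeze packaging:

root@dev:~# sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat    # enable the 10-minute sadc cron job sar reads from
root@dev:~# /etc/init.d/sysstat restart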
Although disk I/O performance IS important (you want your website to load fast in order for Google to rank you higher), the main issue with VPS.NET was downtime & data loss caused by SAN failures. If the new SANs turn out to be stable and manage to retain data integrity even in case of a failure, then I'm sure VPS.NET will catch up with the competition again.
Update 19 Sep. 2011:
Five days after posting my initial measurements, I retested everything to see if things got better or worse now that the cloud is sold out and no more activations are being performed.
Here are today's results:
root@dev:~# for i in 1 2 3; do hdparm -tT /dev/xvda1; done

/dev/xvda1:
 Timing cached reads:   14980 MB in  1.99 seconds = 7533.07 MB/sec
 Timing buffered disk reads:   44 MB in  3.13 seconds = 14.07 MB/sec

/dev/xvda1:
 Timing cached reads:   14944 MB in  1.99 seconds = 7514.73 MB/sec
 Timing buffered disk reads:   56 MB in  3.23 seconds = 17.32 MB/sec

/dev/xvda1:
 Timing cached reads:   14962 MB in  1.99 seconds = 7523.54 MB/sec
 Timing buffered disk reads:   56 MB in  3.22 seconds = 17.37 MB/sec
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 13.4729 s, 79.7 MB/s
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.2284 s, 87.8 MB/s
root@dev:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.7171 s, 84.4 MB/s
root@dev:~# sar
Linux 2.6.32-5-xen-amd64 (dev2)     09/19/2011     _x86_64_     (3 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
12:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
12:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
12:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
12:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
12:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
01:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
02:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
03:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:35:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
04:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
04:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
05:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:35:01 AM     all      0.01      0.00      0.01      0.00      0.00     99.98
06:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
06:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:25:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:35:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:45:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
07:55:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
08:05:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
08:15:01 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
08:25:01 AM     all      0.01      0.00      0.24      1.55      0.00     98.19
Average:        all      0.00      0.00      0.01      0.03      0.00     99.96
Disk I/O speed seems to be stable at around 80MB/s, which counts as slightly above average going by the stats in the VPS Disk I/O test thread on WebHostingTalk, but it's certainly not something to brag about. For comparison, the VPS this blog is hosted on consistently gives me >300MB/sec:
root@hydrogen [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.50145 seconds, 307 MB/s
Again, I can't stress enough that VPS.NET ultimately needs to find a stable and reliable storage solution to deploy the rest of their clouds with. Disk I/O speed is of secondary importance. It's been about a year since SAN 2.0, which was supposed to resolve all storage issues, was introduced, yet the issues remain and a new SAN vendor had to be found. Let's hope that Nexenta will do the trick this time, because this distributed, amazing, multi-path, enterprise, redundant, super-fast (…you get my point) SAN technology VPS.NET is counting on has hurt their reputation and potential many times. VPS.NET should be the leader among VPS providers given their innovative product, ease of use and the technology behind it; instead they have to hear about their clients losing data again and again because some SAN crashed.
George,
I wanted to get in touch with you and thank you for writing a review. How are things going so far? Are you seeing any kind of speed increase/decrease now that the cloud is closed for new VMs?
Hello Terry,
Disk I/O speed today and yesterday seems to be the same (~80MB/sec) as on launch day, no matter what time of day I measure. I'm going to update the post tomorrow with my newest measurements and will let you know via Twitter.