joedicastro / vps-comparison
- Wednesday, May 3, 2017, 03:11:47
A comparison between some VPS providers. It uses Ansible to perform a series of automated benchmark tests on the VPS servers that you specify. It allows anyone who wants to compare these results with their own to reproduce the tests. All the test results are available in order to provide independence and transparency.
WARNING: A work in progress!
A comparison between some VPS providers that have data centers located in Europe.
Initially I’m comparing only entry plans, below 5$ monthly.
What I'm trying to show here is basically a lot of the things that I would want to know before signing up with any of them. If I save you a few hours of research, like the ones I spent, I'll be glad!
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Foundation | 1999 | 2003 | 2011 | 2013 | 2014 |
Headquarters | Roubaix (FR) | Galloway, NJ (US) | New York (US) | Paris (FR) | Matawan, NJ (US) |
Market | 3rd largest | 2nd largest | |||
Website | OVH | Linode | DigitalOcean | Scaleway | Vultr |
Notes:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Credit Card | Yes | Yes | Yes | Yes | Yes |
PayPal | Yes | Yes | Yes | No | Yes |
Bitcoin | No | No | No | No | Yes |
Affiliate/Referral | Yes | Yes | Yes | No | Yes |
Coupon Codes | Yes | Yes | Yes | Yes | Yes |
Note:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
European data centers | 3 | 2 | 3 | 2 | 4 |
Documentation | Docs | Docs | Docs | Docs | Docs |
Doc. subjective valuation | 6/10 | 9/10 | 9/10 | 6/10 | 8/10 |
Uptime guaranteed (SLA) | 99.95% | 99.9% | 99.99% | 99.9% | 100% |
Outage refund/credit (SLA) | Yes | Yes | Yes | No | Yes |
API | Yes | Yes | Yes | Yes | Yes |
API Docs | API Docs | API Docs | API Docs | API Docs | API Docs |
Services status page | Status | Status | Status | Status | Status |
Support Quality | |||||
Account Limits | 10 instances | Limited instances (e.g. 50 VC1S ) | 10 instances | ||
Legal/ToS | ToS | ToS | ToS | ToS | ToS |
Note:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Subjective control panel evaluation | 5/10 | 6/10 | 8/10 | 5/10 | 9/10 |
Graphs | Traffic, CPU, RAM | CPU, Traffic, Disk IO | CPU, RAM, Disk IO, Disk usage, Bandwidth, Top | No | Monthly Bandwidth, CPU, Disk, Network |
Subjective graphs valuation | 5/10 | 8/10 | 9/10 | 0/10 | 8/10 |
Monthly usage per instance | No | Yes | No | No | Bandwidth, Credits |
KVM Console | Yes | Yes (Glish) | Yes (VNC) | Yes | Yes |
Power management | Yes | Yes | Yes | Yes | Yes |
Reset root password | Yes | Yes | Yes | No | No |
Reinstall instance | Yes | Yes | Yes | No | Yes |
First provision time | Several hours | <1 min | <1 min | a few minutes | a few minutes |
Median reinstall time | ~12.5 min | ~50 s | ~35 s | N/A | ~2.1 min |
Upgrade instance | Yes | Yes | Yes | No | Yes |
Change Linux Kernel | No | Yes | CentOS | Yes | No |
Recovery mode | No | Yes | Yes | Yes | Boot with custom ISO |
Tag instances | No | Yes | Yes | Yes | Yes |
Responsive design (mobile UI) | No | No | No | No | Yes |
Android App | Only in France | Yes | Unofficial | No | Unofficial |
iOS App | Yes | Yes | Unofficial | No | Unofficial |
Notes:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Linux | Arch Linux, CentOS, Debian, Ubuntu | Arch, CentOS, Debian, Fedora, Gentoo, OpenSUSE, Slackware, Ubuntu | CentOS, Debian, Fedora, Ubuntu | Alpine, CentOS, Debian, Gentoo, Ubuntu | CentOS, Debian, Fedora, Ubuntu |
BSD | No | No | FreeBSD | No | FreeBSD, OpenBSD |
Windows | No | No | No | No | Windows 2012 R2 (16$) |
Other OS | No | No | CoreOS | No | CoreOS |
Note:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Docker | Yes | No | Yes | Yes | Yes |
Stacks | LAMP | No | LAMP, LEMP, ELK, MEAN | LEMP, ELK | LAMP, LEMP |
Drupal | Yes | No | Yes | Yes | Yes |
WordPress | Yes | No | Yes | No | Yes |
Joomla | Yes | No | No | No | Yes |
Django | No | No | Yes | No | No |
RoR | No | No | Yes | No | No |
GitLab | No | No | Yes | Yes | Yes |
Node.js | No | No | Yes | Yes | No |
E-Commerce | PrestaShop | No | Magento | PrestaShop | Magento, PrestaShop |
Personal cloud | Cozy | No | NextCloud, ownCloud | ownCloud, Cozy | NextCloud, ownCloud |
Panels | Plesk, cPanel | No | No | Webmin | cPanel (15$), Webmin |
Notes:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
ISO images library | No | No | No | No | Yes |
Custom ISO image | No | Yes | No | Yes | Yes |
Install scripts | No | StackScripts | Cloud-init | No | iPXE |
Preloaded SSH keys | Yes | No | Yes | Yes | Yes |
Notes:
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
2FA | Yes | Yes | Yes | No | Yes |
Restrict access IPs | Yes | Yes | No | No | No |
Account Login Logs | No | Partial | Yes | No | No |
SSL Quality | A- | A+ | A+ | A | A |
DNS Spy Report | B | B | B | B | C |
Send root password by email | Yes | No | No | No | No |
Account password recovery | Link | Link | Link | Link | Link |
Notes:
OVH | Linode | DigitalOcean | Scaleway | Vultr | Vultr | |
---|---|---|---|---|---|---|
Name | VPS SSD 1 | Linode 1024 | 5bucks | VC1S | 20GB SSD | 25GB SSD |
Monthly Price | 3.62€ | 5$ | 5$ | 2.99€ | 2.5$ | 5$ |
CPU / Threads | 1/1 | 1/1 | 1/1 | 1/2 | 1/1 | 1/1 |
CPU model | Xeon E5v3 2.4GHz | Xeon E5-2680 v3 2.5GHz | Xeon E5-2650L v3 1.80 GHz | Atom C2750 2.4 GHz | Intel Xeon 2.4 GHz | Intel Xeon 2.4 GHz |
RAM | 2 GB | 1 GB | 512 MB | 2 GB | 512 MB | 1 GB |
SSD Storage | 10 GB | 20 GB | 20 GB | 50 GB | 20 GB | 25 GB |
Traffic | ∞ | 1 TB | 1 TB | ∞ | 500 GB | 1 TB |
Bandwidth (In / Out) | 100/100 Mbps | 40/1 Gbps | 1/10 Gbps | 200/200 Mbps | 1/10 Gbps | 1/10 Gbps |
Virtualization | KVM | KVM (Qemu) | KVM | KVM (Qemu) | KVM (Qemu) | KVM (Qemu) |
Anti-DDoS Protection | Yes | No | No | No | 10$ | 10$ |
Backups | No | 2$ | 1$ | No | 0.5$ | 1$ |
Snapshots | 2.99$ | Free (up to 3) | 0.05$ per GB | 0.02€ per GB | Free (Beta) | Free (Beta) |
IPv6 | Yes | Yes | Optional | Optional | Optional | Optional |
Additional public IP | 2$ (up to 16) | Yes | Floating IPs (0.006$/hour if inactive) | 0.9€ (up to 10) | 2$ (up to 2) / 3$ floating IPs | 2$ (up to 2) / 3$ floating IPs |
Private Network | No | Optional | Optional | No (dynamic IPs) | Optional | Optional |
Firewall | Yes (by IP) | No | No | Yes (by group) | Yes (by group) | Yes (by group) |
Block Storage | From 5€ - 50GB | No | From 10$ - 100GB | From 1€ - 50GB | From 1$ - 10GB | From 1$ - 10GB |
Monitoring | Yes (SLA) | Yes (metrics, SLA) | Beta (metrics, performance, SLA) | No | No | No |
Load Balancer | 13$ | 20$ | 20$ | No | High availability (floating IPs & BGP) | High availability (floating IPs & BGP) |
DNS Zone | Yes | Yes | Yes | No | Yes | Yes |
Reverse DNS | Yes | Yes | Yes | Yes | Yes | Yes |
Note:
All the numbers shown here can be found in the /logs folder of this repository. Keep in mind that I usually show averages of several iterations of the same test.
The graphs are generated with gnuplot directly from the tables of this README.org org-mode file. The tables are also generated automatically with a python script (/ansible/roles/common/files/gather_data.py) that gathers the data contained in the log files. To be able to add more tests without touching the script, the criteria used to gather the data and generate the tables are stored in a separate json file (/ansible/roles/common/files/criteria.json). The output of that script is a /logs/tables.org file that contains tables like this:
|- | | Do-5Bucks-Ubuntu | Linode-Linode1024-Ubuntu | Ovh-Vpsssd1-Ubuntu | Scaleway-Vc1S-Ubuntu | Vultr-20Gbssd-Ubuntu | Vultr-25Gbssd-Ubuntu |- | Lynis (hardening index) |59 | 67 | 62 | 64 | 60 | 60 | Lynis (tests performed) |220 | 220 | 220 | 225 | 230 | 231 |-
That does not seem like a table, but thanks to the awesome org-mode table manipulation features, just pressing the Ctrl-c Ctrl-c key combination turns it into this:
|-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------| | | Do-5Bucks-Ubuntu | Linode-Linode1024-Ubuntu | Ovh-Vpsssd1-Ubuntu | Scaleway-Vc1S-Ubuntu | Vultr-20Gbssd-Ubuntu | Vultr-25Gbssd-Ubuntu | |-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------| | Lynis (hardening index) | 59 | 67 | 62 | 64 | 60 | 60 | | Lynis (tests performed) | 220 | 220 | 220 | 225 | 230 | 231 | |-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------|
And finally, using a little more magic from org-mode, org-plot and gnuplot, that table automatically generates a graph like the ones shown here with only a few lines of text (see this file in raw mode to see how) and the Ctrl-c " g key combination over those lines. Thus, the only manual step is to copy/paste those tables from that file into this one; with only two key combinations per table/graph the job is almost done (you can move/add/delete columns very easily with org-mode).
There is another python script (/ansible/roles/common/files/clean_ips.py) that automatically removes any public IPv4/IPv6 address from the log files (only from those where it is needed).
Performance tests can be affected by locations, data centers and VPS host neighbors. This is inherent to the very nature of the VPS service and can vary very significantly between instances of the same plan. For example, during the tests performed for this comparison I found that, in one plan (not included here because it costs more than $5/mo), a new instance that would usually give a UnixBench index of about ~1700 only achieved an index of 629.8. That's a considerable amount of lost performance in a VPS server… for the same price! Performance can also vary over time, due to the VPS host neighbors. Because of this I discarded any instance that reported poor performance and only show “typical” values for a given plan.
I have chosen Ansible to automate the tests and collect information from the VPS servers because, once the roles are written down, it's pretty easy for anyone to replicate them and get their own results with little effort.
The first thing you have to do is edit the /ansible/hosts file to use your own servers. There are no real IPs in the provided template, but it serves as a guide on how to manage them. For example, in this server:
[digitalocean] do-5bucks-ubuntu ansible_host=X.X.X.X ansible_python_interpreter=/usr/bin/python3
You have to put your own server's IP there. The interpreter path is only needed when no Python 2 interpreter is available by default (as in Ubuntu). I'm also using group variables to declare the default user of a server, and I'm grouping servers by provider. So a complete example for a new provider, with a new instance running Ubuntu, would look like this:
[new_provider]
new_provider-plan_name-ubuntu ansible_host=X.X.X.X ansible_python_interpreter=/usr/bin/python3

[new_provider:vars]
ansible_user=root
You can add as many servers/providers as you want. If you are already familiar with Ansible, adjust the inventory file (/ansible/hosts) as you need.
Then you can start testing the servers/providers with Ansible by running the playbook, but it's a good idea to test the access first with a ping (from the /ansible folder):
$ ansible all -m ping
If it's the first time you SSH into a server, you will probably be asked to add it to the ~/.ssh/known_hosts file.
Then you can execute all the tasks on every server with:
$ ansible-playbook site.yml -f 6
The -f 6 option specifies how many forks to create to execute the tasks in parallel. The default is 5, but since I test 6 VPS plans here, I also use 6 forks.
You can also run only selected tasks/roles by using tags. You can list all the available tasks:
$ ansible-playbook site.yml --list-tasks
And run only the tags that you want:
$ ansible-playbook site.yml -t benchmark
All the roles are set to store the test logs in the /logs/ folder, using a /logs/server_name folder structure.
WARNING:
All the tests that I include here are as “atomic” as possible; that is, in every one of them I try to leave the server in a state as close as possible to the one it was in before the test, except that I keep the logs. By the way, the logs are stored in the /tmp folder intentionally, because they will disappear when you reboot the instance. There are three main reasons why I try to make the tests as atomic as possible instead of taking advantage of some common tasks and performing them only once:
Perhaps the only major drawback of this approach is that it consumes more time globally when you perform all the tests together.
All the instances were allocated in London (GB), except for OVH VPS SSD 1 in Gravelines (FR) and Scaleway VC1S in Paris (FR).
All the instances were running Ubuntu 16.04 LTS.
Currently Vultr's 20GB SSD plan is sold out and temporarily unavailable, so I only performed some tests (and some in a previous version) on an instance that I deleted before new ones became unavailable. I intend to retake those tests as soon as the plan is available again.
UnixBench, as described on its page:
The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple tests are used to test various aspects of the system’s performance. These test results are then compared to the scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. The entire set of index values is then combined to make an overall index for the system.
Keep in mind that this index is heavily influenced by raw CPU power and does not reflect other aspects, like disk performance, very well. In this index, more is better.
I only execute this test once because it takes some time (about 30-45 minutes depending on the server) and the variations between several runs are almost never significant.
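The way UnixBench combines the individual test results into one index can be sketched as follows. This is a minimal, illustrative Python sketch (not part of this repository's scripts, and the function and dictionary names are hypothetical): each test's raw score is divided by the score of a fixed baseline system and multiplied by 10, and the per-test indices are then combined with a geometric mean.

```python
from math import prod

def unixbench_index(results, baseline):
    """Combine per-test scores into an overall UnixBench-style index.

    results / baseline are dicts mapping test name -> raw score.
    Each raw score is divided by the baseline system's score and
    multiplied by 10; the overall index is the geometric mean of
    those per-test indices.
    """
    indices = [10.0 * results[t] / baseline[t] for t in results]
    return prod(indices) ** (1.0 / len(indices))
```

With this scheme a system that matches the baseline on every test scores exactly 10, and doubling every raw score doubles the index.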
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
UnixBench (index, 1 thread) | 1598.1 | 1248.6 | 1264.6 | 629.8 | 1555.1 | 1579.9 |
UnixBench (index, 2 threads) | | | | 1115.1 | | |
In this table I show the individual tests results that compose the UnixBench benchmark index.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD | |
---|---|---|---|---|---|---|---|
Dhrystone 2 using register variables | A | 2510.2 | 2150.0 | 2061.0 | 1057.9 | 2530.5 | 2474.5 |
Double-Precision Whetstone | B | 583.6 | 539.7 | 474.6 | 367.5 | 578.2 | 656.9 |
Execl Throughput | C | 1038.9 | 941.8 | 799.5 | 400.0 | 963.8 | 1027.8 |
File Copy 1024 bufsize 2000 maxblocks | D | 2799.5 | 1972.7 | 2222.5 | 1094.4 | 2775.3 | 2608.8 |
File Copy 256 bufsize 500 maxblocks | E | 1908.7 | 1286.2 | 1440.1 | 752.6 | 1888.8 | 1851.4 |
File Copy 4096 bufsize 8000 maxblocks | F | 3507.1 | 2435.6 | 2692.6 | 1729.9 | 3248.4 | 3212.1 |
Pipe Throughput | G | 1846.5 | 1472.1 | 1468.7 | 894.0 | 1813.6 | 1789.6 |
Pipe-based Context Switching | H | 744.0 | 623.2 | 597.2 | 60.3 | 739.0 | 746.3 |
Process Creation | I | 904.5 | 690.5 | 706.8 | 288.2 | 848.1 | 949.9 |
Shell Scripts (1 concurrent) | J | 1883.2 | 1442.0 | 1501.9 | 801.9 | 1787.8 | 1851.2 |
Shell Scripts (8 concurrent) | K | 1725.0 | 1144.4 | 1362.7 | 1221.8 | 1665.9 | 1679.1 |
System Call Overhead | L | 2410.1 | 2034.4 | 1955.6 | 1154.7 | 2461.0 | 2366.4 |
Notes:
Sysbench is a popular benchmarking tool that can test CPU, file I/O, memory, threads, mutex and MySQL performance. One of its key features is that it is scriptable and can perform complex tests, but here I rely on several well-known standard tests, basically to make them easy to compare with others that you can find across the web.
In this test the CPU verifies a given prime number with a brute-force algorithm that divides it by every number from 2 up to its square root. It's a classic CPU stress test: a more powerful CPU usually takes less time, thus less is better.
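The brute-force check described above can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not sysbench's actual C implementation; sysbench's CPU workload repeats essentially this trial division for every candidate up to the value given by its --cpu-max-prime option.

```python
def is_prime(n):
    # Trial division by every integer from 2 up to sqrt(n),
    # the same brute-force approach the sysbench CPU test uses
    # to verify each candidate number.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
```

The cost grows with the square root of the candidate, which is why raising the prime limit makes the test disproportionately heavier.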
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Sysbench CPU (seconds) | 31.922 | 37.502 | 39.080 | 46.130 | 30.222 | 30.544 |
This test measures memory performance: it allocates a memory buffer and reads/writes from it randomly until the whole buffer has been processed. In this test, more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Sysbench RAM rand read (MB/s) | 2279.750 | 1334.162 | 1262.542 | 1228.898 | 2146.132 | |
Sysbench RAM rand write (MB/s) | 2196.174 | 1310.624 | 1221.276 | 1181.516 | 2062.046 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Sysbench RAM rand read (IOPS) | 2334463 | 1366183 | 1292842 | 1258393 | 2197641 | |
Sysbench RAM rand write (IOPS) | 2248883 | 1342079 | 1250589 | 1209873 | 2111535 |
Here the file system is put to the test. It measures disk input/output operations with random reads and writes. The numbers are more reliable when the total file size is much greater than the amount of available memory, but due to the disk space limitations of some plans I had to restrict it to only 8GB. In this test, more is better.
Notes:
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Sysbench file rand read (MB/s) | 4.813 | 19.240 | 48.807 | 41.353 | Temp. unavailable | 23.022 |
Sysbench file rand write (MB/s) | 4.315 | 5.529 | 21.400 | 2.482 | Temp. unavailable | 17.510 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Sysbench file rand read (IOPS) | 1232 | 4925 | 12495 | 10586 | Temp. unavailable | 5984 |
Sysbench file rand write (IOPS) | 1105 | 1415 | 5478 | 635 | Temp. unavailable | 4482 |
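The throughput and IOPS tables above are two views of the same measurement, related through the I/O block size. The numbers are consistent with a 4KB block (e.g. 4.813 MB/s ≈ 1232 IOPS × 4KB); that block size is inferred from the figures, not stated in the logs. A small sketch of the conversion, with a hypothetical function name:

```python
def iops_from_throughput(mb_per_s, block_kb=4.0):
    # IOPS = throughput / block size. With throughput in MB/s and
    # the block size in KB, that is mb_per_s * 1024 / block_kb.
    return mb_per_s * 1024.0 / block_kb
```

For example, OVH's 4.813 MB/s random read corresponds to about 1232 IOPS, matching the second table.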
Here the test measures database performance. I used MySQL for these tests, but the results should also apply to MariaDB. More requests per second is better, while a lower 95th percentile is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
DB R/W (request/second) | 245.590 | 212.42 | 232.266 | 176.700 | 245.127 | 243.832 |
Request approx. 95th percentile (ms) | 203.210 | 242.100 | 218.490 | 268.086 | 203.410 | 205.786 |
fio is a benchmarking tool used to measure the performance of I/O operations, usually oriented to disk workloads, although you can use it to measure network, CPU and memory I/O as well. It's scriptable and can simulate complex workloads, but I use it here in a simple way to measure disk performance. In this test, more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Read IO (MB/s) | 3.999 | 111.622 | 581.851 | 266.779 | 249.672 | 244.385 |
Write IO (MB/s) | 3.991 | 93.6 | 35.317 | 84.684 | 192.748 | 194.879 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Read IOPS | 999 | 27905 | 145487 | 66694 | 62417 | 60913 |
Write IOPS | 997 | 23399 | 8828 | 21170 | 48186 | 48719 |
A classic: the ubiquitous dd tool has been used forever by tons of sysadmins for diverse purposes. I use here a pair of well-known quick tests to measure CPU and disk performance. They are not very reliable (e.g. the disk test is only a sequential operation), but they are good enough to get an idea, and I include them here because many people use them. In the CPU test less is better, and the opposite in the disk test.
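The numbers in the tables below come from dd's own summary line. Assuming the GNU coreutils dd output format (e.g. `1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.684 s, 400 MB/s`, printed on stderr), the elapsed time and throughput can be extracted with a short sketch like this (the function name is hypothetical, not from this repository's scripts):

```python
import re

# GNU dd prints its summary on stderr, e.g.:
#   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.684 s, 400 MB/s
# This pulls out the elapsed seconds and the throughput in MB/s.
SUMMARY = re.compile(r"copied,\s*([\d.]+)\s*s,\s*([\d.]+)\s*([MG]B)/s")

def parse_dd_summary(line):
    m = SUMMARY.search(line)
    if not m:
        return None
    seconds = float(m.group(1))
    speed = float(m.group(2)) * (1000 if m.group(3) == "GB" else 1)
    return seconds, speed  # (elapsed seconds, MB/s)
```

Older dd versions format the summary slightly differently, so a real log parser would need to tolerate both variants.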
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
dd CPU (seconds) | 2.684 | 2.935 | 3.292 | 4.199 | 2.667 | 2.715 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
dd IO (MB/s) | 550 | 467.4 | 702.6 | 163.6 | 477 | 458.2 |
This test measures the time in seconds that a server takes to compile the MariaDB server. This is not a synthetic test and gives you a more realistic workload to compare them. It also helps to reveal the flaws that some plans have due to their limitations (e.g. CPU power in Scaleway and available memory in DO). In this test, less is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Compile MariaDB (seconds) | 1904.7 | 3070.2 | out of memory | 5692.7 | Temp. unavailable | 2069.3 |
Notes:
In this test the measure is the frames per second achieved transcoding a video with ffmpeg (or avconv in Debian). This is also a more realistic way to compare them, because it is a real workload (even if it is not usually performed on VPS servers) that stresses the CPU heavily while also making good use of the disk and memory. In this test, more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
FPS | 5.9 | 4.7 | out of memory | 3.2 | Temp. unavailable | 5.6 |
Note:
This test tries to measure the average network speed downloading a 100 Mbit file, and the average sustained speed downloading a 10 GB file, from various locations. I include some files that are on the same provider network as the plans compared here, to see how much influence this factor has (remember that Scaleway belongs to Online.net). The bash script used includes more files and locations, but I only use some of them to limit the monthly bandwidth usage of the plans. In this test, more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD | |
---|---|---|---|---|---|---|---|
Cachefly CDN | A | 11.033 | 84.367 | 123 | 82.567 | Temp. unavailable | 182.333 |
DigitalOcean (GB) | B | 11.9 | 90.767 | 137 | 79.633 | 148.333 | |
LeaseWeb (NL) | C | 11.9 | 100.067 | 87.867 | 105.667 | 162.333 | |
Linode (GB) | D | 11.9 | 110.667 | 125.333 | 77.233 | 134.667 | |
Online.net (FR) | E | 11.9 | 17.90 | 66.200 | 110.3 | 73.267 | |
OVH (FR) | F | 12 | 43.10 | 53.9 | 41.8 | ||
Softlayer (FR) | G | 11.8 | 34.067 | 77.267 | 52.1 | 79.533 | |
Vultr (GB) | H | 11.9 | 32.867 | 121.667 | 60.2 | 195 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
DigitalOcean (GB) | 89.7 | 145.667 | 113 | Temp. unavailable | 146 | |
LeaseWeb (NL) | 98.7 | 13.6 | 109.967 | 174.333 | ||
Linode (GB) | 109.667 | 126.333 | 111.333 | 113.333 | ||
Softlayer (FR) | 42.223 | 91.567 | 31.233 | 63.633 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
CDN77 (NL) | 11.967 | 91.6 | 65.9 | 120.667 | Temp. unavailable | 161.667 |
Online.net (FR) | 11.933 | 21.467 | 64.333 | 117.333 | 158.333 | |
OVH (FR) | 11.967 | 54.2 | 41.15 | 37.867 | 158 |
This test uses the speedtest.net service to measure the average download/upload network speed from the VPS server. To do that I use the awesome speedtest-cli python script, which makes it possible from the command line.
Keep in mind that this test is not very reliable because it depends a lot on the network capabilities and status of speedtest's nodes (I always try to choose the fastest node in each city). But it gives you an idea of the network interconnections of each provider.
In these tests, more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Nearest Download (Mb/s) | 99.487 | 719.030 | 743.270 | 815.250 | Temp. unavailable | 584.740 |
Nearest Upload (Mb/s) | 80.552 | 273.677 | 464.403 | 288.130 | 94.037 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Madrid | 98.940 | 390.947 | 376.187 | 367.177 | Temp. unavailable | 535.477 |
Barcelona | 98.550 | 319.777 | 489.210 | 558.573 | 796.617 | |
Paris | 96.237 | 343.067 | 720.700 | 339.76 | 493.723 | |
London | 98.897 | 1395.290 | 1260.607 | 766.277 | 3050.463 | |
Berlin | 94.233 | 309.860 | 525.137 | 453.267 | 943.980 | |
Rome | 98.910 | 321.69 | 527.560 | 636.857 | 964.350 |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Madrid | 87.937 | 151.977 | 172.437 | 57.333 | Temp. unavailable | 128.560 |
Barcelona | 85.670 | 152.757 | 148.080 | 41.480 | 177.963 | |
Paris | 91.173 | 182.267 | 337.737 | 199.737 | 169.450 | |
London | 86.360 | 302.350 | 282.380 | 107.260 | 489.013 | |
Berlin | 86.353 | 99.223 | 206.170 | 75.100 | 194.157 | |
Rome | 87.387 | 116.90 | 44.350 | 59.053 | 121.390 |
I'm going to use two popular blog platforms to benchmark web performance on each instance: WordPress and Ghost. In order to minimize the hassle, avoid any controversies (Apache vs Nginx, which DB, which PHP, what cache to use, etc.) and also make the whole process easier, I'm going to use the Bitnami stacks to install both programs. Even though I'm not especially fond of the Bitnami stacks (I would use other components), being self-contained helps a lot to keep the task atomic and to revert the server to its previous state at the end. Using two real products, even with dummy blog pages, makes a great difference compared to using only a "Hello world!" HTML page, especially with WordPress, which also stresses the database heavily.
The Bitnami WordPress stack uses Apache 2.4, MySQL 5.7, PHP 7, Varnish 4.1 and WordPress 4.7.
The Ghost stack uses Apache 2.4, Node.js 6.10, SQLite 3.7, Python 2.7 and Ghost 0.11.
To perform the tests I'm also going to use another two popular tools: ApacheBench (aka ab) and wrk. To do the tests properly you have to run them from another machine, and even though I could use a new instance to test all the other instances, I think the local computer is enough to test these plans. But there is a drawback: you need a good internet connection, preferably with low latency and plenty of bandwidth, because all the tests are performed in parallel. I'm using a symmetric fiber optic connection with enough bandwidth, so I did not have any constraint on my side. With bigger plans, and especially with wrk and more simultaneous connections, it could eventually become a problem; in that case a good VPS server to run the tests from would probably be a better solution. I could use an online service, but that would make it more difficult and costly for anyone to reproduce these tests on their own. I could also use other tools (Locust, Gatling, etc.), but they have more requirements and would cause trouble sooner on the local machine. Besides, wrk on its own is enough to saturate almost any VPS web server with very small requirements on the local machine, and faster.
To avoid installing or compiling any software on the local machine, especially wrk, which is not packaged in all distributions, I'm going to use two Docker images (williamyeh/wrk and jordi/ab) to perform the tests. Under the circumstances of these tests, using Docker causes almost no performance loss on the local machine, so it is more than enough. But if we wanted to test bigger plans with more stress, it would be wiser to install both tools locally and run the tests with them.
Anyway, there is a moment, no matter which software I use to perform the tests (but especially with wrk), when testing WordPress, where there are so many requests that the system runs out of memory: the MySQL database is killed, and eventually the Apache server is killed too if the test persists long enough, until the server becomes unavailable for a few minutes (sometimes it never recovers on its own and I had to restart it from the control panel). After all, it is a kind of mini DDoS attack that we are performing here. This could be improved a lot with other stack components and a careful configuration; the point here is that all of the instances are tested with the same configuration. Thus, I do not try to test the maximum capacity of a server as much as I try to compare them under the same circumstances. To avoid losing the SSH connection with the servers, I limit the connections up to a certain point, pause the playbook for five minutes and then restart the stack before performing the next test.
On the servers with less than 1GB of memory available, in order to be able to install the stacks, I set up a 512MB swap file.
This graph shows the requests per second achieved with 50 concurrent connections during 3 minutes; more is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Requests per second (mean, RPS) | 96.00 | 54.12 | 59.02 | 64.63 | 92.10 |
This other one shows the mean time per request and the time under which 95% of all requests are served. Less is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Time per request (mean, ms) | 520.849 | 923.857 | 847.195 | 773.665 | 542.895 | |
95% requests under this (ms) | 634 | 1338 | 1278 | 1043 | 657 |
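The two tables above are tied together by Little's law: ab's "Time per request (mean)" is the concurrency level divided by the achieved request rate (up to rounding in ab's own report). A small sketch, with a hypothetical function name:

```python
def mean_time_per_request_ms(concurrency, requests_per_second):
    # ab's "Time per request (mean, across all concurrent requests)"
    # is the concurrency level divided by the achieved request rate,
    # expressed here in milliseconds.
    return concurrency / requests_per_second * 1000.0
```

For example, OVH's 96.00 requests/second with 50 concurrent connections gives about 520.8 ms, matching the second table.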
With these tests, using wrk's capacity to saturate almost any server, I increase the connections in three steps (100, 150, 200) under a 3-minute load to see how the performance of each server degrades. I could use a linear plot, but that would require changing the gather python script, and I think it's clear enough this way.
Of course, the key here is the amount of memory: the plans that support more load are also the ones with more memory.
More valid requests is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Total (requests) | 17099 | 11398 | 3793 | 11862 | 16544 | |
Timeout (requests) | 115 | 444 | 3214 | 149 | 149 | |
Failed (requests) | ||||||
Valid (requests) | 16984 | 10954 | 579 | 11713 | 0 | 16395 |
I truncate the graph at the top here because the excess of invalid requests from Vultr (the database is killed too soon) would misrepresent the most important value, the successful requests.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Total (requests) | 16812 | 11422 | 1728 | 11774 | 133738 | |
Timeout (requests) | 352 | 9215 | 1125 | 7693 | 644 | |
Failed (requests) | 1 | 131986 | ||||
Valid (requests) | 16460 | 2207 | 602 | 4081 | 0 | 1108 |
I truncate the graph at the top here because the excess of invalid requests from several plans (the database is killed too soon) would misrepresent the most important value, the successful requests.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Total (requests) | 25287 | 55949 | 59824 | 21781 | 82194 | |
Timeout (requests) | 9003 | 2481 | 1276 | 9162 | 1088 | |
Failed (requests) | 11379 | 53480 | 58848 | 11867 | 80670 | |
Valid (requests) | 4905 | -12 | -300 | 752 | 0 | 436 |
The same test as above but with 20 threads and 150 connections in Ghost, a faster and more efficient blog platform than WordPress. More valid requests is better.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Total (requests) | 42347 | 29315 | 23449 | 15517 | 43992 | |
Timeout (requests) | 138 | 47 | 137 | 46 | 73 | |
Failed (requests) | ||||||
Valid (requests) | 42209 | 29268 | 23312 | 15471 | 0 | 43919 |
Warning: Security in a VPS is your responsibility, nobody else's. But taking a look at the default security applied to a provider's default instances can give you a reference of the care they take in this matter. And maybe it also gives you a good reference of how much they care about their own systems' security.
Lynis is a security audit tool that helps you harden and test compliance on your computers, among other things. As part of that it has an index that rates how secure your server is. This index should be taken with caution: it's not an absolute value, only a reference. It does not yet cover all the security measures of a machine and might not be well balanced enough for an effective comparison. In this test, more is better, but take into account that the number of tests performed also has an impact on the index (the number of tests executed is a dynamic value that depends on the system features detected).
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Lynis (hardening index) | 62 (220) | 67 (220) | 59 (220) | 64 (225) | 60 (230) | 60 (231) |
Notes:
This test uses nmap (and netstat to double-check) to see the network ports and protocols that are open by default in each instance.
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Open TCP ports | 22 (ssh) | 22 (ssh) | 22 (ssh) | 22 (ssh) | 22 (ssh) | |
Open UDP ports | 68 (dhcpc) | 68 (dhcpc), 123 (ntp) | 68 (dhcpc), 123 (ntp) |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Open TCP ports | 22 (ssh) | 22 (ssh) | 22 (ssh) | |||
Open UDP ports | 22 (ssh) | 22 (ssh) | 22 (ssh) |
Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
---|---|---|---|---|---|---|
Open protocols IPv4 | 1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 255 (unknown) | 1 (icmp), 2 (igmp), 4 (ipv4), 6 (tcp), 17 (udp), 41 (ipv6), 47 (gre), 50 (esp), 51 (ah), 64 (sat), 103 (pim), 108 (ipcomp), 132 (sctp), 136 (udplite), 242 (unknown), 255 (unknown) | 1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 255 (unknown) | 1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 136 (udplite), 255 (unknown) | 1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 196 (unknown), 255 (unknown) | |
Open protocols IPv6 | 0 (hopopt), 4 (ipv4), 6 (tcp), 17 (udp), 41 (ipv6), 43 (ipv6-route), 44 (ipv6-frag), 47 (gre), 50 (esp), 51 (ah), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 108 (ipcomp), 132 (sctp), 136 (udplite), 255 (unknown) | 0 (hopopt), 6 (tcp), 17 (udp), 43 (ipv6-route), 44 (ipv6-frag), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 136 (udplite), 255 (unknown) | 0 (hopopt), 6 (tcp), 17 (udp), 43 (ipv6-route), 44 (ipv6-frag), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 103 (pim), 136 (udplite), 255 (unknown) |
OVH | Linode | DigitalOcean | Scaleway | Vultr | |
---|---|---|---|---|---|
Distro install in instance | Partial | Partial | Yes | Yes | Yes |
TODO. Pending to automate this as well.
Notes: