Heroku vs self-hosted PaaS
Learn how to host your side projects for free or almost free
I’m always on the hunt for a better way to host my side projects. As I’ve mentioned in a previous post, I used to have a big fat dedicated server that I just stuck all my apps and side projects on. Then I switched to using a VPS with Dokku on it. But is this the way to go? Why not just use Heroku for everything? Is there a better self-hosted solution than Dokku out there? Let’s find out!
First of all we have to define what we want to get out of this: something that is as cheap as possible, can host anything we want to run and takes up as little maintenance time as possible. The “host whatever” part is important to me: we have a lot of “Node.js only” services these days, and while those are great I still want to be able to host whatever I want to run. I would also like to not spend a lot of time on this, so Kubernetes and other advanced solutions are out as well. Finally, if I want to run something like an Elixir/Phoenix app, I don’t want to have to worry about making it work.
BaaS (Backend as a Service) solutions like Firebase are out because they cannot host whatever you want. Serverless solutions are also out because they can’t reliably host anything you want and they come with other kinds of problems. Please note that I don’t think these kinds of services are bad in any way, but I would like to be able to host anything on my platform of choice for side projects if at all possible.
The simplest possible way to host a project that I can think of has to be Heroku. It takes no time at all to get started, it can scale very well and they have all kinds of database and third party services to use with very little effort. You simply push your code to a repository and Heroku will build the application for you using what they call buildpacks, and host it. No servers to maintain, no data to back up and so on. It just works, and works very well. The only caveat is price. While you can get started for free, and that might be enough for some, it adds up quickly once you can no longer use their free plan.
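For reference, a first deploy looks roughly like this (the app name is a placeholder, and the exact branch depends on your repository):

```shell
# Install the Heroku CLI first, then from your project root:
heroku create my-side-project   # creates the app and adds a "heroku" git remote
git push heroku main            # push code; Heroku detects the stack via buildpacks and builds it
heroku open                     # open the deployed app in your browser
```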
Heroku uses a concept they call dynos. A dyno is by default a container with 512 MB of RAM that runs a part of a single application. If your app gets a lot of traffic you simply add more dynos and Heroku will automatically load balance between them. You can also have dynos running background jobs, as well as schedulers to do cron-like things. Heroku offers four main kinds of dynos: Free, Hobby, Standard and Performance.
The free dynos allow you to host as many applications as you wish, but the total number of hours your dynos can run each month is limited to 1000. This means that a single application running on a single dyno will use about 720 of your free hours. Free dynos also sleep after 30 minutes of inactivity and then need to start again when a request comes in, which can take a while, especially if you use a slower stack like Ruby on Rails. If you have an app with a web dyno serving requests and a worker dyno processing background jobs, both can’t run 24/7 on the free plan because that would be 744 * 2 dyno hours in a month. You also cannot scale your free dynos: you can only have two dynos total for a single application, so one web dyno and one worker dyno, for example. If you need a database then you can use the Heroku Postgres offering, which allows you 10,000 rows of data for free. You can also get a 25 MB Redis database for free. This is what you can get for absolutely free. There are also a number of third party extensions which also offer free services.
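The web and worker processes are declared in a Procfile at the root of the repository; a minimal Ruby-flavored sketch (the actual commands depend on your stack) looks like this:

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```

Heroku runs each line as its own process type, each on its own dyno.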
It must be said that there are some tricks you can do to limit the number of dynos you need to one. For example, if you run Ruby you can use the SuckerPunch gem to run your background jobs. SuckerPunch runs in the same process as the web server, so you only need one dyno. This works fine until you run out of RAM.
So what happens once you outgrow the free plan? Let’s say that you need a web dyno and a worker dyno (for sending emails and such in the background), both running 24/7, as well as more than 10,000 rows in your database. The next tier of dyno is called a Hobby dyno and those cost $7 a month each. These never sleep and you can have up to 10 different process types. Along with some metrics and such, these are the main differences. You cannot scale these either, however, so you can only have a single web dyno running your application. The cheapest database will also cost $9 a month and will hold up to 10 million rows. This is a lot better, but keep in mind that you still do not have access to any kind of database memory cache and overall performance is quite poor. So in total you are looking at about $23 a month (two dynos at $7 each plus a $9 database) for a single application.
Once you move up to the “real” plans you are looking at $25 for a dyno and $50 for a database with production level performance and 4GB of RAM for caching and such. Mind that these dynos cannot be combined with the hobby dynos so the minimum cost will now be $100 a month for your application. One thing that you can do however to minimize costs is to share the database between applications. You can use the database for as many applications as you like as long as you have enough free connections. The database mentioned has a limit of 120 concurrent connections which should be plenty for most side project applications.
Dyno comparison table
| Tier | Cost per dyno | Sleeps | Process types | Maximum dynos | Scaling |
| --- | --- | --- | --- | --- | --- |
| Free | $0 | After 30 min of inactivity | 2 | 2 total per app | No |
| Hobby | $7/month | Never | 10 | 1 per process type | No |
| Standard | $25/month | Never | 10 | Multiple | Yes |
Example app price comparison
| Class | Total cost | Database maximum rows |
| --- | --- | --- |
| Free | $0 | 10,000 |
| Hobby | $23/month | 10 million |
| Production | $100/month | Unlimited, with cache |
In summary, Heroku offers three different tiers of pricing for your simple side projects with no special requirements: free, about $23 and $100 a month. Mind that your time is not free (right?), so while this seems like a lot for a simple side project, keep in mind that you have no servers to maintain, no packages to update, no backups to manage, and all you have to do is push your code to Git and the site will build and update automatically. This is worth a lot to busy people, but it is still way too much for me just to run some side projects which aren’t even making me any money, so I need to look elsewhere.
So what do we mean by “Heroku-like”? Generally speaking it means that you should be able to deploy like you do on Heroku, so a simple “git push” should get your code running. You should be able to host multiple apps with ease and deploy databases and such automatically. We don’t expect as much “magic” as Heroku has, but the day-to-day should be easy enough to manage.
There are quite a few of these kinds of services and this list is by no means meant to be exhaustive.
Dokku is one of the simplest Heroku-likes out there and also one of the most mature. It is very simple to install, and DigitalOcean has a way of creating a droplet with Dokku preinstalled. You deploy your code by pushing it to a repository, much like Heroku. Dokku uses the Heroku buildpacks to identify which stack your application uses and builds it automatically. You can also deploy your application using a Dockerfile if you want something a bit more advanced than the buildpacks.
Dokku can also add services to your application. It has a wide variety of plugins for databases such as MySQL and PostgreSQL, caching plugins as well as an automatic SSL plugin for Let’s Encrypt. It has an active community which creates new plugins whenever something new comes along. The database plugins have built-in backups and export functionality.
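As a sketch, creating an app with a PostgreSQL database and HTTPS looks something like this (app and database names are placeholders, and the push branch depends on your Dokku version):

```shell
# on the Dokku server
dokku apps:create myapp
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create myapp-db
dokku postgres:link myapp-db myapp    # injects DATABASE_URL into the app

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku letsencrypt:enable myapp

# on your machine: push to deploy
git remote add dokku dokku@your-server:myapp
git push dokku main
```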
Personally I deploy all my applications and side projects with Dokku and I find it to be the easiest way to get something running without using Heroku. In my opinion there is only one big drawback with Dokku: it can only run on a single server and has no support for Docker Swarm or any other kind of multi-host deployment. The maintainers have said that the goal for Dokku is to be a single-host solution and there are no plans to support any kind of multi-host deployment. This is not a problem for me, since I don’t have massive applications or critical apps which require automatic failover or anything like that. If you do, however, then you will need to look for a more advanced solution. Keep in mind that any solution more advanced than Dokku is likely to be more complicated to set up and maintain, and it will have a lot more moving parts.
All in all, Dokku is pretty great. Support for multi-host deployments and some kind of neat graphical user interface with logging and such would be nice, though.
Flynn was announced as a Heroku-like as well but unlike Dokku it has support for multiple servers, scaling and load balancing out of the box. The Flynn docs for deployment are very similar to both Heroku and Dokku. It has the same “push to deploy” method. Flynn has support for multiple databases as well as automatic backups and restoration of all your applications. It is however much more resource intensive than Dokku is and you need a bare minimum of two hosts to get started, each with 2GB memory or more.
Flynn was very promising when it was announced but it seems like the project is not getting anywhere. The company blog has not been updated in two years and there are multiple issues on the GitHub page asking if the project is abandoned. People have also reported that the project is buggy and does not work as intended. Flynn was announced before the rise of Kubernetes, for example, and that might explain why the project has slowed down. I would not use Flynn in production today for these reasons.
Rancher is a management tool that sits on top of Kubernetes. It is designed to assist with deploying Kubernetes clusters on multiple providers such as AWS and DigitalOcean, and you can migrate resources between such deployments with ease as well. It also abstracts the Kubernetes configuration management and has a nice graphical user interface. To get started with Rancher all you need is a server with at least 4 GB of RAM to act as the main administration server for the Rancher deployment. Install Docker on the server and fire up a container with the Rancher administration stack in it. Then you just visit the server’s URL and you are greeted with the Rancher UI. From here you can create a Kubernetes cluster, either by having Rancher do it for you if you use cloud providers like AWS or DigitalOcean, or by running Docker commands on your servers manually. This took me about 5 minutes, and most of that was spent installing Docker on the servers.
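The single-container install is roughly this (based on the Rancher 2.x quick start; flags and image tag may differ for your version, so check the current docs):

```shell
# on the 4 GB administration server, with Docker already installed
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher
# then browse to https://<server-ip> to reach the Rancher UI
```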
Once up and running I deployed a simple Wordpress application in a couple of minutes. A “real” application will however be more complicated to deploy because Rancher does not use buildpacks like Dokku does. This is a normal Docker deployment with images and containers on top of Kubernetes so it takes a bit more work to get going.
All in all I’m intrigued by Rancher, but since I am looking for something simple, it is too advanced and resource intensive for my small side projects. I will however look into Rancher a bit more later and try to deploy one of my projects to it. That will probably be a blog post of its own!
CapRover is in many ways similar to Dokku. It uses Docker for deployment just like Dokku, but CapRover does not support buildpack deployments as it uses Dockerfiles only. This is not necessarily a bad thing, since Dockerfile deployments are great in Dokku as well. You don’t have to write your own Dockerfiles for simple deployments, however, as there are multiple defaults for popular stacks such as Node.js, PHP and Ruby. While Dokku uses its concept of plugins for databases and such, CapRover has what is called a “one click app”. Essentially these are applications and services that can be deployed through CapRover’s GUI. Yeah that’s right, there is a GUI here as well. Simply pick the database or other service that you want and deploy it. You can then connect to it by using its name, as there is an internal Docker network between all the containers running in CapRover.
One of the big differences between Dokku and CapRover is that CapRover supports multiple servers by using Docker Swarm. It will automatically balance your apps across multiple servers and you can even have automatic load balancing out of the box.
As a test I launched 3 servers and installed CapRover on them. All you need to do to launch the first server is install Docker and then launch a container based on the CapRover image. Pro tip: use something like Ansible to install Docker on multiple hosts at once, since doing it by hand is tedious. Adding more servers is a bit more complicated, as you need SSH keys set up on the worker nodes so that the main server can SSH to them. No big deal if you’re used to this sort of thing, and 5 minutes later I had a nice 3 server cluster ready to launch applications.
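Launching that first server boils down to one Docker command (from the CapRover getting-started docs; verify the ports and volumes against the current docs):

```shell
# on the first server, with Docker already installed
docker run -d -p 80:80 -p 443:443 -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /captain:/captain \
  caprover/caprover
```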
In order to run in cluster mode CapRover needs to have a Docker Registry to store the images that all the servers will use. It can do this for you, which is a single click in the GUI, or you can use an external Docker Registry if you already have one. Once this is done you simply launch applications like normal.
Once the cluster was ready I tried to deploy one of my tiny little Node.js microservices to it. Since CapRover does not support buildpack deployments, you have to use Dockerfiles for this. CapRover has these built in for Node.js, PHP, Python/Django and Ruby/Rack however, so if you use any of those stacks you don’t have to write your own Dockerfile just to get started. All you need to do is add a simple Captain definition file to your application root and you are good to go. Making your own custom Dockerfile is probably a good idea for a more complex application, but the defaults seem to work just fine to get started.
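The Captain definition file is a small JSON file named captain-definition in the project root; for a Node.js app it can be as small as this (the templateId value depends on the Node version you want):

```json
{
  "schemaVersion": 2,
  "templateId": "node/14"
}
```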
After I added the definition file and installed the CapRover CLI from NPM, I simply ran “caprover deploy” to deploy my application. CapRover built the Docker image for the application and then it was online. I’m happy to say that it worked perfectly out of the box. You can even enable HTTPS in the GUI using Let’s Encrypt; no more CLI for that stuff either. I then scaled my application to 3 instances to check the Docker Swarm stuff, and CapRover launched additional containers on the other two servers and load balanced between them. All this took a grand total of about 10 minutes. Very nice! I then also tried to launch one of the prebuilt applications that CapRover has: Wordpress. It launched a MariaDB container as well as a PHP one, and then I was greeted by the default Wordpress installation configuration. So simple!
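The CLI steps amount to something like this (server address and credentials are prompted for interactively):

```shell
npm install -g caprover   # the CLI is distributed via NPM
caprover login            # point the CLI at your CapRover server
caprover deploy           # run from the project root; builds and ships the image
```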
What I really like about CapRover:
- Multiple servers!
- It has a very nice GUI with a lot of functionality. You can for example set environment variables here instead of adding them through the CLI.
- Application logs can be viewed in the GUI. This is great!
- Built in monitoring with Netdata. This only works for the main server however!
- Single click applications. You can easily launch Wordpress for example without having to type any commands at all.
- Built-in support for persistent application data in mounted Docker volumes.
So, would I then abandon Dokku for CapRover? Probably not at this point for a couple of reasons:
- I don’t need multiple servers. If I did it would be a different story. Adding multiple servers, a Docker registry and Docker Swarm is a lot of new moving parts. If something breaks I’m not comfortable fixing that stuff. Dokku is a simpler CLI tool on top of Docker.
- No buildpack support. Heroku buildpacks are a solid way to deploy applications with support for my languages and application stacks. A custom Dockerfile deployment is easier to customize but for simple deployments I prefer using buildpacks.
- No Procfile support. Dokku will automatically launch multiple containers for each process in your Procfile. So if you have for example a web process and a worker process it will launch two containers. If you use CapRover you have to either write a custom Dockerfile to launch multiple processes or use one CapRover app for each process. I would probably do the latter but a Procfile is easier to use out of the box.
- No rolling restarts. Dokku will make sure that the new version of your application is up and running before shutting the old containers down. CapRover will have a tiny bit of downtime between releases. Keep in mind that rolling restarts are not simple however.
- Backups are harder. Dokku has simple commands for backups: you can either dump the database to disk or put it in Amazon S3 automatically. With CapRover you have to do this manually by launching a container, connecting to the database and then dumping the data. This is not hard to do, but it’s not as slick as the Dokku solution.
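For comparison, here is what that looks like with the Dokku Postgres plugin (database and bucket names are placeholders; see the plugin README for the exact commands in your version):

```shell
dokku postgres:export myapp-db > myapp-db.dump                   # dump to a local file
dokku postgres:backup-auth myapp-db AWS_KEY_ID AWS_SECRET_KEY    # store S3 credentials
dokku postgres:backup myapp-db my-backup-bucket                  # push one backup to S3
dokku postgres:backup-schedule myapp-db "0 3 * * *" my-backup-bucket  # nightly backups
```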
CapRover is a fine project, however, and if I ever need multiple servers I will absolutely give it a try. You probably can’t go wrong with either project on a single server; I chose to stick with Dokku because I already have it running and I’m more comfortable with it.
If money was no problem then I would probably use Heroku for just about everything. You never have to think about servers, scaling or backups ever again (at least not until you sort of want to have scaling issues). However, since the costs add up quickly if you have a lot of moderately sized side projects that are not profitable, you probably want to go with something like Dokku or CapRover and then scale up as you go. I will take a more detailed look at CapRover and Rancher some other time, since those projects both interest me greatly.