Migrating from Vesta CP to Docker

February 19, 2021 - Other - 5 minute read

For the past 5 years, I’ve used the Vesta Control Panel to manage my websites, databases, and e-mail forwarders. Vesta is one of those ‘click-and-go’ management tools to set up webservers, e-mail, databases and much more. I’ve been running Vesta CP on a small DigitalOcean virtual machine (1 CPU, 1GB RAM, 25GB Disk) for $5 a month.

Moving into 2021, I decided it is finally time to put in the effort to break out these applications and migrate into a nice dockerized setup. As I want to use my VM for experimental web-apps & other projects too, this’ll help me keep an overview of what’s installed and the dependencies between applications.

For context, here's a screenshot of the Vesta Control Panel web interface.

[Screenshot: the Vesta CP web interface]

Starting Fresh

I start by setting up a fresh VM instance on DigitalOcean (1 CPU, 1GB RAM, 25GB Disk). I install the latest version of Ubuntu and set up Docker & docker-compose; later on, I'll also set up NGINX as a reverse proxy on the instance. Now that everything's installed, it's time to SSH into the machine and start to dockerize.
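
For reference, installing Docker and docker-compose on a fresh Ubuntu machine came down to something like this (a sketch using Docker's convenience script; the official docs list the currently recommended steps):

# install Docker via the convenience script
curl -fsSL https://get.docker.com | sudo sh
# install docker-compose from Ubuntu's repositories
sudo apt-get update && sudo apt-get install -y docker-compose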

The applications

I’m migrating the following applications from Vesta CP to Docker:

  1. My blog (HTML Built from Markdown - Jekyll)
  2. NeoDash, A graph app (HTML, Built from React)
  3. Two old websites (Plain HTML & CSS)
  4. Two Wordpress sites (PHP, using a MySQL DB)
  5. An e-mail forwarder

I want to run each of the applications in their own Docker container. For this, I use a docker-compose.yml file to define and run my containers, as well as specify their dependencies. If you’re not familiar with docker-compose, I highly recommend checking out the docs.
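
In day-to-day use it boils down to a handful of commands, run from the directory containing docker-compose.yml:

docker-compose up -d     # create and start all services in the background
docker-compose ps        # list the services and their state
docker-compose logs -f   # follow the logs of all services
docker-compose down      # stop and remove the containers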

Dockerizing my Blog, NeoDash, and the old sites

For applications (1), (2) and (3) above, I use a similar approach to set up the docker containers. Take my blog as an example.

My blog site (nielsdejong.nl) consists of posts written in Markdown that are built into HTML by Jekyll. In my code repository, running jekyll build generates the HTML file hierarchy into a _site directory:

./_site/
./_site/assets/
./_site/neo4j projects/
./_site/other/
./_site/index.html
./_site/about.html   
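
That directory is produced with a single command from the root of the repository (assuming Jekyll is installed):

jekyll build   # regenerates ./_site/ from the Markdown sources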

First, I clone and build the latest version of the blog to a folder on my new DigitalOcean virtual machine. Then, using a Docker httpd image as a webserver, I map the _site folder to the right directory inside the container. My docker-compose.yml file then looks like this:

version: '3.1'

services:
  nielsdejongdotnl:
    image: httpd:latest
    hostname: nielsdejong.nl
    container_name: nielsdejongdotnl
    volumes:
      - ./_site/:/usr/local/apache2/htdocs/
    ports:
      - 9001:80

You’ll notice several things here:

  • I specify the hostname nielsdejong.nl of the site.
  • I map the _site directory to /usr/local/apache2/htdocs/ inside the container.
  • I map external port 9001 to port 80 inside the container.
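
With that in place, the container can be brought up and given a quick sanity check (assuming curl is available on the host):

docker-compose up -d nielsdejongdotnl   # start just this service
curl -I http://localhost:9001           # should return a 200 OK from the httpd container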

My other static websites, including the React app, are also set up with the httpd image. Similarly, I build the application, clone it onto the VM and map the directories. To avoid repetition, I’ve excluded the details from this post - but the docker-compose files are essentially identical.

Dockerizing Wordpress and the MySQL Database

To migrate a Wordpress site, I need two Docker containers:

  • a Docker container that runs the wordpress site (PHP).
  • a Docker container that runs the underlying database (MySQL).

In Vesta CP, I've got several Wordpress sites running against the same database. After moving all the Wordpress sites and the DB files to my virtual machine, I extend my docker-compose file to specify the containers and their configuration parameters:

version: '3.1'

services:
  mysql:
    image: mysql:5.7.31
    hostname: mysql
    container_name: mysql
    environment:
      MYSQL_DATABASE: [...]
      MYSQL_USER: [...]
      MYSQL_ROOT_PASSWORD: [...]
      MYSQL_PASSWORD: [...]
    volumes:
     - ./db/:/var/lib/mysql/
    ports:
      - 3306:3306
      
  wordpress-site-1:
    container_name: wordpress-site-1
    image: wordpress:4.2-apache
    restart: always
    environment:
      WORDPRESS_DB_HOST: [...]
      WORDPRESS_DB_NAME: [...]
      WORDPRESS_DB_USER: [...]
      WORDPRESS_DB_PASSWORD: [...]
    volumes:
      - ./site/:/var/www/html/
    ports:
      - 9002:80
    depends_on:
      - mysql

Key things to note here:

  • I map directories site and db to the respective folders inside the containers.
  • The database name, user & password set on the wordpress container match the ones configured for the mysql container, and WORDPRESS_DB_HOST points at the mysql container's hostname.
  • I specify that the wordpress container depends on the mysql container.
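
After bringing these two containers up, a quick way to confirm that the database files survived the move is to list the databases inside the mysql container (it will prompt for the root password):

docker exec -it mysql mysql -u root -p -e "SHOW DATABASES;"   # the Wordpress database should be listed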

Dockerizing the E-mail forwarder

The last thing Vesta managed for me is forwarding e-mails: I want all emails sent to my personal domain to go to my gmail address. Luckily there’s a Docker image available that does exactly this. The docker-compose config for the e-mail forwarder:

version: '3.1'

services:
  smf:
    image: zixia/simple-mail-forwarder
    ports:
      - "25:25"
    environment:
      - SMF_CONFIG=@nielsdejong.nl:[MY GMAIL ADDRESS]
    restart: always
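
Once this container is running, the forwarder can be checked by inspecting its logs and verifying that SMTP is reachable from outside (nc is just one way to do this):

docker-compose logs -f smf    # inspect the forwarder's startup logs
nc -vz nielsdejong.nl 25      # check that port 25 (SMTP) is reachable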

Configuring NGINX

After running docker-compose up -d, all my containers are up and running. The last thing to do now is configure NGINX to forward incoming HTTP requests to the right container. I install NGINX (sudo apt-get install nginx), after which I need to configure the websites. For this, I create a configuration file at /etc/nginx/sites-enabled/nielsdejong.nl.conf:

server {
  server_name   nielsdejong.nl www.nielsdejong.nl;
  listen        80;
  location / {
    proxy_pass  http://localhost:9001;
  }
}
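
Before reloading NGINX, it's worth validating the new configuration first:

sudo nginx -t          # check the configuration for syntax errors
sudo nginx -s reload   # apply the new configuration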

After reloading NGINX, all incoming traffic sent to nielsdejong.nl is forwarded to the right container. Next, I need to do some tweaking to support SSL certificates.

To take care of this, I install Certbot. Certbot is a command-line utility that generates and manages SSL certificates, and it automatically adjusts NGINX configuration files too. After running Certbot, my NGINX config is updated to look like this:

server {
  server_name nielsdejong.nl www.nielsdejong.nl;

  location / {
    proxy_pass http://localhost:9001;
  }
 
  # managed by Certbot
  listen 443 ssl; 
  ssl_certificate /etc/letsencrypt/live/nielsdejong.nl/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/nielsdejong.nl/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
  server_name nielsdejong.nl www.nielsdejong.nl;
  
  if ($host = nielsdejong.nl) {
    return 301 https://$host$request_uri;
  } # managed by Certbot
  
  listen 80;
  return 404; # managed by Certbot
}

There are several things going on here:

  • HTTP requests (port 80) are always redirected to port 443 (HTTPS)
  • All HTTPS requests (port 443) are then forwarded to the docker container (port 9001)
  • Several configuration parameters are defined to handle SSL. (This is all automatically done by certbot)
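
For completeness, installing and running Certbot looked roughly like this (the --nginx plugin is what rewrites the configuration shown above; package names may vary per distribution):

sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d nielsdejong.nl -d www.nielsdejong.nl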

All that’s left is to reload NGINX one more time. Now, everything is ready!

Wrapping up

A quick docker stats shows all the containers running happily. There’s no issue running 9 containers on my small virtual machine instance:

[Screenshot: docker stats output showing all containers running]

Next up, I’m going to set up automatic deployment from Github, but that’s a story for another blog post. For now, feel free to reach out if you have any questions and/or are building a similar setup!
