For the past 5 years, I’ve used the Vesta Control Panel to manage my websites, databases, and e-mail forwarders. Vesta is one of those ‘click-and-go’ management tools to set up webservers, e-mail, databases and much more. I’ve been running Vesta CP on a small DigitalOcean virtual machine (1 CPU, 1GB RAM, 25GB Disk) for $5 a month.
Moving into 2021, I decided it is finally time to put in the effort to break out these applications and migrate into a nice dockerized setup. As I want to use my VM for experimental web-apps & other projects too, this’ll help me keep an overview of what’s installed and the dependencies between applications.
For context, here’s a screenshot of the Vesta Control Panel web interface.
I start by setting up a fresh VM instance on DigitalOcean (1 CPU, 1GB RAM, 25GB Disk). I install the latest version of Ubuntu, set up Docker & docker-compose. I also set up NGINX as a reverse proxy on the instance. Now that everything’s installed, it’s time to SSH into the machine and start to dockerize.
I’m migrating the following applications from Vesta CP to Docker:
- My blog (HTML, built from Markdown with Jekyll)
- NeoDash, a graph app (HTML, built with React)
- Two old websites (plain HTML & CSS)
- Two WordPress sites (PHP, using a MySQL DB)
- An e-mail forwarder
I want to run each of these applications in its own Docker container. For this, I use a `docker-compose.yml` file to define and run my containers, as well as specify their dependencies.
If you’re not familiar with docker-compose, I highly recommend checking out the docs.
Dockerizing my Blog, NeoDash, and the old sites
For applications (1), (2) and (3) above, I use a similar approach to set up the docker containers. Take my blog as an example.
My blog site (nielsdejong.nl) consists of posts written in Markdown that are built into HTML by Jekyll.
In my code repository, running `jekyll build` generates the file hierarchy with HTML files in the `./_site/` folder:

```
./_site/
./_site/assets/
./_site/neo4j projects/
./_site/other/
./_site/index.html
./_site/about.html
```
First, I clone and build the latest version of the blog to a folder on my new DigitalOcean virtual machine.
Then, using a Docker `httpd` image as a webserver, I map the `_site` folder to the right directory inside the container. My `docker-compose.yml` file then looks like this:
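A minimal sketch of what that service definition can look like - the service name and the external port here are assumptions (9001 matches the port NGINX forwards to later in the post):

```yaml
version: "3"

services:
  # Static blog served by Apache httpd; the service name is illustrative.
  blog:
    image: httpd:2.4
    hostname: nielsdejong.nl
    ports:
      - "9001:80"   # external port 9001 -> port 80 inside the container
    volumes:
      # map the generated _site folder into httpd's document root
      - ./_site:/usr/local/apache2/htdocs/
    restart: always
```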
You’ll notice several things here:
- I specify the hostname `nielsdejong.nl` of the site.
- I map the `_site` folder to `/usr/local/apache2/htdocs/` inside the container.
- I map an external port on the host to port `80` inside the container.
My other static websites, including the React app, are set up with the same approach: I build the application, clone it onto the VM, and map the directories. To avoid repetition, I’ve excluded the details from this post - but the docker-compose files are essentially identical.
Dockerizing WordPress and the MySQL Database
To migrate a WordPress site, I need two Docker containers:
- a Docker container that runs the WordPress site (PHP).
- a Docker container that runs the underlying database (MySQL).
In Vesta CP, I’ve got several WordPress sites running against the same database.
After moving all the WordPress sites and the DB files to my virtual machine, I extend my `docker-compose.yml` file, specifying the containers and their configuration parameters:
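A sketch of the two services, appended under the existing `services:` key - the names, ports, and credentials are placeholders, but the environment variable names are the standard ones for the official `wordpress` and `mysql` images:

```yaml
  # WordPress site (PHP); one service like this per site.
  wordpress:
    image: wordpress:latest
    ports:
      - "9002:80"   # illustrative external port
    volumes:
      - ./wordpress:/var/www/html
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpress_db
      WORDPRESS_DB_USER: wordpress_user
      WORDPRESS_DB_PASSWORD: secret        # placeholder
    depends_on:
      - mysql
    restart: always

  # Shared MySQL database used by the WordPress sites.
  mysql:
    image: mysql:5.7
    volumes:
      - ./db:/var/lib/mysql
    environment:
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: wordpress_user
      MYSQL_PASSWORD: secret               # placeholder, same as above
      MYSQL_ROOT_PASSWORD: secret-root     # placeholder
    restart: always
```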
Key things to note here:
- I map the site directory and the `db` directory to the respective folders inside the containers.
- The environment variables (DB host, DB name, DB user & DB password) are the same for both containers.
- I specify that the `wordpress` container depends on the `mysql` container.
Dockerizing the E-mail forwarder
The last thing Vesta managed for me is forwarding e-mails: I want all e-mails sent to my personal domain to go to my Gmail address.
Luckily, there’s a Docker image available that does exactly this. The `docker-compose` config for the e-mail forwarder:
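The post doesn’t name the image, so as an assumption, a forwarder like `zixia/simple-mail-forwarder` can be configured along these lines (appended under `services:`; the addresses are placeholders):

```yaml
  # SMTP forwarder: relays mail for the domain to another mailbox.
  mail:
    image: zixia/simple-mail-forwarder
    ports:
      - "25:25"   # standard SMTP port
    environment:
      # format: address-on-my-domain:destination-address
      SMF_CONFIG: "contact@nielsdejong.nl:example@gmail.com"
    restart: always
```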
After running `docker-compose start`, all my containers are up and running.
The last thing to do now is configure NGINX to forward incoming HTTP requests to the right container.
I install NGINX (`sudo apt-get install nginx`), after which I need to configure the websites. For this, I create a configuration file for each site in NGINX’s configuration directory.
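A minimal sketch of what such a per-site server block can look like, assuming the blog container is exposed on local port 9001 (the port NGINX forwards to later in the post):

```nginx
# Illustrative per-site config: proxy all traffic for the domain
# to the container listening on local port 9001.
server {
    listen 80;
    server_name nielsdejong.nl www.nielsdejong.nl;

    location / {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```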
After reloading the configuration with `nginx -s reload`, this works: all incoming traffic sent to nielsdejong.nl is forwarded to the right container. Next, I need to do some tweaking to support SSL certificates.
To take care of this, I install Certbot. Certbot is a command-line utility to generate and manage SSL certificates; it automatically adjusts NGINX configuration files too. After running Certbot, my NGINX config is updated to look like this:
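A sketch of the Certbot-adjusted file - the certificate paths follow Certbot’s usual Let’s Encrypt layout, and the proxy port is the container’s port 9001 described below:

```nginx
server {
    server_name nielsdejong.nl www.nielsdejong.nl;

    location / {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/nielsdejong.nl/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nielsdejong.nl/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    # redirect all HTTP traffic to HTTPS (added by Certbot)
    if ($host = nielsdejong.nl) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name nielsdejong.nl www.nielsdejong.nl;
    return 404; # managed by Certbot
}
```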
There are several things going on here:
- HTTP requests (port 80) are always redirected to port 443 (HTTPS).
- All HTTPS requests (port 443) are then forwarded to the Docker container (port 9001).
- Several configuration parameters are defined to handle SSL. (This is all done automatically by Certbot.)
All that’s left is to reload NGINX one more time. Now, everything is ready!
Running `docker stats` shows all the containers running happily - there’s no issue running 9 containers on my small virtual machine instance.
Next up, I’m going to set up automatic deployment from GitHub, but that’s a story for another blog post. For now, feel free to reach out if you have any questions and/or are building a similar setup!