Migrating to Docker
I’ve been running a couple of servers on DigitalOcean for some time. I wanted to consolidate the services on each of these servers into one host, but in separate virtual environments, so this seemed like a good opportunity to try out Docker. Docker is a technology that lets us run processes inside containers: lightweight, isolated environments that share the host’s kernel and system resources such as CPU, RAM, and disk space. Here’s the definition from Wikipedia:
a Docker container, as opposed to a traditional virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel’s functionality and uses resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces to completely isolate the application’s view of the operating system.
By using containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
Current Setup
On one of the hosts, I’m running SparkDossier - a Clojure app for recording thoughts, hunches, and ideas. SparkDossier uses PostgreSQL for database persistence and nginx as a reverse proxy.
On the other host, I’m running this blog, an instance of Ghost with nginx as a reverse proxy. Ghost data is stored in a SQLite database so it’s pretty portable.
My plan was to set up a single instance of nginx that would act as a reverse proxy for both SparkDossier and Ghost, redirecting the user based on the domain name being visited.
Migrating Ghost to Docker
Migrating Ghost to Docker was relatively straightforward. By using a pre-built Docker image, I just needed to extract config.js and the content folder from the existing host into a folder named blog, then run the following command (replacing <override-dir> with the path to that folder):
docker run -d -p 80:2368 -v <override-dir>:/ghost-override dockerfile/ghost
Migrating SparkDossier to Docker
As mentioned, SparkDossier is written in Clojure, which runs on the JVM, so I chose to compile the app and all its dependencies into a single runnable JAR. Conveniently, there is also a pre-built Docker image for Java, so the SparkDossier Dockerfile can just focus on what my application needs to run:
FROM java
ADD sparkdossier.jar /
ENV DBURL=//db/sparkdossier
ENV DBUSER=<user>
ENV DBPASS=<pass>
ENV RECAPTCHA=<api-key>
EXPOSE 3000
CMD java -jar sparkdossier.jar
By using the official PostgreSQL image, we can get the database up and running quickly as well, using the following Dockerfile:
FROM postgres
ENV POSTGRES_USER=<user>
ENV POSTGRES_PASSWORD=<pass>
Configuring multi-host nginx
Setting up nginx as a reverse proxy for both Ghost and SparkDossier turned out to be a little less straightforward. Firstly, I wanted to make use of the environment variables Docker exposes for configuring network addresses and ports between linked containers, but it turns out that nginx does not support environment variables in server block configuration. Fortunately, Docker also updates the hosts file with linked container addresses, although I still needed to hard-code port numbers to their default values.
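One common workaround (not the route taken here) is to template the nginx config from environment variables before starting the server. A minimal sketch using sed, where BLOG_HOST and BLOG_PORT are hypothetical names standing in for the values Docker exposes:

```shell
# Hypothetical variables; in practice these would come from Docker's link environment.
export BLOG_HOST=blog
export BLOG_PORT=2368

# A server-block template with placeholders where nginx would need env values.
cat > blog.conf.tpl <<'EOF'
server {
    listen 80;
    server_name eugene.io;
    location / {
        proxy_pass http://__BLOG_HOST__:__BLOG_PORT__;
    }
}
EOF

# Substitute the placeholders at container start, before launching nginx.
sed -e "s/__BLOG_HOST__/${BLOG_HOST}/" \
    -e "s/__BLOG_PORT__/${BLOG_PORT}/" blog.conf.tpl > blog.conf

grep proxy_pass blog.conf
```

Relying on the hosts file, as done here, avoids the templating step at the cost of hard-coded ports.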
Secondly, during my local testing, nginx managed to route requests to the correct container based on the domain name, but after deploying to DigitalOcean it could only route requests to the Ghost container. To fix the problem, I had to modify my nginx configuration to set server_names_hash_bucket_size to a higher value such as 64. The default value depends on the processor’s cache line size, so it’s better to set it explicitly when configuring multiple virtual hosts.
My final nginx configuration is as follows:
Dockerfile
FROM nginx
ADD ghost /etc/nginx/sites-enabled/
ADD sparkdossier /etc/nginx/sites-enabled/
RUN rm /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
nginx.conf
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 768;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
server_names_hash_bucket_size 64;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
ghost
server {
listen 80;
listen [::]:80;
server_name eugene.io;
root /usr/share/nginx/html;
index index.html index.htm;
client_max_body_size 10G;
location / {
proxy_pass http://blog:2368;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
}
}
sparkdossier
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name sparkdossier.com www.sparkdossier.com;
root /usr/share/nginx/html;
index index.html index.htm;
client_max_body_size 10G;
location / {
proxy_pass http://app:3000;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
}
}
Putting it all together
With my apps deployed in various Docker containers, I now needed a way to link the containers together. Instead of running individual Docker commands to configure and run each container, it is easier and more repeatable to use an “orchestration tool”, in which you provide a document containing a high-level description of how you want your containers put together, and the tool carries out your instructions by translating them into Docker commands.
Docker has an official orchestration tool called Compose, although the one I ended up trying out was Crane.
To set everything up on the DigitalOcean server, I started out with DigitalOcean’s 1-click Docker app and installed Crane, then placed the Dockerfile and relevant configuration for each container into a sub-folder of their own, ending up with the following directory structure:
.
|-- blog
|   |-- app
|   |   |-- config.js
|   |   `-- content
|   |       `-- ...
|-- crane.yaml
|-- sparkdossier
|   |-- app
|   |   |-- Dockerfile
|   |   `-- sparkdossier-0.1.4-standalone.jar
|   |-- nginx
|   |   |-- Dockerfile
|   |   |-- nginx.conf
|   |   `-- sparkdossier
|   `-- postgresql
|       |-- backup.sh
|       |-- Dockerfile
|       |-- dump.db
|       `-- restore.sh
`-- web
    |-- Dockerfile
    |-- ghost
    |-- nginx.conf
    `-- sparkdossier
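The contents of backup.sh and restore.sh aren’t shown; a minimal sketch of what scripts like these might contain, assuming the database container is named db and the dump is taken through docker exec:

```shell
# backup.sh (hypothetical): dump the sparkdossier database from the running container.
docker exec db pg_dump -U postgres sparkdossier > dump.db

# restore.sh (hypothetical): feed the dump back into psql inside the container.
docker exec -i db psql -U postgres sparkdossier < dump.db
```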
Lastly, I created the crane.yaml file containing the description of how I wanted the containers built and put together:
containers:
web:
image: web
dockerfile: web
run:
publish: ["80:80"]
link: ["app:app", "blog:blog"]
restart: "always"
detach: true
app:
image: app
dockerfile: sparkdossier/app
run:
link: ["db:db"]
restart: "always"
detach: true
db:
image: db
dockerfile: sparkdossier/postgresql
run:
restart: "always"
detach: true
blog:
image: dockerfile/ghost
run:
volume: ["blog/app:/ghost-override"]
restart: "always"
detach: true
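For comparison, a rough sketch of the manual Docker commands this crane.yaml replaces (flags mirror the run options in the yaml; paths assume the directory structure above):

```shell
# Build the custom images from their Dockerfiles.
docker build -t db sparkdossier/postgresql
docker build -t app sparkdossier/app
docker build -t web web

# Start containers in dependency order, linking them as described in crane.yaml.
docker run -d --restart=always --name db db
docker run -d --restart=always --name app --link db:db app
docker run -d --restart=always --name blog \
    -v "$(pwd)/blog/app:/ghost-override" dockerfile/ghost
docker run -d --restart=always --name web \
    -p 80:80 --link app:app --link blog:blog web
```

Running, rebuilding, and tearing all of this down by hand is exactly the kind of repetition the orchestration tool removes.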
In crane.yaml, I set each container’s restart policy to “always”, which means the containers will restart automatically if they crash or whenever the host server reboots.
Finally, I had everything set up and ready to go. Starting up all the apps involves simply running the command crane lift.
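A quick sanity check after lifting might look like this (a sketch; it assumes the container names from crane.yaml and that port 80 is published on the host):

```shell
# List running containers and their status; web, app, db, and blog should all be "Up".
docker ps --format '{{.Names}}\t{{.Status}}'

# Hit nginx on port 80 with a Host header to exercise the name-based routing.
curl -sI -H 'Host: eugene.io' http://localhost | head -n 1
```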
Conclusion
Overall, using Docker was a really smooth experience. Using pre-built images saved me from having to download, install, and set up dependencies on my own. With fast startup times and image caching, I could also try out various configurations without worrying about messing anything up. I also have a lot more confidence in the repeatability of my setup.