Cyrus Stoller

Installing Ghost using Docker

A few days ago I decided that I wanted to start a new blog where I’ll be sharing a short idea every day. I’ve been following the Ghost project for a while and thought this would be a good opportunity to try it out. I’ve also been interested in getting more familiar with Docker. So, I decided to deploy my new Ghost blog using Docker. It took me longer than expected, so I thought I’d share what I did and hopefully spare you some headaches. I’m not an expert on Docker, so if you have suggestions on how to improve this, please let me know.

Setting up your VPS and installing Docker

I have open sourced some puppet manifests that I’ve used to do this before. To use them follow the instructions here.

$ deploy/puppet_apply_with_args.sh docker

And subsequently with

$ deploy/update.sh deployer@host /tmp/puppet docker

Feel free to install Docker however you like. For this tutorial I’m using Docker version 1.6.2.

To run docker commands without sudo, you need to change the ownership of /var/run/docker.sock. Alternatively, you can create a new unix group called docker and add your user to it; in my case I just used my existing deployer user.

$ sudo chown deployer:deployer /var/run/docker.sock
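If you prefer the group-based approach instead, here is a sketch, assuming your login user is deployer as above:

```shell
# Create the docker group (it may already exist) and add the deployer user to it
sudo groupadd docker
sudo usermod -aG docker deployer
# Log out and back in (or run `newgrp docker`) for the new membership to take effect
```

Either way, the goal is the same: giving your user access to the Docker daemon's socket.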

Installing the Docker image

Now that Docker is installed on your VPS, you need a Docker image to run. I found the official Ghost repository on Docker Hub here. To install it, I ran:

$ sudo docker pull ghost:0.6.4
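To confirm the pull succeeded, you can list the Ghost images available locally:

```shell
# List locally available ghost images; the 0.6.4 tag should appear
docker images ghost
```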

Starting your container

Now the Docker image is installed, but it is not running yet. To start a container from it, I ran:

$ docker run -d --name chirp -v /var/www/chirp:/var/lib/ghost -p 2368:2368 \
  -e NODE_ENV=production ghost:0.6.4

It took me longer than I’d like to admit to realize that I needed to explicitly set NODE_ENV to get Ghost to run in production. The name chirp is unimportant, but is descriptive for my project. The -v flag (more info) mounts /var/www/chirp from the host into the container. This is important so that the config.js and sqlite database can live outside the disposable container. And the -p flag (more info) publishes port 2368, which is important so that nginx can route requests to our container.
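Once the container is started, you can verify that it is running and watch its output:

```shell
# Show the running container (should list chirp with port 2368 published)
docker ps --filter name=chirp
# Tail the Ghost logs to confirm it booted in production mode
docker logs chirp
```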

This is where I thought I’d be done setting up Ghost, but there were a couple more steps.

First, you need to copy the paths section from the development section to the production section in your config.js.

Second, you need to set up a mail service so Ghost can send account emails. I used Mandrill because of their generous free tier, but you can use whichever mail provider you like. I figured this out with help from this blog post by Marshall Thompson.

After these changes, here is what my production section looks like:

config = {
  production: {
    url: 'http://chirp.cyrusstoller.com',
    mail: {
      transport: 'SMTP',
      options: {
        host: 'smtp.mandrillapp.com',
        service: 'Mandrill',
        port: 587,
        auth: {
          user: 'xxx@example.com',
          pass: 'your api key'
        }
      }
    },
    database: {
      client: 'sqlite3',
      connection: {
        filename: path.join(process.env.GHOST_CONTENT, '/data/ghost.db')
      },
      debug: false
    },
    server: {
      // Host to be passed to node's `net.Server#listen()`
      host: '0.0.0.0',
      // Port to be passed to node's `net.Server#listen()`, 
      // for iisnode set this to `process.env.PORT`
      port: '2368'
    },
    paths: {
      contentPath: path.join(process.env.GHOST_CONTENT, '/')
    }
  }
}
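Because of the -v mount, config.js lives at /var/www/chirp/config.js on the host (assuming the volume path from the run command above). After editing it, restart the container so Ghost picks up the changes:

```shell
# Edit the shared config on the host, then restart the container
sudo nano /var/www/chirp/config.js
docker restart chirp
```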

Setting up nginx

The last step is to configure nginx to proxy requests to your Docker container, so visitors can see your blog on port 80, the default for HTTP.

I added the following to a file in /etc/nginx/sites-enabled/.

server {
  listen 80;
  server_name chirp.cyrusstoller.com;

  access_log /var/log/nginx/chirp_access.log;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://localhost:2368;
  }
}

Then I reloaded nginx with:

$ sudo service nginx reload

The reason that I use reload instead of restart is that reload parses the new configuration files before terminating the old worker processes. With restart, a bad configuration file could leave you with no running nginx process at all.
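To be extra safe, you can validate the configuration before reloading, so a syntax error never reaches the running server:

```shell
# Test the nginx configuration for syntax errors, and reload only if it passes
sudo nginx -t && sudo service nginx reload
```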

Conclusion

I’m eager to learn more about Docker best practices. Let me know if you have any tips.

Category Tutorial