Bits of Simplicity

Accessing the localhost of one Docker container from another

01/17/2018

This requires a little bit of explanation before we can really dive into the meat and potatoes.
Let's say you have a WordPress project in Docker, and your Docker environment is built to reflect your production environment.
Your docker-compose file looks something like the following: a MySQL container with a data volume, a php-fpm container exposing port 9000, and finally an nginx container proxying to php-fpm's exposed port and serving the site on port 80. You have a link set up between php and nginx so they can talk to each other. On your host, visiting http://localhost:80 brings up your project.

You are coding away and all is well; until it isn't. For one reason or another you need to have your backend code make a request to itself. Maybe you are implementing batch processing, or maybe you are using a badly written theme; whatever the case might be. Your code looks something like this: `$response = wp_remote_get( site_url( '/batchit/' ) );` and it isn't working. Even more frustrating is that it works perfectly on production. The reason might not be obvious at first. Docker containers are like mini isolated computers: they are confined to their own filesystems, process lists, and network stacks, with resource limits. When containers are linked, Docker creates a virtual network over which the isolated containers can talk to each other. Remember that in our setup `site_url()` is `localhost`, but on production it is changed to our live URL. Localhost inside of nginx is not the same as localhost inside of php-fpm. So when our code runs inside of php-fpm and accesses localhost, it finds there is nothing listening on port 80.
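
You can see the difference straight from the shell. A quick sketch, assuming the services are named `nginx` and `php` as in the compose example further down and that curl is available in the php image; exact output will vary:

# From the host, nginx answers on port 80
curl -I http://localhost/

# From inside the php-fpm container, nothing is listening on port 80
docker-compose exec php curl -I http://localhost/
# curl: (7) Failed to connect to localhost port 80: Connection refused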

Now there are a few clever DNS hacks that will let you reach nginx's localhost, but those require you to change `site_url()`, which in turn means changing the URL you access the site from. There is a better way! Docker will allow you to have two or more containers share the same network stack. They will share the same IP address and port space as the first container, and processes in the two containers will be able to connect to each other over the loopback interface. Exactly what we need. If you are not using docker-compose you can use the `--net` flag, and if you are using docker-compose you can use `network_mode`. In both cases the options are the same. Two options in particular:

network_mode: "service:[service name]"
network_mode: "container:[container name/id]"


In our case we just need to remove the link from php-fpm to nginx and add `network_mode: "service:nginx"` to the php service. We also want to add `depends_on` with nginx to our php service, to ensure that nginx's network is ready before we try to start php. Now both containers share the same network namespace! Note: you may also have to edit your nginx config to point to localhost:9000 instead of php:9000.

Here is an example docker-compose:

version: '3'
services:
    mysql:
      image: mysql:5.6
      volumes:
        - mysql_data:/var/lib/mysql
      environment:
        MYSQL_ROOT_PASSWORD: secret
        MYSQL_DATABASE: project
        MYSQL_USER: project
        MYSQL_PASSWORD: project
      expose:
        - 3306

    nginx:
      build: ./build/docker/nginx/
      ports:
        - 80:80
      links:
        - mysql
      volumes:
        - .:/var/www/html

    php:
      build: ./build/docker/php/
      depends_on:
        - nginx
      network_mode: "service:nginx"
      volumes:
        - .:/var/www/html

    phpmyadmin:
      image: phpmyadmin/phpmyadmin
      ports:
        - 8080:80
      links:
        - mysql
      environment:
        PMA_HOST: mysql
        
volumes:
  mysql_data:
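
With this file up (and, as noted above, nginx's upstream pointed at localhost:9000 instead of php:9000), the loopback request that failed earlier should now succeed. A quick sanity check, assuming curl is available in the php image:

docker-compose up -d

# This runs inside php-fpm's network namespace, which is shared with nginx
docker-compose exec php curl -I http://localhost/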

Setting up Duplicity & BackBlaze for automated backups

01/13/2018

I got a new Dell XPS recently for development, so naturally I installed Linux, and like any good technophile I wanted an easy way to do backups just in case anything happens. There are a few ways to approach backups: backing up to a local hard drive, a remote hard drive, or even the cloud. The criteria I was looking for were automated, encrypted cloud backups. I decided on Duplicity for doing the backups and BackBlaze as the storage backend.

Duplicity is a software suite that provides encrypted, digitally signed, versioned, remote backups of files. It's GPL, free, and pretty awesome. Duplicity backs up directories by producing encrypted tar-format volumes. It uses plain old GnuPG for signing and encryption, and the rsync algorithm for efficient uploads. There is a pretty sweet bash wrapper called duplicity-backup.sh. It allows you to create a configuration file to make working with Duplicity even easier.
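
Under the hood the wrapper just drives the duplicity command itself. A minimal sketch of a raw backup to B2, using duplicity's b2:// URL scheme with a placeholder account ID, application key, and bucket name:

# Back up the home directory straight to a B2 bucket (credentials are placeholders)
duplicity /home/user b2://<accountID>:<applicationKey>@my-laptop-backups/laptop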

BackBlaze is a pretty well known industry leader in cloud storage. They do have backup software and plans if you are on Windows, but since I am not, and I want to ensure my backups are secure, I went with their B2 Cloud Storage offering. At the time of writing, the first 10 GB of storage is free and the first gigabyte downloaded per day is also free.

Setup

Create a new bucket on BackBlaze. Installing Duplicity on Ubuntu is a piece of cake; just apt-get install it:

sudo apt-get install duplicity

Then install the B2 Python package using pip:

sudo pip install --upgrade b2

Next I cloned down duplicity-backup.sh into my home directory.

git clone https://github.com/zertrin/duplicity-backup.sh.git .duplicity-backup

Make a copy of the example config and edit it to suit your needs. At this point you can either use a passphrase for encryption or use/create GPG keys.

cd .duplicity-backup
cp duplicity-backup.conf.example duplicity-backup.conf
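
For reference, the handful of settings that matter most look roughly like this. The variable names follow the example config shipped with duplicity-backup.sh (they may differ slightly between versions), and the paths, bucket name, and credentials below are placeholders:

# What to back up and where to send it
ROOT="/home/user"
DEST="b2://<accountID>:<applicationKey>@my-laptop-backups/laptop"
INCLIST=( "/home/user/Documents" "/home/user/Projects" )
EXCLIST=( "/home/user/Downloads" )

# Encrypt with a simple passphrase (or set GPG key IDs instead)
ENCRYPTION='yes'
PASSPHRASE='something-long-and-random'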

After you are done with the configuration, it is a good idea to test by running a backup and then verifying it:

./duplicity-backup.sh -b
./duplicity-backup.sh -v

Automation

At this point you should be able to back up manually, but that isn't very much fun; I want it automated. Your first thought might be to set up cron to fire off your backups at set times, but the issue with cron is that if you miss the scheduled time (say, the laptop is off), you have to wait for the next run. Enter anacron. Anacron is a program that performs periodic command scheduling, which is traditionally done by cron, but without assuming that the system is running continuously. Thus, it can be used to control the execution of daily, weekly, and monthly jobs on systems that don't run 24 hours a day. Perfect for firing off our backups. It is installed on most systems by default. I decided to create a user-level anacrontab.

mkdir -p ~/.anacron/{etc,spool}
vim ~/.anacron/etc/anacrontab


# ~/.anacron/etc/anacrontab: user configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# period(days) delay(minutes) job-identifier command
1 1 duplicity.backup $HOME/.duplicity-backup/duplicity-backup.sh -c $HOME/.duplicity-backup/duplicity-backup.conf -b

Then add a crontab entry to ensure anacron fires:

crontab -e
@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

And that is it! Anacron logs to syslog, so to check that it is running you can simply fire it off by hand and check the syslog.

/usr/sbin/anacron -s -f -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool


sudo cat /var/log/syslog | grep duplicity.backup


Backups should now be encrypted and automatic. If an issue ever comes up, just restore using Duplicity.
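
A minimal sketch of what that restore could look like with duplicity directly, reusing the placeholder B2 URL from earlier and /tmp/restore as a hypothetical destination (duplicity-backup.sh also has restore options of its own):

# Pull the latest backup out of B2 into a local directory
duplicity restore b2://<accountID>:<applicationKey>@my-laptop-backups/laptop /tmp/restore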