Bits of Simplicity

Neglecting this site lately

07/17/2018

I have been neglecting this site lately. Unfortunately, I have been pretty busy these past months and haven't had the time. One of the main things I have been working on is my boat.


I took her out for the maiden voyage and found some small issues. Everything is fixable, but since I am not used to this type of work, it is taking me longer than expected.


Next up, I set up an XMPP server for some friends and myself to mess around on. I want to move away from Google Hangouts, and XMPP made the most sense. I am planning to do a good write-up on how the process went.


That's all for now!

I bought a boat

05/23/2018

I bought a sailboat: a MacGregor 25. It should be a great boat to expand my skills with. I can't wait to get it into the water.

Taking back control

04/19/2018

Anyone paying attention to the news lately will know that privacy has become a mainstream issue. For years users have been trading their personal data for access to online services, and the information economy dominates the modern internet. With GDPR enforcement coming into full effect and the Facebook Cambridge Analytica scandal, it is becoming hard to ignore. More and more users are deleting their Facebook and Google accounts in search of more privacy-conscious services, myself included. I have decided to take back control over my data and services, or at least as much of it as I can.

Reducing my second- and third-party data footprint is no small task. To start, I want to replace as many of the services I use with self-hosted FOSS alternatives as I can. For the services that I can't or don't want to self-host, I will use privacy-conscious providers where possible. The first and most important change is email.

For years I used Gmail as my primary email provider, ever since I was a teenager, and at the time it was by far the best email provider around: little to no spam and a fantastically fast web client. The company motto of 'do no evil' also resonated with me. But I have since grown up and realized that if you have to tell yourself to 'do no evil', there is a good chance you might be doing evil. Not only does Google scan email contents and metadata to power their targeted advertising platform, users are also completely at the behest of Google and its policies. If you don't play by their rules, they can ban you and you lose access to your account. To put it simply: you do not own your Gmail address, Google does.

This idea of ownership is actually pretty important. It's the difference between being a peer on the network and being just another dumb terminal. An email address is the primary method of contact on the internet, and losing access to one can leave you cut off from online services. Having an address on a domain you own means that you can switch email providers anytime you wish, without having to worry about updating your address across different online services.

Anyone that has self-hosted email will tell you that it can be a bit of a nightmare. I have done it in the past and don't want to go down that route again. Enter ProtonMail. ProtonMail is a privacy-focused, secure email provider based in Switzerland. All messages are end-to-end encrypted, with minimal access to user data. They have free accounts, and paid accounts that give you the ability to use your own domain. I have been using them for about three months now, and I can't recommend them enough. Over time I have been updating different services to the new address. It is unlikely that I will completely migrate away from Gmail (at least for now), but it is now my secondary provider.

With email out of the way, the next big piece of my second- and third-party data footprint is social media. I don't have a Facebook account (and never have), and I am not active on Twitter or Google+. I do have Reddit & Hacker News accounts that I use on a regular basis, but I am trying to use those less and less for various reasons. As odd as it might sound, I do want to be more social on the Internet, but I want control over what I choose to share and who has access to that information. That is where GNU Social and the fediverse come into play.

GNU Social is a free and open source microblogging platform that you can host yourself. Built on top of the OStatus protocol, it federates with other platforms such as Mastodon. GNU Social is still a little rough around the edges, but it gets the job done. If you are looking for a more traditional experience, then I would suggest giving Mastodon a try.

Taking back control of my media consumption has been a bit of a challenge. One major change is using an RSS reader. Managing my own feeds means I am no longer blasted with click-bait and useless information, or tracked by every ad network under the sun. If I find a website or blog interesting, I add it to my RSS reader. I have been self-hosting an instance of Tiny Tiny RSS, and it is excellent for my needs.

I have moved away from Chrome and back to Firefox. It's fast, lightweight, and not maintained by an ad company. Although I have a few issues with Pocket being included with Firefox out of the box, it is a good compromise. I have been using Firefox Developer Edition for a few months at work and have had no major issues.

De-Googling even further, I have been using DuckDuckGo as my primary search engine, and it has been fantastic. At first, I found myself using bangs to fall back to Google on an almost daily basis, but now I actually find DDG to have better search results. Google's mission has changed from being the best at search to being the best at stealing your attention. When searching for a topic on Google, the first page is almost always filled with "news articles" from different "news" sites. They are pushing larger publishing platforms instead of quality information.

These changes are a small step in the right direction. I can't get back the data that has already been collected, but I can limit my future exposure. Personal privacy isn't dead yet, but if large companies like Amazon, Facebook, and Google have their way, it will be. The larger battle over privacy is about to happen, and I for one don't want to give them any more ammo than they already have.

Dude, where's my php.ini?

03/12/2018

Ever need to adjust your php.ini file but never know which file to edit?

If you need to find the ini file for php-cli, simply run:

php --ini

Unfortunately, if you want to find the ini file for mod-php or php-fpm, you will have to add

<?php phpinfo(); ?>

to a file and do a quick search.

Usually I find that you can get the location of the ini files using the CLI command and work from there to find the mod-php/fpm ones.
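
For reference, here is the rough routine; the webroot path and URL below are assumptions, so adjust them to match your setup.

# CLI: print the loaded php.ini and the scanned conf.d directory
php --ini | grep -i "configuration file"

# mod-php / php-fpm: drop a temporary phpinfo() file into the webroot
# (assumes the webroot is /var/www/html and the site answers on localhost)
echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php
curl -s http://localhost/info.php | grep "Loaded Configuration File"

# Clean up when done
sudo rm /var/www/html/info.php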

Admin Overhaul and tagging

03/12/2018

I have finally gotten around to overhauling the admin area and adding tagging.

The admin area is completely new. It is much easier to navigate, and it is a strong base to build on. I have also switched out TinyMCE for the Summernote editor, and I am loving it so far. TinyMCE was just a pain to include and work with.

I have also added tagging, with categories coming soon. This will allow me to organize content a little better.

Next up is adding a payload that I can download, sign with my GPG key, and reupload. The idea is that a JavaScript library would verify the payload on the front end. I am also looking to add page support for more evergreen content.
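
The signing half of that workflow would most likely be a detached GPG signature over the payload; the file names below are just placeholders for the sake of illustration.

# Produce an ASCII-armored detached signature for the downloaded payload
gpg --armor --detach-sign payload.json

# Verify locally before reuploading payload.json and payload.json.asc
gpg --verify payload.json.asc payload.json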

More style updates

02/04/2018

Made a few quick style updates to the blog; also dropped the sha1 from each post.

The main one is the styling of pre elements. They are much more readable and stand out from normal text now.

Accessing the localhost of one Docker container from another

01/17/2018

This requires a little bit of explanation before we can really dive into the meat and potatoes. Let's say you have a WordPress project in Docker, and your Docker environment is built to reflect your production environment. Your docker-compose file looks something like the following: a mysql container with a data volume, a php-fpm container exposing port 9000, and finally an nginx container serving PHP's exposed FPM port over port 80. You have a link set up between php and nginx so they can talk to each other. On your host, visiting http://localhost:80 brings up your project.

You are coding away and all is well; until it isn't. For one reason or another you need to have your backend code make a request to itself. Maybe you are implementing batch processing, or maybe you are using a badly written theme; whatever the case might be. Your code looks something like this: `$response = wp_remote_get( site_url( '/batchit/' ) );` and it isn't working. Even more frustrating is that it works perfectly on production. The reason might not be obvious at first. Docker containers are like mini isolated computers. They are confined to their own filesystems, process lists, and network stacks, with resource limits. When containers are linked, Docker creates a virtual network in which the isolated containers can talk to each other. Remember that in our setup `site_url()` is `localhost`, but on production that is changed to our live URL. Localhost inside of nginx is not the same as localhost inside of php-fpm. So when our code runs inside of php-fpm and accesses localhost, it finds there is nothing running on port 80.

Now there are a few clever DNS hacks that will allow you to access nginx's localhost, but those require you to change `site_url()`, which in turn means changing the URL you access the site from. There is a better way! Docker allows two or more containers to share the same network stack. They will share the same IP address and port numbers as the first container, and processes in the two containers will be able to connect to each other over the loopback interface. Exactly what we need. If you are not using docker-compose you can use the `--net` flag, and if you are using docker-compose you can use `network_mode`. In both cases the options are the same. Two options in particular:

network_mode: "service:[service name]"
network_mode: "container:[container name/id]"


In our case we just need to remove the link between php-fpm and nginx and add `network_mode: "service:nginx"` to the php service. We also want to make sure to add `depends_on` with nginx to our php service. This ensures that the nginx network is ready before we try to start the php service. Now both containers share the same network namespace! Note: you may also have to edit your nginx config to point to localhost:9000 instead of php:9000.
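
If you do need that change, a quick substitution does the trick; the config path below is an assumption based on the build directory used in this example, so adjust it to match your project.

# Point the PHP upstream at the shared loopback interface
# (assumes the vhost config lives at build/docker/nginx/default.conf)
sed -i 's/fastcgi_pass php:9000;/fastcgi_pass localhost:9000;/' build/docker/nginx/default.conf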

Here is an example docker-compose:

version: '3'
services:
    mysql:
      image: mysql:5.6
      volumes:
        - mysql_data:/var/lib/mysql
      environment:
        MYSQL_ROOT_PASSWORD: secret
        MYSQL_DATABASE: project
        MYSQL_USER: project
        MYSQL_PASSWORD: project
      expose:
        - 3306

    nginx:
      build: ./build/docker/nginx/
      ports:
          - 80:80
      links:
        - mysql
      volumes:
        - .:/var/www/html

    php:
      build: ./build/docker/php/
      depends_on:
        - nginx
      network_mode: "service:nginx"
      volumes:
        - .:/var/www/html

    phpmyadmin:
      image: phpmyadmin/phpmyadmin
      ports:
        - 8080:80
      links:
        - mysql
      environment:
        PMA_HOST: mysql
        
volumes:
  mysql_data:
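
A quick way to sanity-check that the two containers really do share a network namespace is to hit localhost from inside the php container; this assumes curl is installed in the php image.

# From inside the php container, localhost:80 should now answer as nginx
docker-compose exec php curl -I http://localhost:80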

Setting up Duplicity & BackBlaze for automated backups

01/13/2018

I got a new Dell XPS recently for development, so naturally I installed Linux, and like any good technophile I wanted an easy way to do backups just in case anything happens. There are a few ways to approach backups: backing up to a local hard drive, a remote hard drive, or even the cloud. The criteria I was looking for were automated, encrypted cloud backups. I decided on Duplicity for doing the backups and BackBlaze as the storage backend.

Duplicity is a software suite that provides encrypted, digitally signed, versioned, remote backups of files. It's GPL, free, and pretty awesome. Duplicity backs up directories by producing encrypted tar-format volumes. It uses plain old GnuPG for signing and encryption, and the rsync algorithm for efficient uploads. There is a pretty sweet bash wrapper called duplicity-backup.sh that lets you create a configuration file to make working with Duplicity even easier.

BackBlaze is a well-known industry leader in cloud storage. They do have backup software and plans if you are on Windows, but since I am not, and I want to ensure my backups are secure, I went with their B2 Cloud Storage offering. At the time of writing, the first 10 GB of storage is free and the first gigabyte downloaded per day is also free.

Setup

Create a new bucket on BackBlaze. Installing Duplicity on Ubuntu is a piece of cake; just apt-get install it:

sudo apt-get install duplicity

Then install the B2 tool using pip:

sudo pip install --upgrade b2

Next I cloned down duplicity-backup.sh into my home directory.

git clone https://github.com/zertrin/duplicity-backup.sh.git .duplicity-backup

Make a copy of the example config and edit it to suit your needs. At this point you can either use a password for encryption or use/create gpg keys.

cd .duplicity-backup
cp duplicity-backup.conf.example duplicity-backup.conf
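
The handful of settings I had to touch looked roughly like the excerpt below. The variable names come from the example config (they may differ between versions of the wrapper), and the bucket, key IDs, and paths are placeholders.

# duplicity-backup.conf (excerpt with placeholder values)
ROOT="/home/username"                                  # what to back up
DEST="b2://accountId:applicationKey@bucket-name/xps"   # BackBlaze B2 target
INCLIST=( "/home/username/Documents" "/home/username/Projects" )
EXCLIST=( "/home/username/Downloads" )
ENCRYPTION='yes'
GPG_ENC_KEY="gpg-key-id"                               # or set PASSPHRASE for symmetric encryption
GPG_SIGN_KEY="gpg-key-id"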

After you are done with configuration it is a good idea to test your backup.

./duplicity-backup.sh -b   # run a first backup
./duplicity-backup.sh -v   # then verify it

Automation

At this point you should be able to back up manually, but that isn't much fun; I want it automated. Your first thought might be to set up cron to fire off your backups at set times, but the issue with cron is that if you miss the scheduled time (say the laptop is off), you have to wait for the next run. Enter anacron. Anacron performs periodic command scheduling, which is traditionally done by cron, but without assuming that the system is running continuously. Thus, it can be used to control the execution of daily, weekly, and monthly jobs on systems that don't run 24 hours a day. Perfect for firing off our backups. It is installed on most systems by default. I decided to create a user anacrontab.

mkdir -p ~/.anacron/{etc,spool}
vim ~/.anacron/etc/anacrontab


# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# period delay job-identifier command
1 1 duplicity.backup $HOME/.duplicity-backup/duplicity-backup.sh -c $HOME/.duplicity-backup/duplicity-backup.conf -b

Then add a crontab to ensure anacron fires.

crontab -e
@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

And that is it! Anacron logs to syslog, so to check that it is running you can simply fire it off by hand and check the syslog.

/usr/sbin/anacron -s -f -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

 

sudo cat /var/log/syslog | grep duplicity.backup

 

Backups are now encrypted and automatic. If an issue ever comes up, just restore using Duplicity.
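
If a restore is ever needed, the plain Duplicity invocation is worth keeping handy; the B2 URL and target directory below are placeholders (the wrapper script also has restore options of its own).

# Restore the latest backup from B2 into ~/restore (placeholder credentials)
duplicity restore b2://accountId:applicationKey@bucket-name/xps ~/restore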