This requires a little bit of explanation before we can really dive into the meat and potatoes.
Let's say you have a WordPress project in Docker. Your Docker environment is built to reflect your production environment.
Your docker-compose file looks something like the following: a mysql container with a data volume, a php-fpm container exposing port 9000, and finally an nginx container proxying to php-fpm's exposed port and serving the site over port 80. You have a link set up between the php and nginx containers so they can talk to each other. On your host, visiting http://localhost:80 brings up your project.
You are coding away and all is well; until it isn't. For one reason or another you need your backend code to make a request to itself. Maybe you are implementing batch processing, or maybe you are using a badly written theme; whatever the case may be, your code looks something like this: `$response = wp_remote_get( site_url( '/batchit/' ) );` and it isn't working. Even more frustrating is that it works perfectly on production. The reason might not be obvious at first. Docker containers are like miniature isolated computers: each is confined to its own filesystem, process list, and network stack, with its own resource limits. Linking containers creates a virtual network over which those isolated containers can talk to each other. Remember that in our setup `site_url()` returns `http://localhost`, but on production it returns our live URL. Localhost inside the nginx container is not the same as localhost inside the php-fpm container. So when our code runs inside php-fpm and requests localhost, it finds nothing listening on port 80.
Now there are a few clever DNS hacks that would let you reach nginx's localhost, but those require changing `site_url()`, which in turn means changing the URL you access the site from. There is a better way! Docker allows two or more containers to share the same network stack. The second container shares the IP address and ports of the first, and processes in the two containers can connect to each other over the loopback interface. Exactly what we need. If you are not using docker-compose you can use the `--net` flag, and if you are using docker-compose you can use `network_mode`. In both cases the values are the same. Two options in particular:
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
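For example, without docker-compose the equivalent plain `docker` invocation might look like this (the container names and the `my-php-fpm` image are illustrative):

```shell
# Start nginx normally, publishing port 80 on the host.
docker run -d --name nginx -p 80:80 nginx:latest

# Start php-fpm inside nginx's network namespace: both containers
# now share one IP, one set of ports, and one loopback interface.
docker run -d --name php --net container:nginx my-php-fpm
```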
In our case we just need to remove the link from php-fpm to nginx and add `network_mode: "service:nginx"` to the php service. We also want to add `depends_on` with nginx to the php service, to ensure the nginx container (and its network) is started before the php service. Now both containers share the same network namespace! Note: you may also have to edit your nginx config to point to localhost:9000 instead of php:9000.
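That nginx change is a one-line edit to the `fastcgi_pass` directive. A sketch of the relevant location block (your surrounding config will differ):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # php-fpm now shares nginx's network namespace, so it is
    # reachable over loopback rather than the old link alias.
    fastcgi_pass localhost:9000;  # was: fastcgi_pass php:9000;
}
```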
Here is an example docker-compose:
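This is a minimal sketch; the service names, image tags, volume paths, and credentials are illustrative, so adapt them to your project:

```yaml
version: "2"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ./src:/var/www/html

  php:
    image: php:7-fpm
    # Share nginx's network stack; no link or exposed ports needed here.
    network_mode: "service:nginx"
    depends_on:
      - nginx
    volumes:
      - ./src:/var/www/html

volumes:
  db_data:
```

Note that the php service defines no `ports` or `links` of its own; with `network_mode: "service:nginx"` those would conflict, since it borrows nginx's network stack entirely.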