You know what? Not so long ago, my life as a developer was rather dull. Tasks took longer than they should have, and there were always those pesky bugs that left me scratching my head, trying to figure out why something worked incorrectly on my local setup. At their worst, they pushed me to the point where starting over from scratch felt like the only option. But things are different now! Adding Docker to my workflow has made everything much saner, because without containerization at least one thing would always be broken for me: installation processes for packages, with tools such as Composer.
Introduction to Isolation
Docker is a technology for packaging and running software in isolated environments called containers. This article will not go into detail about everything Docker does, but it is helpful to know some of its features so we can make better decisions about whether to use this tool for a given project or production environment.
Most devs I know typically have a single work machine but they do tend to work on multiple projects at a time. Each of these projects has different and conflicting software dependencies. One project might only be compatible with PHP 5.6 while another is on the cutting edge with PHP 8. This could also extend to the database layer where different projects require different versions of MySQL for example.
There are a few isolation approaches I used before Docker. The first is a virtual machine: a software program that acts as a separate computer, capable of running its own operating system, applications, and programs independently of the host.
Now to my favorite isolation approach: containerization. Containerization is a lightweight alternative to virtual machines that involves encapsulating an application in a container with its operating environment. While Virtual Machines contain an entire Operating System, Containerization technologies share a single host operating system and appropriate binaries, libraries, or drivers across containers. The most popular containerization technology is the open-source Docker, created by Docker, Inc.
Now let’s create the project
Most popular software packages already have Docker images packaged and available for download on Docker Hub. Devs can also package their own applications as Docker images, for global distribution via Docker Hub or for internal distribution within their organizations. In this tutorial, we are going to make use of images that have already been packaged for us.
We can create and configure containers by hand by typing Docker commands in our terminal. However, I prefer to define the configuration for each service/container in a YAML configuration file using a tool called Docker Compose.
The objective of this project is to set up an environment that allows us to work on our WordPress plugins and themes. This environment will serve as a sandbox where we can quickly test our changes without ever having to restart our containers. It will also be extremely easy to move the project to a different machine. For example, I work on my iMac at home and when I’m on the go I always transfer the projects onto my MacBook Pro.
|__ docker-compose.yml
|__ plugins
|   |__ advanced-custom-fields
|__ themes
|   |__ name-of-theme
|   |__ ...
This is the container that will house the WordPress application. Think of this as a separate stand-alone machine. This is also the machine you will hit to access the WordPress site.
First, we specify the name of the service: wordpress. Next, we set the image; this tells Docker which image to download from Docker Hub. The restart field tells Docker to restart the container if it fails. In ports, we tell Docker to expose this service on host port 80 and tunnel that to port 80 on the WordPress container, meaning anyone hitting our host machine on port 80 will be tunnelled to port 80 on the WordPress container. Finally, in the environment field, we define all the environment variables that should be supplied to the container. The WORDPRESS_DB_PASSWORD variable is required by the WordPress image; it is the password WordPress will use to connect to the MySQL container.
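Putting those fields together, the WordPress service section of our docker-compose.yml looks like this (the values mirror the full file we assemble later; the password is just an example):

```yaml
services:
  wordpress:
    image: wordpress:5.2.1       # official WordPress image from Docker Hub
    restart: always              # restart the container if it fails
    ports:
      - 80:80                    # host port 80 -> container port 80
    environment:
      WORDPRESS_DB_PASSWORD: example   # must match the MySQL root password
```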
The database lives in this container. It won’t be exposed like the WordPress container is, but we do want it to be reachable from the WordPress container. Docker Compose takes care of that for us by making sure that all the containers defined in a file can talk to each other unless we restrict otherwise. We do not specify what port number to expose here because port 3306 is already exposed by the MySQL image.
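Because all services in a Compose file share a network, the WordPress container can reach the database using the service name as a hostname. As I understand it, the official WordPress image reads the database host from the WORDPRESS_DB_HOST environment variable and defaults it to mysql, which is why naming our database service mysql works without any extra configuration. If we chose a different service name, say a hypothetical db, we would need to point WordPress at it explicitly:

```yaml
services:
  db:                                  # hypothetical service name instead of "mysql"
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  wordpress:
    image: wordpress:5.2.1
    environment:
      WORDPRESS_DB_HOST: db            # tell WordPress which hostname the database answers on
      WORDPRESS_DB_PASSWORD: example
```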
services:
  ...
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  ...
volumes:
  mysql-data:
We will skip the terms already covered in the WordPress service. Here we also specify an environment variable, MYSQL_ROOT_PASSWORD, whose value is used as the password for the root user. Next, we specify a volume to hold the data. Docker containers are designed to be stateless: they neither read nor store information about their state from one session to the next, so if we bring down or destroy our Docker setup, we lose all information collected in that session. In the case of a database, we do not want to lose that data. A named volume, as defined in the MySQL service, instructs Docker to persist data from the container onto the host machine. In the snippet above, we are telling Docker to persist the container's /var/lib/mysql directory. We do not have to specify the actual location on the host where this data should be stored; we only identify the volume by giving it a name. This way, multiple containers can share the same persistent volume.
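To make the distinction concrete, here is a sketch of the two mount styles Compose supports (use one or the other for a given target path, not both):

```yaml
    # Named volume: Docker manages where the data lives on the host.
    # Requires a matching entry under the top-level "volumes:" key.
    volumes:
      - mysql-data:/var/lib/mysql

    # Bind mount alternative: data lands in a directory next to
    # docker-compose.yml, at a host path we choose ourselves.
    # volumes:
    #   - ./mysql-data:/var/lib/mysql
```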
Making our plugin directory available on Docker
The final step is to expose our plugin to the WordPress container. To do this, we need to know two things: the location of our plugin on the host machine and the location of the plugin within the WordPress container. Based on the project structure outlined at the top, our plugin should be in the plugins directory of our project folder. We can specify the source location in two ways: relatively, or by providing the full path. We will go with the relative approach for the location of the plugin on the host machine and the absolute approach for the location of the plugin within Docker.
services:
  wordpress:
    image: wordpress:5.2.1
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - ./plugins/dev-plugin:/var/www/html/wp-content/plugins/dev-plugin
Now let’s hook it all together
version: '3.1'

services:
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql

  wordpress:
    image: wordpress:5.2.1
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - ./plugins/dev-plugin:/var/www/html/wp-content/plugins/dev-plugin

volumes:
  mysql-data:
Once we have our configuration, we can fire up our Docker containers with the docker-compose command: docker-compose up -d. This command pulls all the required images and starts our containers. Once they are running, open http://localhost in your browser and you will be redirected to the WordPress setup page, where you will be asked to set up your new WordPress instance.
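A few standard docker-compose subcommands come in handy while working with this setup:

```
docker-compose up -d               # pull images (if needed) and start containers in the background
docker-compose ps                  # list the containers for this project
docker-compose logs -f wordpress   # follow the WordPress container's logs
docker-compose down                # stop and remove the containers (named volumes survive)
docker-compose down -v             # also remove named volumes, wiping the database
```

Note that a plain docker-compose down leaves the mysql-data volume intact, which is exactly why we used a named volume for the database.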
Once you get the hang of it, Docker makes it easy to get a project up and running quickly. But its real appeal, at least in my opinion, is that it makes it even easier to carry your projects around and work on multiple projects without conflicting dependencies ruining your day.