Project Reasoning

The purpose of this project was to gain hands-on experience using Docker to create loosely coupled, multi-container applications. With Docker being an industry-standard tool, I felt this would be a crucial skill to have as an IT professional. Though Node.js was not the focus of this project, it helped me better understand the server side, and how efficiently a Node server runs within a container versus directly on a local machine.

Project Introduction

The application created for this project was a personal goals tracker. The user is presented with an intuitive UI that allows them to type in a goal and add it to their list using the “Add Goal” button. If the user has met the goal, or simply wishes to remove it, they can click on it and it will be deleted from the list. The project was segmented into three areas: the frontend, the backend and the database. Each of these components was deployed into its own Docker container. My code editor of choice was Visual Studio Code, into which I imported my local application.

Database

I began by setting up the database, which was responsible for storing and persisting the data entered by the user. The database of choice was MongoDB, as it was efficient for this project and rigidly structured data was not paramount. MongoDB already has an official image, so I was able to run the following command to deploy the container:

docker run --name mongodb --rm -d -p 27017:27017 mongo

The command above runs the mongo image, pulled from Docker Hub, as a container. I assigned it the name “mongodb” and used “--rm” to remove the container once it is shut down, to aid with clean-up. The “-d” flag runs the container in detached mode (in the background), freeing up the terminal to run other commands. The “-p” flag publishes a port, in this case “27017”, which is the default MongoDB port. The same port is also set up as a listening port on the local machine, hence the duplication in the command. Many of these flags are used throughout the project. I used the following command to check that the container was running:

docker ps

Backend

The next focus of the project was to deploy the backend element of the application into a container. For this, a Dockerfile was created to define a custom image. This Dockerfile was responsible for importing Node, setting up the internal working directory, installing any dependencies, copying the local code into the container, exposing the port referenced in the server (port 80) and finally executing the Node runtime.
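As a minimal sketch, the backend Dockerfile may have looked something like the one below. The base image tag and the entry file name “app.js” are assumptions based on the description above, not taken from the project files.

# Assumed sketch of the backend Dockerfile
FROM node:14

# Set the internal working directory
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package.json .
RUN npm install

# Copy the local code into the container
COPY . .

# Expose the port referenced in the server
EXPOSE 80

# Execute the node runtime
CMD ["node", "app.js"]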

Following on from that, I was able to build the image using the command below. This command builds an image based on the Dockerfile and assigns it a tag, in this case “goals-node”.

docker build -t goals-node .

Once this image was created, it could be deployed into a container using a command similar to the one used for MongoDB. However, prior to doing this, I needed to amend the “app.js” file within the backend so it could access the MongoDB container. Docker provides its own hostname for reaching the host machine from inside a container, “host.docker.internal”, which is used in the Mongoose connection before the port “27017”. In addition to this, I also needed to publish the server port “80” and map it to listening port “80” on the local host.
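As a rough sketch, the amended connection in “app.js” may have looked like the following (the database name “course-goals” is taken from the connection string used later in this project):

// Assumed sketch: host.docker.internal resolves to the host machine from inside the container
mongoose.connect('mongodb://host.docker.internal:27017/course-goals');

The container could then be started with the following command: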

docker run --name goals-backend --rm -d -p 80:80 goals-node

Frontend

The frontend followed a similar process. A second Dockerfile was created for the React application, again importing Node, setting up a working directory, installing the dependencies and copying the local code into the container, before starting the development server.
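As a minimal sketch, assuming the React development server is started with “npm start” on its default port 3000, this frontend Dockerfile may have looked like:

# Assumed sketch of the frontend Dockerfile
FROM node:14

WORKDIR /app

COPY package.json .
RUN npm install

COPY . .

# Default port for the React development server
EXPOSE 3000

# Start the development server
CMD ["npm", "start"]

The image was then built from this Dockerfile and tagged “goals-react”: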

docker build -t goals-react .

Once built, the container could then be started using the following command. As this is a React application, the container needed to be run in interactive mode using the “-it” flag.

docker run --name goals-frontend --rm -p 3001:3000 -it goals-react

Refining the Project

At this stage, the project was fully functional, with all three containers communicating with each other. However, it could be refined further by creating a network to house the containers, and by adding volumes and bind mounts to ensure data persists even when the containers are stopped.

Creating a Network

To create a network, the following command was used (“goals-net” is the name of the network). At this point, I also stopped my running containers, as these would need to be restarted with the new network attached.

docker network create goals-net
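If needed, the new network can be confirmed with Docker’s network listing command.

docker network ls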

At this stage, the MongoDB database could be assigned to the network using the following command:

docker run --name mongodb --rm -d --network goals-net mongo

The port no longer needs to be published for containers within the same network. However, in order for this to work, the Mongo connection needed to be updated. The “host.docker.internal” hostname previously set in the backend now needed to be changed to the actual name of the container, “mongodb”. For the connection to work, the backend image required a rebuild.
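Sketching that change against the earlier connection string, the updated line in “app.js” may have looked like this:

// Assumed sketch: the container name now acts as the hostname on the shared network
mongoose.connect('mongodb://mongodb:27017/course-goals');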

Volumes, Data Persistence and Security

The next stage of the project revolved around data persistence and security. The application needed to be set up in a way that allows data to persist even when the containers are stopped. The first container set up for data persistence was the one running MongoDB. This involved creating a named volume at runtime. In addition to this, security for the database was also added using a root username and password. The command used was as follows:

docker run --name mongodb -v data:/data/db --rm -d --network goals-net -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret mongo

For the above command to work, the connection string in the “app.js” script also needed to be updated to authenticate the connection:

mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@mongodb:27017/course-goals?authSource=admin
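One step is implied here: because “app.js” reads the credentials from environment variables, those variables also need to be passed into the backend container at runtime. As a sketch, with the variable names taken from the connection string above and the values assumed to match the root user created earlier, the backend run command would gain two “-e” flags:

docker run --name goals-backend --rm -d --network goals-net -e MONGODB_USERNAME=admin -e MONGODB_PASSWORD=secret -p 80:80 goals-node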

The next container that required attention was the Node backend. This needed a bind mount set up against my local machine, so that any local code changes would be reflected within the container and vice versa. The bind mount needed to reference the full path of the local directory. In addition to this, an anonymous volume for the node modules also needed to be added, to ensure the modules within the container are not overwritten by anything stored locally; the dependencies within the local directory may differ from those installed in the container. The following command was used to reflect this.

docker run --name goals-backend -v "/Users/adam/Docker + Kubernetes/multi-01-starting-setup/backend:/app" -v logs:/app/logs -v /app/node_modules --rm -d --network goals-net -p 80:80 goals-node

However, even though the bind mount had been set, any changes would not automatically be reflected in the running server. For this to work, the nodemon dependency needed to be added to the “package.json”, which watches for changes within the JavaScript files and automatically restarts the server. In the previous command, “-v logs:/app/logs” was used as a way to persist the logs; running “docker logs goals-backend” verified whether the server restarted after a change.
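As a sketch of what this might involve, the relevant “package.json” entries could look like the following; the version number is an assumption, and it presumes nodemon is wired up through the start script rather than invoked directly.

"scripts": {
  "start": "nodemon app.js"
},
"devDependencies": {
  "nodemon": "^2.0.4"
}

With this in place, the Dockerfile’s final instruction would also need to change from running node directly to “CMD ["npm", "start"]”, so that the container starts through nodemon.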

The final container that required a bind mount was the frontend application. This allowed changes to be made to the frontend code without the need to rebuild the image and re-run the container. The command used for this was as follows.

docker run -v "/Users/adam/Docker + Kubernetes/multi-01-starting-setup/frontend/src/":/app/src --name goals-frontend --rm -d -p 3001:3000 -it goals-react

Summary

At this stage, the application was fully modular, with each key element running within its own container. The application was also made efficient for further development through the use of volumes and bind mounts for data persistence and live code updates. I found this project to be incredibly useful for my learning, as it gave me hands-on experience with Docker across setup, troubleshooting, data management and deployment.