Deploying My First Service on Docker
Jul 2, 2025
Docker was never on my wishlist of tools to learn in 2025, but situations keep asking me to understand it.
No hard feelings, though; I am actually quite curious this time. Docker has been familiar to my ears for a while, I was just never interested enough to learn more about it.
After completing the backend service I had been developing (Node.js with Express), I started to wonder: if I wanted to deploy it to a server, what steps should I take? How is deploying manually to a VPS different, and in what situations should I deploy with Docker instead?
Those questions about the Docker concept got answered. In short, as far as I understand it: Docker runs multiple services on the same machine, each isolated in its own container, and all containers share the same host OS kernel instead of each bringing a full operating system. Understanding this concept at least builds a foundation for how Docker will help me in this project, or at the very least, how it saves me resources.
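A quick way to see that kernel sharing for yourself, assuming Docker is installed on a Linux host (on macOS or Windows the containers run inside a hidden Linux VM, so the comparison is less direct):

```sh
# Kernel version as reported by the host
uname -r

# Kernel version as reported inside a container: same value,
# because the container shares the host kernel instead of
# booting an OS of its own
docker run --rm alpine uname -r
```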
I have also learned to create a VM (virtual machine) before, so I understand that concept, and I asked myself whether a VM and Docker are the same thing. It turns out not entirely: a VM carves up a whole machine, each VM bringing its own operating system, while Docker is more about dividing resources on the same machine. But you can imagine that both revolve around virtualization and isolation.
- Docker image: a template that we build from a Dockerfile
- Docker container: a running instance of a Docker image
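To make the distinction concrete, this is the pair of commands that turns one into the other (the image name my-backend is just an example):

```sh
# Build an image from the Dockerfile in the current directory
docker build -t my-backend .

# Run a container from that image, mapping port 3000
docker run -p 3000:3000 my-backend
```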
My project contains more or less:
- A backend service built with Node.js (Express)
- Redis to manage session caching
- MySQL as the database
- S3 object storage as the medium for storing images
- An integration with the Gemini LLM
The first thing I did was run docker init, which tailors its output to the project and can be adjusted again later (the command itself is shown after this list). It generates these files:
- .dockerignore -> lists folders/files to exclude from the build
- Dockerfile -> the build automation script
- compose.yaml -> defines the environment
- README.Docker.md -> a readme file
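For reference, the scaffolding itself is a single command run from the project root; it asks a few questions about the platform (Node in my case) and then writes the four files above:

```sh
cd my-backend        # project root (name is just an example)
docker init          # answer the interactive prompts
```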
Dockerfile
I have created bash shell scripts before to run multiple commands as one, and the concept of a Dockerfile is more or less the same. I already had an idea of what the file should contain: it is like setting up the repository from scratch, installing dependencies, and running the build. I did not really have any issues here, except for one thing that I will tell you about in the Problems I Faced section.
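A minimal sketch of that shape, assuming a Node app with a build step; the Node version, paths, and start command are placeholders for whatever your project uses:

```dockerfile
FROM node:20-alpine

WORKDIR /app

# Install dependencies first; this layer stays cached as long as
# the package files do not change
COPY package*.json ./
RUN npm ci

# Copy the rest of the source and run the build
COPY . .
RUN npm run build

EXPOSE 3000
CMD ["node", "dist/index.js"]
```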
Compose YAML
This is the most crucial part, because it holds the environment and, potentially, credentials. I was confused here at first: what should I fill this file with? After some research, I found out that compose.yaml contains variables that are more or less the same as those in the .env file. More precisely, it holds the configuration that defines services, networks (port mapping), and volumes (places to keep data outside the container).
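A stripped-down sketch of those three parts; the service names, ports, and volume are illustrative:

```yaml
services:
  server:
    build: .                 # service built from the Dockerfile
    ports:
      - "3000:3000"          # network: map host port to container port
  mysql:
    image: mysql:8
    volumes:
      - db-data:/var/lib/mysql   # volume: keep the data outside the container

volumes:
  db-data:
```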
Problems I Faced
Because I did not yet understand that the main goal of compose.yaml is to build the environment, I did the build like a regular setup (without a database) and got an ECONNREFUSED error. I assumed that since I was running everything locally, Docker would connect to my laptop's default localhost with the values from my .env, behaving just like a process on my laptop. It turned out not to be the case: Docker's ideology here is isolation, so localhost inside a container points at the container itself, and I still had to declare the database (and the other services) I use.
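The fix, in sketch form: declare the database as its own service and point the app at that service name instead of localhost (DB_HOST and DB_PORT stand in for whatever variable names your app happens to read):

```yaml
services:
  server:
    build: .
    environment:
      DB_HOST: mysql       # the compose service name, NOT localhost
      DB_PORT: 3306
    depends_on:
      - mysql              # start the database container first
  mysql:
    image: mysql:8
```

Compose puts both containers on a shared network where each service is reachable by its name.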
Then, once I had made the connection to the database, there was a race condition: my application started before the database was ready. So I used wait-on to make the application wait for the database before starting. This is the point where the contents became much more complex than I had imagined. Since wait-on had never crossed my mind before, I created a bash entrypoint script to handle the waiting, and I deliberately separated it into its own .sh file so it would not get mixed into the Dockerfile used for building the image.
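A sketch of that entrypoint, assuming wait-on is installed as a project dependency, the database service is named mysql, and the app starts from dist/index.js; swap in your own host, port, and start command:

```sh
#!/bin/sh
# entrypoint.sh -- block until the database accepts connections,
# then start the application

# wait-on supports tcp:host:port resources
npx wait-on tcp:mysql:3306

# Replace the shell with the app process so it receives signals
exec node dist/index.js
```

In the Dockerfile, this replaces the plain CMD: copy the script in, mark it executable, and reference it with ENTRYPOINT.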
Faced with bcrypt being a native addon, I finally moved to bcryptjs so that I would not have to compile anything. Although, while writing this, someone suggested installing node-gyp, which helps compile native addons written in C or C++.
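The swap itself is short, since bcryptjs is designed to mirror bcrypt's API (still worth double-checking the specific calls you use):

```sh
npm uninstall bcrypt
npm install bcryptjs
```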
I thought I only needed to declare the database, but it turned out that the access and refresh keys for JWT, plus S3, Gemini, and Redis, also needed to be defined in compose.yaml. When I collaborate with a team, I often pay attention to the files in the repository; even without being too curious about what is inside, I know that the Dockerfile and the compose file are usually committed to the repository. So if I put passwords and credentials in compose.yaml, there is a high risk they get exposed when pushed to a public repository. Being aware of this made me hide the passwords and make compose.yaml take them from .env, which is clearly never committed to the repository.
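Compose supports this directly: a ${NAME} reference in compose.yaml is substituted from the shell environment or from a .env file sitting next to it, so the values never live in the committed file. A sketch with example variable names:

```yaml
services:
  server:
    build: .
    environment:
      JWT_ACCESS_SECRET: ${JWT_ACCESS_SECRET}   # taken from .env, not hardcoded
      JWT_REFRESH_SECRET: ${JWT_REFRESH_SECRET}
      GEMINI_API_KEY: ${GEMINI_API_KEY}
      DB_PASSWORD: ${DB_PASSWORD}
```

There is also the env_file option, which loads a whole file of variables into a service's container in one go.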
Things I Learned
Because deploying this service is the first time I have gone through the process myself, some things became #TIL (Today I Learned). What I took away from the experience:
- Docker is really isolated. It is like building everything from scratch, even though it runs on my local machine: I still have to declare independent services like the database (which, thankfully, I do not need to build myself, because Docker already provides ready-made images).
- Variables from .env can be inserted into compose.yaml without hardcoding their values.
