Setting Up a Simple Express Node.js Server on Docker


Why Use Docker?

Docker is an amazing tool for deploying small isolated blocks of code. In other words, "portable, self-sufficient containers." Here, I'll be packaging a simple Express server into a Docker container to show you how it's done.

Docker is helpful since it allows you to have a piece of code that is reproducible on everyone else's machine. Since Docker runs your code inside an isolated container with its own filesystem and dependencies, it should behave the same on all devices. In other words, if you develop on two computers (say, a Windows machine and a Linux machine), packaging your code in Docker means it will run seamlessly on both, regardless of the OS version.

Even better, if you are collaborating with other developers whose systems are configured differently (e.g. path variables or npm versions), then Docker establishes a baseline so the code works (or doesn't work) the same way for everyone. This is especially useful when deploying, since the server's infrastructure may differ from your host machine.

Getting Started

Sample code can be found here: GitHub Repo

Let's first install Docker. Next, let's initialize our npm project and install the express package:

npm init
npm install express
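
(By the way, if you'd rather skip npm init's interactive prompts, the -y flag accepts all the defaults.)

npm init -y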

Next, let's copy some Express code into an index.js file in your project directory. It creates a quick server on port 3000.

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('<h2>Hi There!!!</h2>');
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`listening on port ${port}`);
});

Run node index.js to make sure the file is copied correctly and everything works. To view the <h2>Hi There!!!</h2> response, simply go to localhost:3000 in your browser.
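
If you prefer checking from the terminal instead of the browser, a quick curl request works too and should print back the same <h2>Hi There!!!</h2> markup.

curl http://localhost:3000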

Next up, let's create a file called "Dockerfile". This will house the instructions Docker uses to build our Docker container. To be precise, the Dockerfile gives instructions for creating a Docker image, which is like a recipe book Docker follows to make dishes (Docker containers, in our case the actual Express server).

In the Dockerfile, we first need to establish our build environment. I will be using Node version 16 to run our simple Express server, so the first line of the Dockerfile will be

FROM node:16

For semantic purposes, we should add another line specifying where our files will live inside the Docker container:

WORKDIR /app

In other words, we are establishing the working directory of our Docker container.

Next up, we should send our package.json file over to Docker and run npm install on our Docker container to download the required dependencies (Express). These are done with

COPY package.json .
RUN npm install

The first line copies package.json into the relative path ., i.e. the current working directory, which we set to /app earlier.

Now, we need to copy the rest of our files in the app into our Docker container. This is done with

COPY . ./

Next, we should tell Docker which port the container will listen on. Note that EXPOSE is essentially documentation: it doesn't publish the port by itself, which is why we'll map it with the -p flag later when we run the container. Here, we are exposing port 3000 for our Express server. This is done with

EXPOSE 3000

Finally, with our image instructions in place, we need to tell Docker what command to run when the container starts. In this case, that's our Express server, so we want node index.js. To tell Docker to run this, we add a final line of

CMD ["node", "index.js"]

And that's it, that's all we need in our Dockerfile. The Dockerfile should look like

# Build time
FROM node:16
WORKDIR /app

# Grab the package.json and npm install
COPY package.json .
RUN npm install

# Optimization: avoid re-running npm install when no new packages have been added
# That's why COPY package.json is split out above; it rarely changes (see the appendix)
COPY . ./
EXPOSE 3000

# Runtime
CMD ["node", "index.js"]

Dockerignore

However, notice that our COPY . ./ line will also copy our local node_modules into the Docker container. You may ask why we even need the RUN npm install command then; there are several reasons, which I list in the appendix at the bottom of the article. For now, trust that the structure outlined above is optimal, and instead let's prevent COPY . ./ from copying node_modules over again.

To prevent this, we can add a .dockerignore file, similar to .gitignore. Put this into your .dockerignore to ignore the following files

node_modules
Dockerfile
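
Depending on your project, you may also want to keep other local clutter out of the image. As an optional example (adjust to whatever your project actually contains), a slightly longer .dockerignore could look like

node_modules
Dockerfile
.dockerignore
.git
npm-debug.log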

Docker Terminal Commands

Now, everything is set up to build our server with Docker. We first need to create our recipe (a Docker image) for our dish (our container, the Express server). To build the Docker image, run this in your command line (the sudo prefix is only needed on Linux, and only if your user isn't in the docker group)

sudo docker build -t node-express-image .

The -t flag specifies a name (tag) for this Docker image, which I've named node-express-image. In addition, the . at the end of the line is the build context: the directory whose files Docker is allowed to copy into the image. By default, Docker also looks for the Dockerfile in this directory.
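
As an aside, if your Dockerfile ever lives somewhere other than the root of the build context, you can point Docker at it explicitly with the -f flag (the docker/Dockerfile path here is just a hypothetical example):

sudo docker build -t node-express-image -f docker/Dockerfile .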

To check if your image has been built, run

sudo docker images

Underneath, you should see our node-express-image Docker image, as well as the node base image, which was pulled as a dependency of node-express-image.
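
If the list gets long, you can also ask docker images for a specific repository name to narrow it down:

sudo docker images node-express-image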

Next up we need to take our recipe (Docker image) and use it to cook our dish (Docker container). The template for this command is

sudo docker run -p <incoming port>:<exposed port> -d --name node-express-container node-express-image

Here the -p flag specifies the ports involved.

  • <exposed port> is the port our Node Express server listens on inside the container. In our case, it's port 3000
  • <incoming port> is the port you will use on your local machine (or server) to reach the container. This could be port 3000, but it could also be port 4000. If we make it port 4000, then we will need to go to localhost:4000 to reach the Express server exposed on port 3000

Next, we passed in a -d flag that runs the container in detached mode. This simply means the container runs in the background, so you can keep running other commands in the same terminal (instead of opening another one).

Next, the --name flag names the container. I've named this node-express-container.

Lastly, we pass in the image (or recipe) for Docker to follow. We are using the node-express-image we created earlier.

So, we should be running

sudo docker run -p 4000:3000 -d --name node-express-container node-express-image

That's all you need! Go to localhost:4000 to view the running Express server.
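
You can also verify from the terminal; note that we hit the mapped host port 4000, not the container's port 3000:

curl http://localhost:4000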

To see all current running instances of Docker containers, run

sudo docker ps

You should see our container listed with the name node-express-container.
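
Keep in mind that docker ps only lists running containers; add the -a flag to include stopped ones as well:

sudo docker ps -a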

Let's go clean it all up now and delete our containers and images.

Run this to remove the Docker container

sudo docker rm node-express-container -f

We pass in the -f flag to forcefully remove the container since it's currently running. To do it without the force flag we'd need to first stop the container from running and then remove it.
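
For reference, that non-forceful version is the two-step stop-then-remove:

sudo docker stop node-express-container
sudo docker rm node-express-container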

Next, we need to remove the Docker image. We do this with

sudo docker rmi node-express-image

Now, if we run sudo docker ps and sudo docker images we shouldn't see any remnants of our project. You could delete the node base image as well if you want to.
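
If you do want to clear out the base image too, remove it by the tag we pulled (node:16 in this walkthrough):

sudo docker rmi node:16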

Congratulations! You now know how to build and run Docker containers. Tune in for the next part, where I cover Docker Compose, which automates many of the manual Docker commands you just ran.

Docker Cheat Sheet

sudo docker build -t <image-name> . to build Docker Images

sudo docker images to get a list of Docker Images

sudo docker run -p <incoming port>:<exposed port> -d --name <container-name> <image-name> to run a Docker Container

sudo docker ps to get a list of active Docker Containers

sudo docker rm <container-name> -f to forcefully remove a Docker Container

sudo docker rmi <image-name> to remove a Docker Image

Next Installment of this Series: Advanced Docker

Appendix

Why do we RUN npm install and then COPY . ./

Docker caches the result of each Dockerfile instruction as a layer, line by line, so rebuilding the Docker image later is much faster.

Let's say I modify the res.send in index.js to be <h2>Hi There!</h2> with only one "!" instead of three. Now I need to rebuild the Docker image. However, I didn't touch any dependencies, so it would be a waste of time to rerun npm install (or copy over all the node_modules again).

To avoid recreating node_modules, I split the copy step up. Docker can now determine that nothing feeding into RUN npm install has changed (that is, package.json is untouched), so it reuses the cached layer and skips straight to the COPY . ./ line, whose input has indeed changed (since I modified index.js).

This saves a ton of time since node_modules do not frequently change.
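
You can see the caching in action by rebuilding after only touching index.js; Docker should reuse the cached layers up through RUN npm install and only re-execute the steps from COPY . ./ onward:

sudo docker build -t node-express-image .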
