Docker containers are a way for you to separate your code from the computing environment in which it operates, making it possible to deploy to different hardware and operating systems without experiencing compatibility issues.
In this article, you will learn:
what a container is
why you need containers
how to package your applications into containers using Docker
how to use GCP Cloud Build to build your Docker images
how to host your Docker images privately using Google Container Registry
When I first started programming, life was much simpler. We picked a language like C or BASIC, wrote the code, and either sent the code over or compiled it and sent the binaries over. Back then, you were either using DOS or Linux, and all we needed to know was where our code needed to run. This was in the 90s.
When we started writing software for Windows, things started getting a little complicated. We used Microsoft Visual C++ as a Software Development Kit (SDK), and writing the software was pretty much the same. We would build and test on our local systems, and everything would work alright. The problem arose when we needed to send the software to end-users. At this point, we would get lots of emails about missing Dynamic Link Libraries (DLLs). These were files that contained functionality that was embedded in the SDK, but not in the operating system.
Package managers were developed to manage the dependencies.
We would provide our compiled binaries and indicate what SDK we used, and the managers would retrieve every dependency and bundle it into the software installer.
I was introduced to Node.js a few years ago. The key highlight was semantic versioning, which lets you use libraries or packages and specify exactly which version of each package your software depends on. Things were still relatively simple back then: everything went into your package.json.
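For instance, a dependencies entry in package.json might look like this (the package name and version are purely illustrative); the caret tells npm to accept any compatible 4.x release at or above 4.16.0:

{
  "dependencies": {
    "express": "^4.16.0"
  }
}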
Node.js had a single major version, and Angular and React apps were written from scratch.
I went off for a while, and when I returned, Node.js had multiple versions, and an Angular or React project was itself a Node.js app. You even needed to compile them!
What does all this mean? Simply that you cannot guarantee that what one developer builds on one computer will run without a hitch on another. While you can send them your package.json and tell them what OS you are running, they might have something installed that just prevents things from working.
Now, if everything works on your system, you might be tempted to purchase a computer, set it up to work properly, and ship it to the user.
If you were more modern, you could take the route of creating a virtual machine and sending that to users. And, quite a few pieces of software are bundled as virtual machines.
The major concern with this approach is the size of the resulting virtual machine. You will need to install an entire operating system, and then install everything else: dependencies, databases, etc.
What if there were a way of bundling something smaller? There is, and it's called a container. Containers are part of a newer approach to configuring computing resources, called Infrastructure-as-Code (IaC), in which your environment is described in a file and managed by a container management system.
One popular container management system is called Docker.
Docker - Infrastructure as Code
Docker makes use of a configuration file called a Dockerfile. This lets you specify a base operating system, any commands you want to run on that fresh installation of the OS, any files you would like to copy into the container, and finally the command that gets your app off the ground.
1. Create a Dockerfile
The following is an example of a Dockerfile for a simple Python web app; the file names app.py and requirements.txt stand in for your own application files.
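# Use an official Python runtime as the parent image
FROM python:2.7-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any packages listed in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define an environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]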
Line 2 shows how we specify a parent image, in this case one that provides Python 2.7. You might expect to ask for an operating system, but instead we ask for Python: someone has already built an image that bundles Python 2.7 on top of a base operating system, normally Debian, though it could be something else.
To find out more about this image, you can visit hub.docker.com and search for it. You might also notice the -slim suffix, which indicates that we want a streamlined version of the image. Container images can contain packages we don't need: a slim image might be around 60 MB, while a full-fledged one could be around 600 MB.
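Once you have Docker installed (covered in the build step below), you can compare the two variants yourself by pulling both and listing them:

docker pull python:2.7-slim
docker pull python:2.7
docker image ls python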
Dockerfiles are normally placed in the root folder of your application or project. We also need to specify where our application will live inside the container; on line 5 we set that location to the /app folder.
On line 8, we copy the contents of our application folder into /app inside the container. Next, we need to install any requirements, which is done on line 11.
Our app will be accessed via port 80 from a browser, so we need to expose that port. One of the nice things about containers is that their ports are closed to the outside by default; we open up port 80 on line 14.
If you are into environment variables, you might need to set one. We do that on line 17.
Finally, on line 20, we run our Python script.
All of that goes into our file, but we don’t have a container image yet.
2. Build Your Image
You need to install Docker, which you can get from docker.com. Afterward, you can build your Docker image by running the following from the command line.
docker build -t my_image .
The final period points at the location of your Dockerfile (the current directory). When the build is done, you can list your images using the following command.
docker image ls -a
3. Test Your Image
Once you have an image, you can test it using the following command.
docker run -p 4000:80 my_image
The -p parameter maps a port on your machine (4000) to the port that is open inside the container (80), so you can reach the app at http://localhost:4000.
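If your app serves HTTP, as the example Dockerfile assumes, a quick way to check that the running container is responding is:

curl http://localhost:4000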
4. Upload the Image to the Docker Registry
To share this image, you first need to visit hub.docker.com and create an account, then come back and tag your image. You can do that using the following:
docker tag my_image username/repository:tag
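If you have not yet authenticated with Docker Hub from the command line, log in first:

docker login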
Finally, push your image to the Docker registry using the following command.
docker push username/repository:tag
Working with Docker can be resource-intensive, requiring both memory and bandwidth. You might also need to keep your images private and manage who has credentialed access. That is where Google Cloud Platform comes in handy.
Cloud Build and Container Registry
You can outsource the build of your container images to the cloud. To do this, you will need to have an account on https://cloud.google.com. Log in, or create an account, and you can proceed to the first step.
1. Build Your Image on GCP
You can submit your Dockerfile and other application files to the cloud using the following command from Cloud Shell.
gcloud builds submit --tag gcr.io/[PROJECT-ID]/my-image
[PROJECT-ID] refers to your GCP project ID, which you can find on the home page of your project console. If all goes well, your image will be stored in Google Container Registry.
2. Locate Your Images on Google Container Registry
To see any images you have there, run the following command.
gcloud container images list
3. Run the Image
To download and run the image on your system, run the following command.
docker run -d -p 8080:8080 gcr.io/[PROJECT-ID]/my-image
If you get an authentication error, run the following command.
gcloud auth configure-docker
Scale-Out
Containers can be launched whenever you need them, which makes them good candidates for handling variable loads. You can spin up a container to handle web traffic, and when traffic increases you can spin up more containers to handle requests. When traffic goes down, you can shut down the containers you don't need, freeing up resources.
If that sounds difficult, don't worry: it usually isn't the job of a software developer but of infrastructure engineers. There are various options for scaling out and managing Docker containers; this is called container orchestration. Docker itself provides Swarm mode, although you need your own physical or virtual servers to set up a swarm cluster.
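As a rough sketch of what that looks like on a single machine, using the image pushed to Docker Hub earlier (the service name here is just an example):

# Turn the current machine into a single-node swarm
docker swarm init

# Run three replicas of the image, published on port 4000
docker service create --name my_service --replicas 3 -p 4000:80 username/repository:tag

# Scale up when traffic increases
docker service scale my_service=10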
If you work on the Google Cloud, you have two options available to you, namely Cloud Run and Google Kubernetes Engine.
Do keep an eye out for future articles on container orchestration.
An older version of this article was published on my Medium blog.
If you enjoyed reading this article and haven't subscribed yet, you should consider doing that so you can get my periodic newsletter.