December 14, 2022

1573 words 8 mins read

Docker - Zero to Hero Easily!

With Docker, you can do a lot of things. It’s the de facto containerization tool for cloud software, but it can do much more. You can host almost anything in it: a local WordPress blog on your computer, multiple database environments for your projects, even Minecraft or Quake 3 servers! You can run Docker images on your Raspberry Pi or your Windows PC with the same results. Truly a beautiful Swiss Army knife every tech-savvy person should learn! Let’s learn everything about Docker, right now, right here in one place!

Casually explained, Docker is a tool you can imagine as a container ship inside your machine. (This is the reason everybody picks those cheesy thumbnails.) You can run all sorts of services and various-sized software components on this ship: these are the containers. They are extremely easy to reproduce/instantiate from a blueprint, so you can destroy and recreate these containers anytime, no sweat. So far we have:

  • Image - the blueprint of a container
  • Container - a running, instantiated image; an actual running service
  • Host machine - our computer (localhost); this is where the browser runs

Keep in mind, when starting a container from an image, you can add a few extra parameters to it, like environment variables, port settings, etc. There are multiple ways of starting a container. Probably the shortest is the docker run command: with it, you can start something in just one line. Let’s take a look at this magic spell:

docker run --name my-postgres -e POSTGRES_PASSWORD=password -p 5000:5432 -d postgres:14-alpine3.17

The format is:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

  • docker run - the command itself
  • --name my-postgres -e POSTGRES_PASSWORD=password -p 5000:5432 -d - these are the options: we name the container, set an environment variable with -e, tell Docker with -p 5000:5432 to make the service’s port 5432 available on the host’s port 5000, and run it detached with -d
  • postgres:14-alpine3.17 - this is the image we want to start, followed by a tag. The tag pretty much marks the version, but you can think of it as part of the name. If no tag is given, Docker falls back to the :latest tag.
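To check that the port mapping actually works, you can connect from the host through the published port. A minimal sketch, assuming the psql client is installed on your machine (not required for Docker itself):

# the password matches the -e POSTGRES_PASSWORD value above
PGPASSWORD=password psql -h localhost -p 5000 -U postgres -c 'SELECT version();'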

You can find images, with their details, on Docker’s official image repository, hub.docker.com. Here is the image and its details for the Postgres example. You can find all the container-specific environment variables there as well.

graph LR;
  A[Localhost]-- can only call\nopened port -->B[Container 1];
  subgraph Docker System
    B
  end

You can see what’s running right now with docker ps, but it’s probably better to get used to docker ps -a, which shows all containers, including stopped ones. After running our Postgres image, we should see it listed by both commands.
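The output should look roughly like this (the container ID and timings are illustrative, machine-specific values):

CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS         PORTS                    NAMES
f1d2d3e4a5b6   postgres:14-alpine3.17   "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds   0.0.0.0:5000->5432/tcp   my-postgres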

So this is the docker run command. Cool, right? Wrong. We won’t use it: we don’t want to pass all the extra parameters in the actual command every single time, manually, with our bare hands. We could write them down on a sticky note or create some sort of script, but luckily Docker has an official tool for this case, which is already installed with Docker Desktop.

Behold docker-compose!

I strongly suggest skipping the docker run thingy entirely and going straight for Compose. docker-compose is a set of docker run-s, with their parameters, written in one file. You can start (compose up) and stop (compose down) the whole stack. You can send this one file to someone or put it under version control, and everybody will have the exact same stack running, with the same parameters. Here is the docker run example from above as a docker-compose.yml file:

version: '3.1'
services:
  my-postgres:
    image: postgres:14-alpine3.17
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - 5000:5432

More about docker-compose in the official documentation. You can start and stop this stack with docker-compose up and docker-compose down. If you want, you can run it detached too, with docker-compose up -d.
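Day to day, this is roughly the whole loop, run from the folder containing the docker-compose.yml:

docker-compose up -d      # start the whole stack in the background
docker-compose logs -f    # follow the logs of all containers (Ctrl+C to stop)
docker-compose down       # stop and remove the containers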

Small summary, right now we are already able to:

  • start a service, preferably from docker-compose
  • write a basic docker-compose.yml with one container
  • setup environment variables for a container
  • open ports from the container to the host machine
  • use the docker ps and docker ps -a commands.

Run Multiple Services

There will quickly be a need for starting multiple containers. How about starting a pgAdmin web interface for our Postgres DB? Before I paste a docker-compose.yml here, let me raise an important question in advance:

- What is the visibility of these containers? How can they access each other’s data?

We already saw one example of access: reaching a container from your host machine (localhost). By default, containers are closed, independent pieces. They can reach the internet from the inside, but nothing else. They have their own little DNS server and, better yet, they are small independent Linux OSs (most of the time). Whenever we want to access them, we must explicitly say which ports to open with the port parameter.

The other frequently used way of access is from one container to another. Within Docker, we can access containers directly by their aliases. A container has multiple aliases, but the one I use most often is the name given in the docker-compose.yml, in our case my-postgres. So if we have a container with the alias api-container and we want to call its API from a different container, we use something like http://api-container:80/api. This needs no port opening on api-container; the communication stays within the Docker system, and it only works if the containers are in the same Docker network (more about that later).
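With our Postgres example, the two kinds of access look like this. A sketch, assuming the psql client; the second command only works when run from another container in the same Docker network:

# from the host: through the published port
psql "postgresql://postgres:password@localhost:5000/postgres"
# from another container: by alias, using the container's internal port
psql "postgresql://postgres:password@my-postgres:5432/postgres"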

graph LR;
  A-.Can not call\nif port not open.->C
  A[Localhost]-- can only call\nopened port -->B[Container 1];
  subgraph Docker System
    B-- can call\nby alias -->C[Container 2]
  end

To check a container’s exact aliases, we do the following: with docker ps, we identify the container ID. With this ID we can do a bunch of things, but for now we just print all of its details with:

docker inspect [ContainerId]

This prints a lot of information about the container, and we can find the list of aliases there as well! We can also inspect the container’s network, but if you start containers from the same docker-compose file or the same folder, they will end up on the same network anyway!
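The full inspect output is long, so filtering helps. A small sketch using the --format flag to print only the network section (where the aliases live), assuming the my-postgres container from before:

# prints each network the container is attached to, including its Aliases list
docker inspect --format '{{json .NetworkSettings.Networks}}' my-postgres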

Now back to the example. Let’s add a pgAdmin to our compose:

version: '3.1'
services:

  postgres: # changed
    image: postgres
    restart: always # <- new stuff: restart the container when it fails, try to keep it up.
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - 5432:5432 # changed

  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
        PGADMIN_DEFAULT_EMAIL: admin@test.com
        PGADMIN_DEFAULT_PASSWORD: admin
        PGADMIN_LISTEN_PORT: 80
    ports:
      - 8080:80

graph LR;
  A-- port\n5432-->C
  A[Localhost]-- port\n8080 -->B[PgAdmin];
  subgraph Docker System
    B-- alias:\npostgres -->C[Postgres]
  end
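Once this stack is up, pgAdmin is reachable from the host on the mapped port, while pgAdmin itself reaches the database by alias. A quick sanity check, assuming curl on the host:

# pgAdmin answers on the host's mapped port
curl -I http://localhost:8080
# inside the pgAdmin UI, register the server with host "postgres" and port 5432,
# i.e. the container alias and internal port - not localhost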

Volumes

We can now start a complete DB stack and stop it anytime. Another challenge we face from time to time is the need to add/save a file or folder to/from the container. This is where volumes come in handy. For example, our Postgres image has a functionality where it runs any .sql files you put into its /docker-entrypoint-initdb.d folder. Since that’s not a simple environment variable, we have to mount a file into this folder somehow. There is an easy way to do it with the volumes parameter. Let’s add the following part to the compose, right under the ports part of Postgres:

  postgres:
    ...
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

Create the init.sql next to the docker-compose file with some SQL in it, like a create table mytesttable statement, just for the sake of the example. If we compose down and compose up, we can see that the SQL did indeed run: the table should be there! This is useful for really fine-tuning infrastructural settings, modifying config files, etc.
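A minimal way to create such a file, sticking to the mytesttable example from the text (note: the official image only runs these scripts when the data directory is empty, i.e. on a fresh first start):

# create init.sql next to docker-compose.yml
cat > init.sql <<'EOF'
CREATE TABLE mytesttable (
    id   serial PRIMARY KEY,
    name text
);
EOF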

There is another use case! What if we want to mount a folder to save the contents of a certain part of a container to our localhost? This way the database can survive a down-and-up composition! We can do this as well. Postgres stores its valuable data in the folder /var/lib/postgresql/data, so let’s mount a folder to that too:

    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./postgres-data:/var/lib/postgresql/data
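
To convince yourself persistence works, a quick check with the compose file above:

docker-compose down    # destroy the containers completely
docker-compose up -d   # recreate them from scratch
ls ./postgres-data     # the data files Postgres wrote are still on the host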

How to Go Inside a Container

There will quickly be a need to take a look inside a running container. We can do that by executing a command inside the container’s own little system and handing the terminal over to the user. It is simpler than you think:

docker exec -it [ContainerId] bash

or

docker exec -it [ContainerId] sh

based on what the container has. Sometimes they are so lightweight they have no bash, just sh. With this command, we enter straight into the container. This is useful for multiple reasons.
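For example, we can jump into the my-postgres container from earlier and poke at the database directly:

# open a shell inside the container (alpine-based images usually only have sh)
docker exec -it my-postgres sh
# or skip the shell entirely and list the tables with psql
docker exec -it my-postgres psql -U postgres -c '\dt'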

Create Custom Docker Images

Coming soon! We will discuss this in a separate article :)