This is not particularly clever, but it is functional. It is designed to walk through using automation tools to build a Docker image for the ubiquitous “Hello World” demonstration. Along the way we will talk about what you need and review what each of the steps actually provides.
HOUSE CLEANING
For the sake of this example we should do a little housecleaning first. We are going to prune any leftover stopped containers so that we can then prune the images. If a container was merely stopped, its image is still present and linked to it, preventing us from removing it. So we will first…
docker container prune -f
This will prune back any stopped containers and release their images from use.
Next we are going to trim back the images themselves. This will get rid of every unused image on your node, so be sure that this is what you intend to do.
docker image prune -a -f
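If you want to confirm the node is now clean, you can list what remains. Assuming both prune commands succeeded and nothing else is running on the node, both lists should come back empty:
docker container ls -a
docker image ls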
BEGIN THE BEGIN
Let’s create a directory “sample” and place the file “app.py” in it.
cd ~
mkdir sample
cd sample
Now we populate the file “app.py”
from flask import Flask
app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello World!'

app.run(host='0.0.0.0', port=80)
This is the basic Python Flask “Hello World” example, served on port 80. Not that special. You may (ahem, most probably) already have this environment and these modules installed. However, keep in mind that we are going to “build” this into an image reserved to support our example.
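If you want to sanity-check the script outside of Docker first, and assuming Flask is already installed locally and you are able to bind port 80 (which usually requires root, hence the sudo), you could run it directly and hit it from a second terminal:
sudo python3 app.py             # binds 0.0.0.0:80, so root is usually required
curl http://localhost/          # from another terminal; should print Hello World!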
Let’s also create a file named “requirements.txt”. This lists the sole requirement for our Python code to run: Flask.
Flask
Let’s create a file named “Dockerfile”. This is a simple example. It could be more optimal, but I wanted to create an example that is about as simple and straightforward as possible (a slimmer variant is sketched after the line-by-line notes below). This demonstrates what you would need to do to install components on a fresh system.
FROM ubuntu
WORKDIR /code
RUN apt update
RUN apt install -y python3
RUN apt install -y python-is-python3
RUN apt install -y python3-pip
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 80
COPY . .
CMD [ "python", "app.py" ]
Line 1: In this example we are going to pull from the “ubuntu” base image.
Line 2: We are setting our WORKDIR to /code inside the image
Line 3: We are going to run “apt update” first to refresh the apt package cache, as this is a fresh environment with no package lists yet.
Line 4: We are going to install “Python3”
Line 5: We are going to install the “python-is-python3” module which allows “python” to invoke “python3”
Line 6: We are going to install the “python3-pip” which is going to provide us with the “pip” command
Line 7: We are going to force pip to upgrade itself (this may not strictly be necessary).
Line 8: We are going to copy the “requirements.txt” file we created into the image.
Line 9: We are going to “RUN” the command “pip install -r requirements.txt” so that pip installs Flask, the dependency listed in requirements.txt.
Line 10: We are going to expose port 80 of our environment. This declares that the container listens on port 80 so it can be published to the host (which we will do with the compose file below).
Line 11: We are going to copy the rest of our project directory (including “app.py”) into the image.
Line 12: When the container is run, we invoke “python app.py”.
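As an aside, and purely as a sketch of the “more optimal” route mentioned above: if starting from the official “python” image is acceptable for your use case, the apt steps disappear entirely. The “3-slim” tag here is just one reasonable choice, not something this walkthrough depends on.
FROM python:3-slim
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 80
COPY . .
CMD [ "python", "app.py" ]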
Now we can simply run “docker build .” (please note the trailing “.”, which is easy to miss). This “builds” a Docker image, so the “Dockerfile” is effectively a script for automating the “build” of an image.
docker build .
This is going to take a moment. If we did our housecleaning above, the system won’t have any images and will have to pull absolutely everything down from scratch. The first thing the system will do is pull down the initial “ubuntu” image. The build process will then continue building an image per our instructions.
Now if we list the images we should see two images.
root@node5:/home/ubuntu/sample# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 84d7118aa815 30 seconds ago 482MB
ubuntu latest 27941809078c 5 weeks ago 77.8MB
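Notice that our freshly built image shows up with “<none>” for both repository and tag. If you would rather have a named image, you can tag it at build time; the name “sample-web” below is just a placeholder of my choosing:
docker build -t sample-web .
docker image ls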
DEFINE A SERVICES FILE:
Now we create a file named “docker-compose.yml”, a YAML file that describes the system/environment we want to bring up. Before, we automated the building of the image; here we automate bringing up the environment. This is a “simple” example: multiple “services” could be described, such that launching this environment might orchestrate several services.
version: "3"
services:
  web:
    build: .
    ports:
      - "80:80"
    volumes:
      - .:/code
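If you want to check the file before launching anything, Compose can parse it and echo back what it understood, which is a handy way to catch YAML indentation mistakes:
docker-compose config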
Now, if we issue the command “docker-compose up”, the system will launch the image/container, using the YAML file to automate the work of bringing up the container. In this case it automates the exposure of port 80. We might see something like the following:
root@node5:/home/ubuntu/sample# docker-compose up
Starting sample_web_1 ... done
Attaching to sample_web_1
web_1 | * Serving Flask app 'app' (lazy loading)
web_1 | * Environment: production
web_1 | WARNING: This is a development server. Do not use it in a production deployment.
web_1 | Use a production WSGI server instead.
web_1 | * Debug mode: off
web_1 | * Running on all addresses (0.0.0.0)
web_1 | WARNING: This is a development server. Do not use it in a production deployment.
web_1 | * Running on http://127.0.0.1:80
web_1 | * Running on http://172.19.0.2:80 (Press CTRL+C to quit)
Please note that the directory I was in was “sample” and the service was “web”, hence “web_1” (the instance) is what was logged to the console.
If you are testing, remember that this is served as HTTP, not HTTPS. You might need to take pains to ensure your browser actually requests HTTP. 🙂
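From the node itself (or from another machine, substituting the node’s address for localhost), a quick curl sidesteps any browser HTTPS redirect guesswork:
curl http://localhost/          # should return: Hello World!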
RUNNING DETACHED
In the next example we run the container as a “detached” process, which returns control to the shell while the container continues to run in the background.
root@node5:/home/ubuntu/sample# docker-compose up -d
Starting sample_web_1 ... done
root@node5:/home/ubuntu/sample#
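While the container is detached, the usual Compose subcommands let you see what is running, follow the service log, and shut everything back down when you are done:
docker-compose ps
docker-compose logs -f web
docker-compose down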
NOTE: An example repository for quick testing:
git clone https://github.com/tlh45342/docker-compose-example1.git
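Assuming that repository mirrors the files walked through above, the quick test would look something like this:
cd docker-compose-example1
docker-compose up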
References:
https://docs.docker.com/compose/gettingstarted/