RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```

* The build fails as soon as an instruction fails

* If `RUN <unit tests>` fails, the build doesn't produce an image

* If it succeeds, it produces a clean image (without test libraries and data)

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

class: pic
.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)]

---

name: toc-dockerfile-examples
class: title

Dockerfile examples

.nav[
[Previous section](#toc-tips-for-efficient-dockerfiles)
|
[Back to table of contents](#toc-chapter-3)
|
[Next section](#toc-exercise--writing-better-dockerfiles)
]

.debug[(automatically generated title slide)]

---

# Dockerfile examples

There are a number of tips, tricks, and techniques that we can use in Dockerfiles.

But sometimes, we have to use different (and even opposed) practices depending on:

- the complexity of our project,

- the programming language or framework that we are using,

- the stage of our project (early MVP vs. super-stable production),

- whether we're building a final image or a base for further images,

- etc.

We are going to show a few examples using very different techniques.

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## When to optimize an image

When authoring official images, it is a good idea to reduce as much as possible:

- the number of layers,

- the size of the final image.

This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it.

.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
	&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
	&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
	&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
	&& tar -xzf wordpress.tar.gz -C /usr/src/ \
	&& rm wordpress.tar.gz \
	&& chown -R www-data:www-data /usr/src/wordpress
```
]

(Source: [WordPress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## When to *not* optimize an image

Sometimes, it is better to prioritize *maintainer convenience*.

In particular, if:

- the image changes a lot,

- the image has very few users (e.g. only 1, the maintainer!),

- the image is built and run on the same machine,

- the image is built and run on machines with a very fast link ...

In these cases, just keep things simple!

(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ```dockerfile FROM debian:sid RUN apt-get update -q RUN apt-get install -yq build-essential make RUN apt-get install -yq zlib1g-dev RUN apt-get install -yq ruby ruby-dev RUN apt-get install -yq python-pygments RUN apt-get install -yq nodejs RUN apt-get install -yq cmake RUN gem install --no-rdoc --no-ri github-pages COPY . /blog WORKDIR /blog VOLUME /blog/_site EXPOSE 4000 CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"] ``` .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Multi-dimensional versioning systems Images can have a tag, indicating the version of the image. But sometimes, there are multiple important components, and we need to indicate the versions for all of them. This can be done with environment variables: ```dockerfile ENV PIP=9.0.3 \ ZC_BUILDOUT=2.11.2 \ SETUPTOOLS=38.7.0 \ PLONE_MAJOR=5.1 \ PLONE_VERSION=5.1.0 \ PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d ``` (Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Entrypoints and wrappers It is very common to define a custom entrypoint. That entrypoint will generally be a script, performing any combination of: - pre-flights checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file), - generation or validation of configuration files, - dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`), - and more. .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## A typical entrypoint script ```dockerfile #!/bin/sh set -e # first arg is '-f' or '--some-option' # or first arg is 'something.conf' if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then set -- redis-server "$@" fi # allow the container to be started with '--user' if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then chown -R redis . exec su-exec redis "$0" "$@" fi exec "$@" ``` (Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Factoring information To facilitate maintenance (and avoid human errors), avoid to repeat information like: - version numbers, - remote asset URLs (e.g. source tarballs) ... Instead, use environment variables. .small[ ```dockerfile ENV NODE_VERSION 10.2.1 ... RUN ... && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \ && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \ && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \ && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \ && tar -xf "node-v$NODE_VERSION.tar.xz" \ && cd "node-v$NODE_VERSION" \ ... 
```
]

(Source: [Node.js official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## Overrides

In theory, development and production images should be the same.

In practice, we often need to enable specific behaviors in development (e.g. debug statements).

One way to reconcile both needs is to use Compose to enable these behaviors.

Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## Production image

This Dockerfile builds an image leveraging gunicorn:

```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```

(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## Development Compose file

This Compose file uses the same image, but with a few overrides for development:

- the Flask development server is used (overriding `CMD`),

- the `DEBUG` environment variable is set,

- a volume is used to provide a faster local development workflow.

.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

## How to know which practices are best?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)]

---

class: pic
.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)]

---

name: toc-exercise--writing-better-dockerfiles
class: title

Exercise - writing better Dockerfiles

.nav[
[Previous section](#toc-dockerfile-examples)
|
[Back to table of contents](#toc-chapter-3)
|
[Next section](#toc-naming-and-inspecting-containers)
]

.debug[(automatically generated title slide)]

---

# Exercise - writing better Dockerfiles

Let's update our Dockerfiles to leverage multi-stage builds!

The code is at:

https://github.com/jpetazzo/wordsmith

Use a different tag for these images, so that we can compare their sizes.

What's the size difference between single-stage and multi-stage builds?
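If you want a starting point, the overall workflow could look like this (image names, tags, and Dockerfile paths are illustrative; adapt them to the actual wordsmith services):

```bash
# Build the current (single-stage) Dockerfile under one tag...
docker build -t wordsmith-web:single -f web/Dockerfile web

# ...then build your multi-stage version under another tag.
docker build -t wordsmith-web:multi -f web/Dockerfile.multistage web

# Compare the size of both images.
docker images wordsmith-web
```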
.debug[[intro-fullday.yml](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/intro-fullday.yml)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous section](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-labels) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers ![Markings on container door](images/title-naming-and-inspecting-containers.jpg) .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Reference easily a container. * Ensure unicity of a specific container. We will also see the `inspect` command, which gives a lot of details about a container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. ``` .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce unicity of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect [{ ... (many pages of JSON here) ... ``` There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ! 
(It's great.)

```bash
$ docker inspect <containerID> | jq .
```

* We will see a better solution which doesn't require extra tools.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)]

---

## Using `--format`

You can specify a format string, which will be parsed by Go's text/template package.

```bash
$ docker inspect --format '{{ json .Created }}' <containerID>
"2015-02-24T07:21:11.712240394Z"
```

* The generic syntax is to wrap the expression with double curly braces.

* The expression starts with a dot representing the JSON object.

* Then each field or member can be accessed in dotted notation syntax.

* The optional `json` keyword asks for valid JSON output.

  (e.g. here it adds the surrounding double-quotes.)

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)]

---

class: pic
.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)]

---

name: toc-labels
class: title

Labels

.nav[
[Previous section](#toc-naming-and-inspecting-containers)
|
[Back to table of contents](#toc-chapter-4)
|
[Next section](#toc-getting-inside-a-container)
]

.debug[(automatically generated title slide)]

---

# Labels

* Labels allow us to attach arbitrary metadata to containers.

* Labels are key/value pairs.

* They are specified at container creation.

* You can query them with `docker inspect`.

* They can also be used as filters with some commands (e.g. `docker ps`).

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)]

---

## Using labels

Let's create a few containers with a label `owner`.

```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```

We didn't specify a value for the `owner` label in the last example.

This is equivalent to setting the value to be an empty string.

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)]

---

## Querying labels

We can view the labels with `docker inspect`.

```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                "owner": ""
            },
```

We can use the `--format` flag to list the value of a label.

```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)]

---

## Using labels to select containers

We can list containers having a specific label.

```bash
$ docker ps --filter label=owner
```

Or we can list containers having a specific label with a specific value.

```bash
$ docker ps --filter label=owner=alice
```

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)]

---

## Use-cases for labels

* HTTP vhost of a web app or web service.

  (The label is used to generate the configuration for NGINX, HAProxy, etc.)

* Backup schedule for a stateful service.

  (The label is used by a cron job to determine if/when to back up container data.)

* Service ownership.

  (To determine internal cross-billing, or who to page in case of outage.)

* etc.
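For instance, here is a sketch of the first use-case, assuming a hypothetical `vhost` label (the label name and the generated output are made up for illustration):

```bash
# Run a web app with a "vhost" label.
docker run -d -l vhost=blog.example.com nginx

# Emit one (illustrative) proxy rule per labeled container.
for cid in $(docker ps -q --filter label=vhost); do
  vhost=$(docker inspect --format '{{ .Config.Labels.vhost }}' $cid)
  ip=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' $cid)
  echo "server { server_name $vhost; location / { proxy_pass http://$ip:80; } }"
done
```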
.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous section](#toc-labels) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-limiting-resources) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container ![Person standing inside a container](images/getting-inside.png) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Getting a shell Every once in a while, we want to log into a machine. In an perfect world, this shouldn't be necessary. * You need to install or update packages (and their configuration)? Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...) * You need to view logs and metrics? Collect and access them through a centralized platform. In the real world, though ... we often need shell access! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Not getting a shell Even without a perfect deployment system, we can do many operations without getting a shell. * Installing packages can (and should) be done in the container image. * Configuration can be done at the image level, or when the container starts. * Dynamic configuration can be stored in a volume (shared with another container). * Logs written to stdout are automatically collected by the Docker Engine. * Other logs can be written to a shared volume. * Process information and metrics are visible from the host. _Let's save logging, volumes ... for later, but let's have a look at process information!_ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Viewing container processes from the host If you run Docker on Linux, container processes are visible on the host. ```bash $ ps faux | less ``` * Scroll around the output of this command. * You should see the `jpetazzo/clock` container. * A containerized process is just like any other process on the host. * We can use tools like `lsof`, `strace`, `gdb` ... To analyze them. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- class: extra-details ## What's the difference between a container process and a host process? * Each process (containerized or not) belongs to *namespaces* and *cgroups*. * The namespaces and cgroups determine what a process can "see" and "do". * Analogy: each process (containerized or not) runs with a specific UID (user ID). * UID=0 is root, and has elevated privileges. Other UIDs are normal users. 
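One way to see this from the host is to compare the namespaces of a containerized process with those of PID 1. (The PID below is hypothetical; pick a real one from the `ps faux` output.)

```bash
# Namespaces of a containerized process (run as root on the host).
sudo ls -l /proc/23497/ns

# Namespaces of the host's PID 1, for comparison.
sudo ls -l /proc/1/ns

# Different symlink targets mean different namespaces.
```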
_We will give more details about namespaces and cgroups later._

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <container_id>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <container_id>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start`

* But we can create a new image from the crashed container

```bash
docker commit <container_id> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint

```bash
docker run -ti --entrypoint sh debugimage
```

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <container_id> | tar tv
```

This will give a detailed listing of the content of the container.

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)]

---

class: pic
.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)]

---

name: toc-limiting-resources
class: title

Limiting resources

.nav[
[Previous section](#toc-getting-inside-a-container)
|
[Back to table of contents](#toc-chapter-4)
|
[Next section](#toc-container-networking-basics)
]

.debug[(automatically generated title slide)]

---

# Limiting resources

- So far, we have used containers as convenient units of deployment.

- What happens when a container tries to use more resources than available?

  (RAM, CPU, disk usage, disk and network I/O...)

- What happens when multiple containers compete for the same resource?

- Can we limit resources available to a container?

  (Spoiler alert: yes!)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)]

---

## Container processes are normal processes

- Containers are closer to "fancy processes" than to "lightweight VMs".

- A process running in a container is, in fact, a process running on the host.

- Let's look at the output of `ps` on a container host running 3 containers:

```
    0  2662  0.2  0.3 /usr/bin/dockerd -H fd://
    0  2766  0.1  0.1  \_ docker-containerd --config /var/run/docker/containe
    0 23479  0.0  0.0      \_ docker-containerd-shim -namespace moby -workdir
    0 23497  0.0  0.0      |   \_ `nginx`: master process nginx -g daemon off;
  101 23543  0.0  0.0      |       \_ `nginx`: worker process
    0 23565  0.0  0.0      \_ docker-containerd-shim -namespace moby -workdir
  102 23584  9.4 11.3      |   \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
    0 23707  0.0  0.0      \_ docker-containerd-shim -namespace moby -workdir
    0 23725  0.0  0.0          \_ `/bin/sh`
```

- The highlighted processes are containerized processes.

  (That host is running nginx, elasticsearch, and alpine.)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)]

---

## By default: nothing changes

- What happens when a process uses too much memory on a Linux system?

--

- Simplified answer:

  - swap is used (if available);

  - if there is not enough swap space, eventually, the out-of-memory killer is invoked;

  - the OOM killer uses heuristics to kill processes;

  - sometimes, it kills an unrelated process.

--

- What happens when a container uses too much memory?

- The same thing!

  (i.e., a process eventually gets killed, possibly in another container.)
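If you suspect that this happened on a host, the kernel log keeps a trace of it (the exact wording varies with the kernel version):

```bash
# On the host, look for OOM killer activity in the kernel log.
dmesg | grep -i -E 'out of memory|oom-killer|killed process'
```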
.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting container resources - The Linux kernel offers rich mechanisms to limit container resources. - For memory usage, the mechanism is part of the *cgroup* subsystem. - This subsystem allows to limit the memory for a process or a group of processes. - A container engine leverages these mechanisms to limit memory for a container. - The out-of-memory killer has a new behavior: - it runs when a container exceeds its allowed memory usage, - in that case, it only kills processes in that container. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting memory in practice - The Docker Engine offers multiple flags to limit memory usage. - The two most useful ones are `--memory` and `--memory-swap`. - `--memory` limits the amount of physical RAM used by a container. - `--memory-swap` limits the total amount (RAM+swap) used by a container. - The memory limit can be expressed in bytes, or with a unit suffix. (e.g.: `--memory 100m` = 100 megabytes.) - We will see two strategies: limiting RAM usage, or limiting both .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting RAM usage Example: ```bash docker run -ti --memory 100m python ``` If the container tries to use more than 100 MB of RAM, *and* swap is available: - the container will not be killed, - memory above 100 MB will be swapped out, - in most cases, the app in the container will be slowed down (a lot). If we run out of swap, the global OOM killer still intervenes. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting both RAM and swap usage Example: ```bash docker run -ti --memory 100m --memory-swap 100m python ``` If the container tries to use more than 100 MB of memory, it is killed. On the other hand, the application will never be slowed down because of swap. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## When to pick which strategy? - Stateful services (like databases) will lose or corrupt data when killed - Allow them to use swap space, but monitor swap usage - Stateless services can usually be killed with little impact - Limit their mem+swap usage, but monitor if they get killed - Ultimately, this is no different from "do I want swap, and how much?" .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting CPU usage - There are no less than 3 ways to limit CPU usage: - setting a relative priority with `--cpu-shares`, - setting a CPU% limit with `--cpus`, - pinning a container to specific CPUs with `--cpuset-cpus`. - They can be used separately or together. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Setting relative priority - Each container has a relative priority used by the Linux scheduler. - By default, this priority is 1024. - As long as CPU usage is not maxed out, this has no effect. 
- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority. - In other words: a container with `--cpu-shares 2048` will receive twice as much than the default. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Setting a CPU% limit - This setting will make sure that a container doesn't use more than a given % of CPU. - The value is expressed in CPUs; therefore: `--cpus 0.1` means 10% of one CPU, `--cpus 1.0` means 100% of one whole CPU, `--cpus 10.0` means 10 entire CPUs. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Pinning containers to CPUs - On multi-core machines, it is possible to restrict the execution on a set of CPUs. - Examples: `--cpuset-cpus 0` forces the container to run on CPU 0; `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7; `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11. - This will not reserve the corresponding CPUs! (They might still be used by other containers, or uncontainerized processes.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting disk usage - Most storage drivers do not support limiting the disk usage of containers. (With the exception of devicemapper, but the limit cannot be set easily.) - This means that a single container could exhaust disk space for everyone. - In practice, however, this is not a concern, because: - data files (for stateful services) should reside on volumes, - assets (e.g. images, user-generated content...) should reside on object stores or on volume, - logs are written on standard output and gathered by the container engine. - Container disk usage can be audited with `docker ps -s` and `docker diff`. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous section](#toc-limiting-resources) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics ![A dense graph network](images/title-container-networking-basics.jpg) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Manipulate container networking basics. * Find a container's IP address. We will also explain the different network models used by Docker. 
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## A simple, static web server

Run the Docker Hub image `nginx`, which contains a basic web server:

```bash
$ docker run -d -P nginx
66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e
```

* Docker will download the image from the Docker Hub.

* `-d` tells Docker to run the image in the background.

* `-P` tells Docker to make this service reachable from other computers.

  (`-P` is the short version of `--publish-all`.)

But, how do we connect to our web server now?

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Finding our web server port

We will use `docker ps`:

```bash
$ docker ps
CONTAINER ID        IMAGE        ...   PORTS                   ...
e40ffb406c9e        nginx        ...   0.0.0.0:32768->80/tcp   ...
```

* The web server is running on port 80 inside the container.

* This port is mapped to port 32768 on our Docker host.

We will explain the whys and hows of this port mapping.

But first, let's make sure that everything works properly.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Connecting to our web server (GUI)

Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80.

![Screenshot](images/welcome-to-nginx.png)

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Connecting to our web server (CLI)

You can also use `curl` directly from the Docker host.

Make sure to use the right port number if it is different from the example below:

```bash
$ curl localhost:32768
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
```

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## How does Docker know which port to map?

* There is metadata in the image telling "this image has something on port 80".

* We can see that metadata with `docker inspect`:

```bash
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```

* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.

* We can see that with `docker history`:

```bash
$ docker history nginx
IMAGE               CREATED             CREATED BY
7f70b30f2cc6        11 days ago         /bin/sh -c #(nop)  CMD ["nginx" "-g" "…
<missing>           11 days ago         /bin/sh -c #(nop)  STOPSIGNAL [SIGTERM]
<missing>           11 days ago         /bin/sh -c #(nop)  EXPOSE 80/tcp
```

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Why are we mapping ports?

* We are out of IPv4 addresses.

* Containers cannot have public IPv4 addresses.

* They have private addresses.

* Services have to be exposed port by port.

* Ports have to be mapped to avoid conflicts.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Finding the web server port in a script

Parsing the output of `docker ps` would be painful.
There is a command to help us:

```bash
$ docker port <containerID> 80
32768
```

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Manual allocation of port numbers

If you want to set port numbers yourself, no problem:

```bash
$ docker run -d -p 80:80 nginx
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx
```

* We are running three NGINX web servers.

* The first one is exposed on port 80.

* The second one is exposed on port 8000.

* The third one is exposed on ports 8080 and 8888.

Note: the convention is `port-on-host:port-on-container`.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Plumbing containers into your infrastructure

There are many ways to integrate containers in your network.

* Start the container, letting Docker allocate a public port for it.

  Then retrieve that port number and feed it to your configuration.

* Pick a fixed port number in advance, when you generate your configuration.

  Then start your container by setting the port numbers manually.

* Use a network plugin, connecting your containers with e.g. VLANs, tunnels...

* Enable *Swarm Mode* to deploy across a cluster.

  The container will then be reachable through any node of the cluster.

When using Docker through an extra management layer like Mesos or Kubernetes, these will usually provide their own mechanism to expose containers.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Finding the container's IP address

We can use the `docker inspect` command to find the IP address of the container.

```bash
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
172.17.0.3
```

* `docker inspect` is an advanced command, that can retrieve a ton of information about our containers.

* Here, we provide it with a format string to extract exactly the private IP address of the container.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Pinging our container

We can test connectivity to the container using the IP address we've just discovered.

Let's see this now by using the `ping` tool.

```bash
$ ping <ipAddress>
64 bytes from <ipAddress>: icmp_req=1 ttl=64 time=0.085 ms
64 bytes from <ipAddress>: icmp_req=2 ttl=64 time=0.085 ms
64 bytes from <ipAddress>: icmp_req=3 ttl=64 time=0.085 ms
```

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)]

---

## Section summary

We've learned how to:

* Expose a network port.

* Manipulate container networking basics.

* Find a container's IP address.

In the next chapter, we will see how to connect containers together without exposing their ports.
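As a recap, the whole sequence fits in a few commands (the container name is arbitrary, and the port and IP address will be different on your machine):

```bash
# Start a web server and let Docker pick a host port.
docker run -d --name web -P nginx

# Which host port was mapped to container port 80?
docker port web 80

# Test the web server through that port (quick-and-dirty parsing).
curl localhost:$(docker port web 80 | head -n1 | cut -d: -f2)

# Find the container's private IP address.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
```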
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous section](#toc-container-networking-basics) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports many different network drivers. The built-in drivers include: * `bridge` (default) * `none` * `host` * `container` The driver is selected with `docker run --net ...`. The different drivers are explained with more details on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) 
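For example, a throwaway container can be attached to the network stack of an existing one (the `myredis` name is just an illustration):

```bash
# Start a Redis server normally.
docker run -d --name myredis redis

# Start another container sharing myredis' network stack:
# it sees the exact same interfaces, addresses, and ports.
docker run --rm --net container:myredis alpine netstat -tln
# The Redis port (6379) shows up, even though nothing listens in this container.
```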
.debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous section](#toc-container-network-drivers) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model ![A denser graph network](images/title-the-container-network-model.jpg) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model The CNM was introduced in Engine 1.9.0 (November 2015). The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`. ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## What's in a network? * Conceptually, a network is a virtual switch. * It can be local (to a single Engine) or global (spanning multiple hosts). * A network has an IP subnet associated to it. * Docker will allocate IP addresses to the containers connected to a network. * Containers can be connected to multiple networks. * Containers can be given per-network names and aliases. * The names and aliases can be resolved via an embedded DNS server. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Network implementation details * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * A multi-host driver, *overlay*, is available out of the box (for Swarm clusters). * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). 
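For instance, `docker network inspect` can show the driver and the IPAM configuration of a network (here, the default bridge; the subnet will vary on your machine):

```bash
# Show the driver and the IPAM configuration of the default bridge network.
docker network inspect --format '{{ .Driver }} {{ json .IPAM.Config }}' bridge
```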
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Differences with the CNI * CNI = Container Network Interface * CNI is used notably by Kubernetes * With CNI, all the nodes and containers are on a single IP network * Both CNI and CNM offer the same functionality, but with very different methods .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Single container in a Docker network ![bridge0](images/bridge1.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on a single Docker network ![bridge2](images/bridge2.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on two Docker networks ![bridge3](images/bridge3.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses In Docker Engine 1.9, name resolution is implemented with `/etc/hosts`, and updating it each time containers are added/removed. 
.small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] In Docker Engine 1.10, this has been replaced by a dynamic resolver. (This avoids race conditions when updating `/etc/hosts`.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous section](#toc-the-container-network-model) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-ambassadors) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a redis data store. * We will place them both on the `dev` network created before. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page: ![Trainingwheels error](images/trainingwheels-error.png) * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right name (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --name redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly: ![Trainingwheels OK](images/trainingwheels-ok.png) * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* * What if we want to run multiple copies of our application? * Since names are unique, there can be only one container named `redis` at a time. * However, we can specify the network name of our container with `--net-alias`. 
* `--net-alias` is scoped per network, and independent from the container name. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Using a network alias instead of a name Let's remove the `redis` container: ```bash $ docker rm -f redis ``` And create one that doesn't block the `redis` name: ```bash $ docker run --net dev --net-alias redis -d redis ``` Check that the app still works (but the counter is back to 1, since we wiped out the old Redis container). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. 
Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Good to know ... * Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows to specify a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used for the container. A full example would look like this. 
```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host. * If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` See https://jpetazzo.github.io/container.training for all the deets about clustering! .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows us to connect and disconnect networks while the container runs. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect <network> <container>` * `docker network disconnect <network> <container>` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container.
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev <container_id> ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container. * It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev <container_id> ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, in the order in which the networks were connected) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services; but only the ones in `dev` or `prod`. * However, we can look up `es.dev` or `es.prod` if we need to.
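For instance, here is a quick check of a fully-qualified alias (a sketch; the addresses shown are the ones from our earlier `prod` example and will differ on your machine):

```bash
$ docker run --rm --net prod alpine nslookup es.prod
Name:      es.prod
Address 1: 172.23.0.2 prod-es-1.prod
Address 2: 172.23.0.3 prod-es-2.prod
```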
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ... ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can build a Dockerfile with a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-ambassadors class: title Ambassadors .nav[ [Previous section](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- class: title # Ambassadors ![Two serious-looking persons shaking hands](images/title-ambassador.jpg) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## The ambassador pattern Ambassadors are containers that "masquerade" or "proxy" for another service. They abstract the connection details for these services, and can help with: * discovery (where is my service actually running?) * migration (what if my service has to be moved while I use it?) * failover (how do I know to which instance of a replicated service I should connect?) * load balancing (how do I spread my requests across multiple instances of a service?) * authentication (what if my service requires credentials, certificates, or otherwise?) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Introduction to Ambassadors The ambassador pattern: * Takes advantage of Docker's per-container naming system and abstracts connections between services. * Allows you to manage services without hard-coding connection information inside applications. To do this, instead of directly connecting containers, you insert ambassador containers.
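To make this more concrete, a very minimal ambassador can be little more than a TCP relay. Here is a sketch (the network name, upstream host, and port below are made up for illustration; `alpine/socat` is a community image whose entrypoint is `socat`):

```bash
# Create an application network, then run an ambassador on it under the alias "redis".
# It listens on the default Redis port and forwards every connection to the real service.
$ docker network create myapp
$ docker run -d --net myapp --net-alias redis \
    alpine/socat TCP-LISTEN:6379,fork,reuseaddr TCP:redis.example.com:12345
```

Any container on `myapp` that connects to `redis:6379` is transparently relayed to the real Redis service, wherever it actually runs.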
.debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- class: pic ![ambassador](images/ambassador-diagram.png) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Interacting with ambassadors * The web container uses normal Docker networking to connect to the ambassador. * The database container also talks with an ambassador. * For both containers, the ambassador is totally transparent. (There is no difference between normal operation and operation with an ambassador.) * If the database container is moved (or a failover happens), its new location will be tracked by the ambassador containers, and the web application container will still be able to connect, without reconfiguration. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for simple service discovery Use case: * my application code connects to `redis` on the default port (6379), * my Redis service runs on another machine, on a non-default port (e.g. 12345), * I want to use an ambassador to let my application connect without modification. The ambassador will be: * a container running right next to my application, * using the name `redis` (or linked as `redis`), * listening on port 6379, * forwarding connections to the actual Redis service. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for service migration Use case: * my application code still connects to `redis`, * my Redis service runs somewhere else, * my Redis service is moved to a different host+port, * the location of the Redis service is given to me via e.g. DNS SRV records, * I want to use an ambassador to automatically connect to the new location, with as little disruption as possible. The ambassador will be: * the same kind of container as before, * running an additional routine to monitor DNS SRV records, * updating the forwarding destination when the DNS SRV records are updated. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for credentials injection Use case: * my application code still connects to `redis`, * my application code doesn't provide Redis credentials, * my production Redis service requires credentials, * my staging Redis service requires different credentials, * I want to use an ambassador to abstract those credentials. The ambassador will be: * a container using the name `redis` (or a link), * passed the credentials to use, * running a custom proxy that accepts connections on Redis default port, * performing authentication with the target Redis service before forwarding traffic. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for load balancing Use case: * my application code connects to a web service called `api`, * I want to run multiple instances of the `api` backend, * those instances will be on different machines and ports, * I want to use an ambassador to abstract those details. The ambassador will be: * a container using the name `api` (or a link), * passed the list of backends to use (statically or dynamically), * running a load balancer (e.g. 
HAProxy or NGINX), * dispatching requests across all backends transparently. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## "Ambassador" is a *pattern* There are many ways to implement the pattern. Different deployments will use different underlying technologies. * On-premise deployments with a trusted network can track container locations in e.g. ZooKeeper, and generate HAProxy configurations each time a location key changes. * Public cloud deployments or deployments across unsafe networks can add TLS encryption. * Ad-hoc deployments can use a master-less discovery protocol like Avahi to register and discover services. * It is also possible to do one-shot reconfiguration of the ambassadors. It is slightly less dynamic, but has far fewer requirements. * Ambassadors can be used in addition to, or instead of, overlay networks. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Service meshes * A service mesh is a configurable network layer. * It can provide service discovery, high availability, load balancing, observability... * Service meshes are particularly useful for microservices applications. * Service meshes are often implemented as proxies. * Applications connect to the service mesh, which relays the connection where needed. *Does that sound familiar?* .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors and service meshes * When using a service mesh, a "sidecar container" is often used as a proxy * Our services connect (transparently) to that sidecar container * That sidecar container figures out where to forward the traffic ... Does that sound familiar? (It should, because service meshes are essentially app-wide or cluster-wide ambassadors!) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Section summary We've learned how to: * Understand the ambassador pattern and what it is used for (service portability).
For more information about the ambassador pattern, including demos on Swarm and ECS: * AWS re:invent 2015 [DVO317](https://www.youtube.com/watch?v=7CZFpHUPqXw) * [SwarmWeek video about Swarm+Compose](https://youtube.com/watch?v=qbIvUvwa6As) Some service meshes and related projects: * [Istio](https://istio.io/) * [Linkerd](https://linkerd.io/) * [Gloo](https://gloo.solo.io/) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous section](#toc-ambassadors) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-windows-containers) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker ![Construction site](images/title-local-development-workflow-with-docker.jpg) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied into `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening.
-- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. -- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*) ![web application 1](images/webapp-in-blue.png) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *volume* to mount local files into the container * Make changes locally * Changes are reflected into the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * If `[host-path]` or `[container-path]` doesn't exist it is created. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. There will be a full chapter about volumes! .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. 
-- * The color should have changed. ![web application 2](images/webapp-in-red.png) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes * Volumes do *not* copy or synchronize files between the host and the container. * Volumes are *bind mounts*: a kernel mechanism associating one path with another. * Bind mounts are *kind of* similar to symbolic links, but at a very different level. * Changes made on the host or on the container will be visible on the other side. (Since under the hood, it's the same file on both anyway.) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html) by Chad Fowler, where he explains the concept of immutable infrastructure.)* -- * Let's mess up majorly with our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the containers, using regular tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container that is already running. If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the environment where the app is running, and more! $ docker exec -it <container_id> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <container_id> ``` And remove it. ```bash $ docker rm <container_id> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-windows-containers class: title Windows Containers .nav[ [Previous section](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Windows Containers ![Container with Windows](images/windows-containers.jpg) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Objectives At the end of this section, you will be able to: * Understand Windows Containers vs. Linux Containers. * Know about the features of Docker for Windows for choosing architecture. * Run other container architectures via QEMU emulation. .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Are containers *just* for Linux? Remember that a container must run on the kernel of the OS it's on. - This is both a benefit and a limitation. (It makes containers lightweight, but limits them to a specific kernel.) - At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs. - Since then, many platforms and OSes have been added. (Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!) -- - Docker Desktop (macOS and Windows) can run containers for other architectures (Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## History of Windows containers - In early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers" - Win 10 expects Docker for Windows to be installed for full features - These must run in Hyper-V mini-VMs with a Windows Server x64 kernel - No "scratch" containers, so use "Core" and "Nano" Server OS base layers - Since Hyper-V is required, Windows 10 Home won't work (yet...) -- - In late 2016, Windows Server 2016 shipped with native Docker support - Installed via PowerShell, doesn't need Docker for Windows - Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## LCOW (Linux Containers On Windows) While Docker on Windows is largely playing catch-up with Docker on Linux, it's moving fast; and this is one thing that you *cannot* do on Linux! - LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/). - It can run Linux and Windows containers side-by-side on Win 10. - It is no longer necessary to switch the Engine to "Linux Containers". (In fact, if you want to run both Linux and Windows containers at the same time, make sure that your Engine is set to "Windows Containers" mode!) -- If you are a Docker for Windows user, start your engine and try this: ```bash docker pull microsoft/nanoserver:1803 ``` (Make sure to switch to "Windows Containers mode" if necessary.) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Run Both Windows and Linux containers - Run a Windows Nano Server (minimal CLI-only server) ```bash docker run --rm -it microsoft/nanoserver:1803 powershell Get-Process exit ``` - Run busybox on Linux in LCOW ```bash docker run --rm --platform linux busybox echo hello ``` (Although you will not be able to see them, this will create hidden Nano and LinuxKit VMs in Hyper-V!) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Did We Say Things Move Fast? - Things keep improving. - Now `--platform` defaults to `windows`, and some images support both: - golang, mongo, python, redis, hello-world ... and more being added - you should still use `--platform` with multi-OS images to be certain - Windows Containers now support `localhost` accessible containers (July 2018) - Microsoft (April 2018) added Hyper-V support to Windows 10 Home ... ... so stay tuned for Docker support, maybe?!? .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Other Windows container options Most "official" Docker images don't run on Windows yet. Places to Look: - Hub Official: https://hub.docker.com/u/winamd64/ - Microsoft: https://hub.docker.com/r/microsoft/ .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## SQL Server?
Choice of Linux or Windows - Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux) - Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Windows Tools and Tips - PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion) - Best Shell GUI: [Cmder.net](https://cmder.net/) - Good Windows Container Blogs and How-To's - Docker DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/) - Docker Captain [Nicholas Dille](https://dille.name/blog/) - Docker Captain [Stefan Scherer](https://stefanscherer.github.io/) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg)] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous section](#toc-windows-containers) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes ![volume](images/title-working-with-volumes.jpg) .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with "volume drivers". .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways. * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either.
* If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories. They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a /, it is considered to be a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container. * In this example, we will run a text editor in the other container. (But this could be a FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. 
```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. 
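A few typical housekeeping commands look like this (a sketch; `docker volume prune` has existed since Docker 1.13 and is destructive, so double-check what it will remove):

```bash
$ docker volume ls              # list all volumes on this host
$ docker volume rm pgdata-dev   # remove one specific volume
$ docker volume prune           # remove all volumes not used by at least one container
```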
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and to what they are bound, use `docker inspect` (again): ```bash $ docker inspect <container_id> [{ "ID": "<container_id>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides a distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous section](#toc-working-with-volumes) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfiles are great to build container images. But what if we work with a complex stack made of multiple containers? Eventually, we will want to write some custom scripts and automation to build, run, and connect our containers together. There is a better way: using Docker Compose. In this section, you will use Compose to bootstrap a development environment. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## What is Docker Compose? Docker Compose (formerly known as `fig`) is an external tool. Unlike the Docker Engine, it is written in Python. It's open source as well. The general idea of Compose is to enable a very simple, powerful onboarding workflow: 1. Checkout your code. 2. Run `docker-compose up`. 3. Your app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose overview This is how you work with Compose: * You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`. * You run `docker-compose up`. * Compose automatically pulls images, builds containers, and starts them. * Compose can set up links, volumes, and other Docker options for you. * Compose can run the containers in the background, or in the foreground. * When containers are running in the foreground, their aggregated output is shown. Before diving in, let's see a small example of Compose in action. 
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic ![composeup](images/composeup.gif) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking if Compose is installed If you are using the official training virtual machines, Compose has been pre-installed. If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them. If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`. You can always check that it is installed by running: ```bash $ docker-compose --version ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash $ cd $ git clone https://github.com/jpetazzo/trainingwheels ... $ cd trainingwheels ``` Second step: start your app. ```bash $ docker-compose up ``` Watch Compose build and run your app with the correct parameters, including linking the relevant containers together. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose Verify that the app is running at `http://<yourHostIP>:8000`. ![composeapp](images/composeapp.png) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When you hit `^C`, Compose tries to gracefully terminate all of the containers. After ten seconds (or if you press `^C` again), it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The `docker-compose.yml` file Here is the file used in the demo: .small[ ```yaml version: "2" services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.) * `services` is mandatory. A service is one or more replicas of the same image running as containers. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without `version` and `services`, it's a legacy v1 file.) * Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in `docker-compose.yml` Each service in the YAML file must contain either `build`, or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). * If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they have several minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose commands We already saw `docker-compose up`, but another one is `docker-compose build`. It will execute `docker build` for all containers mentioning a `build` path. It can also be invoked automatically when starting the application: ```bash docker-compose up --build ``` Another common option is to start containers in the background: ```bash docker-compose up -d ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Check container status It can be tedious to check the status of your containers with `docker ps`, especially when running multiple apps at the same time. Compose makes it easier; with `docker-compose ps` you will see only the status of the containers of the current stack: ```bash $ docker-compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker-compose kill ``` Likewise, `docker-compose rm` will let you remove containers (after confirmation): ```bash $ docker-compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker-compose down` will stop and remove containers. 
It will also remove other resources, like networks that were created for the application. ```bash $ docker-compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker-compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes Compose is smart. If your container uses volumes, when you restart your application, Compose will create a new container, but carefully re-use the volumes it was using previously. This makes it easy to upgrade a stateful service, by pulling its new image and just restarting your stack with Compose. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose project name * When you run a Compose command, Compose infers the "project name" of your app. * By default, the "project name" is the name of the current directory. * For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`. * All resources created by Compose are tagged with this project name. * The project name also appears as a prefix of the names of the resources. E.g. in the previous example, service `www` will create a container `ocarina_www_1`. * The project name can be overridden with `docker-compose -p`. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running two copies of the same app If you want to run two copies of the same app simultaneously, all you have to do is make sure that each copy has a different project name. You can: * copy your code into a directory with a different name * start each copy with `docker-compose -p myprojname up` Each copy will run in a different network, totally isolated from the other. This is ideal for debugging regressions, doing side-by-side comparisons, etc. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-exercise--writing-a-compose-file class: title Exercise – writing a Compose file .nav[ [Previous section](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-managing-hosts-with-docker-machine) ] .debug[(automatically generated title slide)] --- # Exercise – writing a Compose file Let's write a Compose file for the wordsmith app!
The code is at: https://github.com/jpetazzo/wordsmith

.debug[[intro-fullday.yml](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/intro-fullday.yml)]

---

class: pic

.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)]

---

name: toc-managing-hosts-with-docker-machine
class: title

Managing hosts with Docker Machine

.nav[
[Previous section](#toc-exercise--writing-a-compose-file)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-advanced-dockerfiles)
]

.debug[(automatically generated title slide)]

---

# Managing hosts with Docker Machine

- Docker Machine is a tool to provision and manage Docker hosts.

- It automates the creation of a virtual machine:

  - locally, with a tool like VirtualBox or VMware;
  - on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.;
  - on a private cloud like OpenStack.

- It can also configure existing machines through an SSH connection.

- It can manage as many hosts as you want, with as many "drivers" as you want.

.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)]

---

## Docker Machine workflow

1) Prepare the environment: set up VirtualBox, obtain cloud credentials ...

2) Create hosts with `docker-machine create -d drivername machinename`.

3) Use a specific machine with `eval $(docker-machine env machinename)`.

4) Profit!

.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)]

---

## Environment variables

- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.

- These variables are:

  - `DOCKER_HOST` (indicates address+port to connect to, or path of UNIX socket)
  - `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used)
  - `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)

- `docker-machine env ...` will generate the variables needed to connect to a host.

- `eval $(docker-machine env ...)` sets these variables in the current shell.

.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)]

---

## Host management features

With `docker-machine`, we can:

- upgrade a host to the latest version of the Docker Engine,
- start/stop/restart hosts,
- get a shell on a remote machine (with SSH),
- copy files to/from remote machines (with SCP),
- mount a remote host's directory on the local machine (with SSHFS),
- ...

.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)]

---

## The `generic` driver

When provisioning a new host, `docker-machine` executes these steps:

1) Create the host using a cloud or hypervisor API.

2) Connect to the host over SSH.

3) Install and configure Docker on the host.

With the `generic` driver, we provide the IP address of an existing host (instead of e.g. cloud credentials) and we omit the first step.

This allows to provision physical machines, or VMs provided by a 3rd party, or use a cloud for which we don't have a provisioning API.
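For illustration, a rough sketch of what the `generic` driver could look like in practice (the address, SSH user, key path, and machine name below are placeholders to adapt to your own host):

```bash
$ docker-machine create -d generic \
    --generic-ip-address=203.0.113.10 \
    --generic-ssh-user=ubuntu \
    --generic-ssh-key=~/.ssh/id_rsa \
    remotebox
$ eval $(docker-machine env remotebox)
$ docker version
```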
.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)]

---

class: pic

.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)]

---

name: toc-advanced-dockerfiles
class: title

Advanced Dockerfiles

.nav[
[Previous section](#toc-managing-hosts-with-docker-machine)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-application-configuration)
]

.debug[(automatically generated title slide)]

---

class: title

# Advanced Dockerfiles

![construction](images/title-advanced-dockerfiles.jpg)

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## Objectives

We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will see more Dockerfile commands.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## `Dockerfile` usage summary

* `Dockerfile` instructions are executed in order.
* Each instruction creates a new layer in the image.
* Docker maintains a cache with the layers of previous builds.
* When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer.
* The `FROM` instruction MUST be the first non-comment instruction.
* Lines starting with `#` are treated as comments.
* Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.)

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## The `RUN` instruction

The `RUN` instruction can be specified in two ways.

With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`:

```dockerfile
RUN apt-get update
```

Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`:

```dockerfile
RUN [ "apt-get", "update" ]
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## More about the `RUN` instruction

`RUN` will do the following:

* Execute a command.
* Record changes made to the filesystem.
* Work great to install libraries, packages, and various files.

`RUN` will NOT do the following:

* Record state of *processes*.
* Automatically start daemons.

If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`.
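A minimal sketch of that difference (the package and command are arbitrary examples): `RUN` executes at build time and its results are baked into the image, while `CMD` only records what to start at run time.

```dockerfile
# Build time: install the web server; the resulting files end up in the image.
RUN apt-get update && apt-get install -y nginx
# Run time: nothing is executed during the build; this is only recorded as
# metadata and executed when the container starts.
CMD ["nginx", "-g", "daemon off;"]
```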
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines: ```dockerfile RUN apt-get update \ && apt-get install -y wget \ && apt-get clean ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. ```dockerfile EXPOSE 8080 EXPOSE 80 443 EXPOSE 53/tcp 53/udp ``` * All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. * The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent: ```dockerfile COPY . /src COPY / /src ``` Attempts to use `..` to get out of the build context will be detected and blocked with Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. `ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. `ADD` will automatically unpack zip files and tar archives: ```dockerfile ADD ./assets.zip /var/www/htdocs/assets/ ``` This would unpack `assets.zip` into `/var/www/htdocs/assets`. *However,* `ADD` will not automatically unpack remote archives. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup. 
* For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories. Volumes can be attached to multiple containers, allowing to "port" data over from a container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. ```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` This will result in an environment variable being created in any containers created from this image of ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` Means we don't need to specify `nginx -g "daemon off;"` when running the container. Instead of: ```bash $ docker run /web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run /web_image ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms. 
The first executes in a shell:

```dockerfile
CMD nginx -g "daemon off;"
```

The second executes directly, without shell processing:

```dockerfile
CMD [ "nginx", "-g", "daemon off;" ]
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

class: extra-details

## Overriding the `CMD` instruction

The `CMD` can be overridden when you run a container.

```bash
$ docker run -it /web_image bash
```

Will run `bash` instead of `nginx -g "daemon off;"`.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## The `ENTRYPOINT` instruction

The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point.

Note: you have to use the "exec" syntax (`[ "..." ]`).

```dockerfile
ENTRYPOINT [ "/bin/ls" ]
```

If we were to run:

```bash
$ docker run training/ls -l
```

Instead of trying to run `-l`, the container will run `/bin/ls -l`.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

class: extra-details

## Overriding the `ENTRYPOINT` instruction

The entry point can be overridden as well.

```bash
$ docker run -it training/ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
$ docker run -it --entrypoint bash training/ls
root@d902fb7b1fc7:/#
```

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## How `CMD` and `ENTRYPOINT` interact

The `CMD` and `ENTRYPOINT` instructions work best when used together.

```dockerfile
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
```

The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. On the command line we can then potentially override the options when needed.

```bash
$ docker run -d /web_image -t
```

This will override the options `CMD` provided with new flags.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

## Advanced Dockerfile instructions

* `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one.
* `LABEL` adds arbitrary metadata to the image.
* `ARG` defines build-time variables (optional or mandatory).
* `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default).
* `HEALTHCHECK` defines a command assessing the status of the container.
* `SHELL` sets the default program to use for string-syntax RUN, CMD, etc.

.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

class: extra-details

## The `ONBUILD` instruction

The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built.

This is useful for building images which will be used as a base to build other images.

```dockerfile
ONBUILD COPY . /src
```

* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
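As a sketch (the image name `mybase` and the file layout are hypothetical), a base image for Python apps could stash its build steps with `ONBUILD`; they then run automatically whenever a downstream Dockerfile uses it as a base:

```dockerfile
# Dockerfile of the hypothetical base image "mybase"
FROM python:3
ONBUILD COPY requirements.txt /app/
ONBUILD RUN pip install -r /app/requirements.txt
ONBUILD COPY . /app/
```

```dockerfile
# Dockerfile of a child image: the ONBUILD steps above execute during *this* build
FROM mybase
WORKDIR /app
CMD ["python", "app.py"]
```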
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)]

---

class: pic

.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)]

---

name: toc-application-configuration
class: title

Application Configuration

.nav[
[Previous section](#toc-advanced-dockerfiles)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-logging)
]

.debug[(automatically generated title slide)]

---

# Application Configuration

There are many ways to provide configuration to containerized applications.

There is no "best way" — it depends on factors like:

* configuration size,
* mandatory and optional parameters,
* scope of configuration (per container, per app, per customer, per site, etc),
* frequency of changes in the configuration.

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

## Command-line parameters

```bash
docker run jpetazzo/hamba 80 www1:80 www2:80
```

* Configuration is provided through command-line parameters.

* In the above example, the `ENTRYPOINT` is a script that will:

  - parse the parameters,
  - generate a configuration file,
  - start the actual service.

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

## Command-line parameters pros and cons

* Appropriate for mandatory parameters (without which the service cannot start).

* Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!)

* Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.)

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

## Environment variables

```bash
docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana
```

* Configuration is provided through environment variables.

* The environment variable can be used straight by the program, or by a script generating a configuration file.

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

## Environment variables pros and cons

* Appropriate for optional parameters (since the image can provide default values).

* Also convenient for services instantiated many times. (It's as easy as command-line parameters.)

* Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.)

* Ability to introspect possible parameters and their default values.

* Not great for dynamic configurations.

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

## Baked-in configuration

```dockerfile
FROM prometheus
COPY prometheus.conf /etc
```

* The configuration is added to the image.

* The image may have a default configuration; the new configuration can:

  - replace the default configuration,
  - extend it (if the code can read multiple configuration files).
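For instance, with the simplified Dockerfile above, applying a configuration change means rebuilding and re-running the image; a quick sketch (the image tag is arbitrary):

```bash
# Rebuild the image with the new prometheus.conf, then start it:
$ docker build -t prometheus-custom .
$ docker run prometheus-custom
```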
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires to write a configuration file. (Obviously!) * Requires to build an image to start the service. * Requires to rebuild the image to reconfigure the service. * Requires to rebuild the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup, that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires to create a volume for each different configuration. * Services with identical configurations can use the same volume. * Doesn't require to build / rebuild an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API being itself served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...) in:] * command-line or environment variables (anyone with Docker API access can get them), * images, especially stored in a registry. Secrets management is better handled with an orchestrator (like Swarm or Kubernetes). Orchestrators will allow to pass secrets in a "one-way" manner. Managing secrets securely without an orchestrator can be contrived. E.g.: - read the secret on stdin when the service starts, - pass the secret using an API endpoint. 
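A very rough sketch of the "read the secret on stdin" idea (the image and file names are made up): the secret never appears in the image, on the command line, or in the environment.

```bash
# The hypothetical "myapp" image reads its database password on stdin at startup.
$ docker run -i myapp < db_password.txt
```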
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)]

---

class: pic

.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)]

---

name: toc-logging
class: title

Logging

.nav[
[Previous section](#toc-application-configuration)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-deep-dive-into-container-internals)
]

.debug[(automatically generated title slide)]

---

# Logging

In this chapter, we will explain the different ways to send logs from containers.

We will then show one particular method in action, using ELK and Docker's logging drivers.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## There are many ways to send logs

- The simplest method is to write on the standard output and error.

- Applications can write their logs to local files. (The files are usually periodically rotated and compressed.)

- It is also very common (on UNIX systems) to use syslog. (The logs are collected by syslogd or an equivalent like journald.)

- In large applications with many components, it is common to use a logging service. (The code uses a library to send messages to the logging service.)

*All these methods are available with containers.*

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Writing on stdout/stderr

- The standard output and error of containers is managed by the container engine.

- This means that each line written by the container is received by the engine.

- The engine can then do "whatever" with these log lines.

- With Docker, the default configuration is to write the logs to local files.

- The files can then be queried with e.g. `docker logs` (and the equivalent API request).

- This can be customized, as we will see later.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Writing to local files

- If we write to files, it is possible to access them, but it is cumbersome. (We have to use `docker exec` or `docker cp`.)

- Furthermore, if the container is stopped, we cannot use `docker exec`.

- If the container is deleted, the logs disappear.

- What should we do for programs that can only log to local files?

--

- There are multiple solutions.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Using a volume or bind mount

- Instead of writing logs to a normal directory, we can place them on a volume.

- The volume can be accessed by other containers.

- We can run a program like `filebeat` in another container accessing the same volume. (`filebeat` reads local log files continuously, like `tail -f`, and sends them to a centralized system like ElasticSearch.)

- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.

- The container will write log files to a directory mapped to a host directory.

- The log files will appear on the host and be consumable directly from the host.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Using logging services

- We can use logging frameworks (like log4j or the Python `logging` package).
- These frameworks require some code and/or configuration in our application.

- These mechanisms can be used identically inside or outside of containers.

- Sometimes, we can leverage containerized networking to simplify their setup.

- For instance, our code can send log messages to a server named `log`.

- The name `log` will resolve to different addresses in development, production, etc.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Using syslog

- What if our code (or the program we are running in containers) uses syslog?

- One possibility is to run a syslog daemon in the container.

- Then that daemon can be set up to write to local files or forward to the network.

- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.

- We can expose a syslog socket to the container (by using a volume or bind-mount).

- Then just create a symlink from `/dev/log` to the syslog socket.

- Voilà!

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Using logging drivers

- If we log to stdout and stderr, the container engine receives the log messages.

- The Docker Engine has a modular logging system with many plugins, including:

  - json-file (the default one)
  - syslog
  - journald
  - gelf
  - fluentd
  - splunk
  - etc.

- Each plugin can process and forward the logs to another process or system.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## A word of warning about `json-file`

- By default, log file size is unlimited.

- This means that a very verbose container *will* use up all your disk space. (Or a less verbose container, but running for a very long time.)

- Log rotation can be enabled by setting a `max-size` option.

- Older log files can be removed by setting a `max-file` option.

- Just like other logging options, these can be set per container, or globally.

Example:

```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Demo: sending logs to ELK

- We are going to deploy an ELK stack.

- It will accept logs over a GELF socket.

- We will run a few containers with the `gelf` logging driver.

- We will then see our logs in Kibana, the web interface provided by ELK.

*Important foreword: this is not an "official" or "recommended" setup; it is just an example. We used ELK in this demo because it's a popular setup and we keep being asked about it; but you will have equal success with Fluent or other logging stacks!*

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## What's in an ELK stack?
- ELK is three components:

  - ElasticSearch (to store and index log entries)
  - Logstash (to receive log entries from various sources, process them, and forward them to various destinations)
  - Kibana (to view/search log entries with a nice UI)

- The only component that we will configure is Logstash

- We will accept log entries using the GELF protocol

- Log entries will be stored in ElasticSearch, and displayed on Logstash's stdout for debugging

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Running ELK

- We are going to use a Compose file describing the ELK stack.

- The Compose file is in the container.training repository on GitHub.

```bash
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training
$ cd elk
$ docker-compose up
```

- Let's have a look at the Compose file while it's deploying.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Our basic ELK deployment

- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.

- We don't need to change the configuration of ElasticSearch.

- We need to tell Kibana the address of ElasticSearch:

  - it is set with the `ELASTICSEARCH_URL` environment variable,
  - by default it is `localhost:9200`; we change it to `elasticsearch:9200`.

- We need to configure Logstash:

  - we pass the entire configuration file through command-line arguments,
  - this is a hack so that we don't have to create an image just for the config.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Sending logs to ELK

- The ELK stack accepts log messages through a GELF socket.

- The GELF socket listens on UDP port 12201.

- To send a message, we need to change the logging driver used by Docker.

- This can be done globally (by reconfiguring the Engine) or on a per-container basis.

- Let's override the logging driver for a single container:

```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
    alpine echo hello world
```

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Viewing the logs in ELK

- Connect to the Kibana interface.

- It is exposed on port 5601.

- Browse http://X.X.X.X:5601.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## "Configuring" Kibana

- Kibana should offer to let you "Configure an index pattern": in the "Time-field name" drop down, select "@timestamp", and hit the "Create" button.

- Then:

  - click "Discover" (in the top-left corner),
  - click "Last 15 minutes" (in the top-right corner),
  - click "Last 1 hour" (in the list in the middle),
  - click "Auto-refresh" (top-right corner),
  - click "5 seconds" (top-left of the list).

- You should see a series of green bars (with one new green bar every minute).

- Our 'hello world' message should be visible there.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)]

---

## Important afterword

**This is not a "production-grade" setup.**

It is just an educational example.

Since we have only one node, we set up a single ElasticSearch instance and a single Logstash instance.
In a production setup, you need an ElasticSearch cluster (both for capacity and availability reasons). You also need multiple Logstash instances. And if you want to withstand bursts of logs, you need some kind of message queue: Redis if you're cheap, Kafka if you want to make sure that you don't drop messages on the floor. Good luck. If you want to learn more about the GELF driver, have a look at [this blog post]( https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/). .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-deep-dive-into-container-internals class: title Deep dive into container internals .nav[ [Previous section](#toc-logging) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Deep dive into container internals In this chapter, we will explain some of the fundamental building blocks of containers. This will give you a solid foundation so you can: - understand "what's going on" in complex situations, - anticipate the behavior of containers (performance, security...) in new scenarios, - implement your own container engine. The last item should be done for educational purposes only! .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## There is no container code in the Linux kernel - If we search "container" in the Linux kernel code, we find: - generic code to manipulate data structures (like linked lists, etc.), - unrelated concepts like "ACPI containers", - *nothing* relevant to "our" containers! - Containers are composed using multiple independent features. - On Linux, containers rely on "namespaces, cgroups, and some filesystem magic." - Security also requires features like capabilities, seccomp, LSMs... .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-namespaces class: title Namespaces .nav[ [Previous section](#toc-deep-dive-into-container-internals) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-control-groups) ] .debug[(automatically generated title slide)] --- # Namespaces - Provide processes with their own view of the system. - Namespaces limit what you can see (and therefore, what you can use). - These namespaces are available in modern kernels: - pid - net - mnt - uts - ipc - user (We are going to detail them individually.) - Each process belongs to one namespace of each type. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Namespaces are always active - Namespaces exist even when you don't use containers. - This is a bit similar to the UID field in UNIX processes: - all processes have the UID field, even if no user exists on the system - the field always has a value / the value is always defined (i.e. 
any process running on the system has some UID)

  - the value of the UID field is used when checking permissions (the UID field determines which resources the process can access)

- You can replace "UID field" with "namespace" above and it still works!

- In other words: even when you don't use containers, there is one namespace of each type, containing all the processes on the system.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Manipulating namespaces

- Namespaces are created with two methods:

  - the `clone()` system call (used when creating new threads and processes),
  - the `unshare()` system call.

- The Linux tool `unshare` allows to do that from a shell.

- A new process can re-use none / all / some of the namespaces of its parent.

- It is possible to "enter" a namespace with the `setns()` system call.

- The Linux tool `nsenter` allows to do that from a shell.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Namespaces lifecycle

- When the last process of a namespace exits, the namespace is destroyed.

- All the associated resources are then removed.

- Namespaces are materialized by pseudo-files in `/proc/<pid>/ns`.

```bash
ls -l /proc/self/ns
```

- It is possible to compare namespaces by checking these files. (This helps to answer the question, "are these two processes in the same namespace?")

- It is possible to preserve a namespace by bind-mounting its pseudo-file.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Namespaces can be used independently

- As mentioned in the previous slides:

  *A new process can re-use none / all / some of the namespaces of its parent.*

- We are going to use that property in the examples in the next slides.

- We are going to present each type of namespace.

- For each type, we will provide an example using only that namespace.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## UTS namespace

- gethostname / sethostname

- Allows to set a custom hostname for a container.

- That's (mostly) it!

- Also allows to set the NIS domain. (If you don't know what a NIS domain is, you don't have to worry about it!)

- If you're wondering: UTS = UNIX time sharing.

- This namespace was named like this because of the `struct utsname`, which is commonly used to obtain the machine's hostname, architecture, etc. (The more you know!)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Creating our first namespace

Let's use `unshare` to create a new process that will have its own UTS namespace:

```bash
$ sudo unshare --uts
```

- We have to use `sudo` for most `unshare` operations.

- We indicate that we want a new uts namespace, and nothing else.

- If we don't specify a program to run, a `$SHELL` is started.
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Demonstrating our uts namespace In our new "container", check the hostname, change it, and check it: ```bash # hostname nodeX # hostname tupperware # hostname tupperware ``` In another shell, check that the machine's hostname hasn't changed: ```bash $ hostname nodeX ``` Exit the "container" with `exit` or `Ctrl-D`. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net namespace overview - Each network namespace has its own private network stack. - The network stack includes: - network interfaces (including `lo`), - routing table**s** (as in `ip rule` etc.), - iptables chains and rules, - sockets (as seen by `ss`, `netstat`). - You can move a network interface from a network namespace to another: ```bash ip link set dev eth0 netns PID ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net namespace typical use - Each container is given its own network namespace. - For each network namespace (i.e. each container), a `veth` pair is created. (Two `veth` interfaces act as if they were connected with a cross-over cable.) - One `veth` is moved to the container network namespace (and renamed `eth0`). - The other `veth` is moved to a bridge on the host (e.g. the `docker0` bridge). .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Creating a network namespace Start a new process with its own network namespace: ```bash $ sudo unshare --net ``` See that this new network namespace is unconfigured: ```bash # ping 1.1 connect: Network is unreachable # ifconfig # ip link ls 1: lo: mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Creating the `veth` interfaces In another shell (on the host), create a `veth` pair: ```bash $ sudo ip link add name in_host type veth peer name in_netns ``` Configure the host side (`in_host`): ```bash $ sudo ip link set in_host master docker0 up ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Moving the `veth` interface *In the process created by `unshare`,* check the PID of our "network container": ```bash # echo $$ 533 ``` *On the host*, move the other side (`in_netns`) to the network namespace: ```bash $ sudo ip link set in_netns netns 533 ``` (Make sure to update "533" with the actual PID obtained above!) 
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details

## Basic network configuration

Let's set up `lo` (the loopback interface):

```bash
# ip link set lo up
```

Activate the `veth` interface and rename it to `eth0`:

```bash
# ip link set in_netns name eth0 up
```

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details

## Allocating IP address and default route

*On the host*, check the address of the Docker bridge:

```bash
$ ip addr ls dev docker0
```

(It could be something like `172.17.0.1`.)

Pick an IP address in the middle of the same subnet, e.g. `172.17.0.99`.

*In the process created by `unshare`,* configure the interface:

```bash
# ip addr add 172.17.0.99/24 dev eth0
# ip route add default via 172.17.0.1
```

(Make sure to update the IP addresses if necessary.)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details

## Validating the setup

Check that we now have connectivity:

```bash
# ping 1.1
```

Note: we were able to take a shortcut, because Docker is running, and provides us with a `docker0` bridge and a valid `iptables` setup.

If Docker is not running, you will need to take care of this!

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details

## Cleaning up network namespaces

- Terminate the process created by `unshare` (with `exit` or `Ctrl-D`).

- Since this was the only process in the network namespace, it is destroyed.

- All the interfaces in the network namespace are destroyed.

- When a `veth` interface is destroyed, it also destroys the other half of the pair.

- So we don't have anything else to do to clean up!

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Other ways to use network namespaces

- `--net none` gives an empty network namespace to a container. (Effectively isolating it completely from the network.)

- `--net host` means "do not containerize the network". (No network namespace is created; the container uses the host network stack.)

- `--net container` means "reuse the network namespace of another container". (As a result, both containers share the same interfaces, routes, etc.)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Mnt namespace

- Processes can have their own root fs (à la chroot).

- Processes can also have "private" mounts. This allows to:

  - isolate `/tmp` (per user, per service...)
  - mask `/proc`, `/sys` (for processes that don't need them)
  - mount remote filesystems or sensitive data, but make it visible only for allowed processes

- Mounts can be totally private, or shared.

- At this point, there is no easy way to pass along a mount from a namespace to another.
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Setting up a private `/tmp`

Create a new mount namespace:

```bash
$ sudo unshare --mount
```

In that new namespace, mount a brand new `/tmp`:

```bash
# mount -t tmpfs none /tmp
```

Check the content of `/tmp` in the new namespace, and compare to the host.

The mount is automatically cleaned up when you exit the process.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## PID namespace

- Processes within a PID namespace only "see" processes in the same PID namespace.

- Each PID namespace has its own numbering (starting at 1).

- When PID 1 goes away, the whole namespace is killed. (When PID 1 goes away on a normal UNIX system, the kernel panics!)

- Those namespaces can be nested.

- A process ends up having multiple PIDs (one per namespace in which it is nested).

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## PID namespace in action

Create a new PID namespace:

```bash
$ sudo unshare --pid --fork
```

(We need the `--fork` flag because the PID namespace is special.)

Check the process tree in the new namespace:

```bash
# ps faux
```

--

class: extra-details, deep-dive

🤔 Why do we see all the processes?!?

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## PID namespaces and `/proc`

- Tools like `ps` rely on the `/proc` pseudo-filesystem.

- Our new namespace still has access to the original `/proc`.

- Therefore, it still sees host processes.

- But it cannot affect them. (Try to `kill` a process: you will get `No such process`.)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## PID namespaces, take 2

- This can be solved by mounting `/proc` in the namespace.

- The `unshare` utility provides a convenience flag, `--mount-proc`.

- This flag will mount `/proc` in the namespace.

- It will also unshare the mount namespace, so that this mount is local.

Try it:

```bash
$ sudo unshare --pid --fork --mount-proc
# ps faux
```

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details

## OK, really, why do we need `--fork`?

*It is not necessary to remember all these details. This is just an illustration of the complexity of namespaces!*

The `unshare` tool calls the `unshare` syscall, then `exec`s the new binary.

A process calling `unshare` to create new namespaces is moved to the new namespaces...

... Except for the PID namespace. (Because this would change the current PID of the process from X to 1.)

The processes created by the new binary are placed into the new PID namespace. The first one will be PID 1.

If PID 1 exits, it is not possible to create additional processes in the namespace. (Attempting to do so will result in `ENOMEM`.)

Without the `--fork` flag, the first command that we execute will be PID 1 ...

... And once it exits, we cannot create more processes in the namespace!
Check `man 2 unshare` and `man pid_namespaces` if you want more details.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## IPC namespace

--

- Does anybody know about IPC?

--

- Does anybody *care* about IPC?

--

- Allows a process (or group of processes) to have its own:

  - IPC semaphores
  - IPC message queues
  - IPC shared memory

  ... without risk of conflict with other instances.

- Older versions of PostgreSQL cared about this.

*No demo for that one.*

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## User namespace

- Allows to map UID/GID; e.g.:

  - UID 0-1999 in container C1 is mapped to UID 10000-11999 on host
  - UID 0-1999 in container C2 is mapped to UID 12000-13999 on host
  - etc.

- UID 0 in the container can still perform privileged operations in the container. (For instance: setting up network interfaces.)

- But outside of the container, it is a non-privileged user.

- It also means that the UID in containers becomes unimportant. (Just use UID 0 in the container, since it gets squashed to a non-privileged user outside.)

- Ultimately enables better privilege separation in container engines.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## User namespace challenges

- UID needs to be mapped when passed between processes or kernel subsystems.

- Filesystem permissions and file ownership are more complicated.

  .small[(E.g. when the same root filesystem is shared by multiple containers running with different UIDs.)]

- With the Docker Engine:

  - some feature combinations are not allowed (e.g. user namespace + host network namespace sharing)
  - user namespaces need to be enabled/disabled globally (when the daemon is started)
  - container images are stored separately (so the first time you toggle user namespaces, you need to re-pull images)

*No demo for that one.*

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: pic

.interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)]

---

name: toc-control-groups
class: title

Control groups

.nav[
[Previous section](#toc-namespaces)
|
[Back to table of contents](#toc-chapter-8)
|
[Next section](#toc-security-features)
]

.debug[(automatically generated title slide)]

---

# Control groups

- Control groups provide resource *metering* and *limiting*.

- This covers a number of "usual suspects" like:

  - memory
  - CPU
  - block I/O
  - network (with cooperation from iptables/tc)

- And a few exotic ones:

  - huge pages (a special way to allocate memory)
  - RDMA (resources specific to InfiniBand / remote memory transfer)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Crowd control

- Control groups also allow to group processes for special operations:

  - freezer (conceptually similar to a "mass-SIGSTOP/SIGCONT")
  - perf_event (gather performance events on multiple processes)
  - cpuset (limit or pin processes to specific CPUs)

- There is a "pids" cgroup to limit the number of processes in a given group.
- There is also a "devices" cgroup to control access to device nodes. (i.e. everything in `/dev`.)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Generalities

- Cgroups form a hierarchy (a tree).

- We can create nodes in that hierarchy.

- We can associate limits to a node.

- We can move a process (or multiple processes) to a node.

- The process (or processes) will then respect these limits.

- We can check the current usage of each node.

- In other words: limits are optional (if we only want accounting).

- When a process is created, it is placed in its parent's groups.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Example

The numbers are PIDs.

The names are the names of our nodes (arbitrarily chosen).

.small[
```bash
cpu                      memory
├── batch                ├── stateless
│   ├── cryptoscam       │   ├── 25
│   │   └── 52           │   ├── 26
│   └── ffmpeg           │   ├── 27
│       ├── 109          │   ├── 52
│       └── 88           │   ├── 109
└── realtime             │   └── 88
    ├── nginx            └── databases
    │   ├── 25               ├── 1008
    │   ├── 26               └── 524
    │   └── 27
    ├── postgres
    │   └── 524
    └── redis
        └── 1008
```
]

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Cgroups v1 vs v2

- Cgroups v1 are available on all systems (and widely used).

- Cgroups v2 are a huge refactor. (Development started in Linux 3.10, released in 4.5.)

- Cgroups v2 have a number of differences:

  - single hierarchy (instead of one tree per controller),
  - processes can only be on leaf nodes (not inner nodes),
  - and of course many improvements / refactorings.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Memory cgroup: accounting

- Keeps track of pages used by each group:

  - file (read/write/mmap from block devices),
  - anonymous (stack, heap, anonymous mmap),
  - active (recently accessed),
  - inactive (candidate for eviction).

- Each page is "charged" to a group.

- Pages can be shared across multiple groups. (Example: multiple processes reading from the same files.)

- To view all the counters kept by this cgroup:

```bash
$ cat /sys/fs/cgroup/memory/memory.stat
```

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Memory cgroup: limits

- Each group can have (optional) hard and soft limits.

- Limits can be set for different kinds of memory:

  - physical memory,
  - kernel memory,
  - total memory (including swap).

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## Soft limits and hard limits

- Soft limits are not enforced. (But they influence reclaim under memory pressure.)

- Hard limits *cannot* be exceeded:

  - if a group of processes exceeds a hard limit,
  - and if the kernel cannot reclaim any memory,
  - then the OOM (out-of-memory) killer is triggered,
  - and processes are killed until memory gets below the limit again.
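With the Docker CLI, these two kinds of limits map to flags like `--memory` (hard limit) and `--memory-reservation` (soft limit); a quick sketch:

```bash
# Hard limit of 200 MB, soft limit (reservation) of 100 MB:
$ docker run --memory 200m --memory-reservation 100m nginx
```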
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Avoiding the OOM killer

- For some workloads (databases and stateful systems), killing processes because we run out of memory is not acceptable.

- The "oom-notifier" mechanism helps with that.

- When "oom-notifier" is enabled and a hard limit is exceeded:

  - all processes in the cgroup are frozen,
  - a notification is sent to user space (instead of killing processes),
  - user space can then raise limits, migrate containers, etc.,
  - once the memory usage is below the hard limit, unfreeze the cgroup.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Overhead of the memory cgroup

- Each time a process grabs or releases a page, the kernel updates counters.

- This adds some overhead.

- Unfortunately, this cannot be enabled/disabled per process.

- It has to be done system-wide, at boot time.

- Also, when multiple groups use the same page:

  - only the first group gets "charged",
  - but if it stops using it, the "charge" is moved to another group.

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Setting up a limit with the memory cgroup

Create a new memory cgroup:

```bash
$ CG=/sys/fs/cgroup/memory/onehundredmegs
$ sudo mkdir $CG
```

Limit it to approximately 100MB of memory usage:

```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```

Move the current process to that cgroup:

```bash
$ sudo tee $CG/tasks <<< $$
```

The current process *and all its future children* are now limited. (Confused about `<<<`? Look at the next slide!)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## What's `<<<`?

- This is a "here string". (It is a non-POSIX shell extension.)

- The following commands are equivalent:

```bash
foo <<< hello
```

```bash
echo hello | foo
```

```bash
foo < <(echo hello)
```

- We used it together with `sudo tee` because a plain redirect (e.g. `sudo command > "$CG/tasks"`) would be performed by our unprivileged shell, not by the privileged command.

The following commands, however, would be invalid:

```bash
sudo echo $$ > $CG/tasks
```

```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```

(In the first case, the redirect is done by our non-root shell and is denied; in the second case, `$$` is the PID of the new root shell, not of the shell we want to limit.)

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

class: extra-details, deep-dive

## Testing the memory limit

Start the Python interpreter:

```bash
$ python
Python 3.6.4 (default, Jan 5 2018, 02:35:40)
[GCC 7.2.1 20171224] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Allocate 80 megabytes:

```python
>>> s = "!" * 1000000 * 80
```

Add 20 megabytes more:

```python
>>> t = "!" * 1000000 * 20
Killed
```

.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)]

---

## CPU cgroup

- Keeps track of CPU time used by a group of processes. (This is easier and more accurate than `getrusage` and `/proc`.)

- Keeps track of usage per CPU as well. (i.e., "this group of processes used X seconds of CPU0 and Y seconds of CPU1".)

- Allows to set relative weights used by the scheduler.
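For illustration, with cgroups v1 these counters can be read directly from the pseudo-filesystem (exact paths depend on the cgroup driver and Docker version), and Docker exposes the relative weight through `--cpu-shares`:

```bash
# Total and per-CPU time (in nanoseconds) consumed by a container (cgroups v1):
$ cat /sys/fs/cgroup/cpuacct/docker/<container-id>/cpuacct.usage
$ cat /sys/fs/cgroup/cpuacct/docker/<container-id>/cpuacct.usage_percpu

# Give a container twice the default scheduler weight (the default is 1024):
$ docker run --cpu-shares 2048 myimage
```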
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Cpuset cgroup - Pin groups to specific CPU(s). - Use-case: reserve CPUs for specific apps. - Warning: make sure that "default" processes aren't using all CPUs! - CPU pinning can also avoid performance loss due to cache flushes. - This is also relevant for NUMA systems. - Provides extra dials and knobs. (Per zone memory pressure, process migration costs...) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Blkio cgroup - Keeps track of I/Os for each group: - per block device - read vs write - sync vs async - Set throttle (limits) for each group: - per block device - read vs write - ops vs bytes - Set relative weights for each group. - Note: most writes go through the page cache. (So classic writes will appear to be unthrottled at first.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net_cls and net_prio cgroup - Only works for egress (outgoing) traffic. - Automatically set traffic class or priority for traffic generated by processes in the group. - Net_cls will assign traffic to a class. - Classes have to be matched with tc or iptables, otherwise traffic just flows normally. - Net_prio will assign traffic to a priority. - Priorities are used by queuing disciplines. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Devices cgroup - Controls what the group can do on device nodes - Permissions include read/write/mknod - Typical use: - allow `/dev/{tty,zero,random,null}` ... - deny everything else - A few interesting nodes: - `/dev/net/tun` (network interface manipulation) - `/dev/fuse` (filesystems in user space) - `/dev/kvm` (VMs in containers, yay inception!) - `/dev/dri` (GPU) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-security-features class: title Security features .nav[ [Previous section](#toc-control-groups) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-copy-on-write-filesystems) ] .debug[(automatically generated title slide)] --- # Security features - Namespaces and cgroups are not enough to ensure strong security. - We need extra mechanisms: capabilities, seccomp, LSMs. - These mechanisms were already used before containers to harden security. - They can be used together with containers. - Good container engines will automatically leverage these features. (So that you don't have to worry about it.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Capabilities - In traditional UNIX, many operations are possible if and only if UID=0 (root). - Some of these operations are very powerful: - changing file ownership, accessing all files ... - Some of these operations deal with system configuration, but can be abused: - setting up network interfaces, mounting filesystems ... 
- Some of these operations are not very dangerous but are needed by servers: - binding to a port below 1024. - Capabilities are per-process flags to allow these operations individually. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Some capabilities - `CAP_CHOWN`: arbitrarily change file ownership and permissions. - `CAP_DAC_OVERRIDE`: arbitrarily bypass file ownership and permissions. - `CAP_NET_ADMIN`: configure network interfaces, iptables rules, etc. - `CAP_NET_BIND_SERVICE`: bind a port below 1024. See `man capabilities` for the full list and details. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Using capabilities - Container engines will typically drop all "dangerous" capabilities. - You can then re-enable capabilities on a per-container basis, as needed. - With the Docker engine: `docker run --cap-add ...` - If you write your own code to manage capabilities: - make sure that you understand what each capability does, - read about *ambient* capabilities as well. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Seccomp - Seccomp is short for "secure computing". - It achieves a high level of security by drastically restricting the available syscalls. - Original seccomp only allows `read()`, `write()`, `exit()`, `sigreturn()`. - The seccomp-bpf extension allows to specify custom filters with BPF rules. - This allows to filter by syscall, and by parameter. - BPF code can perform arbitrarily complex checks, quickly, and safely. - Container engines take care of this so you don't have to. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Linux Security Modules - The most popular ones are SELinux and AppArmor. - Red Hat distros generally use SELinux. - Debian distros (in particular, Ubuntu) generally use AppArmor. - LSMs add a layer of access control to all process operations. - Container engines take care of this so you don't have to. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-copy-on-write-filesystems class: title Copy-on-write filesystems .nav[ [Previous section](#toc-security-features) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-docker-engine-and-other-container-engines) ] .debug[(automatically generated title slide)] --- # Copy-on-write filesystems Container engines rely on copy-on-write to be able to start containers quickly, regardless of their size. We will explain how that works, and review some of the copy-on-write storage systems available on Linux. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## What is copy-on-write? - Copy-on-write is a mechanism allowing to share data. - The data appears to be a copy, but is only a link (or reference) to the original data. - The actual copy happens only when someone tries to change the shared data.
- Whoever changes the shared data ends up using their own copy instead of the shared data. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## A few metaphors -- - First metaphor: white board and tracing paper -- - Second metaphor: magic books with shadowy pages -- - Third metaphor: just-in-time house building .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Copy-on-write is *everywhere* - Process creation with `fork()`. - Consistent disk snapshots. - Efficient VM provisioning. - And, of course, containers. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Copy-on-write and containers Copy-on-write is essential to give us "convenient" containers. - Creating a new container (from an existing image) is "free". (Otherwise, we would have to copy the image first.) - Customizing a container (by tweaking a few files) is cheap. (Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.) - We can take snapshots, i.e. have "checkpoints" or "save points" when building images. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS overview - The original (legacy) copy-on-write filesystem used by first versions of Docker. - Combine multiple *branches* in a specific order. - Each branch is just a normal directory. - You generally have: - at least one read-only branch (at the bottom), - exactly one read-write branch (at the top). (But other fun combinations are possible too!) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS operations: opening a file - With `O_RDONLY` - read-only access: - look it up in each branch, starting from the top - open the first one we find - With `O_WRONLY` or `O_RDWR` - write access: - if the file exists on the top branch: open it - if the file exists on another branch: "copy up" (i.e. copy the file to the top branch and open the copy) - if the file doesn't exist on any branch: create it on the top branch That "copy-up" operation can take a while if the file is big! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS operations: deleting a file - A *whiteout* file is created. - This is similar to the concept of "tombstones" used in some data systems. ``` # docker run ubuntu rm /etc/shadow # ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc total 8 drwxr-xr-x 2 root root 4096 Jan 27 15:36 . drwxr-xr-x 5 root root 4096 Jan 27 15:36 .. -r--r--r-- 2 root root 0 Jan 27 15:36 .wh.shadow ``` .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS performance - AUFS `mount()` is fast, so creation of containers is quick. - Read/write access has native speeds. - But initial `open()` is expensive in two scenarios: - when writing big files (log files, databases ...), - when searching many directories (PATH, classpath, etc.) over many layers. - Protip: when we built dotCloud, we ended up putting all important data on *volumes*. 
- When starting the same container multiple times: - the data is loaded only once from disk, and cached only once in memory; - but `dentries` will be duplicated. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper Device Mapper is a rich subsystem with many features. It can be used for: RAID, encrypted devices, snapshots, and more. In the context of containers (and Docker in particular), "Device Mapper" means: "the Device Mapper system + its *thin provisioning target*" If you see the abbreviation "thinp" it stands for "thin provisioning". .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper principles - Copy-on-write happens on the *block* level (instead of the *file* level). - Each container and each image get their own block device. - At any given time, it is possible to take a snapshot: - of an existing container (to create a frozen image), - of an existing image (to create a container from it). - If a block has never been written to: - it's assumed to be all zeros, - it's not allocated on disk. (That last property is the reason for the name "thin" provisioning.) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper operational details - Two storage areas are needed: one for *data*, another for *metadata*. - "data" is also called the "pool"; it's just a big pool of blocks. (Docker uses the smallest possible block size, 64 KB.) - "metadata" contains the mappings between virtual offsets (in the snapshots) and physical offsets (in the pool). - Each time a new block (or a copy-on-write block) is written, a block is allocated from the pool. - When there are no more blocks in the pool, attempts to write will stall until the pool is increased (or the write operation aborted). - In other words: when running out of space, containers are frozen, but operations will resume as soon as space is available. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper performance - By default, Docker puts data and metadata on a loop device backed by a sparse file. - This is great from a usability point of view, since zero configuration is needed. - But it is terrible from a performance point of view: - each time a container writes to a new block, - a block has to be allocated from the pool, - and when it's written to, - a block has to be allocated from the sparse file, - and sparse file performance isn't great anyway. - If you use Device Mapper, make sure to put data (and metadata) on devices! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## BTRFS principles - BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots. - The "copy-on-write" happens at the filesystem level. - BTRFS integrates the snapshot and block pool management features at the filesystem level. (Instead of the block level for Device Mapper.) - In practice, we create a "subvolume" and later take a "snapshot" of that subvolume. Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers. - These operations can be executed with the `btrfs` CLI tool. 
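For example, assuming a BTRFS filesystem mounted on `/mnt/btrfs` (the paths below are made up for illustration):
```bash
# Create a subvolume ("mkdir with Super Powers") and put some data in it.
$ sudo btrfs subvolume create /mnt/btrfs/myimage
$ sudo cp -a /some/files /mnt/btrfs/myimage/
# Snapshot it ("cp -a with Super Powers"): instantaneous, and all blocks are shared.
$ sudo btrfs subvolume snapshot /mnt/btrfs/myimage /mnt/btrfs/mycontainer
```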
.debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## BTRFS in practice with Docker - Docker can use BTRFS and its snapshotting features to store container images. - The only requirement is that `/var/lib/docker` is on a BTRFS filesystem. (Or, the directory specified with the `--data-root` flag when starting the engine.) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- class: extra-details ## BTRFS quirks - BTRFS works by dividing its storage in *chunks*. - A chunk can contain data or metadata. - You can run out of chunks (and get `No space left on device`) even though `df` shows space available. (Because chunks are only partially allocated.) - Quick fix: ``` # btrfs filesys balance start -dusage=1 /var/lib/docker ``` .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Overlay2 - Overlay2 is very similar to AUFS. - However, it has been merged in "upstream" kernel. - It is therefore available on all modern kernels. (AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.) - It is simpler than AUFS (it can only have two branches, called "layers"). - The container engine abstracts this detail, so this is not a concern. - Overlay2 storage drivers generally use hard links between layers. - This improves `stat()` and `open()` performance, at the expense of inode usage. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## ZFS - ZFS is similar to BTRFS (at least from a container user's perspective). - Pros: - high performance - high reliability (with e.g. data checksums) - optional data compression and deduplication - Cons: - high memory usage - not in upstream kernel - It is available as a kernel module or through FUSE. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Which one is the best? - Eventually, overlay2 should be the best option. - It is available on all modern systems. - Its memory usage is better than Device Mapper, BTRFS, or ZFS. - The remarks about *write performance* shouldn't bother you: data should always be stored in volumes anyway! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-docker-engine-and-other-container-engines class: title Docker Engine and other container engines .nav[ [Previous section](#toc-copy-on-write-filesystems) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-orchestration-an-overview) ] .debug[(automatically generated title slide)] --- # Docker Engine and other container engines * We are going to cover the architecture of the Docker Engine. * We will also present other container engines. 
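As a quick check before diving in: the engine can report which storage driver (from the previous chapter) it is using, and which components it is built from. (The `--format` expression below assumes the field name used by the `docker info` template.)
```bash
# Show the storage driver in use (e.g. overlay2, aufs, btrfs, devicemapper...).
$ docker info --format '{{.Driver}}'
# Show the versions of the engine components (Engine, containerd, runc...).
$ docker version
```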
.debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine external architecture ![](images/docker-engine-architecture.svg) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Docker Engine external architecture * The Engine is a daemon (service running in the background). * All interaction is done through a REST API exposed over a socket. * On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`. * We can also use a TCP socket, with optional mutual TLS authentication. * The `docker` CLI communicates with the Engine over the socket. Note: strictly speaking, the Docker API is not fully REST. Some operations (e.g. dealing with interactive containers and log streaming) don't fit the REST model. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine internal architecture ![](images/dockerd-and-containerd.png) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Docker Engine internal architecture * Up to Docker 1.10: the Docker Engine is one single monolithic binary. * Starting with Docker 1.11, the Engine is split into multiple parts: - `dockerd` (REST API, auth, networking, storage) - `containerd` (container lifecycle, controlled over a gRPC API) - `containerd-shim` (per-container; does almost nothing but allows to restart the Engine without restarting the containers) - `runc` (per-container; does the actual heavy lifting to start the container) * Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`. For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Other container engines The following list is not exhaustive. Furthermore, we limited the scope to Linux containers. We can also find containers (or things that look like containers) on other platforms like Windows, macOS, Solaris, FreeBSD ... .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## LXC * The venerable ancestor (first released in 2008). * Docker initially relied on it to execute containers. * No daemon; no central API. * Each container is managed by a `lxc-start` process. * Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing to interact with the container. * No notion of image (container filesystems have to be managed manually). * Networking has to be setup manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## LXD * Re-uses LXC code (through liblxc). * Builds on top of LXC to offer a more modern experience. * Daemon exposing a REST API. * Can manage images, snapshots, migrations, networking, storage. 
* "offers a user experience similar to virtual machines but using Linux containers instead." .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## rkt * Compares to `runc`. * No daemon or API. * Strong emphasis on security (through privilege separation). * Networking has to be setup separately (e.g. through CNI plugins). * Partial image management (pull, but no push). (Image build is handled by separate tools.) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## CRI-O * Designed to be used with Kubernetes as a simple, basic runtime. * Compares to `containerd`. * Daemon exposing a gRPC interface. * Controlled using the CRI API (Container Runtime Interface defined by Kubernetes). * Needs an underlying OCI runtime (e.g. runc). * Handles storage, images, networking (through CNI plugins). We're not aware of anyone using it directly (i.e. outside of Kubernetes). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## systemd * "init" system (PID 1) in most modern Linux distributions. * Offers tools like `systemd-nspawn` and `machinectl` to manage containers. * `systemd-nspawn` is "In many ways it is similar to chroot(1), but more powerful". * `machinectl` can interact with VMs and containers managed by systemd. * Exposes a DBUS API. * Basic image support (tar archives and raw disk images). * Network has to be setup manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Kata containers * OCI-compliant runtime. * Fusion of two projects: Intel Clear Containers and Hyper runV. * Run each container in a lightweight virtual machine. * Requires to run on bare metal *or* with nested virtualization. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## gVisor * OCI-compliant runtime. * Implements a subset of the Linux kernel system calls. * Written in go, uses a smaller subset of system calls. * Can be heavily sandboxed. * Can run in two modes: * KVM (requires bare metal or nested virtualization), * ptrace (no requirement, but slower). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Overall ... * The Docker Engine is very developer-centric: - easy to install - easy to use - no manual setup - first-class image build and transfer * As a result, it is a fantastic tool in development environments. 
* On servers: - Docker is a good default choice - If you use Kubernetes, the engine doesn't matter .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-orchestration-an-overview class: title Orchestration, an overview .nav[ [Previous section](#toc-docker-engine-and-other-container-engines) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Orchestration, an overview In this chapter, we will: * Explain what orchestration is and why we would need it. * Present (from a high-level perspective) some orchestrators. * Show one orchestrator (Kubernetes) in action. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## What's orchestration? ![Joana Carneiro (orchestra conductor)](images/conductor.jpg) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## What's orchestration? According to Wikipedia: *Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.* -- *[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.* -- What does that really mean? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances -- - Q: do we always use 100% of our servers? -- - A: obviously not! .center[![Daily variations of traffic](images/traffic-graph.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances - Every night, scale down (by shutting down extraneous replicated instances) - Every morning, scale up (by deploying new copies) - "Pay for what you use" (i.e. save big $$$ here) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances How do we implement this? - Crontab - Autoscaling (save even bigger $$$) That's *relatively* easy. Now, how are things for our IaaS provider? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - Q: what's the #1 cost in a datacenter? -- - A: electricity! -- - Q: what uses electricity? -- - A: servers, obviously - A: ... and associated cooling -- - Q: do we always use 100% of our servers? -- - A: obviously not! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - If only we could turn off unused servers during the night... - Problem: we can only turn off a server if it's totally empty! (i.e.
all VMs on it are stopped/moved) - Solution: *migrate* VMs and shutdown empty servers (e.g. combine two hypervisors with 40% load into 80%+0%, and shutdown the one at 0%) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter How do we implement this? - Shutdown empty hosts (but keep some spare capacity) - Start hosts again when capacity gets low - Ability to "live migrate" VMs (Xen already did this 10+ years ago) - Rebalance VMs on a regular basis - what if a VM is stopped while we move it? - should we allow provisioning on hosts involved in a migration? *Scheduling* becomes more complex. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## What is scheduling? According to Wikipedia (again): *In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.* The scheduler is concerned mainly with: - throughput (total amount of work done per time unit); - turnaround time (between submission and completion); - response time (between submission and start); - waiting time (between job readiness and execution); - fairness (appropriate times according to priorities). In practice, these goals often conflict. **"Scheduling" = decide which resources to use.** .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 1 - You have: - 5 hypervisors (physical machines) - Each server has: - 16 GB RAM, 8 cores, 1 TB disk - Each week, your team asks: - one VM with X RAM, Y CPU, Z disk Scheduling = deciding which hypervisor to use for each VM. Difficulty: easy! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. Difficulty: ??? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. ![Troll face](images/trollface.png) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 3 - You have machines (physical and/or virtual) - You have containers - You are trying to put the containers on the machines - Sounds familiar? 
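To make the difficulty concrete, here is a toy "first fit" placement loop (a deliberately naive sketch with made-up numbers, considering a single resource):
```bash
#!/bin/bash
# Naive "first fit" scheduler: place each VM on the first hypervisor
# that still has enough free RAM (one dimension only!).
nodes=(hv1 hv2)
free=(16 16)            # GB of free RAM per hypervisor
requests=(10 9 8 6 5)   # GB of RAM requested per VM

for ram in "${requests[@]}"; do
  placed=false
  for i in "${!nodes[@]}"; do
    if (( free[i] >= ram )); then
      echo "VM (${ram} GB) -> ${nodes[i]}"
      (( free[i] -= ram ))
      placed=true
      break
    fi
  done
  $placed || echo "VM (${ram} GB) -> no hypervisor has enough free RAM!"
done
```
Note that the 8 GB request is refused even though 13 GB are free in total: capacity is fragmented across hypervisors. This is exactly the bin packing problem illustrated in the next slides (and real schedulers juggle several dimensions at once).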
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[![Not-so-good bin packing](images/binpacking-1d-1.gif)] ## We can't fit a job of size 6 :( .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[![Better bin packing](images/binpacking-1d-2.gif)] ## ... Now we can! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with two resources .center[![2D bin packing](images/binpacking-2d.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with three resources .center[![3D bin packing](images/binpacking-3d.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## You need to be good at this .center[![Tangram](images/tangram.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## But also, you must be quick! .center[![Tetris](images/tetris-1.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## And be web scale! .center[![Big tetris](images/tetris-2.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## And think outside (?) of the box! .center[![3D tetris](images/tetris-3.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Good luck! .center[![FUUUUUU face](images/fu-face.jpg)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## TL,DR * Scheduling with multiple resources (dimensions) is hard. * Don't expect to solve the problem with a Tiny Shell Script. * There are literally tons of research papers written on this. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## But our orchestrator also needs to manage ... * Network connectivity (or filtering) between containers. * Load balancing (external and internal). * Failure recovery (if a node or a whole datacenter fails). * Rolling out new versions of our applications. (Canary deployments, blue/green deployments...) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Some orchestrators We are going to present briefly a few orchestrators. There is no "absolute best" orchestrator. It depends on: - your applications, - your requirements, - your pre-existing skills... 
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Nomad - Open Source project by Hashicorp. - Arbitrary scheduler (not just for containers). - Great if you want to schedule mixed workloads. (VMs, containers, processes...) - Less integration with the rest of the container ecosystem. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Mesos - Open Source project in the Apache Foundation. - Arbitrary scheduler (not just for containers). - Two-level scheduler. - Top-level scheduler acts as a resource broker. - Second-level schedulers (aka "frameworks") obtain resources from top-level. - Frameworks implement various strategies. (Marathon = long running processes; Chronos = run at intervals; ...) - Commercial offering through DC/OS by Mesosphere. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Rancher - Rancher 1 offered a simple interface for Docker hosts. - Rancher 2 is a complete management platform for Docker and Kubernetes. - Technically not an orchestrator, but it's a popular option. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Swarm - Tightly integrated with the Docker Engine. - Extremely simple to deploy and setup, even in multi-manager (HA) mode. - Secure by default. - Strongly opinionated: - smaller set of features, - easier to operate. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Kubernetes - Open Source project initiated by Google. - Contributions from many other actors. - *De facto* standard for container orchestration. - Many deployment options; some of them very complex. - Reputation: steep learning curve. - Reality: - true, if we try to understand *everything*; - false, if we focus on what matters. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks! Questions? 
![end](images/end.jpg) .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/shared/thankyou.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-orchestration-an-overview) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Docker Community Slack](https://community.docker.com/registrations/groups/4316) - [Docker Community Forums](https://forums.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker Blog](https://blog.docker.com/) - [Docker documentation](https://docs.docker.com/) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Docker on Twitter](https://twitter.com/docker) - [Play With Docker Hands-On Labs](https://training.play-with-docker.com/) .footnote[These slides (and future updates) are on → https://container.training/] .debug[[containers/links.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/links.md)]
RUN CMD, EXPOSE ... ``` * The build fails as soon as an instruction fails * If `RUN ` fails, the build doesn't produce an image * If it succeeds, it produces a clean image (without test libraries and data) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-dockerfile-examples class: title Dockerfile examples .nav[ [Previous section](#toc-tips-for-efficient-dockerfiles) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-exercise--writing-better-dockerfiles) ] .debug[(automatically generated title slide)] --- # Dockerfile examples There are a number of tips, tricks, and techniques that we can use in Dockerfiles. But sometimes, we have to use different (and even opposed) practices depending on: - the complexity of our project, - the programming language or framework that we are using, - the stage of our project (early MVP vs. super-stable production), - whether we're building a final image or a base for further images, - etc. We are going to show a few examples using very different techniques. .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## When to optimize an image When authoring official images, it is a good idea to reduce as much as possible: - the number of layers, - the size of the final image. This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of time, saving even a few seconds of pull time can be worth it. .small[ ```dockerfile RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \ && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \ && docker-php-ext-install gd ... RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \ && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \ && tar -xzf wordpress.tar.gz -C /usr/src/ \ && rm wordpress.tar.gz \ && chown -R www-data:www-data /usr/src/wordpress ``` ] (Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## When to *not* optimize an image Sometimes, it is better to prioritize *maintainer convenience*. In particular, if: - the image changes a lot, - the image has very few users (e.g. only 1, the maintainer!), - the image is built and run on the same machine, - the image is built and run on machines with a very fast link ... In these cases, just keep things simple! (Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ```dockerfile FROM debian:sid RUN apt-get update -q RUN apt-get install -yq build-essential make RUN apt-get install -yq zlib1g-dev RUN apt-get install -yq ruby ruby-dev RUN apt-get install -yq python-pygments RUN apt-get install -yq nodejs RUN apt-get install -yq cmake RUN gem install --no-rdoc --no-ri github-pages COPY . 
/blog WORKDIR /blog VOLUME /blog/_site EXPOSE 4000 CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"] ``` .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Multi-dimensional versioning systems Images can have a tag, indicating the version of the image. But sometimes, there are multiple important components, and we need to indicate the versions for all of them. This can be done with environment variables: ```dockerfile ENV PIP=9.0.3 \ ZC_BUILDOUT=2.11.2 \ SETUPTOOLS=38.7.0 \ PLONE_MAJOR=5.1 \ PLONE_VERSION=5.1.0 \ PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d ``` (Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Entrypoints and wrappers It is very common to define a custom entrypoint. That entrypoint will generally be a script, performing any combination of: - pre-flights checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file), - generation or validation of configuration files, - dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`), - and more. .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## A typical entrypoint script ```dockerfile #!/bin/sh set -e # first arg is '-f' or '--some-option' # or first arg is 'something.conf' if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then set -- redis-server "$@" fi # allow the container to be started with '--user' if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then chown -R redis . exec su-exec redis "$0" "$@" fi exec "$@" ``` (Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Factoring information To facilitate maintenance (and avoid human errors), avoid to repeat information like: - version numbers, - remote asset URLs (e.g. source tarballs) ... Instead, use environment variables. .small[ ```dockerfile ENV NODE_VERSION 10.2.1 ... RUN ... && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \ && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \ && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \ && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \ && tar -xf "node-v$NODE_VERSION.tar.xz" \ && cd "node-v$NODE_VERSION" \ ... ``` ] (Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Overrides In theory, development and production images should be the same. In practice, we often need to enable specific behaviors in development (e.g. debug statements). One way to reconcile both needs is to use Compose to enable these behaviors. Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example. 
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Production image This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## Development Compose file This Compose file uses the same image, but with a few overrides for development: - the Flask development server is used (overriding `CMD`), - the `DEBUG` environment variable is set, - a volume is used to provide a faster local development workflow. .small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
] (Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml)) .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- ## How to know which best practices are better? - The main goal of containers is to make our lives easier. - In this chapter, we showed many ways to write Dockerfiles. - These Dockerfiles sometimes use diametrically opposed techniques. - Yet, they were the "right" ones *for a specific situation.* - It's OK (and even encouraged) to start simple and evolve as needed. - Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration! .debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Dockerfile_Tips.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-exercise--writing-better-dockerfiles class: title Exercise - writing better Dockerfiles .nav[ [Previous section](#toc-dockerfile-examples) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-naming-and-inspecting-containers) ] .debug[(automatically generated title slide)] --- # Exercise - writing better Dockerfiles Let's update our Dockerfiles to leverage multi-stage builds! The code is at: https://github.com/jpetazzo/wordsmith Use a different tag for these images, so that we can compare their sizes. What's the size difference between single-stage and multi-stage builds?
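For instance, building the image twice (once per Dockerfile variant) under the same repository name but with different tags makes the comparison easy; `myimage` is just a placeholder here:
```bash
# Build the current (single-stage) version, then the multi-stage one.
$ docker build -t myimage:single .
# ...rewrite the Dockerfile as a multi-stage build...
$ docker build -t myimage:multi .
# Compare the sizes of both tags.
$ docker images myimage
```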
.debug[[intro-fullday.yml](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/intro-fullday.yml)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous section](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-labels) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers ![Markings on container door](images/title-naming-and-inspecting-containers.jpg) .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Easily reference a container. * Ensure unicity of a specific container. We will also see the `inspect` command, which gives a lot of details about a container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name. If a container is named `thumbnail-worker`, I can do:
```bash
$ docker logs thumbnail-worker
$ docker stop thumbnail-worker
etc.
```
.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce unicity of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map.
```bash
$ docker inspect <containerID>
[{
...
(many pages of JSON here)
...
```
There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ!
(It's great.) ```bash $ docker inspect <containerID> | jq . ``` * We will see a better solution which doesn't require extra tools. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package.
```bash
$ docker inspect --format '{{ json .Created }}' <containerID>
"2015-02-24T07:21:11.712240394Z"
```
* The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.) .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Naming_And_Inspecting.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-labels class: title Labels .nav[ [Previous section](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-getting-inside-a-container) ] .debug[(automatically generated title slide)] --- # Labels * Labels allow to attach arbitrary metadata to containers. * Labels are key/value pairs. * They are specified at container creation. * You can query them with `docker inspect`. * They can also be used as filters with some commands (e.g. `docker ps`). .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- ## Using labels Let's create a few containers with a label `owner`.
```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```
We didn't specify a value for the `owner` label in the last example. This is equivalent to setting the value to be an empty string. .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- ## Querying labels We can view the labels with `docker inspect`.
```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers ",
                "owner": ""
            },
```
We can use the `--format` flag to list the value of a label. ```bash $ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}' ``` .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- ## Using labels to select containers We can list containers having a specific label. ```bash $ docker ps --filter label=owner ``` Or we can list containers having a specific label with a specific value. ```bash $ docker ps --filter label=owner=alice ``` .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- ## Use-cases for labels * HTTP vhost of a web app or web service. (The label is used to generate the configuration for NGINX, HAProxy, etc.) * Backup schedule for a stateful service. (The label is used by a cron job to determine if/when to back up container data.) * Service ownership. (To determine internal cross-billing, or who to page in case of outage.) * etc.
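For instance, for the "HTTP vhost" use-case, the tool generating the configuration could list each container together with its `vhost` label (the label name is arbitrary, chosen for this sketch):
```bash
$ docker run -d -l vhost=blog.example.com nginx
$ docker ps --filter label=vhost --format '{{.Names}} {{.Label "vhost"}}'
```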
.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Labels.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous section](#toc-labels) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-limiting-resources) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container ![Person standing inside a container](images/getting-inside.png) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Getting a shell Every once in a while, we want to log into a machine. In a perfect world, this shouldn't be necessary. * You need to install or update packages (and their configuration)? Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...) * You need to view logs and metrics? Collect and access them through a centralized platform. In the real world, though ... we often need shell access! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Not getting a shell Even without a perfect deployment system, we can do many operations without getting a shell. * Installing packages can (and should) be done in the container image. * Configuration can be done at the image level, or when the container starts. * Dynamic configuration can be stored in a volume (shared with another container). * Logs written to stdout are automatically collected by the Docker Engine. * Other logs can be written to a shared volume. * Process information and metrics are visible from the host. _Let's save logging, volumes ... for later, but let's have a look at process information!_ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Viewing container processes from the host If you run Docker on Linux, container processes are visible on the host. ```bash $ ps faux | less ``` * Scroll around the output of this command. * You should see the `jpetazzo/clock` container. * A containerized process is just like any other process on the host. * We can use tools like `lsof`, `strace`, `gdb` ... to analyze them. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- class: extra-details ## What's the difference between a container process and a host process? * Each process (containerized or not) belongs to *namespaces* and *cgroups*. * The namespaces and cgroups determine what a process can "see" and "do". * Analogy: each process (containerized or not) runs with a specific UID (user ID). * UID=0 is root, and has elevated privileges. Other UIDs are normal users.
_We will give more details about namespaces and cgroups later._ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a running container * Sometimes, we need to get a shell anyway. * We _could_ run some SSH server in the container ... * But it is easier to use `docker exec`. ```bash $ docker exec -ti ticktock sh ``` * This creates a new process (running `sh`) _inside_ the container. * This can also be done "manually" with the tool `nsenter`. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Caveats * The tool that you want to run needs to exist in the container. * Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time. (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.) * Most importantly: the container needs to be running. * What if the container is stopped or crashed? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a stopped container * A stopped container is only _storage_ (like a disk drive). * We cannot SSH into a disk drive or USB stick! * We need to connect the disk to a running machine. * How does that translate into the container world? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Analyzing a stopped container As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`. ```bash docker run jpetazzo/crashtest ``` The container starts, but then stops immediately, without any output. What would MacGyver™ do? First, let's check the status of that container. ```bash docker ps -l ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Viewing filesystem changes * We can use `docker diff` to see files that were added / changed / removed. ```bash docker diff <containerID> ``` * The container ID was shown by `docker ps -l`. * We can also see it with `docker ps -lq`. * The output of `docker diff` shows some interesting log files! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Accessing files * We can extract files with `docker cp`. ```bash docker cp <containerID>:/var/log/nginx/error.log . ``` * Then we can look at that log file. ```bash cat error.log ``` (The directory `/run/nginx` doesn't exist.) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- ## Exploring a crashed container * We can restart a container with `docker start` ... * ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start` * But we can create a new image from the crashed container ```bash docker commit <containerID> debugimage ``` * Then we can run a new container from that image, with a custom entrypoint ```bash docker run -ti --entrypoint sh debugimage ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- class: extra-details ## Obtaining a complete dump * We can also dump the entire filesystem of a container. * This is done with `docker export`. * It generates a tar archive. ```bash docker export <containerID> | tar tv ``` This will give a detailed listing of the content of the container. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Getting_Inside.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-limiting-resources class: title Limiting resources .nav[ [Previous section](#toc-getting-inside-a-container) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-container-networking-basics) ] .debug[(automatically generated title slide)] --- # Limiting resources - So far, we have used containers as convenient units of deployment. - What happens when a container tries to use more resources than available? (RAM, CPU, disk usage, disk and network I/O...) - What happens when multiple containers compete for the same resource? - Can we limit resources available to a container? (Spoiler alert: yes!) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Container processes are normal processes - Containers are closer to "fancy processes" than to "lightweight VMs". - A process running in a container is, in fact, a process running on the host. - Let's look at the output of `ps` on a container host running 3 containers: ``` 0 2662 0.2 0.3 /usr/bin/dockerd -H fd:// 0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe 0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off; 101 23543 0.0 0.0 | \_ `nginx`: worker process 0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2 0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 0 23725 0.0 0.0 \_ `/bin/sh` ``` - The highlighted processes are containerized processes. (That host is running nginx, elasticsearch, and alpine.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## By default: nothing changes - What happens when a process uses too much memory on a Linux system? -- - Simplified answer: - swap is used (if available); - if there is not enough swap space, eventually, the out-of-memory killer is invoked; - the OOM killer uses heuristics to kill processes; - sometimes, it kills an unrelated process. -- - What happens when a container uses too much memory? - The same thing! (i.e., a process eventually gets killed, possibly in another container.)
.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting container resources - The Linux kernel offers rich mechanisms to limit container resources. - For memory usage, the mechanism is part of the *cgroup* subsystem. - This subsystem allows us to limit the memory for a process or a group of processes. - A container engine leverages these mechanisms to limit memory for a container. - The out-of-memory killer has a new behavior: - it runs when a container exceeds its allowed memory usage, - in that case, it only kills processes in that container. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting memory in practice - The Docker Engine offers multiple flags to limit memory usage. - The two most useful ones are `--memory` and `--memory-swap`. - `--memory` limits the amount of physical RAM used by a container. - `--memory-swap` limits the total amount (RAM+swap) used by a container. - The memory limit can be expressed in bytes, or with a unit suffix. (e.g.: `--memory 100m` = 100 megabytes.) - We will see two strategies: limiting RAM usage, or limiting both RAM and swap. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting RAM usage Example: ```bash docker run -ti --memory 100m python ``` If the container tries to use more than 100 MB of RAM, *and* swap is available: - the container will not be killed, - memory above 100 MB will be swapped out, - in most cases, the app in the container will be slowed down (a lot). If we run out of swap, the global OOM killer still intervenes. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting both RAM and swap usage Example: ```bash docker run -ti --memory 100m --memory-swap 100m python ``` If the container tries to use more than 100 MB of memory, it is killed. On the other hand, the application will never be slowed down because of swap. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## When to pick which strategy? - Stateful services (like databases) will lose or corrupt data when killed - Allow them to use swap space, but monitor swap usage - Stateless services can usually be killed with little impact - Limit their mem+swap usage, but monitor if they get killed - Ultimately, this is no different from "do I want swap, and how much?" .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting CPU usage - There are no fewer than 3 ways to limit CPU usage: - setting a relative priority with `--cpu-shares`, - setting a CPU% limit with `--cpus`, - pinning a container to specific CPUs with `--cpuset-cpus`. - They can be used separately or together. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Setting relative priority - Each container has a relative priority used by the Linux scheduler. - By default, this priority is 1024. - As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority. - In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as a container with the default setting. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Setting a CPU% limit - This setting will make sure that a container doesn't use more than a given % of CPU. - The value is expressed in CPUs; therefore: `--cpus 0.1` means 10% of one CPU, `--cpus 1.0` means 100% of one whole CPU, `--cpus 10.0` means 10 entire CPUs. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Pinning containers to CPUs - On multi-core machines, it is possible to restrict execution to a set of CPUs. - Examples: `--cpuset-cpus 0` forces the container to run on CPU 0; `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7; `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11. - This will not reserve the corresponding CPUs! (They might still be used by other containers, or uncontainerized processes.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- ## Limiting disk usage - Most storage drivers do not support limiting the disk usage of containers. (With the exception of devicemapper, but the limit cannot be set easily.) - This means that a single container could exhaust disk space for everyone. - In practice, however, this is not a concern, because: - data files (for stateful services) should reside on volumes, - assets (e.g. images, user-generated content...) should reside on object stores or on volumes, - logs are written to standard output and gathered by the container engine. - Container disk usage can be audited with `docker ps -s` and `docker diff`. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Resource_Limits.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous section](#toc-limiting-resources) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics ![A dense graph network](images/title-container-networking-basics.jpg) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Manipulate container networking basics. * Find a container's IP address. We will also explain the different network models used by Docker.
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## A simple, static web server Run the Docker Hub image `nginx`, which contains a basic web server: ```bash $ docker run -d -P nginx 66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e ``` * Docker will download the image from the Docker Hub. * `-d` tells Docker to run the image in the background. * `-P` tells Docker to make this service reachable from other computers. (`-P` is the short version of `--publish-all`.) But, how do we connect to our web server now? .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Finding our web server port We will use `docker ps`: ```bash $ docker ps CONTAINER ID IMAGE ... PORTS ... e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ... ``` * The web server is running on port 80 inside the container. * This port is mapped to port 32768 on our Docker host. We will explain the whys and hows of this port mapping. But first, let's make sure that everything works properly. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (GUI) Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80. ![Screenshot](images/welcome-to-nginx.png) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (CLI) You can also use `curl` directly from the Docker host. Make sure to use the right port number if it is different from the example below: ```bash $ curl localhost:32768 Welcome to nginx! ... ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## How does Docker know which port to map? * There is metadata in the image telling "this image has something on port 80". * We can see that metadata with `docker inspect`: ```bash $ docker inspect --format '{{.Config.ExposedPorts}}' nginx map[80/tcp:{}] ``` * This metadata was set in the Dockerfile, with the `EXPOSE` keyword. * We can see that with `docker history`: ```bash $ docker history nginx IMAGE CREATED CREATED BY 7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "… 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM] 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Why are we mapping ports? * We are out of IPv4 addresses. * Containers cannot have public IPv4 addresses. * They have private addresses. * Services have to be exposed port by port. * Ports have to be mapped to avoid conflicts. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful.
There is a command to help us: ```bash $ docker port <containerID> 80 32768 ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration. * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use a network plugin, connecting your containers with e.g. VLANs, tunnels... * Enable *Swarm Mode* to deploy across a cluster. The container will then be reachable through any node of the cluster. When using Docker through an extra management layer like Mesos or Kubernetes, these will usually provide their own mechanism to expose containers. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container. ```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID> 172.17.0.3 ``` * `docker inspect` is an advanced command that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Pinging our container We can test connectivity to the container using the IP address we've just discovered. Let's see this now by using the `ping` tool. ```bash $ ping <ipAddress> 64 bytes from <ipAddress>: icmp_req=1 ttl=64 time=0.085 ms 64 bytes from <ipAddress>: icmp_req=2 ttl=64 time=0.085 ms 64 bytes from <ipAddress>: icmp_req=3 ttl=64 time=0.085 ms ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Manipulate container networking basics. * Find a container's IP address. In the next chapter, we will see how to connect containers together without exposing their ports.
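As a quick recap, here is a minimal sketch tying these commands together (the container name `web` is arbitrary, and it assumes `docker port` prints an address of the form `0.0.0.0:PORT`):

```bash
# Start a web server, look up the host port mapped to container port 80, and test it.
docker run -d --name web -P nginx
PORT=$(docker port web 80 | head -n1 | awk -F: '{print $NF}')
curl localhost:$PORT
```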
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous section](#toc-container-networking-basics) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports many different network drivers. The built-in drivers include: * `bridge` (default) * `none` * `host` * `container` The driver is selected with `docker run --net ...`. The different drivers are explained with more details on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet. (Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) 
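For example, a minimal sketch of the `container:` driver (the names are arbitrary); the second container reaches NGINX on `127.0.0.1` because both containers share the same network stack:

```bash
# Start a web server, then run another container re-using its network stack.
docker run -d --name web nginx
docker run --rm --net container:web alpine wget -qO- http://127.0.0.1 | head -n 4
```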
.debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous section](#toc-container-network-drivers) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model ![A denser graph network](images/title-the-container-network-model.jpg) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model The CNM was introduced in Engine 1.9.0 (November 2015). The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`. ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## What's in a network? * Conceptually, a network is a virtual switch. * It can be local (to a single Engine) or global (spanning multiple hosts). * A network has an IP subnet associated with it. * Docker will allocate IP addresses to the containers connected to a network. * Containers can be connected to multiple networks. * Containers can be given per-network names and aliases. * The names and aliases can be resolved via an embedded DNS server. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Network implementation details * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * A multi-host driver, *overlay*, is available out of the box (for Swarm clusters). * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator).
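We can peek at the driver and IPAM configuration of a network with `docker network inspect`; for instance, on the default bridge network (a sketch, the exact output will vary from host to host):

```bash
# Show which driver manages the default bridge network, and its IPAM settings.
docker network inspect bridge --format '{{.Driver}} {{json .IPAM}}'
```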
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Differences with the CNI * CNI = Container Network Interface * CNI is used notably by Kubernetes * With CNI, all the nodes and containers are on a single IP network * Both CNI and CNM offer the same functionality, but with very different methods .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Single container in a Docker network ![bridge0](images/bridge1.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on a single Docker network ![bridge2](images/bridge2.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic ## Two containers on two Docker networks ![bridge3](images/bridge3.png) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses In Docker Engine 1.9, name resolution is implemented by writing `/etc/hosts` in each container, and updating it each time containers are added/removed.
.small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] In Docker Engine 1.10, this has been replaced by a dynamic resolver. (This avoids race conditions when updating `/etc/hosts`.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous section](#toc-the-container-network-model) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-ambassadors) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a Redis data store. * We will place them both on the `dev` network created before. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page: ![Trainingwheels error](images/trainingwheels-error.png) * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right name (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --name redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly: ![Trainingwheels OK](images/trainingwheels-ok.png) * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* * What if we want to run multiple copies of our application? * Since names are unique, there can be only one container named `redis` at a time. * However, we can give our container additional network aliases with `--net-alias`.
* `--net-alias` is scoped per network, and independent from the container name. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Using a network alias instead of a name Let's remove the `redis` container: ```bash $ docker rm -f redis ``` And create one that doesn't block the `redis` name: ```bash $ docker run --net dev --net-alias redis -d redis ``` Check that the app still works (but the counter is back to 1, since we wiped out the old Redis container). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. 
Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Good to know ... * Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows us to specify a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used for the container. A full example would look like this.
```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard-code container IP addresses in your code!* *I repeat: don't hard-code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host. * If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` See https://jpetazzo.github.io/container.training for all the deets about clustering! .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows us to connect and disconnect networks while the container runs. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect <network> <container>` * `docker network disconnect <network> <container>` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container.
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev <containerID> ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container. * It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev <containerID> ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, it is the connection order) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services; but only the ones in `dev` or `prod`. * However, we can look up `es.dev` or `es.prod` if we need to.
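Here is a minimal sketch of that last point (it assumes the `dev` and `prod` networks, and the `es` containers and aliases created in the previous slides, still exist):

```bash
# Start a container on "dev", attach it to "prod" as well,
# then resolve the fully scoped names of both "es" services.
CID=$(docker run -d --net dev alpine sleep 300)
docker network connect prod $CID
docker exec $CID nslookup es.dev
docker exec $CID nslookup es.prod
```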
.debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ... ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can build a Dockerfile with a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-ambassadors class: title Ambassadors .nav[ [Previous section](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- class: title # Ambassadors ![Two serious-looking persons shaking hands](images/title-ambassador.jpg) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## The ambassador pattern Ambassadors are containers that "masquerade" or "proxy" for another service. They abstract the connection details for these services, and can help with: * discovery (where is my service actually running?) * migration (what if my service has to be moved while I use it?) * failover (how do I know to which instance of a replicated service I should connect?) * load balancing (how do I spread my requests across multiple instances of a service?) * authentication (what if my service requires credentials, certificates, or otherwise?) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Introduction to Ambassadors The ambassador pattern: * Takes advantage of Docker's per-container naming system and abstracts connections between services. * Allows you to manage services without hard-coding connection information inside applications. To do this, instead of directly connecting containers, you insert ambassador containers.
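To make the idea concrete, here is a minimal sketch of an ambassador built with a generic TCP proxy (the `alpine/socat` image and the remote Redis address are illustrative assumptions, not part of the original material):

```bash
# A container that masquerades as "redis" on the dev network, forwarding
# connections on the default Redis port to the real server running elsewhere.
docker run -d --net dev --net-alias redis alpine/socat \
  tcp-listen:6379,fork,reuseaddr tcp-connect:redis.internal.example.com:12345
```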
.debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- class: pic ![ambassador](images/ambassador-diagram.png) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Interacting with ambassadors * The web container uses normal Docker networking to connect to the ambassador. * The database container also talks with an ambassador. * For both containers, the ambassador is totally transparent. (There is no difference between normal operation and operation with an ambassador.) * If the database container is moved (or a failover happens), its new location will be tracked by the ambassador containers, and the web application container will still be able to connect, without reconfiguration. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for simple service discovery Use case: * my application code connects to `redis` on the default port (6379), * my Redis service runs on another machine, on a non-default port (e.g. 12345), * I want to use an ambassador to let my application connect without modification. The ambassador will be: * a container running right next to my application, * using the name `redis` (or linked as `redis`), * listening on port 6379, * forwarding connections to the actual Redis service. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for service migration Use case: * my application code still connects to `redis`, * my Redis service runs somewhere else, * my Redis service is moved to a different host+port, * the location of the Redis service is given to me via e.g. DNS SRV records, * I want to use an ambassador to automatically connect to the new location, with as little disruption as possible. The ambassador will be: * the same kind of container as before, * running an additional routine to monitor DNS SRV records, * updating the forwarding destination when the DNS SRV records are updated. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for credentials injection Use case: * my application code still connects to `redis`, * my application code doesn't provide Redis credentials, * my production Redis service requires credentials, * my staging Redis service requires different credentials, * I want to use an ambassador to abstract those credentials. The ambassador will be: * a container using the name `redis` (or a link), * passed the credentials to use, * running a custom proxy that accepts connections on Redis default port, * performing authentication with the target Redis service before forwarding traffic. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors for load balancing Use case: * my application code connects to a web service called `api`, * I want to run multiple instances of the `api` backend, * those instances will be on different machines and ports, * I want to use an ambassador to abstract those details. The ambassador will be: * a container using the name `api` (or a link), * passed the list of backends to use (statically or dynamically), * running a load balancer (e.g. 
HAProxy or NGINX), * dispatching requests across all backends transparently. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## "Ambassador" is a *pattern* There are many ways to implement the pattern. Different deployments will use different underlying technologies. * On-premise deployments with a trusted network can track container locations in e.g. Zookeeper, and generate HAProxy configurations each time a location key changes. * Public cloud deployments or deployments across unsafe networks can add TLS encryption. * Ad-hoc deployments can use a master-less discovery protocol like Avahi to register and discover services. * It is also possible to do one-shot reconfiguration of the ambassadors. It is slightly less dynamic but has far fewer requirements. * Ambassadors can be used in addition to, or instead of, overlay networks. .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Service meshes * A service mesh is a configurable network layer. * It can provide service discovery, high availability, load balancing, observability... * Service meshes are particularly useful for microservices applications. * Service meshes are often implemented as proxies. * Applications connect to the service mesh, which relays the connection where needed. *Does that sound familiar?* .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Ambassadors and service meshes * When using a service mesh, a "sidecar container" is often used as a proxy * Our services connect (transparently) to that sidecar container * That sidecar container figures out where to forward the traffic ... Does that sound familiar? (It should, because service meshes are essentially app-wide or cluster-wide ambassadors!) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- ## Section summary We've learned how to: * Understand the ambassador pattern and what it is used for (service portability).
For more information about the ambassador pattern, including demos on Swarm and ECS: * AWS re:Invent 2015 [DVO317](https://www.youtube.com/watch?v=7CZFpHUPqXw) * [SwarmWeek video about Swarm+Compose](https://youtube.com/watch?v=qbIvUvwa6As) Some service meshes and related projects: * [Istio](https://istio.io/) * [Linkerd](https://linkerd.io/) * [Gloo](https://gloo.solo.io/) .debug[[containers/Ambassadors.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Ambassadors.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous section](#toc-ambassadors) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-windows-containers) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker ![Construction site](images/title-local-development-workflow-with-docker.jpg) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied to `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening.
-- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. -- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*) ![web application 1](images/webapp-in-blue.png) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *volume* to mount local files into the container * Make changes locally * Changes are reflected into the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * If `[host-path]` or `[container-path]` doesn't exist it is created. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. There will be a full chapter about volumes! .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. 
-- * The color should have changed. ![web application 2](images/webapp-in-red.png) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes * Volumes do *not* copy or synchronize files between the host and the container. * Volumes are *bind mounts*: a kernel mechanism associating one path with another. * Bind mounts are *kind of* similar to symbolic links, but at a very different level. * Changes made on the host or on the container will be visible on the other side. (Since under the hood, it's the same file on both anyway.) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html) by Chad Fowler, where he explains the concept of immutable infrastructure.)* -- * Let's majorly mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployments* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the containers, using regular tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the area the app is running and more! $ docker exec -it <yourContainerID> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <yourContainerID> ``` And remove it. ```bash $ docker rm <yourContainerID> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-windows-containers class: title Windows Containers .nav[ [Previous section](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Windows Containers ![Container with Windows](images/windows-containers.jpg) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Objectives At the end of this section, you will be able to: * Understand Windows Containers vs. Linux Containers. * Know about the Docker for Windows features for choosing the container architecture. * Run other container architectures via QEMU emulation. .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Are containers *just* for Linux? Remember that a container must run on the kernel of the OS it's on. - This is both a benefit and a limitation. (It makes containers lightweight, but limits them to a specific kernel.) - At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs. - Since then, many platforms and OSes have been added. (Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!) -- - Docker Desktop (macOS and Windows) can run containers for other architectures (Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## History of Windows containers - Early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers" - Win 10 expects Docker for Windows to be installed for full features - These must run in Hyper-V mini-VMs with a Windows Server x64 kernel - No "scratch" containers, so use "Core" and "Nano" Server OS base layers - Since Hyper-V is required, Windows 10 Home won't work (yet...) -- - Late 2016, Windows Server 2016 ships with native Docker support - Installed via PowerShell, doesn't need Docker for Windows - Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## LCOW (Linux Containers On Windows) While Docker on Windows is largely playing catch-up with Docker on Linux, it's moving fast; and this is one thing that you *cannot* do on Linux! - LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/). - It can run Linux and Windows containers side-by-side on Win 10. - It is no longer necessary to switch the Engine to "Linux Containers". (In fact, if you want to run both Linux and Windows containers at the same time, make sure that your Engine is set to "Windows Containers" mode!) -- If you are a Docker for Windows user, start your engine and try this: ```bash docker pull microsoft/nanoserver:1803 ``` (Make sure to switch to "Windows Containers mode" if necessary.) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Run Both Windows and Linux containers - Run a Windows Nano Server (minimal CLI-only server) ```bash docker run --rm -it microsoft/nanoserver:1803 powershell Get-Process exit ``` - Run busybox on Linux in LCOW ```bash docker run --rm --platform linux busybox echo hello ``` (Although you will not be able to see them, this will create hidden Nano and LinuxKit VMs in Hyper-V!) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Did We Say Things Move Fast? - Things keep improving. - Now `--platform` defaults to `windows`, and some images support both: - golang, mongo, python, redis, hello-world ... and more being added - you should still use `--platform` with multi-OS images to be certain - Windows Containers now support `localhost` accessible containers (July 2018) - Microsoft (April 2018) added Hyper-V support to Windows 10 Home ... ... so stay tuned for Docker support, maybe?!? .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Other Windows container options Most "official" Docker images don't run on Windows yet. Places to Look: - Hub Official: https://hub.docker.com/u/winamd64/ - Microsoft: https://hub.docker.com/r/microsoft/ .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## SQL Server?
Choice of Linux or Windows - Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux) - Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- ## Windows Tools and Tips - PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion) - Best Shell GUI: [Cmder.net](https://cmder.net/) - Good Windows Container Blogs and How-To's - Docker DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/) - Docker Captain [Nicholas Dille](https://dille.name/blog/) - Docker Captain [Stefan Scherer](https://stefanscherer.github.io/) .debug[[containers/Windows_Containers.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Windows_Containers.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg)] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous section](#toc-windows-containers) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes ![volume](images/title-working-with-volumes.jpg) .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with "volume drivers". .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways. * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either.
* If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories. They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When the host part (before the colon) does not contain a `/`, it is considered to be a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container. * In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume.
```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. 
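For instance, one simple way to back up a named volume is to archive its contents from a throwaway container. (A minimal sketch: it assumes the `webapps` volume created earlier, and writes the archive to the current directory.)

```bash
$ docker run --rm \
    -v webapps:/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/webapps-backup.tar.gz -C /data .
```

Restoring is the same idea in reverse: mount the volume read-write and extract the archive into it.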
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and to what they are bound, use `docker inspect` (again): ```bash $ docker inspect <yourContainerID> [{ "ID": "<yourContainerID>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides a distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous section](#toc-working-with-volumes) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfiles are great to build container images. But what if we work with a complex stack made of multiple containers? Eventually, we will want to write some custom scripts and automation to build, run, and connect our containers together. There is a better way: using Docker Compose. In this section, you will use Compose to bootstrap a development environment. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## What is Docker Compose? Docker Compose (formerly known as `fig`) is an external tool. Unlike the Docker Engine, it is written in Python. It's open source as well. The general idea of Compose is to enable a very simple, powerful onboarding workflow: 1. Checkout your code. 2. Run `docker-compose up`. 3. Your app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose overview This is how you work with Compose: * You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`. * You run `docker-compose up`. * Compose automatically pulls images, builds containers, and starts them. * Compose can set up links, volumes, and other Docker options for you. * Compose can run the containers in the background, or in the foreground. * When containers are running in the foreground, their aggregated output is shown. Before diving in, let's see a small example of Compose in action. 
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic ![composeup](images/composeup.gif) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking if Compose is installed If you are using the official training virtual machines, Compose has been pre-installed. If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them. If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`. You can always check that it is installed by running: ```bash $ docker-compose --version ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash $ cd $ git clone https://github.com/jpetazzo/trainingwheels ... $ cd trainingwheels ``` Second step: start your app. ```bash $ docker-compose up ``` Watch Compose build and run your app with the correct parameters, including linking the relevant containers together. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose Verify that the app is running at `http://<yourHostIP>:8000`. ![composeapp](images/composeapp.png) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When you hit `^C`, Compose tries to gracefully terminate all of the containers. After ten seconds (or if you press `^C` again) it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The `docker-compose.yml` file Here is the file used in the demo: .small[ ```yaml version: "2" services: www: build: www ports: - 8000:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.) * `services` is mandatory. A service is one or more replicas of the same image running as containers. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without `version` and `services`, it's a legacy v1 file.) * Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in `docker-compose.yml` Each service in the YAML file must contain either `build`, or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). * If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they have several minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose commands We already saw `docker-compose up`, but another one is `docker-compose build`. It will execute `docker build` for all containers mentioning a `build` path. It can also be invoked automatically when starting the application: ```bash docker-compose up --build ``` Another common option is to start containers in the background: ```bash docker-compose up -d ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Check container status It can be tedious to check the status of your containers with `docker ps`, especially when running multiple apps at the same time. Compose makes it easier; with `docker-compose ps` you will see only the status of the containers of the current stack: ```bash $ docker-compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker-compose kill ``` Likewise, `docker-compose rm` will let you remove containers (after confirmation): ```bash $ docker-compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker-compose down` will stop and remove containers. 
It will also remove other resources, like networks that were created for the application. ```bash $ docker-compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker-compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes Compose is smart. If your container uses volumes, when you restart your application, Compose will create a new container, but carefully re-use the volumes it was using previously. This makes it easy to upgrade a stateful service, by pulling its new image and just restarting your stack with Compose. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose project name * When you run a Compose command, Compose infers the "project name" of your app. * By default, the "project name" is the name of the current directory. * For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`. * All resources created by Compose are tagged with this project name. * The project name also appears as a prefix of the names of the resources. E.g. in the previous example, service `www` will create a container `ocarina_www_1`. * The project name can be overridden with `docker-compose -p`. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running two copies of the same app If you want to run two copies of the same app simultaneously, all you have to do is to make sure that each copy has a different project name. You can: * copy your code to a directory with a different name * start each copy with `docker-compose -p myprojname up` Each copy will run in a different network, totally isolated from the other. This is ideal to debug regressions, do side-by-side comparisons, etc. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-exercise--writing-a-compose-file class: title Exercise - writing a Compose file .nav[ [Previous section](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-managing-hosts-with-docker-machine) ] .debug[(automatically generated title slide)] --- # Exercise - writing a Compose file Let's write a Compose file for the wordsmith app!
The code is at: https://github.com/jpetazzo/wordsmith .debug[[intro-fullday.yml](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/intro-fullday.yml)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-managing-hosts-with-docker-machine class: title Managing hosts with Docker Machine .nav[ [Previous section](#toc-exercise--writing-a-compose-file) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-advanced-dockerfiles) ] .debug[(automatically generated title slide)] --- # Managing hosts with Docker Machine - Docker Machine is a tool to provision and manage Docker hosts. - It automates the creation of a virtual machine: - locally, with a tool like VirtualBox or VMware; - on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.; - on a private cloud like OpenStack. - It can also configure existing machines through an SSH connection. - It can manage as many hosts as you want, with as many "drivers" as you want. .debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)] --- ## Docker Machine workflow 1) Prepare the environment: set up VirtualBox, obtain cloud credentials ... 2) Create hosts with `docker-machine create -d drivername machinename`. 3) Use a specific machine with `eval $(docker-machine env machinename)`. 4) Profit! .debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)] --- ## Environment variables - Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables. - These variables are: - `DOCKER_HOST` (indicates address+port to connect to, or path of UNIX socket) - `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used) - `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth) - `docker-machine env ...` will generate the variables needed to connect to a host. - `eval $(docker-machine env ...)` sets these variables in the current shell. .debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)] --- ## Host management features With `docker-machine`, we can: - upgrade a host to the latest version of the Docker Engine, - start/stop/restart hosts, - get a shell on a remote machine (with SSH), - copy files to/from remote machines (with SCP), - mount a remote host's directory on the local machine (with SSHFS), - ... .debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)] --- ## The `generic` driver When provisioning a new host, `docker-machine` executes these steps: 1) Create the host using a cloud or hypervisor API. 2) Connect to the host over SSH. 3) Install and configure Docker on the host. With the `generic` driver, we provide the IP address of an existing host (instead of e.g. cloud credentials) and we omit the first step. This allows us to provision physical machines, VMs provided by a 3rd party, or machines on a cloud for which we don't have a provisioning API.
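For example, taking over an existing host with the `generic` driver could look like this (a sketch: the IP address, SSH user, and machine name are placeholders):

```bash
$ docker-machine create -d generic \
    --generic-ip-address=203.0.113.10 \
    --generic-ssh-user=ubuntu \
    remotebox
$ eval $(docker-machine env remotebox)
$ docker ps
```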
.debug[[containers/Docker_Machine.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Docker_Machine.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-advanced-dockerfiles class: title Advanced Dockerfiles .nav[ [Previous section](#toc-managing-hosts-with-docker-machine) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-application-configuration) ] .debug[(automatically generated title slide)] --- class: title # Advanced Dockerfiles ![construction](images/title-advanced-dockerfiles.jpg) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Objectives We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will see more Dockerfile commands. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `Dockerfile` usage summary * `Dockerfile` instructions are executed in order. * Each instruction creates a new layer in the image. * Docker maintains a cache with the layers of previous builds. * When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer. * The `FROM` instruction MUST be the first non-comment instruction. * Lines starting with `#` are treated as comments. * Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `RUN` instruction The `RUN` instruction can be specified in two ways. With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`: ```dockerfile RUN apt-get update ``` Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`: ```dockerfile RUN [ "apt-get", "update" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `RUN` instruction `RUN` will do the following: * Execute a command. * Record changes made to the filesystem. * Work great to install libraries, packages, and various files. `RUN` will NOT do the following: * Record state of *processes*. * Automatically start daemons. If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`.
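A minimal sketch of the difference (illustrative only):

```dockerfile
# This would only run during the build; the daemon would NOT be
# running in containers started from the image:
#   RUN service nginx start

# This records the command to execute when a container starts:
CMD ["nginx", "-g", "daemon off;"]
```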
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines: ```dockerfile RUN apt-get update \ && apt-get install -y wget \ && apt-get clean ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. ```dockerfile EXPOSE 8080 EXPOSE 80 443 EXPOSE 53/tcp 53/udp ``` * All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. * The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent: ```dockerfile COPY . /src COPY / /src ``` Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. `ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. `ADD` will automatically unpack local tar archives (including compressed ones): ```dockerfile ADD ./assets.tar.gz /var/www/htdocs/assets/ ``` This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`. (Zip files are not unpacked automatically.) *However,* `ADD` will not automatically unpack remote archives. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup.
* For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories. Volumes can be attached to multiple containers, allowing data to be "ported" over from one container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. ```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` This will result in the following environment variable being set in any container created from this image: ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` This means we don't need to specify `nginx -g "daemon off;"` when running the container. Instead of: ```bash $ docker run <dockerhubUsername>/web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run <dockerhubUsername>/web_image ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms.
The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it <dockerhubUsername>/web_image bash ``` This will run `bash` instead of `nginx -g "daemon off;"`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. ```bash $ docker run -it training/ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr $ docker run -it --entrypoint bash training/ls root@d902fb7b1fc7:/# ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together. ```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. On the command line we can then potentially override the options when needed. ```bash $ docker run -d <dockerhubUsername>/web_image -t ``` This will override the options provided by `CMD` with the new flag. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- ## Advanced Dockerfile instructions * `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one. * `LABEL` adds arbitrary metadata to the image. * `ARG` defines build-time variables (optional or mandatory). * `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default). * `HEALTHCHECK` defines a command assessing the status of the container. * `SHELL` sets the default program to use for string-syntax RUN, CMD, etc. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## The `ONBUILD` instruction The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built. This is useful for building images which will be used as a base to build other images. ```dockerfile ONBUILD COPY . /src ``` * You can't chain `ONBUILD` instructions with `ONBUILD`. * `ONBUILD` can't be used to trigger `FROM` instructions.
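For instance, a hypothetical base image for Ruby apps (the name `myorg/ruby-onbuild` is made up for illustration) could stash the common build steps:

```dockerfile
# Dockerfile of the base image
FROM ruby
ONBUILD COPY . /src
ONBUILD RUN cd /src && bundler install
```

An application image then just references it; the `ONBUILD` steps run automatically when *that* image is built:

```dockerfile
FROM myorg/ruby-onbuild
WORKDIR /src
CMD ["rackup", "--host", "0.0.0.0"]
```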
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Advanced_Dockerfiles.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-application-configuration class: title Application Configuration .nav[ [Previous section](#toc-advanced-dockerfiles) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-logging) ] .debug[(automatically generated title slide)] --- # Application Configuration There are many ways to provide configuration to containerized applications. There is no "best way" - it depends on factors like: * configuration size, * mandatory and optional parameters, * scope of configuration (per container, per app, per customer, per site, etc), * frequency of changes in the configuration. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Command-line parameters ```bash docker run jpetazzo/hamba 80 www1:80 www2:80 ``` * Configuration is provided through command-line parameters. * In the above example, the `ENTRYPOINT` is a script that will: - parse the parameters, - generate a configuration file, - start the actual service. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Command-line parameters pros and cons * Appropriate for mandatory parameters (without which the service cannot start). * Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!) * Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Environment variables ```bash docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana ``` * Configuration is provided through environment variables. * The environment variable can be used directly by the program, or by a script generating a configuration file. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Environment variables pros and cons * Appropriate for optional parameters (since the image can provide default values). * Also convenient for services instantiated many times. (It's as easy as command-line parameters.) * Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.) * Ability to introspect possible parameters and their default values. * Not great for dynamic configurations. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration ```dockerfile FROM prometheus COPY prometheus.conf /etc ``` * The configuration is added to the image. * The image may have a default configuration; the new configuration can: - replace the default configuration, - extend it (if the code can read multiple configuration files).
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires to write a configuration file. (Obviously!) * Requires to build an image to start the service. * Requires to rebuild the image to reconfigure the service. * Requires to rebuild the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup, that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires to create a volume for each different configuration. * Services with identical configurations can use the same volume. * Doesn't require to build / rebuild an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API being itself served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...) in:] * command-line or environment variables (anyone with Docker API access can get them), * images, especially stored in a registry. Secrets management is better handled with an orchestrator (like Swarm or Kubernetes). Orchestrators will allow to pass secrets in a "one-way" manner. Managing secrets securely without an orchestrator can be contrived. E.g.: - read the secret on stdin when the service starts, - pass the secret using an API endpoint. 
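For example, with Docker's built-in orchestration (Swarm mode), that "one-way" flow looks roughly like this (service and secret names are made up):

```bash
# Create a secret from stdin; it is stored encrypted by the cluster.
echo -n "s3cr3t-password" | docker secret create db_password -

# Attach it to a service: the container sees it as a file, not as an env variable.
docker service create --name db --secret db_password postgres
# Inside the container, the secret is readable at /run/secrets/db_password.
```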
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Application_Configuration.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-logging class: title Logging .nav[ [Previous section](#toc-application-configuration) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-deep-dive-into-container-internals) ] .debug[(automatically generated title slide)] --- # Logging In this chapter, we will explain the different ways to send logs from containers. We will then show one particular method in action, using ELK and Docker's logging drivers. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## There are many ways to send logs - The simplest method is to write on the standard output and error. - Applications can write their logs to local files. (The files are usually periodically rotated and compressed.) - It is also very common (on UNIX systems) to use syslog. (The logs are collected by syslogd or an equivalent like journald.) - In large applications with many components, it is common to use a logging service. (The code uses a library to send messages to the logging service.) *All these methods are available with containers.* .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Writing on stdout/stderr - The standard output and error of containers are managed by the container engine. - This means that each line written by the container is received by the engine. - The engine can then do "whatever" with these log lines. - With Docker, the default configuration is to write the logs to local files. - The files can then be queried with e.g. `docker logs` (and the equivalent API request). - This can be customized, as we will see later. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Writing to local files - If we write to files, it is possible to access them, but it is cumbersome. (We have to use `docker exec` or `docker cp`.) - Furthermore, if the container is stopped, we cannot use `docker exec`. - If the container is deleted, the logs disappear. - What should we do for programs that can only log to local files? -- - There are multiple solutions. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Using a volume or bind mount - Instead of writing logs to a normal directory, we can place them on a volume. - The volume can be accessed by other containers. - We can run a program like `filebeat` in another container accessing the same volume. (`filebeat` reads local log files continuously, like `tail -f`, and sends them to a centralized system like ElasticSearch.) - We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`. - The container will write log files to a directory mapped to a host directory. - The log files will appear on the host and be consumable directly from the host. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Using logging services - We can use logging frameworks (like log4j or the Python `logging` package).
- These frameworks require some code and/or configuration in our application. - These mechanisms can be used identically inside or outside of containers. - Sometimes, we can leverage containerized networking to simplify their setup. - For instance, our code can send log messages to a server named `log`. - The name `log` will resolve to different addresses in development, production, etc. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Using syslog - What if our code (or the program we are running in containers) uses syslog? - One possibility is to run a syslog daemon in the container. - Then that daemon can be set up to write to local files or forward to the network. - Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`. - We can expose a syslog socket to the container (by using a volume or bind-mount). - Then just create a symlink from `/dev/log` to the syslog socket. - Voilà! .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Using logging drivers - If we log to stdout and stderr, the container engine receives the log messages. - The Docker Engine has a modular logging system with many plugins, including: - json-file (the default one) - syslog - journald - gelf - fluentd - splunk - etc. - Each plugin can process and forward the logs to another process or system. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## A word of warning about `json-file` - By default, log file size is unlimited. - This means that a very verbose container *will* use up all your disk space. (Or a less verbose container, but running for a very long time.) - Log rotation can be enabled by setting a `max-size` option. - Older log files can be removed by setting a `max-file` option. - Just like other logging options, these can be set per container, or globally. Example: ```bash $ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch ``` .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Demo: sending logs to ELK - We are going to deploy an ELK stack. - It will accept logs over a GELF socket. - We will run a few containers with the `gelf` logging driver. - We will then see our logs in Kibana, the web interface provided by ELK. *Important foreword: this is not an "official" or "recommended" setup; it is just an example. We used ELK in this demo because it's a popular setup and we keep being asked about it; but you will have equal success with Fluent or other logging stacks!* .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## What's in an ELK stack?
- ELK is three components: - ElasticSearch (to store and index log entries) - Logstash (to receive log entries from various sources, process them, and forward them to various destinations) - Kibana (to view/search log entries with a nice UI) - The only component that we will configure is Logstash. - We will accept log entries using the GELF protocol. - Log entries will be stored in ElasticSearch, and displayed on Logstash's stdout for debugging. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Running ELK - We are going to use a Compose file describing the ELK stack. - The Compose file is in the container.training repository on GitHub. ```bash $ git clone https://github.com/jpetazzo/container.training $ cd container.training $ cd elk $ docker-compose up ``` - Let's have a look at the Compose file while it's deploying. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Our basic ELK deployment - We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`. - We don't need to change the configuration of ElasticSearch. - We need to tell Kibana the address of ElasticSearch: - it is set with the `ELASTICSEARCH_URL` environment variable, - by default it is `localhost:9200`; we change it to `elasticsearch:9200`. - We need to configure Logstash: - we pass the entire configuration file through command-line arguments, - this is a hack so that we don't have to create an image just for the config. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Sending logs to ELK - The ELK stack accepts log messages through a GELF socket. - The GELF socket listens on UDP port 12201. - To send a message, we need to change the logging driver used by Docker. - This can be done globally (by reconfiguring the Engine) or on a per-container basis. - Let's override the logging driver for a single container: ```bash $ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \ alpine echo hello world ``` .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Viewing the logs in ELK - Connect to the Kibana interface. - It is exposed on port 5601. - Browse http://X.X.X.X:5601. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## "Configuring" Kibana - Kibana should prompt you to "Configure an index pattern": in the "Time-field name" drop-down, select "@timestamp", and hit the "Create" button. - Then: - click "Discover" (in the top-left corner), - click "Last 15 minutes" (in the top-right corner), - click "Last 1 hour" (in the list in the middle), - click "Auto-refresh" (top-right corner), - click "5 seconds" (top-left of the list). - You should see a series of green bars (with one new green bar every minute). - Our 'hello world' message should be visible there. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- ## Important afterword **This is not a "production-grade" setup.** It is just an educational example. Since we have only one node, we set up a single ElasticSearch instance and a single Logstash instance.
In a production setup, you need an ElasticSearch cluster (both for capacity and availability reasons). You also need multiple Logstash instances. And if you want to withstand bursts of logs, you need some kind of message queue: Redis if you're cheap, Kafka if you want to make sure that you don't drop messages on the floor. Good luck. If you want to learn more about the GELF driver, have a look at [this blog post]( https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/). .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Logging.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-deep-dive-into-container-internals class: title Deep dive into container internals .nav[ [Previous section](#toc-logging) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Deep dive into container internals In this chapter, we will explain some of the fundamental building blocks of containers. This will give you a solid foundation so you can: - understand "what's going on" in complex situations, - anticipate the behavior of containers (performance, security...) in new scenarios, - implement your own container engine. The last item should be done for educational purposes only! .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## There is no container code in the Linux kernel - If we search "container" in the Linux kernel code, we find: - generic code to manipulate data structures (like linked lists, etc.), - unrelated concepts like "ACPI containers", - *nothing* relevant to "our" containers! - Containers are composed using multiple independent features. - On Linux, containers rely on "namespaces, cgroups, and some filesystem magic." - Security also requires features like capabilities, seccomp, LSMs... .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-namespaces class: title Namespaces .nav[ [Previous section](#toc-deep-dive-into-container-internals) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-control-groups) ] .debug[(automatically generated title slide)] --- # Namespaces - Provide processes with their own view of the system. - Namespaces limit what you can see (and therefore, what you can use). - These namespaces are available in modern kernels: - pid - net - mnt - uts - ipc - user (We are going to detail them individually.) - Each process belongs to one namespace of each type. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Namespaces are always active - Namespaces exist even when you don't use containers. - This is a bit similar to the UID field in UNIX processes: - all processes have the UID field, even if no user exists on the system - the field always has a value / the value is always defined (i.e. 
any process running on the system has some UID) - the value of the UID field is used when checking permissions (the UID field determines which resources the process can access) - You can replace "UID field" with "namespace" above and it still works! - In other words: even when you don't use containers, there is one namespace of each type, containing all the processes on the system. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Manipulating namespaces - Namespaces are created with two methods: - the `clone()` system call (used when creating new threads and processes), - the `unshare()` system call. - The Linux tool `unshare` allows to do that from a shell. - A new process can re-use none / all / some of the namespaces of its parent. - It is possible to "enter" a namespace with the `setns()` system call. - The Linux tool `nsenter` allows to do that from a shell. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Namespaces lifecycle - When the last process of a namespace exits, the namespace is destroyed. - All the associated resources are then removed. - Namespaces are materialized by pseudo-files in `/proc/<pid>/ns`. ```bash ls -l /proc/self/ns ``` - It is possible to compare namespaces by checking these files. (This helps to answer the question, "are these two processes in the same namespace?") - It is possible to preserve a namespace by bind-mounting its pseudo-file. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Namespaces can be used independently - As mentioned in the previous slides: *A new process can re-use none / all / some of the namespaces of its parent.* - We are going to use that property in the examples in the next slides. - We are going to present each type of namespace. - For each type, we will provide an example using only that namespace. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## UTS namespace - gethostname / sethostname - Allows to set a custom hostname for a container. - That's (mostly) it! - Also allows to set the NIS domain. (If you don't know what a NIS domain is, you don't have to worry about it!) - If you're wondering: UTS = UNIX time sharing. - This namespace was named like this because of the `struct utsname`, which is commonly used to obtain the machine's hostname, architecture, etc. (The more you know!) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Creating our first namespace Let's use `unshare` to create a new process that will have its own UTS namespace: ```bash $ sudo unshare --uts ``` - We have to use `sudo` for most `unshare` operations. - We indicate that we want a new uts namespace, and nothing else. - If we don't specify a program to run, a `$SHELL` is started.
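A quick way to see what happened (using the pseudo-files mentioned a few slides ago): compare the UTS namespace of the new shell with the one of a regular shell. The exact inode numbers below are just illustrative.

```bash
# In the shell started by unshare:
readlink /proc/$$/ns/uts
# In another, normal shell on the host:
readlink /proc/$$/ns/uts
# The symlinks point to different objects (e.g. uts:[4026532108] vs uts:[4026531838]),
# confirming that the two processes are in different UTS namespaces.
```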
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Demonstrating our uts namespace In our new "container", check the hostname, change it, and check it: ```bash # hostname nodeX # hostname tupperware # hostname tupperware ``` In another shell, check that the machine's hostname hasn't changed: ```bash $ hostname nodeX ``` Exit the "container" with `exit` or `Ctrl-D`. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net namespace overview - Each network namespace has its own private network stack. - The network stack includes: - network interfaces (including `lo`), - routing table**s** (as in `ip rule` etc.), - iptables chains and rules, - sockets (as seen by `ss`, `netstat`). - You can move a network interface from a network namespace to another: ```bash ip link set dev eth0 netns PID ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net namespace typical use - Each container is given its own network namespace. - For each network namespace (i.e. each container), a `veth` pair is created. (Two `veth` interfaces act as if they were connected with a cross-over cable.) - One `veth` is moved to the container network namespace (and renamed `eth0`). - The other `veth` is moved to a bridge on the host (e.g. the `docker0` bridge). .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Creating a network namespace Start a new process with its own network namespace: ```bash $ sudo unshare --net ``` See that this new network namespace is unconfigured: ```bash # ping 1.1 connect: Network is unreachable # ifconfig # ip link ls 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Creating the `veth` interfaces In another shell (on the host), create a `veth` pair: ```bash $ sudo ip link add name in_host type veth peer name in_netns ``` Configure the host side (`in_host`): ```bash $ sudo ip link set in_host master docker0 up ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Moving the `veth` interface *In the process created by `unshare`,* check the PID of our "network container": ```bash # echo $$ 533 ``` *On the host*, move the other side (`in_netns`) to the network namespace: ```bash $ sudo ip link set in_netns netns 533 ``` (Make sure to update "533" with the actual PID obtained above!)
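As an optional sanity check (assuming the commands above succeeded), the interface should have disappeared from the host and appeared in the new namespace:

```bash
# On the host: in_netns is no longer listed.
ip link ls
# In the process created by unshare: in_netns now shows up (still DOWN and unconfigured).
ip link ls
```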
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Basic network configuration Let's set up `lo` (the loopback interface): ```bash # ip link set lo up ``` Activate the `veth` interface and rename it to `eth0`: ```bash # ip link set in_netns name eth0 up ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Allocating IP address and default route *On the host*, check the address of the Docker bridge: ```bash $ ip addr ls dev docker0 ``` (It could be something like `172.17.0.1`.) Pick an IP address in the middle of the same subnet, e.g. `172.17.0.99`. *In the process created by `unshare`,* configure the interface: ```bash # ip addr add 172.17.0.99/24 dev eth0 # ip route add default via 172.17.0.1 ``` (Make sure to update the IP addresses if necessary.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Validating the setup Check that we now have connectivity: ```bash # ping 1.1 ``` Note: we were able to take a shortcut, because Docker is running, and provides us with a `docker0` bridge and a valid `iptables` setup. If Docker is not running, you will need to take care of this! .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## Cleaning up network namespaces - Terminate the process created by `unshare` (with `exit` or `Ctrl-D`). - Since this was the only process in the network namespace, it is destroyed. - All the interfaces in the network namespace are destroyed. - When a `veth` interface is destroyed, it also destroys the other half of the pair. - So we don't have anything else to do to clean up! .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Other ways to use network namespaces - `--net none` gives an empty network namespace to a container. (Effectively isolating it completely from the network.) - `--net host` means "do not containerize the network". (No network namespace is created; the container uses the host network stack.) - `--net container` means "reuse the network namespace of another container". (As a result, both containers share the same interfaces, routes, etc.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Mnt namespace - Processes can have their own root fs (à la chroot). - Processes can also have "private" mounts. This allows to: - isolate `/tmp` (per user, per service...) - mask `/proc`, `/sys` (for processes that don't need them) - mount remote filesystems or sensitive data, but make it visible only for allowed processes - Mounts can be totally private, or shared. - At this point, there is no easy way to pass along a mount from a namespace to another.
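Container engines expose some of this directly; for instance, a hedged Docker example (image name arbitrary) that gives a container a read-only root filesystem with a private tmpfs mounted on `/tmp`:

```bash
docker run --read-only --tmpfs /tmp myimage
```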
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Setting up a private `/tmp` Create a new mount namespace: ```bash $ sudo unshare --mount ``` In that new namespace, mount a brand new `/tmp`: ```bash # mount -t tmpfs none /tmp ``` Check the content of `/tmp` in the new namespace, and compare to the host. The mount is automatically cleaned up when you exit the process. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## PID namespace - Processes within a PID namespace only "see" processes in the same PID namespace. - Each PID namespace has its own numbering (starting at 1). - When PID 1 goes away, the whole namespace is killed. (When PID 1 goes away on a normal UNIX system, the kernel panics!) - Those namespaces can be nested. - A process ends up having multiple PIDs (one per namespace in which it is nested). .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## PID namespace in action Create a new PID namespace: ```bash $ sudo unshare --pid --fork ``` (We need the `--fork` flag because the PID namespace is special.) Check the process tree in the new namespace: ```bash # ps faux ``` -- class: extra-details, deep-dive 🤔 Why do we see all the processes?!? .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## PID namespaces and `/proc` - Tools like `ps` rely on the `/proc` pseudo-filesystem. - Our new namespace still has access to the original `/proc`. - Therefore, it still sees host processes. - But it cannot affect them. (Try to `kill` a process: you will get `No such process`.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## PID namespaces, take 2 - This can be solved by mounting `/proc` in the namespace. - The `unshare` utility provides a convenience flag, `--mount-proc`. - This flag will mount `/proc` in the namespace. - It will also unshare the mount namespace, so that this mount is local. Try it: ```bash $ sudo unshare --pid --fork --mount-proc # ps faux ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details ## OK, really, why do we need `--fork`? *It is not necessary to remember all these details. This is just an illustration of the complexity of namespaces!* The `unshare` tool calls the `unshare` syscall, then `exec`s the new binary. A process calling `unshare` to create new namespaces is moved to the new namespaces... ... Except for the PID namespace. (Because this would change the current PID of the process from X to 1.) The processes created by the new binary are placed into the new PID namespace. The first one will be PID 1. If PID 1 exits, it is not possible to create additional processes in the namespace. (Attempting to do so will result in `ENOMEM`.) Without the `--fork` flag, the first command that we execute will be PID 1 ... ... And once it exits, we cannot create more processes in the namespace!
Check `man 2 unshare` and `man pid_namespaces` if you want more details. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## IPC namespace -- - Does anybody know about IPC? -- - Does anybody *care* about IPC? -- - Allows a process (or group of processes) to have its own: - IPC semaphores - IPC message queues - IPC shared memory ... without risk of conflict with other instances. - Older versions of PostgreSQL cared about this. *No demo for that one.* .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## User namespace - Allows to map UID/GID; e.g.: - UID 0-1999 in container C1 is mapped to UID 10000-11999 on host - UID 0-1999 in container C2 is mapped to UID 12000-13999 on host - etc. - UID 0 in the container can still perform privileged operations in the container. (For instance: setting up network interfaces.) - But outside of the container, it is a non-privileged user. - It also means that the UID in containers becomes unimportant. (Just use UID 0 in the container, since it gets squashed to a non-privileged user outside.) - Ultimately enables better privilege separation in container engines. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## User namespace challenges - UID needs to be mapped when passed between processes or kernel subsystems. - Filesystem permissions and file ownership are more complicated. .small[(E.g. when the same root filesystem is shared by multiple containers running with different UIDs.)] - With the Docker Engine: - some feature combinations are not allowed (e.g. user namespace + host network namespace sharing) - user namespaces need to be enabled/disabled globally (when the daemon is started) - container images are stored separately (so the first time you toggle user namespaces, you need to re-pull images) *No demo for that one.* .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-control-groups class: title Control groups .nav[ [Previous section](#toc-namespaces) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-security-features) ] .debug[(automatically generated title slide)] --- # Control groups - Control groups provide resource *metering* and *limiting*. - This covers a number of "usual suspects" like: - memory - CPU - block I/O - network (with cooperation from iptables/tc) - And a few exotic ones: - huge pages (a special way to allocate memory) - RDMA (resources specific to InfiniBand / remote memory transfer) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Crowd control - Control groups also allow to group processes for special operations: - freezer (conceptually similar to a "mass-SIGSTOP/SIGCONT") - perf_event (gather performance events on multiple processes) - cpuset (limit or pin processes to specific CPUs) - There is a "pids" cgroup to limit the number of processes in a given group.
- There is also a "devices" cgroup to control access to device nodes. (i.e. everything in `/dev`.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Generalities - Cgroups form a hierarchy (a tree). - We can create nodes in that hierarchy. - We can associate limits to a node. - We can move a process (or multiple processes) to a node. - The process (or processes) will then respect these limits. - We can check the current usage of each node. - In other words: limits are optional (if we only want accounting). - When a process is created, it is placed in its parent's groups. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Example The numbers are PIDs. The names are the names of our nodes (arbitrarily chosen). .small[ ```bash cpu memory βββ batch βββ stateless β βββ cryptoscam β βββ 25 β β βββ 52 β βββ 26 β βββ ffmpeg β βββ 27 β βββ 109 β βββ 52 β βββ 88 β βββ 109 βββ realtime β βββ 88 βββ nginx βββ databases β βββ 25 βββ 1008 β βββ 26 βββ 524 β βββ 27 βββ postgres β βββ 524 βββ redis βββ 1008 ``` ] .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Cgroups v1 vs v2 - Cgroups v1 are available on all systems (and widely used). - Cgroups v2 are a huge refactor. (Development started in Linux 3.10, released in 4.5.) - Cgroups v2 have a number of differences: - single hierarchy (instead of one tree per controller), - processes can only be on leaf nodes (not inner nodes), - and of course many improvements / refactorings. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Memory cgroup: accounting - Keeps track of pages used by each group: - file (read/write/mmap from block devices), - anonymous (stack, heap, anonymous mmap), - active (recently accessed), - inactive (candidate for eviction). - Each page is "charged" to a group. - Pages can be shared across multiple groups. (Example: multiple processes reading from the same files.) - To view all the counters kept by this cgroup: ```bash $ cat /sys/fs/cgroup/memory/memory.stat ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Memory cgroup: limits - Each group can have (optional) hard and soft limits. - Limits can be set for different kinds of memory: - physical memory, - kernel memory, - total memory (including swap). .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Soft limits and hard limits - Soft limits are not enforced. (But they influence reclaim under memory pressure.) - Hard limits *cannot* be exceeded: - if a group of processes exceeds a hard limit, - and if the kernel cannot reclaim any memory, - then the OOM (out-of-memory) killer is triggered, - and processes are killed until memory gets below the limit again. 
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Avoiding the OOM killer - For some workloads (databases and stateful systems), killing processes because we run out of memory is not acceptable. - The "oom-notifier" mechanism helps with that. - When "oom-notifier" is enabled and a hard limit is exceeded: - all processes in the cgroup are frozen, - a notification is sent to user space (instead of killing processes), - user space can then raise limits, migrate containers, etc., - once the memory usage is below the hard limit, unfreeze the cgroup. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Overhead of the memory cgroup - Each time a process grabs or releases a page, the kernel updates counters. - This adds some overhead. - Unfortunately, this cannot be enabled/disabled per process. - It has to be done system-wide, at boot time. - Also, when multiple groups use the same page: - only the first group gets "charged", - but if it stops using it, the "charge" is moved to another group. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Setting up a limit with the memory cgroup Create a new memory cgroup: ```bash $ CG=/sys/fs/cgroup/memory/onehundredmegs $ sudo mkdir $CG ``` Limit it to approximately 100MB of memory usage: ```bash $ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000 ``` Move the current process to that cgroup: ```bash $ sudo tee $CG/tasks <<< $$ ``` The current process *and all its future children* are now limited. (Confused about `<<<`? Look at the next slide!) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## What's `<<<`? - This is a "here string". (It is a non-POSIX shell extension.) - The following commands are equivalent: ```bash foo <<< hello ``` ```bash echo hello | foo ``` - We used it (together with `sudo tee`) because writing into `$CG/tasks` requires root privileges. - The following commands, however, would not do what we want: ```bash sudo echo $$ > $CG/tasks ``` ```bash sudo -i # (or su) echo $$ > $CG/tasks ``` (In the first one, the redirection is performed by our unprivileged shell; in the second one, `$$` is the PID of the new root shell, not the one of our original process.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: extra-details, deep-dive ## Testing the memory limit Start the Python interpreter: ```bash $ python Python 3.6.4 (default, Jan 5 2018, 02:35:40) [GCC 7.2.1 20171224] on linux Type "help", "copyright", "credits" or "license" for more information. >>> ``` Allocate 80 megabytes: ```python >>> s = "!" * 1000000 * 80 ``` Add 20 megabytes more: ```python >>> t = "!" * 1000000 * 20 Killed ``` .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## CPU cgroup - Keeps track of CPU time used by a group of processes. (This is easier and more accurate than `getrusage` and `/proc`.) - Keeps track of usage per CPU as well. (i.e., "this group of processes used X seconds of CPU0 and Y seconds of CPU1".) - Allows to set relative weights used by the scheduler.
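With Docker, these weights (and absolute caps, implemented with the CFS quota) are exposed as flags; a small sketch (image name arbitrary):

```bash
# Relative weight: half of the default 1024 (only matters when CPUs are contended).
docker run -d --cpu-shares=512 myimage

# Absolute cap: at most 1.5 CPUs worth of time.
docker run -d --cpus=1.5 myimage
```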
.debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Cpuset cgroup - Pin groups to specific CPU(s). - Use-case: reserve CPUs for specific apps. - Warning: make sure that "default" processes aren't using all CPUs! - CPU pinning can also avoid performance loss due to cache flushes. - This is also relevant for NUMA systems. - Provides extra dials and knobs. (Per zone memory pressure, process migration costs...) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Blkio cgroup - Keeps track of I/Os for each group: - per block device - read vs write - sync vs async - Set throttle (limits) for each group: - per block device - read vs write - ops vs bytes - Set relative weights for each group. - Note: most writes go through the page cache. (So classic writes will appear to be unthrottled at first.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Net_cls and net_prio cgroup - Only works for egress (outgoing) traffic. - Automatically set traffic class or priority for traffic generated by processes in the group. - Net_cls will assign traffic to a class. - Classes have to be matched with tc or iptables, otherwise traffic just flows normally. - Net_prio will assign traffic to a priority. - Priorities are used by queuing disciplines. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Devices cgroup - Controls what the group can do on device nodes - Permissions include read/write/mknod - Typical use: - allow `/dev/{tty,zero,random,null}` ... - deny everything else - A few interesting nodes: - `/dev/net/tun` (network interface manipulation) - `/dev/fuse` (filesystems in user space) - `/dev/kvm` (VMs in containers, yay inception!) - `/dev/dri` (GPU) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-security-features class: title Security features .nav[ [Previous section](#toc-control-groups) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-copy-on-write-filesystems) ] .debug[(automatically generated title slide)] --- # Security features - Namespaces and cgroups are not enough to ensure strong security. - We need extra mechanisms: capabilities, seccomp, LSMs. - These mechanisms were already used before containers to harden security. - They can be used together with containers. - Good container engines will automatically leverage these features. (So that you don't have to worry about it.) .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Capabilities - In traditional UNIX, many operations are possible if and only if UID=0 (root). - Some of these operations are very powerful: - changing file ownership, accessing all files ... - Some of these operations deal with system configuration, but can be abused: - setting up network interfaces, mounting filesystems ... 
- Some of these operations are not very dangerous but are needed by servers: - binding to a port below 1024. - Capabilities are per-process flags to allow these operations individually. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Some capabilities - `CAP_CHOWN`: arbitrarily change file ownership and permissions. - `CAP_DAC_OVERRIDE`: arbitrarily bypass file ownership and permissions. - `CAP_NET_ADMIN`: configure network interfaces, iptables rules, etc. - `CAP_NET_BIND_SERVICE`: bind a port below 1024. See `man capabilities` for the full list and details. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Using capabilities - Container engines will typically drop all "dangerous" capabilities. - You can then re-enable capabilities on a per-container basis, as needed. - With the Docker engine: `docker run --cap-add ...` - If you write your own code to manage capabilities: - make sure that you understand what each capability does, - read about *ambient* capabilities as well. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Seccomp - Seccomp is secure computing. - Achieve high level of security by restricting drastically available syscalls. - Original seccomp only allows `read()`, `write()`, `exit()`, `sigreturn()`. - The seccomp-bpf extension allows to specify custom filters with BPF rules. - This allows to filter by syscall, and by parameter. - BPF code can perform arbitrarily complex checks, quickly, and safely. - Container engines take care of this so you don't have to. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- ## Linux Security Modules - The most popular ones are SELinux and AppArmor. - Red Hat distros generally use SELinux. - Debian distros (in particular, Ubuntu) generally use AppArmor. - LSMs add a layer of access control to all process operations. - Container engines take care of this so you don't have to. .debug[[containers/Namespaces_Cgroups.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Namespaces_Cgroups.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-copy-on-write-filesystems class: title Copy-on-write filesystems .nav[ [Previous section](#toc-security-features) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-docker-engine-and-other-container-engines) ] .debug[(automatically generated title slide)] --- # Copy-on-write filesystems Container engines rely on copy-on-write to be able to start containers quickly, regardless of their size. We will explain how that works, and review some of the copy-on-write storage systems available on Linux. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## What is copy-on-write? - Copy-on-write is a mechanism allowing to share data. - The data appears to be a copy, but is only a link (or reference) to the original data. - The actual copy happens only when someone tries to change the shared data. 
- Whoever changes the shared data ends up using their own copy instead of the shared data. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## A few metaphors -- - First metaphor: white board and tracing paper -- - Second metaphor: magic books with shadowy pages -- - Third metaphor: just-in-time house building .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Copy-on-write is *everywhere* - Process creation with `fork()`. - Consistent disk snapshots. - Efficient VM provisioning. - And, of course, containers. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Copy-on-write and containers Copy-on-write is essential to give us "convenient" containers. - Creating a new container (from an existing image) is "free". (Otherwise, we would have to copy the image first.) - Customizing a container (by tweaking a few files) is cheap. (Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.) - We can take snapshots, i.e. have "checkpoints" or "save points" when building images. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS overview - The original (legacy) copy-on-write filesystem used by first versions of Docker. - Combine multiple *branches* in a specific order. - Each branch is just a normal directory. - You generally have: - at least one read-only branch (at the bottom), - exactly one read-write branch (at the top). (But other fun combinations are possible too!) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS operations: opening a file - With `O_RDONLY` - read-only access: - look it up in each branch, starting from the top - open the first one we find - With `O_WRONLY` or `O_RDWR` - write access: - if the file exists on the top branch: open it - if the file exists on another branch: "copy up" (i.e. copy the file to the top branch and open the copy) - if the file doesn't exist on any branch: create it on the top branch That "copy-up" operation can take a while if the file is big! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS operations: deleting a file - A *whiteout* file is created. - This is similar to the concept of "tombstones" used in some data systems. ``` # docker run ubuntu rm /etc/shadow # ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc total 8 drwxr-xr-x 2 root root 4096 Jan 27 15:36 . drwxr-xr-x 5 root root 4096 Jan 27 15:36 .. -r--r--r-- 2 root root 0 Jan 27 15:36 .wh.shadow ``` .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## AUFS performance - AUFS `mount()` is fast, so creation of containers is quick. - Read/write access has native speeds. - But initial `open()` is expensive in two scenarios: - when writing big files (log files, databases ...), - when searching many directories (PATH, classpath, etc.) over many layers. - Protip: when we built dotCloud, we ended up putting all important data on *volumes*. 
- When starting the same container multiple times: - the data is loaded only once from disk, and cached only once in memory; - but `dentries` will be duplicated. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper Device Mapper is a rich subsystem with many features. It can be used for: RAID, encrypted devices, snapshots, and more. In the context of containers (and Docker in particular), "Device Mapper" means: "the Device Mapper system + its *thin provisioning target*" If you see the abbreviation "thinp" it stands for "thin provisioning". .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper principles - Copy-on-write happens on the *block* level (instead of the *file* level). - Each container and each image get their own block device. - At any given time, it is possible to take a snapshot: - of an existing container (to create a frozen image), - of an existing image (to create a container from it). - If a block has never been written to: - it's assumed to be all zeros, - it's not allocated on disk. (That last property is the reason for the name "thin" provisioning.) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper operational details - Two storage areas are needed: one for *data*, another for *metadata*. - "data" is also called the "pool"; it's just a big pool of blocks. (Docker uses the smallest possible block size, 64 KB.) - "metadata" contains the mappings between virtual offsets (in the snapshots) and physical offsets (in the pool). - Each time a new block (or a copy-on-write block) is written, a block is allocated from the pool. - When there are no more blocks in the pool, attempts to write will stall until the pool is increased (or the write operation aborted). - In other words: when running out of space, containers are frozen, but operations will resume as soon as space is available. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Device Mapper performance - By default, Docker puts data and metadata on a loop device backed by a sparse file. - This is great from a usability point of view, since zero configuration is needed. - But it is terrible from a performance point of view: - each time a container writes to a new block, - a block has to be allocated from the pool, - and when it's written to, - a block has to be allocated from the sparse file, - and sparse file performance isn't great anyway. - If you use Device Mapper, make sure to put data (and metadata) on devices! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## BTRFS principles - BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots. - The "copy-on-write" happens at the filesystem level. - BTRFS integrates the snapshot and block pool management features at the filesystem level. (Instead of the block level for Device Mapper.) - In practice, we create a "subvolume" and later take a "snapshot" of that subvolume. Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers. - These operations can be executed with the `btrfs` CLI tool. 
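For instance (the paths below are made up for illustration):

```bash
# "mkdir with Super Powers": create a subvolume holding an image's root filesystem.
btrfs subvolume create /mnt/btrfs/image-rootfs

# "cp -a with Super Powers": snapshot it to get a container's writable root filesystem.
btrfs subvolume snapshot /mnt/btrfs/image-rootfs /mnt/btrfs/container-rootfs
```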
.debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## BTRFS in practice with Docker - Docker can use BTRFS and its snapshotting features to store container images. - The only requirement is that `/var/lib/docker` is on a BTRFS filesystem. (Or, the directory specified with the `--data-root` flag when starting the engine.) .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- class: extra-details ## BTRFS quirks - BTRFS works by dividing its storage in *chunks*. - A chunk can contain data or metadata. - You can run out of chunks (and get `No space left on device`) even though `df` shows space available. (Because chunks are only partially allocated.) - Quick fix: ``` # btrfs filesys balance start -dusage=1 /var/lib/docker ``` .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Overlay2 - Overlay2 is very similar to AUFS. - However, it has been merged in "upstream" kernel. - It is therefore available on all modern kernels. (AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.) - It is simpler than AUFS (it can only have two branches, called "layers"). - The container engine abstracts this detail, so this is not a concern. - Overlay2 storage drivers generally use hard links between layers. - This improves `stat()` and `open()` performance, at the expense of inode usage. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## ZFS - ZFS is similar to BTRFS (at least from a container user's perspective). - Pros: - high performance - high reliability (with e.g. data checksums) - optional data compression and deduplication - Cons: - high memory usage - not in upstream kernel - It is available as a kernel module or through FUSE. .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- ## Which one is the best? - Eventually, overlay2 should be the best option. - It is available on all modern systems. - Its memory usage is better than Device Mapper, BTRFS, or ZFS. - The remarks about *write performance* shouldn't bother you: data should always be stored in volumes anyway! .debug[[containers/Copy_On_Write.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Copy_On_Write.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-docker-engine-and-other-container-engines class: title Docker Engine and other container engines .nav[ [Previous section](#toc-copy-on-write-filesystems) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-orchestration-an-overview) ] .debug[(automatically generated title slide)] --- # Docker Engine and other container engines * We are going to cover the architecture of the Docker Engine. * We will also present other container engines. 
.debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine external architecture ![](images/docker-engine-architecture.svg) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Docker Engine external architecture * The Engine is a daemon (a service running in the background). * All interaction is done through a REST API exposed over a socket. * On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`. * We can also use a TCP socket, with optional mutual TLS authentication. * The `docker` CLI communicates with the Engine over the socket. Note: strictly speaking, the Docker API is not fully RESTful. Some operations (e.g. dealing with interactive containers and log streaming) don't fit the REST model. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine internal architecture ![](images/dockerd-and-containerd.png) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Docker Engine internal architecture * Up to Docker 1.10, the Docker Engine was one single monolithic binary. * Starting with Docker 1.11, the Engine is split into multiple parts: - `dockerd` (REST API, auth, networking, storage) - `containerd` (container lifecycle, controlled over a gRPC API) - `containerd-shim` (per-container; does almost nothing, but allows restarting the Engine without restarting the containers) - `runc` (per-container; does the actual heavy lifting to start the container) * Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`. For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Other container engines The following list is not exhaustive. Furthermore, we limited the scope to Linux containers. We can also find containers (or things that look like containers) on other platforms like Windows, macOS, Solaris, FreeBSD ... .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## LXC * The venerable ancestor (first released in 2008). * Docker initially relied on it to execute containers. * No daemon; no central API. * Each container is managed by a `lxc-start` process. * Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container. * No notion of image (container filesystems have to be managed manually). * Networking has to be set up manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## LXD * Re-uses LXC code (through liblxc). * Builds on top of LXC to offer a more modern experience. * Daemon exposing a REST API. * Can manage images, snapshots, migrations, networking, storage.
* "offers a user experience similar to virtual machines but using Linux containers instead." .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## rkt * Compares to `runc`. * No daemon or API. * Strong emphasis on security (through privilege separation). * Networking has to be setup separately (e.g. through CNI plugins). * Partial image management (pull, but no push). (Image build is handled by separate tools.) .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## CRI-O * Designed to be used with Kubernetes as a simple, basic runtime. * Compares to `containerd`. * Daemon exposing a gRPC interface. * Controlled using the CRI API (Container Runtime Interface defined by Kubernetes). * Needs an underlying OCI runtime (e.g. runc). * Handles storage, images, networking (through CNI plugins). We're not aware of anyone using it directly (i.e. outside of Kubernetes). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## systemd * "init" system (PID 1) in most modern Linux distributions. * Offers tools like `systemd-nspawn` and `machinectl` to manage containers. * `systemd-nspawn` is "In many ways it is similar to chroot(1), but more powerful". * `machinectl` can interact with VMs and containers managed by systemd. * Exposes a DBUS API. * Basic image support (tar archives and raw disk images). * Network has to be setup manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Kata containers * OCI-compliant runtime. * Fusion of two projects: Intel Clear Containers and Hyper runV. * Run each container in a lightweight virtual machine. * Requires to run on bare metal *or* with nested virtualization. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## gVisor * OCI-compliant runtime. * Implements a subset of the Linux kernel system calls. * Written in go, uses a smaller subset of system calls. * Can be heavily sandboxed. * Can run in two modes: * KVM (requires bare metal or nested virtualization), * ptrace (no requirement, but slower). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- ## Overall ... * The Docker Engine is very developer-centric: - easy to install - easy to use - no manual setup - first-class image build and transfer * As a result, it is a fantastic tool in development environments. 
* On servers: - Docker is a good default choice - If you use Kubernetes, the engine doesn't matter .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Container_Engines.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-orchestration-an-overview class: title Orchestration, an overview .nav[ [Previous section](#toc-docker-engine-and-other-container-engines) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Orchestration, an overview In this chapter, we will: * Explain what orchestration is and why we would need it. * Present (from a high-level perspective) some orchestrators. * Show one orchestrator (Kubernetes) in action. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## What's orchestration? ![Joana Carneiro (orchestra conductor)](images/conductor.jpg) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## What's orchestration? According to Wikipedia: *Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.* -- *[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.* -- What does that really mean? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances -- - Q: do we always use 100% of our servers? -- - A: obviously not! .center[![Daily variations of traffic](images/traffic-graph.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances - Every night, scale down (by shutting down excess replicated instances) - Every morning, scale up (by deploying new copies) - "Pay for what you use" (i.e. save big $$$ here) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 1: dynamic cloud instances How do we implement this? - Crontab - Autoscaling (save even bigger $$$) That's *relatively* easy. Now, how are things for our IaaS provider? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - Q: what's the #1 cost in a datacenter? -- - A: electricity! -- - Q: what uses electricity? -- - A: servers, obviously - A: ... and associated cooling -- - Q: do we always use 100% of our servers? -- - A: obviously not! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter - If only we could turn off unused servers during the night... - Problem: we can only turn off a server if it's totally empty! (i.e.
all VMs on it are stopped/moved) - Solution: *migrate* VMs and shut down empty servers (e.g. combine two hypervisors with 40% load into 80%+0%, and shut down the one at 0%) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter How do we implement this? - Shut down empty hosts (but keep some spare capacity) - Start hosts again when capacity gets low - Ability to "live migrate" VMs (Xen already did this 10+ years ago) - Rebalance VMs on a regular basis - what if a VM is stopped while we move it? - should we allow provisioning on hosts involved in a migration? *Scheduling* becomes more complex. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## What is scheduling? According to Wikipedia (again): *In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.* The scheduler is concerned mainly with: - throughput (total amount of work done per time unit); - turnaround time (between submission and completion); - response time (between submission and start); - waiting time (between job readiness and execution); - fairness (appropriate times according to priorities). In practice, these goals often conflict. **"Scheduling" = decide which resources to use.** .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 1 - You have: - 5 hypervisors (physical machines) - Each server has: - 16 GB RAM, 8 cores, 1 TB disk - Each week, your team asks for: - one VM with X RAM, Y CPU, Z disk Scheduling = deciding which hypervisor to use for each VM. Difficulty: easy! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. Difficulty: ??? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. ![Troll face](images/trollface.png) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Exercise 3 - You have machines (physical and/or virtual) - You have containers - You are trying to put the containers on the machines - Sound familiar?
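To make this concrete before the bin packing pictures that follow, here is a toy "first fit" placement sketch; it only considers a single resource (free RAM), and all the hypervisor names and numbers are made up:

```
#!/bin/sh
# Toy first-fit scheduler over one dimension (free RAM, in GB).
# The hypervisor list and the request size are hypothetical values.
REQUEST_GB=6
for hv in node1:4 node2:10 node3:7; do
  name=${hv%%:*}
  free=${hv##*:}
  if [ "$free" -ge "$REQUEST_GB" ]; then
    echo "placing a ${REQUEST_GB} GB VM on $name"
    exit 0
  fi
done
echo "no hypervisor can fit a ${REQUEST_GB} GB VM" >&2
exit 1
```

With one dimension, a Tiny Shell Script gets us surprisingly far; with many dimensions (plus constraints and priorities), it doesn't, as the next slides illustrate.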
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[![Not-so-good bin packing](images/binpacking-1d-1.gif)] ## We can't fit a job of size 6 :( .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[![Better bin packing](images/binpacking-1d-2.gif)] ## ... Now we can! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with two resources .center[![2D bin packing](images/binpacking-2d.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with three resources .center[![3D bin packing](images/binpacking-3d.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## You need to be good at this .center[![Tangram](images/tangram.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## But also, you must be quick! .center[![Tetris](images/tetris-1.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## And be web scale! .center[![Big tetris](images/tetris-2.gif)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## And think outside (?) of the box! .center[![3D tetris](images/tetris-3.png)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: pic ## Good luck! .center[![FUUUUUU face](images/fu-face.jpg)] .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## TL;DR * Scheduling with multiple resources (dimensions) is hard. * Don't expect to solve the problem with a Tiny Shell Script. * There are literally tons of research papers written on this. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## But our orchestrator also needs to manage ... * Network connectivity (or filtering) between containers. * Load balancing (external and internal). * Failure recovery (if a node or a whole datacenter fails). * Rolling out new versions of our applications. (Canary deployments, blue/green deployments...) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Some orchestrators We are going to briefly present a few orchestrators. There is no "absolute best" orchestrator. It depends on: - your applications, - your requirements, - your pre-existing skills...
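To give a feel for what the simpler end of the spectrum looks like, here is a hedged sketch of scheduling a replicated web server with Swarm mode (one of the orchestrators presented below); the service name, image, and replica count are illustrative:

```
# Turn the local Engine into a single-node Swarm (this node becomes a manager):
docker swarm init
# Ask the orchestrator for 3 replicas of a web server, published on port 80:
docker service create --name web --replicas 3 --publish 80:80 nginx
# See where the replicas were scheduled:
docker service ps web
```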
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Nomad - Open Source project by HashiCorp. - General-purpose scheduler (not just for containers). - Great if you want to schedule mixed workloads. (VMs, containers, processes...) - Less integration with the rest of the container ecosystem. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Mesos - Open Source project in the Apache Software Foundation. - General-purpose scheduler (not just for containers). - Two-level scheduler. - Top-level scheduler acts as a resource broker. - Second-level schedulers (aka "frameworks") obtain resources from the top-level scheduler. - Frameworks implement various strategies. (Marathon = long-running processes; Chronos = run at intervals; ...) - Commercial offering through DC/OS by Mesosphere. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Rancher - Rancher 1 offered a simple interface for Docker hosts. - Rancher 2 is a complete management platform for Docker and Kubernetes. - Technically not an orchestrator, but it's a popular option. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Swarm - Tightly integrated with the Docker Engine. - Extremely simple to deploy and set up, even in multi-manager (HA) mode. - Secure by default. - Strongly opinionated: - smaller set of features, - easier to operate. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- ## Kubernetes - Open Source project initiated by Google. - Contributions from many other actors. - *De facto* standard for container orchestration. - Many deployment options; some of them very complex. - Reputation: steep learning curve. - Reality: - true, if we try to understand *everything*; - false, if we focus on what matters. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/Orchestration_Overview.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks! Questions?
![end](images/end.jpg) .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/shared/thankyou.md)] --- class: pic .interstitial[![Image separating from the next chapter](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-orchestration-an-overview) | [Back to table of contents](#toc-chapter-9) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Docker Community Slack](https://community.docker.com/registrations/groups/4316) - [Docker Community Forums](https://forums.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker Blog](https://blog.docker.com/) - [Docker documentation](https://docs.docker.com/) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Docker on Twitter](https://twitter.com/docker) - [Play With Docker Hands-On Labs](https://training.play-with-docker.com/) .footnote[These slides (and future updates) are on → https://container.training/] .debug[[containers/links.md](https://github.com/jpetazzo/container.training/tree/intro-2019-04/slides/containers/links.md)]