Building good docker images

The docker registry is bursting at the seams. At the time of this writing, a search for "node" gets just under 1000 hits. How does one choose?

What constitutes a good docker image?

This is a subjective matter, but I have some criteria for a docker image that I consider good:

  • working. Some examples:

    • an Android SDK image should be able to compile a project without first applying updates to the container.
    • a MySQL container should expose a way to bootstrap the server with a database and user (for example, via environment variables; a sketch follows after this list).
  • minimal. The beauty of containers is the ability to sandbox an application (if not for security, then to avoid clutter on the host file system). Whereas I could install node.js on my host system or pollute it with a Java Development Kit, I would rather pay a slight premium in disk space or performance to keep them cordoned off from the rest of my files. With that said, it is obviously preferable that these penalties be as small as possible. The docker image should serve its purpose, having exactly what's necessary for it to function but nothing else. Following this principle, the image is more extensible and has fewer things that can break.
  • whitebox. In the case of docker images, this means having a published Dockerfile. That way I can evaluate what went into creating the image and tinker with it if I want to.
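
As an illustration of the "working" criterion, here is a rough sketch of bootstrapping a MySQL container through environment variables. The variable names below follow the convention used by the official mysql image and are an assumption here; any particular image may expose a different mechanism:

    docker run -d \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=myapp \
        -e MYSQL_USER=myapp \
        -e MYSQL_PASSWORD=myapp-password \
        mysql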

Unfortunately the docker registry does not make it easy to discover "good" images or even to judge any particular image. It's often a matter of docker pull <...> and then wondering why the 10 megabyte node binary needs 10 file system layers and, ultimately, a 700 megabyte virtual environment.

Building good docker images

Because there is no consensus on "good" docker images, and because the barrier to entry for adding images to the docker registry is very low, the situation is straight out of xkcd #927: everybody just does his or her own thing. The introduction of "official" language-specific docker development environments is a good start. I was happy to see that some of my pet practices (listed below) showed up in those images. However, the "thousand node images" situation probably won't improve much until the docker registry works on its discovery and evaluation mechanisms.

With that said, here are the Dockerfile practices which I've settled on as best. I am no expert (I don't think anybody is at this early point in docker's lifetime), so discussion and feedback are welcome.

  • Base images off of debian

    At the time of this writing, ubuntu:14.04 is 195 MB while debian:wheezy is 85 MB, but the extra hundred megabytes of Ubuntu doesn't buy you anything of value (that I'm aware of). In some extreme cases, it may even be possible to base your image off of the 2 MB busybox image. This is probably only practical with a statically linked binary. An example of a busybox-based docker image is progrium/logspout, which clocks in at a respectable 14 MB.
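
    For the statically linked case, such an image can be nearly trivial. A minimal sketch, assuming a statically linked binary named myapp (a hypothetical name) built on the host:

    FROM busybox
    # myapp must be statically linked: busybox ships essentially no shared libraries
    COPY myapp /usr/bin/myapp
    ENTRYPOINT ["/usr/bin/myapp"]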

  • Don't install build tools without good reason

    Build tools take up a lot of space, and building from source is often slow. If you're just installing somebody else's software, it's usually not necessary to build from source and it should be avoided. For instance, it is not necessary to install python, gcc, etc. to get the latest version of node.js up and running on a Debian host. There is a binary tarball available on the node.js downloads page. Similarly, redis can be installed through the package manager.

    There are at least a few good reasons to have build tools:

    • you need a specific version (e.g. redis is pretty old in the Debian repositories).
    • you need to compile with specific options.
    • you will need to npm install (or equivalent) some modules which compile to binary.

    In the second case, think really hard about whether you should be doing that. In the third case, I suggest installing the build tools in another "npm installer" image, based on the minimal node.js image; a sketch follows.
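
    A rough sketch of such an "npm installer" image, assuming a minimal node.js base image tagged my/node (a hypothetical name):

    FROM my/node
    # build-essential provides gcc and make; python is required by node-gyp
    RUN apt-get update \
        && apt-get install -y build-essential python \
        && rm -rf /var/lib/apt/lists/*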

  • Don't leave temporary files lying around

    The following Dockerfile results in an image size of 109 MB:

    FROM debian:wheezy
    RUN apt-get update && apt-get install -y wget
    RUN wget http://cachefly.cachefly.net/10mb.test
    RUN rm 10mb.test
    

    On the other hand, this seemingly-equivalent Dockerfile results in an image size of 99 MB:

    FROM debian:wheezy
    RUN apt-get update && apt-get install -y wget
    RUN wget http://cachefly.cachefly.net/10mb.test && rm 10mb.test
    

    Thus it seems that if you leave a file on disk between steps in your Dockerfile, the space will not be reclaimed when you delete the file: each instruction creates a new file system layer, so a file deleted in a later step is only masked, not removed from the layer that wrote it. It is also often possible to avoid a temporary file entirely, just piping output between commands. For instance,

    wget -O - http://nodejs.org/dist/v0.10.32/node-v0.10.32-linux-x64.tar.gz | tar zxf -
    

    will extract the tarball's contents without ever writing the tarball itself to the file system.

  • Clean up after the package manager

    If you run apt-get update in setting up your container, it populates /var/lib/apt/lists/ with data that's not needed once the image is finalized. You can safely clear out that directory to save a few megabytes.

    This Dockerfile generates a 99 MB image:

    FROM debian:wheezy
    RUN apt-get update && apt-get install -y wget
    

    while this one generates a 90 MB image:

    FROM debian:wheezy
    RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
    
  • Pin package versions

    While a docker image is immutable (and that's great), a Dockerfile is not guaranteed to produce the same output when run at different times. The problem, of course, is external state, and we have little control over it. It's best to minimize the impact of external state on your Dockerfile to the extent that it's possible. One simple way to do that is to pin package versions when installing through a package manager. Here's an example of how to do that:

    # apt-get update
    # apt-cache showpkg redis-server
    Package: redis-server
    Versions:
    2:2.4.14-1
    ...
    
    # apt-get install redis-server=2:2.4.14-1
    

    We can hope, but there is no guarantee, that the package repositories will still serve this version a year from now. However, it's undeniably valuable to explicitly show what version of the software your image depends on.
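
    In Dockerfile form, using the version string discovered above (the exact version on offer will differ by the time you build):

    FROM debian:wheezy
    RUN apt-get update \
        && apt-get install -y redis-server=2:2.4.14-1 \
        && rm -rf /var/lib/apt/lists/*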

  • Combine commands

    If you have a sequence of related commands, it is best to chain them into one RUN command. This makes for a more meaningful build cache (logically grouped steps are lumped into one cache step) and keeps the number of file system layers down (I consider this generally desirable but I don't know that it's objectively better).

    Backslashes (\) help you out here for readability:

    RUN apt-get update && \
        apt-get install -y \
            wget=1.13.4-3+deb7u1 \
            ca-certificates=20130119 \
            ...
    
  • Use environment variables to avoid repeating yourself

    This is a trick I picked up from reading the Dockerfile of the "official" node.js docker image. As an aside, this Dockerfile is great. My only criticism is that it sits on top of a huge buildpack-deps image, with all sorts of things I don't want or need.

    You can define environment variables with ENV and then reference them in subsequent RUN commands. Below, I've paraphrased an excerpt from that Dockerfile:

    ENV NODE_VERSION 0.10.32
    
    RUN curl -SLO "http://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" \
        && tar -xzf "node-v$NODE_VERSION-linux-x64.tar.gz" -C /usr/local --strip-components=1 \
        && rm "node-v$NODE_VERSION-linux-x64.tar.gz"
    



Reposted from http://jonathan.bergknoff.com/journal/building-good-docker-images


0.简介 0.1 什么是 Consul Consul是HashiCorp公司推出的开源工具,用于实现分布式系统的服务发现与配置. 这里所谓的服务,不仅仅包括常用的 Api 这些服务,也包括软件开发过程当中所需要的诸如 Rpc.Redis.Mysql 等需要调用的资源. 简而言之 Consul 就是根据 Key/Value 存储了一套所有服务的 IP/Port 集合,当你 Grpc 客户端需要请求某种服务的时候,具体的 IP 与端口不需要你自己来进行指定,而是通过与 Consul Agent 通信