Containers In
Production


Daniel J Walsh

Consulting Engineer

Twitter: @rhatdan

Blog: danwalsh.livejournal.com

Email: dwalsh@redhat.com

DevOps

Most container tools concentrate on developers

Container Image Development

Most container work has concentrated on developers

docker build, docker commit
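The two developer workflows look something like this (image and container names are illustrative):

```shell
# Build an image from a Dockerfile in the current directory
docker build -t myapp:latest .

# Or: modify a running container interactively...
docker run --name scratchpad -it fedora /bin/bash
# ...then snapshot it into a new image
docker commit scratchpad myapp:hand-built
```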

Standardization of OCI Image Format

New tools to build container images can arise

Container Image

Development Tools

We don't care how you build an OCI container image

docker build

Ansible-Containers

OpenShift S2I

Dockeramp

coreutils-containers?

Don’t care as long as the output is an OCI Image!

Problem for these tools when building images:

Where do they get the storage?

Storage

Copy On Write (COW)
File Systems

COWs have some problems.

DeviceMapper, BTRFS Break Shared Memory

10 containers using huge Java JRE - 10 JRE’s in memory!

COW

OverlayFS

Fixes Shared Memory

SELinux support in Fedora coming soon to RHEL

Not a POSIX-compliant file system

COW

Write performance suffers, especially at larger scales

In production, most images should be immutable

Read/Only images offer better security

No support for network storage.

Each container server downloads gigabytes of images

Improved Storage

Shared file systems.
Run same images on multiple servers

Instantaneous updates: all containers running the same code

Get rid of COW file systems when not necessary?

Atomic/OpenShift Registry

A service watches for container images arriving at the registry

Explode the image onto an OSTree-backed disk

atomic scan the image for vulnerabilities

Approve or block the image for sharing via docker pull

Share the exploded image via a network protocol: NFS, Ceph, Gluster, OSTree?
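The registry-side workflow above might be sketched as follows. This is a hypothetical watcher script: the image name and OSTree repo path are illustrative, and `atomic scan` plus skopeo's `ostree:` transport are assumed to be available on the host:

```shell
# Hypothetical handler for an image arriving at the registry
IMAGE=registry.example.com/myapp:latest

# Scan the image for known vulnerabilities;
# block it from sharing if the scan fails
atomic scan "$IMAGE" || exit 1

# Explode the approved image onto OSTree-backed storage,
# which can then be exported over NFS/Gluster/Ceph
skopeo copy "docker://$IMAGE" "ostree:myapp@/ostree/repo"
```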

System Containers

On Atomic Host, software is shipped as container images

kubernetes requires etcd, flanneld

etcd, flanneld start before docker to set up the network

These containers can be run with read/only images

docker has problems with container priority

systemd doesn't

System Containers

atomic command installs system containers

Uses skopeo to pull image from registry

Uses ostree to store image layers on disk

Creates a systemd unit file and uses runc to run the container

Optionally use runc to wrap processes in a container

Run as a chroot? Or just use system containers to install content on host

		  atomic install --system etcd
		  systemctl enable etcd
		  systemctl start etcd
		  
		  atomic install --system flannel
		  systemctl enable flannel
		  systemctl start flannel
		

Systemd can make sure etcd starts before flannel, which starts before docker

Even docker will run as a
System Container

		  atomic install --system system-docker
		  systemctl enable system-docker
		  systemctl start system-docker
		

Simple Image Signing Goals

Allow users to PGP-sign images in a look-aside cache

Multiple people/companies can sign same image

Signatures can be stored in Atomic/OpenShift Registry

Signatures can be stored in any web server

Allow user to "sign" content from docker.io

Build policy/rules engine to control which images/registries are trusted

atomic CLI, skopeo and docker pull only containers that you trust
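The policy/rules engine is driven by a host-wide configuration file, `/etc/containers/policy.json`. A minimal sketch, assuming an illustrative registry name and key path (writing it requires root, so treat this as a config fragment, not a script to run as-is):

```shell
# Illustrative /etc/containers/policy.json: accept only images from
# registry.example.com signed with our GPG key; reject everything else.
cat > /etc/containers/policy.json << 'EOF'
{
  "default": [ { "type": "reject" } ],
  "transports": {
    "docker": {
      "registry.example.com": [
        { "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/containers/example.gpg" }
      ]
    }
  }
}
EOF
```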

Signing Images
https://youtu.be/0yoQu-YylwA

Managing Trust
https://youtu.be/93-71phWiOg

Introducing the OCID effort

Components needed by OpenShift to run containers in the Kubernetes workflow

Container image transport

Storage of Images/COW

Open Container Initiative Runtime

Container Management API

Container image transport

Skopeo

Greek for remote viewing

Used by the atomic CLI
View container image JSON data on a registry

Added ability to pull and push images from registry

Pull/Verify "Simple Signing" signatures

Worked with CoreOS to split the Go library out from the CLI.
https://github.com/containers/image
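Typical skopeo usage, with a docker.io image as the example:

```shell
# Inspect an image's JSON metadata on a remote registry
# without pulling the layers
skopeo inspect docker://docker.io/library/fedora:latest

# Copy the image from the registry into a local directory
skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora
```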

CoreOS investigating using it with RKT?

projectatomic/docker uses the library for verification of simple signing

Storage of Images/COW

atomic mount fedora /mnt

Allow other tools to work with storage besides docker

graphc, graphtool, cowman, storetool

Split docker graphdriver code out into an independent library/CLI
https://github.com/containers/storage

Add support for OSTree and network storage (NFS, Gluster, Ceph)

CoreOS may potentially use containers/storage as its back end.

Open Container Initiative
Runtime

Furthest-along component of OCID

OCI Specification

runc - default implementation

Support other runtimes as they develop

docker 1.11 uses runc as the default back end

ocitool https://github.com/opencontainers/ocitools

OCI runtime specification tools

ocitool generates a specification / launches runc
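A sketch of that flow, assuming an illustrative bundle directory and container name (exact flags and output behavior vary between ocitools versions):

```shell
# Prepare an OCI bundle directory containing a rootfs/
cd /containers/mybundle

# Generate a runtime config.json for the bundle
ocitools generate --args /bin/sh > config.json

# Launch the bundle with runc (the default OCI runtime)
runc run mycontainer
```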

OCID Container Management API

Open Container Initiative Daemon
(I prefer to call it OCD :^).

Renamed to CRI-O

https://github.com/kubernetes-incubator/cri-o

Implements Kubelet Container Runtime Interface
The Kubernetes interface for launching/running pods

CRI-O

OpenShift tells Kubernetes to execute pod

Kubernetes communicates with ocid

ocid pulls the image using skopeo/image

ocid stores the image on disk using containers/storage

ocid runs the container/pod using runc

Standards-based alternative to docker/rkt

CRI-O

kpod - management tool for administering CRI-O storage and pods
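A hypothetical admin session with kpod; the subcommands shown mirror docker's CLI, and the image name is illustrative:

```shell
# List images in CRI-O's containers/storage
kpod images

# List running containers/pods
kpod ps

# Remove an unused image
kpod rmi myapp:old
```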

Plan to move OpenShift to OCID by default

ocid package now available in Fedora Rawhide (f26)

Conclusion

Containers in production will focus on system containers for system services, and kubernetes for clustered applications.

Don't let this be you.

Questions?