Playing with Kaniko and a Kubernetes-internal Docker registry

The general idea here is to deploy a Docker registry inside Kubernetes, use it to store the images we build in the cluster, and serve them back to the same cluster.

A few considerations:

  • We are not going to secure the registry; the presumption is that your k8s cluster nodes run in a private network. If you want to pull images securely, you have to consider a few more steps (TLS certificates and authentication).

  • The Docker registry will be deployed as a StatefulSet. Make sure that PVCs with a persistent storage class are working in your cluster.

  • For building the images we will use Kaniko.

  • For deploying the Docker registry, I prefer to use my own config rather than the Helm chart.

Setting up a private Docker registry in Kubernetes

Let's first create a new namespace, to keep the Docker registry separated:

000-namespace.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: registry

kubectl apply -f 000-namespace.yaml

Then we can set up the configuration and the Secret we are going to use for the registry.

001-config.yaml

---
apiVersion: v1
data:
  config.yml: |-
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
    http:
      addr: :5000
      headers:
        X-Content-Type-Options:
        - nosniff
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
    version: 0.1
kind: ConfigMap
metadata:
  labels:
    app: docker-registry
  name: docker-registry-config
  namespace: registry
---
apiVersion: v1
data:
  haSharedSecret: U29tZVZlcnlTdHJpbmdTZWNyZXQK
kind: Secret
metadata:
  labels:
    app: docker-registry
  name: docker-registry-secret
  namespace: registry
type: Opaque

kubectl apply -f 001-config.yaml
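The haSharedSecret value is simply a base64-encoded string (the one above decodes to the placeholder SomeVeryStringSecret). You can generate your own, for example:

```shell
# Encode a shared secret for the Secret manifest.
# Replace SomeVeryStringSecret with your own value; note that
# echo appends a trailing newline, which is part of the encoding.
echo "SomeVeryStringSecret" | base64
# -> U29tZVZlcnlTdHJpbmdTZWNyZXQK
```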

Now it is time to deploy the Docker registry as a StatefulSet:

002-registry-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: docker-registry
  namespace: registry
spec:
  selector:
    matchLabels:
      app: docker-registry
  serviceName: "docker-registry"
  replicas: 1
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - command:
        - /bin/registry
        - serve
        - /etc/docker/registry/config.yml
        env:
        - name: REGISTRY_HTTP_SECRET
          valueFrom:
            secretKeyRef:
              key: haSharedSecret
              name: docker-registry-secret
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        image: registry:2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 5000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: docker-registry
        ports:
        - containerPort: 5000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 5000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/registry/
          name: data
        - mountPath: /etc/docker/registry
          name: docker-registry-config
      volumes:
      - configMap:
          name: docker-registry-config
        name: docker-registry-config

  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 20Gi

Make sure you have defined a storage class named fast, or replace volume.beta.kubernetes.io/storage-class: fast in the StatefulSet descriptor with a storage class that exists in your cluster.
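If no such class exists yet, one can be defined. The provisioner is cluster-specific; the sketch below uses the GCE persistent-disk provisioner purely as an illustration, so substitute whatever your environment provides:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
# Provisioner and parameters depend on your cluster;
# kubernetes.io/gce-pd with pd-ssd is shown only as an example.
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```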

kubectl apply -f 002-registry-statefulset.yaml

One last step for the registry is to define a Service and expose it on a node port, so that the Docker daemon on each node can pull the images from localhost:NODE_PORT.

003-registry-svc.yaml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: docker-registry
  name: docker-registry
  namespace: registry
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: registry
    nodePort: 31500
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: docker-registry
  sessionAffinity: None
  type: NodePort

kubectl apply -f 003-registry-svc.yaml

Here we have exposed the registry on node port 31500.

Configuring kaniko build

Let's now build an image without exposing the Docker socket. For this we will use Project Kaniko. We will use the preconfigured namespace playground to run the build. First we will define a few ConfigMaps used by the build. We will play with a small application written in Go and execute a multi-stage Docker build: the first stage compiles the application on a golang base image, and the second stage runs it on an alpine image.

000-Dockerfile-configmap.yaml

---
apiVersion: v1
data:
  Dockerfile: |-
    FROM golang:latest
    WORKDIR /go/src
    COPY main.go .
    RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

    FROM alpine:latest
    RUN apk --no-cache add ca-certificates
    WORKDIR /root/
    COPY --from=0 /go/src/app .
    EXPOSE 8080
    CMD ["./app"]
kind: ConfigMap
metadata:
  name: dockerfile
  namespace: playground

kubectl apply -f 000-Dockerfile-configmap.yaml

The actual source we will also define in a ConfigMap, which we will mount into the builder pod:

001-maingo-configmap.yaml

---
apiVersion: v1
data:
  main.go: |-
    package main

    import (
      "net/http"
      "strings"
    )

    func sayHello(w http.ResponseWriter, r *http.Request) {
      message := r.URL.Path
      message = strings.TrimPrefix(message, "/")
      message = "Hello " + message
      w.Write([]byte(message))
    }
    func main() {
      http.HandleFunc("/", sayHello)
      if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
      }
    }

kind: ConfigMap
metadata:
  name: maingo
  namespace: playground

kubectl apply -f 001-maingo-configmap.yaml

Now let's run the build:

002-kaniko-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: playground
spec:
  containers:
  - args:
    - --dockerfile=/go/src/Dockerfile
    - --context=/go/src
    - --destination=docker-registry.registry.svc.cluster.local:5000/test/app:1.0
    - --insecure
    image: gcr.io/kaniko-project/executor:latest
    imagePullPolicy: Always
    name: kaniko
    volumeMounts:
    - mountPath: /go/src/Dockerfile
      name: dockerfile
      subPath: Dockerfile
    - mountPath: /go/src/main.go
      name: maingo
      subPath: main.go
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes:
  - configMap:
      name: dockerfile
    name: dockerfile     
  - configMap:
      name: maingo
    name: maingo     

kubectl apply -f 002-kaniko-pod.yaml

This will deploy the pod and run the build. You can monitor the build process by running:

kubectl logs -n playground -f kaniko 

After everything has executed successfully, the builder pod will have pushed the image to the local registry as docker-registry.registry.svc.cluster.local:5000/test/app:1.0.

Afterwards, to use the same image when deploying a pod, reference it as:

...
  image: localhost:31500/test/app:1.0
...
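Putting it together, a minimal pod using the freshly built image could look like this (the pod name is illustrative):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: playground
spec:
  containers:
  - name: app
    # The image is pulled by the node's Docker daemon through the
    # NodePort, hence localhost:31500 rather than the cluster DNS name.
    image: localhost:31500/test/app:1.0
    ports:
    - containerPort: 8080
```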

~Enjoy