
Jenkins on Kubernetes - Building Docker Images

Inception

When I first started migrating Jenkins to run on Kubernetes, I used the “official” Helm chart, and it got a little confusing.

One would assume that if you are planning to run Jenkins on Kubernetes, you may be in the business of building Docker images. /sarcasm

I expected the slave pod to allow me to build Docker images out of the box, which it didn’t. However, after giving it some thought, it makes sense why you wouldn’t want to allow it automatically. The solution is not 100% elegant and may raise a few security concerns, but it’s what we have to work with at the moment.

Whoever wrote the chart, and whoever created the Kubernetes plugin for Jenkins, deserve a gold star sticker for making it secure by default.

How to access Docker from within a running container

To build Docker images from within a Docker container, you have to mount the host’s Docker socket as a volume.

docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
         -v /var/jenkins_home \
         -v /var/run/docker.sock:/var/run/docker.sock \
         jenkins

The idea is to give the container access to the host’s Docker daemon. When running in a Kubernetes cluster, this means access to the node’s Docker service.
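A quick way to sanity-check the idea (this assumes Docker is installed on the machine you are on): mount the socket into a throwaway container and ask for the daemon’s version. The client inside the container ends up talking to the host’s daemon.

```shell
# The docker CLI inside the container talks to the host's daemon
# through the mounted socket, so the Server version printed here
# is the host daemon's version.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:latest docker version
```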

Mounting the Docker socket from the Jenkins agent Pods

When you work with dynamic Jenkins agents in Kubernetes, you have to specify a pod template for the build. Each pod will run the jnlp-slave container along with any additional containers you need for the build.

Here is an example of a Jenkinsfile that includes the pod template:

pipeline {
  agent {
    kubernetes {
      label 'spring-petclinic-demo'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: cd-jenkins
  containers:
  - name: maven
    image: maven:latest
    command:
    - cat
    tty: true
    volumeMounts:
      - mountPath: "/root/.m2"
        name: m2
  volumes:
    - name: m2
      persistentVolumeClaim:
        claimName: m2
"""
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh """
             mvn package -DskipTests
          """
        }
      }
    }
    stage('Test') {
      steps {
        container('maven') {
          sh """
             mvn test
          """
        }
      }
    }
  }
}

The pod template sets jnlp as the defaultContainer, and the “containers” section lists the additional containers we need for our build pipeline.

To be able to build Docker images, you need to add two things:

  1. A container that has the docker command installed.
  2. A mount for the now-infamous /var/run/docker.sock file.

The first thing we need to add is a hostPath volume to the pod template:

  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
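As an optional refinement (my own addition, not part of the chart), a hostPath volume can declare its expected type, so the pod fails fast if the socket doesn’t exist on the node:

```yaml
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket   # reject scheduling onto a node where the socket is missing
```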

Next, we need a new container that will build the image from the Dockerfile. For simplicity, I chose the “docker:latest” image, and set the mountPath to use the docker-sock volume:

  - name: docker
    image: docker:latest
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock

Lastly, I added a simple build step to build the image:

    stage('Push') {
      steps {
        container('docker') {
          sh """
             docker build -t spring-petclinic-demo:$BUILD_NUMBER .
          """
        }
      }
    }
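Note that this stage only builds the image; it lives in that node’s local Docker cache. To actually push it somewhere, you would log in to a registry, tag, and push. Here is a hedged sketch of what that could look like; the registry host (registry.example.com) and the ‘docker-registry’ credentials ID are placeholders you would replace with your own, and it relies on the Credentials Binding plugin:

```groovy
    stage('Push') {
      steps {
        container('docker') {
          // 'docker-registry' is a hypothetical Jenkins credentials ID;
          // registry.example.com is a placeholder registry host.
          withCredentials([usernamePassword(credentialsId: 'docker-registry',
                                            usernameVariable: 'DOCKER_USER',
                                            passwordVariable: 'DOCKER_PASS')]) {
            sh '''
               echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin registry.example.com
               docker tag spring-petclinic-demo:$BUILD_NUMBER registry.example.com/spring-petclinic-demo:$BUILD_NUMBER
               docker push registry.example.com/spring-petclinic-demo:$BUILD_NUMBER
            '''
          }
        }
      }
    }
```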

The complete Jenkinsfile now looks like this:

pipeline {
  agent {
    kubernetes {
      label 'spring-petclinic-demo'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: cd-jenkins
  containers:
  - name: maven
    image: maven:latest
    command:
    - cat
    tty: true
    volumeMounts:
      - mountPath: "/root/.m2"
        name: m2
  - name: docker
    image: docker:latest
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: m2
      persistentVolumeClaim:
        claimName: m2
"""
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh """
             mvn package -DskipTests
          """
        }
      }
    }
    stage('Test') {
      steps {
        container('maven') {
          sh """
             mvn test
          """
        }
      }
    }
    stage('Push') {
      steps {
        container('docker') {
          sh """
             docker build -t spring-petclinic-demo:$BUILD_NUMBER .
          """
        }
      }
    }
  }
}

We now have a successful build with a fresh new Docker image.

[Pipeline] sh
+ docker build -t spring-petclinic-demo:9 .
Sending build context to Docker daemon  56.09MB

Step 1/4 : FROM openjdk:10
 ---> b11e88dd885d
Step 2/4 : RUN export
 ---> Using cache
 ---> a235bc6529b1
Step 3/4 : WORKDIR /app
 ---> Using cache
 ---> 4ec333c10fa5
Step 4/4 : COPY target/*.jar ./
 ---> bc94e33c33be
Successfully built bc94e33c33be
Successfully tagged spring-petclinic-demo:9

How to get up and running?

The above solution only takes a few minutes to implement, so if you decide to go this route, you are pretty much all set.

There are other options out there, like GCP’s gcloud builds command, but they are usually platform-specific and may not be available to you.
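For instance, on GKE the entire build could be handed off to Cloud Build, which needs no socket mount at all, since the image is built off-cluster. A sketch, assuming a configured gcloud and with PROJECT_ID as a placeholder:

```shell
# Off-cluster build: Cloud Build builds the image from the current
# directory and pushes it to Google Container Registry.
# PROJECT_ID is a placeholder; BUILD_NUMBER comes from Jenkins.
gcloud builds submit --tag gcr.io/PROJECT_ID/spring-petclinic-demo:$BUILD_NUMBER .
```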
