Dynamic Jenkins Cloud configuration using Docker and Deployment of a website in k8s

Adarsh Saxena · Published in EntwicklersX · 7 min read · Nov 3, 2020


Dynamic Jenkins Cluster setup [header image]

About Dynamic Jenkins Slaves

A static Jenkins cluster wastes a lot of computing resources, because its slave nodes keep running even when no jobs need them. Hence, we prefer dynamic slaves, which are launched and run only when they are needed. This saves a lot of computing resources, and hence money, for an organization.

But why do we actually need different slaves?

Because we can't install and run everything on one OS. For example, we can't run Maven, Jenkins, Scala, Node apps, a testing system, etc. all on a single OS: the applications can conflict with one another, and a proper division of computing resources and storage may not be possible.

Now, how do dynamic slaves work?

First of all, every dynamic slave works on one principle: "a slave is created only at the moment you need it." In a dynamic Jenkins cluster, whenever you run a job, a new slave is created at that moment (usually with containerization technology) just to run that Jenkins job.

Jenkins cluster representation

In the image above, we can see the Jenkins master, two static slaves, and one dynamic slave. The Jenkins master is the node where Jenkins itself runs and from which all the slave nodes are controlled. Static nodes are the ones that run continuously. Dynamic slaves are created only when they are needed; for example, to run a specific job, we can create a dynamic slave, run the job using its computing resources, and then delete the slave.

Task Intro

  • Create a container image that has Linux and the other basic configuration required to run a slave for Jenkins (in this example we need kubectl to be configured).
  • When we launch a job, it should automatically start on a slave based on the label provided, for a dynamic approach.
  • Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.
  • Job1: Pull the GitHub repo automatically when a developer pushes to GitHub and perform the following operations:
  • 1. Create a new image dynamically for the application and copy the application code into that Docker image.
  • 2. Push that image to Docker Hub (a public repository). (The GitHub repo contains the application code and the Dockerfile used to create the new image.)
  • Job2 (should run on the dynamic slave of Jenkins configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:
  • 1. If launching for the first time, create a deployment of the pod using the image created in the previous job. Else, if the deployment already exists, do a rollout of the existing pod so there is zero downtime for the user.
  • 2. If the application is created for the first time, expose the application. Else, don't expose it again.

Task Description/Working

Step 1: Configure the Docker Host and Client, useful for creating dynamic slaves

To configure the Docker server, you need to edit the /usr/lib/systemd/system/docker.service file.

$ vim /usr/lib/systemd/system/docker.service

In this file, we have added the option "-H tcp://0.0.0.0:4243" to the ExecStart line so that the Docker daemon also listens on TCP port 4243. After making this change, you need to restart the service. To do that, run the following two commands:

systemctl daemon-reload
systemctl restart docker
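For reference, a rough sketch of what the edited ExecStart line inside docker.service might look like (the exact default options vary by distribution and Docker version, so treat this only as an illustration):

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243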

Now, the Docker server is ready to accept requests. Next, you need to configure the Docker client.

On the client system, you need to export a variable holding the IP of the Docker host. That variable is DOCKER_HOST.

export DOCKER_HOST=192.168.43.233:4243

Remember that on the client system the Docker services are stopped, so that it acts purely as a client. Now, every time you run a docker command, the client first checks the DOCKER_HOST variable and then contacts that host.

Now, it's good practice to first check whether the Docker host and client are set up properly and working correctly. To check their connectivity, run any docker command on the client system and verify that it takes effect on the Docker host.
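As a minimal sketch of such a check (the container name nginx-test is just an illustrative choice):

# On the client: should print details of the remote Docker daemon
docker info

# Start a test container through the remote host
docker run -d --name nginx-test nginx

# On the Docker host itself: the container should show up in the local list
docker ps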

Also, note that the firewall may block the connection between the Docker host and the client, so you might need to add a firewall rule for that port. Or, at least for now, just stop the firewall using the command: systemctl stop firewalld. But I don't recommend stopping the firewall, since it might create security issues.
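A sketch of the firewalld rule instead, run on the Docker host (adjust the port if you chose a different one):

firewall-cmd --permanent --add-port=4243/tcp
firewall-cmd --reload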

Step 2: Configure the Cloud in Jenkins

Before configuring the dynamic cloud in Jenkins, you need to install the Docker plugin in Jenkins.

Docker Plugin in Jenkins

Note: The system where Jenkins is running is acting as the Docker client.

To configure clouds in Jenkins, go to "Manage Jenkins" > "Manage Nodes and Clouds" > "Configure Clouds".

Configure Clouds in Jenkins

Note that in the image above (while configuring the cloud), the Docker Host URI is the IP of the Docker host that we set up in the first step, with tcp as the protocol and 4243 as the port, which we already configured on the Docker host (e.g. tcp://192.168.43.233:4243).

Adding the Docker agent template

In the image above, you can see that we have used the Docker image theadarshsaxena/jenssh. I created this image just for this exercise; it is configured with SSH and Kubernetes (kubectl). You can also create your own image with a Dockerfile and configure it for Kubernetes according to your use case, for example against a cluster from EKS or GKE.

The SSH credentials are the credentials for the Docker image template that you are using.
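The exact Dockerfile behind theadarshsaxena/jenssh isn't reproduced in this article, but as a rough sketch under my own assumptions (the base image, package names, kubectl version, and sample password are all illustrative; harden them for real use), an SSH-enabled slave image with kubectl might be built along these lines:

# Illustrative sketch only, not the actual theadarshsaxena/jenssh Dockerfile
FROM centos:7

# SSH server plus Java, so Jenkins can connect over SSH and run the agent
RUN yum install -y openssh-server java-1.8.0-openjdk && \
    ssh-keygen -A && \
    echo "root:redhat" | chpasswd

# kubectl, so the slave can talk to the Kubernetes cluster
RUN curl -LO https://dl.k8s.io/release/v1.19.0/bin/linux/amd64/kubectl && \
    chmod +x kubectl && mv kubectl /usr/local/bin/

# Copy in a kubeconfig for your cluster (path and filename are assumptions)
# COPY config /root/.kube/config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]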

Creating the Job1 in Jenkins

The role of this job is to pull the GitHub repo automatically when someone pushes to GitHub and to perform the following operations:

  1. Create a new image dynamically for the application and copy the application code into that Docker image.
  2. Push that image to hub.docker.com (a public repository).

So, let's start by creating Job1 in Jenkins.

Specify the GitHub Repository where the Dockerfile is present

In the GitHub repo (Jenkins-cloud-setup), we have stored the Dockerfile used to build the Docker image dynamically from Jenkins. A rough sketch of what such an application Dockerfile can look like is shown below.
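The actual Dockerfile lives in the repository; purely as a hedged illustration (the httpd base image and the index.html filename are assumptions, not necessarily what the repo uses), it could be as simple as:

# Illustrative application Dockerfile: serve the repo's web content with Apache
FROM httpd:2.4

# Copy the application code (e.g. HTML pages) into the web server's document root
COPY ./index.html /usr/local/apache2/htdocs/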

Enable GITScm Polling

By enabling GITScm polling, the job is triggered each time code is pushed to the repository. Also remember to enable the GitHub hook trigger, which I have not shown here. A sketch of the shell build step this job can run is shown below.
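The article doesn't reproduce Job1's build step itself; as a hedged sketch, assuming the image name theadarshsaxena/webapp (substitute your own Docker Hub repository) and that the node is already logged in with docker login, the "Execute shell" step might look like this:

# Build the application image from the Dockerfile in the cloned repo
docker build -t theadarshsaxena/webapp:latest .

# Push it to Docker Hub (a public repository)
docker push theadarshsaxena/webapp:latest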

After Job1 executes successfully, you can check the console output.

Hence, the image is built and pushed successfully. You can also check your Docker Hub repository to confirm whether the image is there.

Here you can also see that the image shows up

Creating Job2 in Jenkins

The work of this job is to launch the application on top of the Kubernetes cluster, performing the following operations:

  1. If launching for the first time, create a deployment of the pod using the image created in the previous job. Else, if the deployment is already present, just do a rollout of the existing pod so there is zero downtime for the user.
  2. If the application is created for the first time, expose the application. Else, don't expose it again. (A sketch of a shell step implementing this logic appears after the next image.)

Select the mycloud option that comes up when you start typing the label
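The exact script isn't shown in the article. As a hedged sketch, assuming the deployment is called mywebapp and the image is theadarshsaxena/webapp:latest (both names are illustrative), Job2's shell step, run on the dynamic slave that already has kubectl configured, might look like this:

# If the deployment already exists, roll it out; otherwise create and expose it
if kubectl get deployment mywebapp > /dev/null 2>&1
then
    # Existing deployment: restart the pods so the freshly pushed :latest image
    # is pulled again, giving a rolling update with zero downtime
    kubectl rollout restart deployment/mywebapp
    kubectl rollout status deployment/mywebapp
else
    # First launch: create the deployment and expose it only this once
    kubectl create deployment mywebapp --image=theadarshsaxena/webapp:latest
    kubectl expose deployment mywebapp --port=80 --type=NodePort
fi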

On successful deployment, you will see a page something like this:

Future Scope

Note: The Docker image that I used, https://hub.docker.com/r/theadarshsaxena/jenssh, is preconfigured with Kubernetes for my use case. For your own use case, you can configure the image differently; for example, if you want to work at a larger scale or at a production level, you can configure the image against a managed Kubernetes service such as GKE, EKS, or AKS.

For more such articles, follow me on Medium, or connect with me on LinkedIn.
