Creating Dynamic CI Jobs

How the Jenkins Job Builder tool can create and provision jobs in Jenkins using simple, concise YAML templates

Cristian

DevOps at Softvision
Cristi joined Softvision in the summer of 2007 as a junior QA engineer. After two years of QA work, he decided a major change was needed and opted for a sysadmin (and, later, DevOps) position, working on the things he found most appealing: scripting and everything that could be automated to maintain systems and help developers and QA teams do their jobs more quickly and efficiently. He enjoys reading about new tools and ways to automate things in the DevOps world, and in his free time he likes tinkering with embedded systems, both hardware and software.

In an ideal world, we would like to automate everything so that human interaction is reduced to a minimum, maybe even to touching a single button. In our nomenclature, we call jobs dynamic when we refer to CI builds triggered automatically by developers' code changes pushed to an SCM repository.

This is not the only dynamic part when we talk about CI, but more importantly, each job is magically created when a new branch is pushed to the SCM repository AND magically destroyed when that branch gets deleted from the remote source.

Jenkins dynamic jobs engine overview


How does it work?

The whole process is powered by the Jenkins Job Builder (JJB) tool, which can create and provision jobs in Jenkins using simple, concise YAML templates.

The process of dynamic job creation is pretty straightforward: a cron job installed on the Jenkins master machine runs the JJB script engine via a custom bash script, performing the following actions:

  1. Pulls the JJB templates git repository, which contains the source templates for creating all Jenkins jobs (both static and dynamic).
  2. Pulls the predefined list of git repositories for which we need dynamic jobs to be created.
  3. Scans for active branches in the git repositories mentioned above and computes their age.
  4. Based on set thresholds (which can be changed at any time), it determines which branches match the retention policy.
  5. If valid branches are found, the dynamic job YAML templates from step 1 are sourced and transformed using the detected branches' metadata, then moved into a staging area directory.
  6. Templates for static jobs are managed by this script too, but they are passed directly into the staging area without any transformation.
  7. Job views are processed next using the same mechanism, with git repository names used to create the appropriate filtering.
  8. Finally, when everything has been processed and put into the staging area, the JJB tool is invoked against it.
  9. Next, JJB takes over and does the following:
    1. Parses all the YAML files found in the staging area directory and creates the jobs in Jenkins.
    2. Builds a cache of all the above data so that whenever something changes, it knows to update only the affected item(s).
    3. Based on the cache, deletes old/obsolete Jenkins jobs when required – this may happen when branches get old (no recent activity per the retention policy thresholds) or are deleted.

The JJB engine also tries to detect the technology used in the project's git repository using these simple rules:

  1. If a pom.xml or build.gradle file is found, the project is classified as Java-based.
  2. If a package.json or gulpfile.js file is found, the project is classified as NodeJS-based.
  3. If none of the above, it's classified as generic.
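The rules above amount to a few file-existence checks. A minimal sketch in bash (the function name is hypothetical; the real engine may implement this differently):

```shell
#!/usr/bin/env bash
# Sketch of the project-type detection rules described above.
detect_project_type() {
  local dir="$1"
  if [[ -f "$dir/pom.xml" || -f "$dir/build.gradle" ]]; then
    echo "java"
  elif [[ -f "$dir/package.json" || -f "$dir/gulpfile.js" ]]; then
    echo "nodejs"
  else
    echo "generic"
  fi
}
```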

Based on the detected type, it will use dedicated templates with type-specific requirements, which helps us better organize the job structure.
More rules can be added in the future if needed!

What’s the result?

  1. Jenkins views are generated per git repository name (in the future, folders will also be used to group related objects)
  2. Jobs will be created on the fly as changes are detected on the git repositories; each job will be named <git_repo_name>_<branch_name> and filtered into the corresponding view
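The naming convention can be illustrated with a tiny helper. Note that the slash-to-dash substitution for branch names is an assumption of this sketch (Jenkins job names cannot contain slashes), not something stated by the original script:

```shell
#!/usr/bin/env bash
# Hypothetical helper illustrating the <git_repo_name>_<branch_name>
# convention; replacing "/" with "-" is an assumption of this sketch.
job_name() {
  local repo="$1" branch="$2"
  echo "${repo}_${branch//\//-}"
}
```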

What do developers need to do in order to benefit from this?

In order for something to happen inside the Jenkins job assigned to the branch a developer is working on, he/she needs to add a file named jenkins.sh containing all the build steps. The Jenkins job template is designed to call this script, thus handing build control over to the development team.

Snippet from the Dynamic Java Template
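As an illustration, a dynamic template of this kind might look roughly like the following JJB YAML — all names, URLs, and values here are assumptions made for the sketch, not the actual template:

```yaml
# Hypothetical dynamic job template -- {repo} and {branch} are filled
# in by the glue script for every accepted repository/branch pair.
- job-template:
    name: '{repo}_{branch}'
    description: 'Dynamic job for branch {branch} of {repo}'
    scm:
      - git:
          url: 'git@git.example.com:{repo}.git'
          branches:
            - 'origin/{branch}'
    triggers:
      - pollscm:
          cron: 'H/5 * * * *'
    builders:
      - shell: './jenkins.sh'
```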


This file is a simple shell script (bash) that can contain any valid commands to be executed. For example, to build and test a Java project using Maven, one could simply put the following inside the jenkins.sh file.

Java Example
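A minimal jenkins.sh for a Maven project might look like this (the Maven goals are illustrative; adjust them to your project):

```shell
#!/usr/bin/env bash
# jenkins.sh -- illustrative build steps for a Maven project
set -euo pipefail

# Compile, run the unit tests, and package the artifact
mvn clean verify
```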

NodeJS Example
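Similarly, a NodeJS project's jenkins.sh might be (the script names depend on what your package.json defines; illustrative only):

```shell
#!/usr/bin/env bash
# jenkins.sh -- illustrative build steps for a NodeJS project
set -euo pipefail

npm install   # install dependencies
npm test      # run the test suite defined in package.json
```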

What happens next?

  1. When a new branch or changes to it are detected, a new Jenkins job is created if it does not already exist
  2. The Jenkins job runs for the first time, or again at the next SCM polling event (if changes are detected on the assigned branch)
  3. The job runs whatever is stated in the jenkins.sh file
  4. It can notify the corresponding communication channel about the Jenkins job run state (failed, success, unstable)

Other things worth mentioning about the internal workings of this setup

There are three main pieces involved in this process:

  • The Jenkins Job Builder tool: https://docs.openstack.org/infra/jenkins-job-builder/
  • A custom-made shell script which uses the Jenkins Job Builder tool and some extra glue logic
  • Puppet, which manages some of the shell script's internal variables that control its behavior

Let’s take each of the above and discuss further.

Jenkins Job Builder

This takes care of all the intricate details of constructing the internal XML structure which renders the real job seen by end users. It's built in Python and exposes a very simple yet powerful YAML interface, so no coding is involved. The official documentation is quite good and includes examples, so it won't be detailed here.

What’s worth mentioning here is the real advantage of using an external tool for creating and maintaining all of the Jenkins job configurations (rather than XML, which gets really ugly over time). The tool also maintains a local cache of all created jobs and, based on that, can determine whether new ones were added or old ones should be deleted.
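For reference, the two most common JJB invocations look like this (the staging path and configuration file location are illustrative):

```shell
# Dry run: render the job XML locally without touching Jenkins
jenkins-jobs test /path/to/staging-area

# Apply: create/update jobs and, with --delete-old, remove jobs that
# are no longer defined in the YAML (the cleanup behavior described above)
jenkins-jobs --conf /etc/jenkins_jobs/jenkins_jobs.ini \
  update --delete-old /path/to/staging-area
```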

Custom shell script

This takes care of gluing all things together:

  1. Pulls/fetches the latest changes from the required git repositories locally
  2. Pulls/fetches the latest changes from the dedicated git repository where the Jenkins Job Builder YAML template files are kept (this should be maintained by the DevOps team only)
  3. Scans the git repositories to determine which branches (based on age) are accepted, in order to create jobs for them
  4. Determines the project type as each repository is scanned, based on some simple rules (for example, if the project source directory contains a pom.xml file it could be Java-based, etc.)
  5. Transforms and populates the YAML templates with the required data (branch name, project type, etc.) before passing them to the Jenkins Job Builder tool
  6. Invokes the Jenkins Job Builder tool with the metadata obtained above in order to create the real jobs with the correct configuration
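The steps above can be sketched as a main loop — all paths, variable names, and the template placeholder syntax here are hypothetical, not the team's actual script:

```shell
#!/usr/bin/env bash
# High-level sketch of the glue script's main loop (hypothetical).
set -euo pipefail

TEMPLATES_REPO=/var/lib/jjb/templates   # JJB YAML templates checkout
MIRRORS_DIR=/var/lib/jjb/mirrors        # local mirrors of scanned repos
STAGING_DIR=/var/lib/jjb/staging        # rendered YAML collected here
MAX_AGE_DAYS=30                         # branch retention threshold
REPOS=(webapp backend)                  # predefined repository list

# Steps 1-2: refresh the templates repo and the scanned repos
git -C "$TEMPLATES_REPO" pull --quiet
for repo in "${REPOS[@]}"; do
  git -C "$MIRRORS_DIR/$repo" fetch --prune --quiet

  # Step 3: scan branches and compute their age from the last commit
  for branch in $(git -C "$MIRRORS_DIR/$repo" for-each-ref \
                    --format='%(refname:short)' refs/remotes/origin); do
    last_commit=$(git -C "$MIRRORS_DIR/$repo" log -1 --format=%ct "$branch")
    age_days=$(( ( $(date +%s) - last_commit ) / 86400 ))

    # Steps 4-5: keep branches within the retention window and render
    # the template for them into the staging area
    if (( age_days <= MAX_AGE_DAYS )); then
      sed "s|{repo}|$repo|g; s|{branch}|$branch|g" \
        "$TEMPLATES_REPO/dynamic-job.yaml" \
        > "$STAGING_DIR/${repo}_${branch//\//-}.yaml"
    fi
  done
done

# Step 6: hand the staging area to JJB, which creates/updates/deletes jobs
jenkins-jobs update --delete-old "$STAGING_DIR"
```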

All of the above are controlled by a set of variables (which can be managed by Puppet or another CM tool if desired): one holds the list of git repositories to be scanned, another the age threshold deciding which branches get in, etc.

Configuration management (Puppet)

All of the above can be further managed via a configuration management tool like Puppet for even finer control over the whole process. The custom shell script's behavior, for example, or other metadata that needs to be passed to it internally, can be altered via the CM tool.


