
Monday, July 13, 2015

Jenkins DSL scripting - Part 2 - environment setup /TL;DR/

In my first article I wrote about the basics of Jenkins DSL scripting. I explained my motivation for switching from a static Jenkins configuration to a more dynamic one and tried to show the benefits with small examples.
In this part I talk about some interesting side effects of dynamic pipeline generation, the role of seed jobs and my development setup, and finally I define a basic pipeline as a blueprint for other projects. I hope this entry will be shorter than the first one :)


Side effects


Let's start with an explanation of the side effects! Compared with the static way, DSL scripting can cause some side effects or behave differently. None of them has a serious impact, but it's good to know about them and do some preparation. First of all, the management of artifacts can be a bit different and you may lose your build history regularly. The setup of the development environment is a bit more complex than the traditional way, but in exchange you get versioned and standardized pipelines.

Build history


With the DSL plugin we use script(s) to (re)generate our pipeline configuration. We could configure the plugin to leave existing jobs untouched, but I think that is a bit unrealistic: we are continuously improving our pipeline code, so we need to override them regularly. Business as usual. The only problem with deletion and recreation is the loss of build history. When you delete a job, you lose all artifacts and saved workspaces related to it, and the build history goes with them. In the early days I found this a bit frustrating, but later I recognized that the importance of build history is quite low. Usually everyone is only interested in the last few results to see how the process behaved in the recent past; I quite rarely check any build logs from last year. You should get used to the fact that the build history, artifacts, etc. can disappear from the Jenkins space.

Artifacts


The loss of generated artifacts could be a problem if you want to use them later. I recommend installing a repository hosting application to store the pipeline's output; Artifactory is a great tool for this, but choose whatever you want, then deploy your artifacts to that server. The point is to stop relying on Jenkins workspaces and to consider them a temporary datastore between the stages.

Pipeline versioning


If you store your pipeline in SCM (why not?) you get the ability to roll back any mistake made in the code, and you can also introduce versioning on the pipeline. With a versioned pipeline you can regenerate exactly the same binary you built in the past; not only the source code can be rolled back.

Environments for autogenerated Jenkins


Constant change is in the nature of applications and scripts. The development of new features or steps for your pipeline can't be done on the project's main Jenkins server, because it would slow down or block your colleagues, so you need a development server. You need at least two environments for pipeline development.

The developer environment provides a production-like configuration for the pipeline developer without disturbing the development workflow. The pipeline code and configuration can be changed freely by the developers to test their implementation, or the environment can act as a sandbox for new solution ideas.

The production environment changes less often and manual intervention is not recommended. Once you have implemented, verified and pushed the pipeline code to the repository, you can update the production environment too to improve the delivery process.

I recommend creating the same configuration for your dev Jenkins as for prod, to make sure all changes behave exactly the same on both servers. It is important to understand that the dev Jenkins is not for the developers, but for the development of the pipeline(s). At the moment we are using the prod/dev-xx-jenkins-yy naming pattern to separate the reliable and non-reliable Jenkins instances, where xx is the project code and yy is the number of the instance within its own pool.

The Seed/Bootstrap job


The recommended way to start your own DSL pipeline development is to create a seed/bootstrap job. It can be created manually, or you can inject the configuration into JENKINS_HOME as the starting point of the pipeline development activity. I also recommend always dedicating one repository to your project's pipeline to keep it separated and to follow the Single Responsibility Principle. The seed job is always there as a static entry point and is responsible for invoking all pipeline generation scripts. It can also invoke common libraries (explained in a later post) and prepare sub-jobs for creating other pipelines.

Seed job script parts


The seed job could have the following parts:

  1. clean-up
  2. initialization
  3. pipeline generation
  4. views definitions

Clean-up


Before you (re)generate your jobs you need to purge all related jobs and their views based on a pattern definition. Pick the naming pattern itself carefully. You'll find it quite useful not to have to purge jobs manually just because you changed the name, numbering or order of your jobs. Take care of the seed job and the default maintenance/tooling/etc. jobs and don't delete everything, or you could quickly find some angry devs at your desk :)
In this example I'm deleting all jobs except the one named 'bootstrap' and any job containing 'Jenkins', and doing the same for the views:
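A rough sketch of what such a clean-up could look like, assuming the seed job runs without the script security sandbox so the DSL script can reach the Jenkins model API directly (the 'bootstrap' and 'Jenkins' names match the rule described above):

    // clean-up sketch -- assumes no script security sandbox on the seed job
    import jenkins.model.Jenkins

    def keep = { String name ->
        name == 'bootstrap' || name.contains('Jenkins')
    }

    // delete every job except the seed job and the maintenance/tooling jobs
    Jenkins.instance.items.findAll { !keep(it.name) }
                          .each { it.delete() }

    // do the same for the views (never touch the primary 'All' view)
    Jenkins.instance.views.findAll { !keep(it.name) && it.name != 'All' }
                          .each { Jenkins.instance.deleteView(it) }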


Initialization


In this part you can set up global settings for your pipeline(s): credential references, name patterns, numbering, etc. Initialize everything you need to reuse later in one place. Some examples (a small sketch follows the list):

  • credentials stored in Jenkins
  • SCM urls
  • target servers/endpoints
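
For illustration, an initialization block could look roughly like this; every value below is a placeholder, not something from the real project:

    // initialization sketch -- all names and URLs are placeholders
    def gitCredentialsId = 'git'                   // credential id stored in Jenkins
    def gitBaseUrl       = 'git@github.com:myorg'  // SCM base url
    def projectCode      = 'imaginarium'
    def jobPrefix        = "${projectCode}-"       // naming pattern reused everywhere
    def deployTarget     = 'deploy.example.com'    // target server/endpoint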

Pipeline generation


The real thing! The generation of the pipeline(s) can be quite complex at the flow level, and some tricky plugins can make you sweat to reimplement them in the DSL. For better visibility I recommend using a good naming pattern, and don't be afraid to define parallel job runs for better performance.
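A minimal sketch of the generation part, chaining two jobs and reusing the placeholder variables from the initialization sketch above; the job names, repository name and the 'Maven3' installation name are made up for the example:

    // two chained jobs following the naming pattern -- names are placeholders
    job("${jobPrefix}01-compile") {
        scm {
            git {
                remote {
                    url("${gitBaseUrl}/imaginarium.git")
                    credentials(gitCredentialsId)
                }
                branch('master')
            }
        }
        triggers {
            githubPush()
        }
        steps {
            maven {
                goals('clean compile')
                mavenInstallation('Maven3')
            }
        }
        publishers {
            // kick off the next stage only on success
            downstream("${jobPrefix}02-deploy", 'SUCCESS')
        }
    }

    job("${jobPrefix}02-deploy") {
        steps {
            maven {
                goals('clean deploy')
                mavenInstallation('Maven3')
            }
        }
    }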

View(s) definition


Once you have defined and connected all the jobs, you can define the visual representation of their network. You can choose a simple list view or the more complex, but nicer looking, build or pipeline views. With a good naming pattern you can easily define a simple regex to collect all related jobs in one place.
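As an example, a list view plus a build pipeline view (the latter needs the Build Pipeline plugin) collecting the jobs from the sketches above could look like this; the view names are again placeholders:

    // a simple list view matching the naming pattern
    listView("${projectCode}-jobs") {
        jobs {
            regex("${jobPrefix}.*")
        }
        columns {
            status()
            weather()
            name()
            lastSuccess()
            lastDuration()
            buildButton()
        }
    }

    // a nicer looking pipeline view starting from the first job
    buildPipelineView("${projectCode}-pipeline") {
        title("${projectCode} delivery pipeline")
        selectedJob("${jobPrefix}01-compile")
        displayedBuilds(5)
    }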

Development environment setup


Now you know enough to get started, but how can you try it out? For basic definitions you can use the Job DSL Playground on Heroku. It works very well for simple stuff and you can get familiar with the syntax quickly. In the user power moves documentation the DSL creators explain how to set up a local script running environment, but I failed every time I tried to follow it. I hope one day I'll understand what I'm doing wrong. After my series of failures I invented my own way to test scripts on my machine with a local Jenkins instance. It's a bit hackish, but it works as expected and I can develop my pipeline(s) quickly with it.

In step zero (I assume this is already done) you need to set up a local Jenkins on your machine and specify the work directory with the JENKINS_HOME variable.

In the first step create your bootstrap project manually as a freestyle project. No trigger, no SCM, nothing.


Specify a "Process Job DSLs" build step with the "Look on Filesystem" option and set the name. I always use bootstrap.groovy. Save and run it. The build ends with success without doing anything, but now you have a workspace folder.


Go to <jenkins home>/jobs/<bootstrap job name>/workspace and create a bootstrap.groovy file with println "Hello" to test. Run the job again and you should see a big Hello in the console output. Now you can open the folder in your favourite IDE, or do a git init/commit/push to store it in SCM.
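The first bootstrap.groovy really can be that small; once the Hello shows up in the console output you can grow it into real DSL code, for example with a throwaway job (the job name below is arbitrary):

    // bootstrap.groovy -- smoke test for the local setup
    println "Hello"

    // a throwaway job just to verify that job generation works
    job('dsl-smoke-test') {
        steps {
            shell('echo "DSL works"')
        }
    }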

Demo pipeline project


I created this script as part of an integration challenge solution for my HelloWorld app called Imaginarium (hehe). In the first part I clean up the existing stuff to prepare for the regeneration of the jobs. Then you can see the initialization of globally used variables, names and patterns. In the main part I put together a pipeline with the following steps:

  1. Checkout with shallow clone for a faster run, using a GitHub hook as the starting trigger and the 'git' credentials saved in Jenkins
  2. Quick compile as a health check of the source code
  3. Deploy as a full build with all unit tests etc.; the usual stuff, except the Maven3 plugin invocation
  4. Generating a Docker image and pushing it to the public Docker Hub as the release artifact
  5. Quality check with Sonar. The usual stuff++
  6. SSH-ing into the target server, redeploying the Docker image and starting the application

In the last part I take care of some eye candy to generate a visual representation of the pipeline. As you can see, the credentials are not written into the script but referred to by a label, because Jenkins provides all sensitive data internally (except the free mongolabs access hehe, but who cares?). This is a nice solution for hiding all sensitive data on a public Jenkins server, but you need some extra investigation to find the label itself. This is what I'll cover (with other tricks) in the next part of the series. You can also see some 'configure' blocks for advanced customization; they will be explained in the 3rd part too. A condensed sketch of the later stages follows below.
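
To give a feel for steps 4-6, here is a heavily condensed sketch. The real script uses configure blocks for the Docker part and hides the concrete values behind credential labels, so everything below (user, host, image and job names) is a placeholder and the Docker/SSH work is reduced to plain shell steps:

    // condensed sketch of the later stages -- all names below are placeholders
    job("${jobPrefix}04-docker-release") {
        steps {
            // build and push the image as the release artifact
            shell('docker build -t myuser/imaginarium:${BUILD_NUMBER} .')
            shell('docker push myuser/imaginarium:${BUILD_NUMBER}')
        }
        publishers {
            downstream("${jobPrefix}05-quality", 'SUCCESS')
        }
    }

    job("${jobPrefix}05-quality") {
        steps {
            // quality check via the Sonar Maven goal
            maven {
                goals('sonar:sonar')
                mavenInstallation('Maven3')
            }
        }
        publishers {
            downstream("${jobPrefix}06-deploy-to-target", 'SUCCESS')
        }
    }

    job("${jobPrefix}06-deploy-to-target") {
        steps {
            // redeploy the image on the target box and start the application
            shell('ssh deploy@deploy.example.com "docker pull myuser/imaginarium:latest && docker run -d myuser/imaginarium:latest"')
        }
    }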


Summary

Congratulations again! You've finally arrived at the end of the second part of my DSL scripting series. I hope you didn't fall asleep and that, after the basics, you've learned how to think about your seed project and what is different from the original, static process. You can now set up a development environment for pipeline coding (if you find a better solution, please share it with me!) and you have a skeleton project with a basic structure to support your build and release efforts.
In the next article I'll talk about modularization options to make your pipeline more readable and structured, explain the usage of the configure block with examples for adding functionality that is missing from the DSL (for example Docker image creation...) and try to share some daily operation experiences.
