As Continuous Delivery spreads, more and more CI servers handle daily builds, tests, and releases. As a company's codebase grows, we repeat almost the same steps with slight changes to bring CI/CD coverage to new modules and projects, and maintaining or updating those steps becomes harder. In a monolithic environment the growth is not so remarkable, but as we move toward the microservice paradigm with containerization, the maintenance effort for these configurations can jump quickly.
Most of the servers I have seen run the same sequence of checkout, build, package, deploy, and test steps, and finally mark the artifact as releasable or release it into the production environment.
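As a sketch, this common sequence can be written as a declarative Jenkins pipeline. The stage names, build tool, and deploy script below are illustrative assumptions, not taken from any particular setup:

```groovy
// Illustrative Jenkinsfile; the shell commands and deploy.sh script
// are placeholders for whatever your project actually uses.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh './gradlew clean build' }
        }
        stage('Package') {
            steps { sh './gradlew assemble' }
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' }
        }
        stage('Test') {
            steps { sh './gradlew integrationTest' }
        }
        stage('Release') {
            steps { sh './deploy.sh production' }
        }
    }
}
```

Nearly every project repeats some variant of this structure, which is exactly why generating it from scripts pays off.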
All of these servers were also intensively guarded: only a chosen few were allowed to change configurations, or dared to upgrade to a more recent version, with shaky hands and unpredictable compatibility issues. We can agree on the high importance of these servers and their unique role, but how can we mitigate the risk of and dependency on them? How can we repeat the pipeline-assembly steps in a reliable way and make the outcome predictable?
Purpose of this document
This is the first part of a series of tutorials on Jenkins scripting, aiming at a fully automated, decentralized, and replaceable delivery pipeline architecture with better flexibility than a monolithic, centralized Jenkins setup. The series covers the scripting basics (here), some intermediate steps (XML configuration, modularization, common tasks, DslFactory, etc.), and extending the plugin's functionality with your own library routines. In this part I explain only the basics of job and view creation, the syntax, and basic pipeline generation, with some recommendations.
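To give a first taste of what such scripting looks like, here is a minimal Job DSL fragment that creates a freestyle job and a view listing it. It assumes the Job DSL plugin is installed; the repository URL, job name, and shell command are hypothetical:

```groovy
// Minimal Job DSL seed script; 'example-build' and the Git URL
// are placeholders, not real project names.
job('example-build') {
    scm {
        git('https://example.com/repo.git')
    }
    triggers {
        scm('H/15 * * * *')   // poll the repository every ~15 minutes
    }
    steps {
        shell('./gradlew clean build')
    }
}

listView('Example jobs') {
    jobs {
        regex('example-.*')   // collect every job matching the prefix
    }
    columns {
        status()
        name()
        lastSuccess()
        buildButton()
    }
}
```

The following sections build up this syntax step by step.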