Switching from Scripted to Declarative Pipeline

"A long time ago in a galaxy far, far away, a group of Jenkins contributors decided to reinvent the way Jenkins jobs are defined and how they operate." (A couple of years in software terms is a lot, and Jenkins contributors are indeed spread throughout the galaxy).

The new type of jobs became known as Jenkins Pipeline. It was well received by the community, and adoption started almost instantly. Everything was excellent, and the benefits of using Pipeline compared to FreeStyle jobs were evident from the start. However, it wasn't easy for everyone to adopt Pipeline. Those who were used to scripting, and especially those familiar with Groovy, had no difficulty switching. But many used Jenkins without being coders. They did not find Pipeline to be as easy as we thought it would be.

While I do believe that there is no place in the software industry for those who do not know how to code, it was still evident that something needed to be done to simplify Pipeline syntax even more. So, a new flavor of Pipeline syntax was born. We renamed the existing Pipeline flavor to Scripted Pipeline and created a new one called Declarative Pipeline.

Declarative Pipeline enforces a simpler and more opinionated syntax. Its goal is to provide an easier way to define pipelines, to make them more readable, and to lower the entry bar. You can think of Scripted Pipeline as being initially aimed at power users and Declarative Pipeline at everyone else. In the meantime, Declarative Pipeline has been getting more and more attention, and today such a separation is not necessarily valid anymore. In some ways, Declarative Pipeline is more advanced, and it is recommended for all users except when one needs something that cannot be (easily) done without switching to Scripted.

The recommendation is to always start with Declarative Pipeline and switch to Scripted only if you need to accomplish something that is not currently supported. Even then, you might be trying to do something you shouldn't.

Right now, you might be asking yourself something along the following lines. "Why did Viktor make us use Scripted Pipeline if Declarative is better?" The previous pipeline required two features that are not yet supported by Declarative. We wanted to use podTemplate for most of the process, with an occasional jump into agents based on VMs for building Docker images. That is not yet supported with Declarative Pipeline. However, since we will now switch to using the Docker socket to build images inside the nodes of the cluster, that is not an issue anymore. The second reason lies in the inability to define a Namespace inside podTemplate. That also is not an issue anymore, since we'll switch to the model of defining a separate Kubernetes cloud for each Namespace where builds should run. You'll see both changes in action soon when we start exploring the continuous delivery pipeline used for go-demo-5.

Before we jump into defining the pipeline for the go-demo-5 application, we'll briefly explore the general structure of a Declarative pipeline.

The snippet that follows represents a skeleton of a Declarative pipeline.

pipeline {
  agent {
    ...
  }
  environment {
    ...
  }
  options {
    ...
  }
  parameters {
    ...
  }
  triggers {
    ...
  }
  tools {
    ...
  }
  stages {
    ...
  }
  post {
    ...
  }
}

A Declarative Pipeline is always enclosed in a pipeline block. That allows Jenkins to distinguish the Declarative from the Scripted flavor. Inside it are different sections, each with a specific purpose.

The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment, depending on where the agent section is placed. The section must be defined at the top level inside the pipeline block; stage-level usage is optional. We can define different types of agents inside this block. In our case, we'll use the kubernetes type, which translates to the podTemplate we used before. The agent section is mandatory.
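As a rough sketch, a stage-level kubernetes agent might look like the snippet that follows. The Pod definition, image, and container name are illustrative, not the actual go-demo-5 setup.

```groovy
pipeline {
  // Top-level agent "none" delegates the choice to individual stages
  agent none
  stages {
    stage("unit-test") {
      agent {
        kubernetes {
          // Hypothetical Pod definition; adjust the images to your needs
          yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: golang
    image: golang:1.12
    command: ["cat"]
    tty: true
"""
        }
      }
      steps {
        // Steps run inside the named container of the build Pod
        container("golang") {
          sh "go test ./..."
        }
      }
    }
  }
}
```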

The post section defines one or more additional steps that are run upon the completion of a Pipeline's or stage's run (depending on the location of the post section within the Pipeline). It supports any of the following post-condition blocks: always, changed, fixed, regression, aborted, failure, success, unstable, and cleanup. These condition blocks allow the execution of steps inside each condition depending on the completion status of the Pipeline or stage.

The stages block is where most of the action happens. It contains a sequence of one or more stage directives, inside of which are the steps that constitute the bulk of our pipeline. The stages section is mandatory.
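A minimal sketch combining stages with a post section might look as follows. The stage names and steps are illustrative.

```groovy
pipeline {
  agent any
  stages {
    stage("build") {
      steps {
        echo "Building..."
      }
    }
    stage("test") {
      steps {
        echo "Testing..."
      }
    }
  }
  post {
    // Runs regardless of the outcome of the Pipeline
    always {
      deleteDir()
    }
    // Runs only when the Pipeline fails
    failure {
      echo "Sending a notification about the failure"
    }
  }
}
```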

The environment directive specifies a sequence of key-value pairs which will be defined as environment variables for all steps, or for stage-specific steps, depending on where the environment directive is located within the Pipeline. This directive supports a special helper method credentials(), which can be used to access pre-defined credentials by their identifier in the Jenkins environment.
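The snippet that follows sketches both plain variables and the credentials() helper. The credential ID and image name are hypothetical.

```groovy
pipeline {
  agent any
  environment {
    // "docker-hub" is a hypothetical ID of "username with password"
    // credentials stored in Jenkins. For that credential type, Jenkins
    // also exposes DOCKER_HUB_USR and DOCKER_HUB_PSW automatically.
    DOCKER_HUB = credentials("docker-hub")
    TAG        = "1.0.${env.BUILD_NUMBER}"
  }
  stages {
    stage("publish") {
      steps {
        // Single quotes keep the secrets out of the Groovy-interpolated string
        sh 'docker login -u "$DOCKER_HUB_USR" -p "$DOCKER_HUB_PSW"'
        sh "docker image push my-repo/my-app:${TAG}"
      }
    }
  }
}
```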

The options directive allows configuring Pipeline-specific options from within the Pipeline itself. Pipeline provides a number of these options, such as buildDiscarder, but they may also be provided by plugins, such as timestamps.
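A sketch of a typical options block follows; the specific values are illustrative.

```groovy
pipeline {
  agent any
  options {
    // Keep only the last ten builds
    buildDiscarder(logRotator(numToKeepStr: "10"))
    // Abort the run if it takes longer than an hour
    timeout(time: 1, unit: "HOURS")
    // Prefix console output with timestamps (requires the Timestamper plugin)
    timestamps()
  }
  stages {
    stage("build") {
      steps {
        echo "Building..."
      }
    }
  }
}
```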

The parameters directive provides a list of parameters which a user should provide when triggering the Pipeline. The values for these user-specified parameters are made available to Pipeline steps via the params object.
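As an illustration, the parameter names and defaults below are made up; the point is that their values are read through the params object.

```groovy
pipeline {
  agent any
  parameters {
    string(name: "release", defaultValue: "0.0.1", description: "The version to release")
    booleanParam(name: "dryRun", defaultValue: true, description: "Skip the deployment?")
  }
  stages {
    stage("release") {
      steps {
        // User-supplied values are available through the params object
        echo "Releasing version ${params.release} (dry run: ${params.dryRun})"
      }
    }
  }
}
```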

The triggers directive defines automated ways in which the Pipeline should be re-triggered. In most cases, we should trigger a build through a Webhook. In such situations, the triggers block does not provide any value.
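For completeness, a sketch of the triggers directive follows, useful only as a fallback when Webhooks are not an option.

```groovy
pipeline {
  agent any
  triggers {
    // Poll the repository every five minutes; H spreads the load across the hour
    pollSCM("H/5 * * * *")
    // Alternatively, run on a schedule regardless of changes
    // cron("@daily")
  }
  stages {
    stage("build") {
      steps {
        echo "Building..."
      }
    }
  }
}
```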

Finally, the last section is tools. It allows us to define tools to auto-install and put on the PATH. Since we're using containers, tools are pointless. The tools we need are already defined as container images and accessible through the containers of the build Pod. Even if we were to use a VM for parts of our pipeline, like in the previous chapter, we should still bake the tools we need into VM images instead of wasting our time installing them at runtime.

You can find much more info about Declarative Pipeline on the Pipeline Syntax (https://jenkins.io/doc/book/pipeline/syntax/) page. As a matter of fact, parts of the descriptions you just read come from that page.

You probably got bored to death with the previous explanations. If you didn't, the chances are that they were insufficient. We'll fix that by going through an example that will much better illustrate how Declarative Pipeline works. We'll use most of those blocks in the example that follows. The exceptions are parameters (we don't have a good use case for them), triggers (useless when we're using Webhooks), and tools (a silly feature in the era of containers and tools for building VM images). Once we're finished exploring the pipeline of the go-demo-5 project, you'll have enough experience to get you started with your own Declarative Pipelines, if you choose to use them.
