Jenkins Pipeline: Declarative and Scripted

Key Insights

  • Declarative pipelines provide a structured, opinionated syntax that covers 95% of CI/CD use cases with better error validation and easier maintenance—start here unless you have specific reasons not to.
  • Scripted pipelines offer full Groovy programming capabilities for complex workflows requiring dynamic stage generation, advanced conditionals, or intricate error handling that declarative syntax cannot express.
  • You can embed script blocks within declarative pipelines to handle edge cases, giving you the best of both worlds without abandoning declarative’s structure and validation benefits.

Introduction to Jenkins Pipelines

Jenkins evolved from simple freestyle jobs configured through the UI to Pipeline as Code, where your entire CI/CD workflow lives in a `Jenkinsfile` committed to your repository. This shift brought version control, code review, and repeatability to build pipelines—no more clicking through UI forms to recreate a lost configuration.

Jenkins offers two pipeline syntaxes: Declarative and Scripted. Declarative provides an opinionated, structured approach with built-in validation. Scripted gives you raw Groovy power with minimal guardrails. Here’s the same simple pipeline in both syntaxes:

// Declarative
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building application'
            }
        }
    }
}

// Scripted
node {
    stage('Build') {
        echo 'Building application'
    }
}

The declarative version requires a pipeline block, an explicit agent declaration, and a steps wrapper around commands. Scripted is more concise but provides less structure.

Declarative Pipeline Fundamentals

Declarative pipelines follow a strict hierarchy: pipeline contains agent, stages, and optional directives. Each stage contains steps where actual work happens.

Here’s a complete declarative pipeline demonstrating common patterns:

pipeline {
    agent {
        label 'docker'
    }
    
    environment {
        DOCKER_REGISTRY = 'registry.example.com'
        APP_NAME = 'my-service'
        VERSION = "${env.BUILD_NUMBER}"
    }
    
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['dev', 'staging', 'prod'], description: 'Deployment target')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Execute test suite')
    }
    
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package -DskipTests'
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
        
        stage('Test') {
            when {
                expression { params.RUN_TESTS }
            }
            steps {
                sh 'mvn test'
                junit 'target/surefire-reports/*.xml'
            }
        }
        
        stage('Docker Build') {
            steps {
                script {
                    docker.build("${DOCKER_REGISTRY}/${APP_NAME}:${VERSION}")
                }
            }
        }
        
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh "kubectl set image deployment/${APP_NAME} ${APP_NAME}=${DOCKER_REGISTRY}/${APP_NAME}:${VERSION} -n ${params.ENVIRONMENT}"
            }
        }
    }
    
    post {
        always {
            cleanWs()
        }
        success {
            slackSend color: 'good', message: "Build ${env.BUILD_NUMBER} succeeded"
        }
        failure {
            slackSend color: 'danger', message: "Build ${env.BUILD_NUMBER} failed"
        }
    }
}

The post section handles cleanup and notifications regardless of build outcome. Use always for cleanup, success and failure for conditional actions.

Declarative syntax excels at standard workflows: build, test, deploy. The structure is self-documenting and Jenkins validates your pipeline before execution, catching syntax errors early.
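You can take advantage of that validation before a build ever runs. One option, assuming your Jenkins instance exposes the standard validation endpoint and you have an API token (the URL, user, and token below are placeholders), is to lint the Jenkinsfile from the command line:

```shell
# Placeholders: substitute your own Jenkins URL, user, and API token.
# On success the endpoint responds "Jenkinsfile successfully validated.";
# otherwise it lists the syntax errors it found.
curl --user "USER:TOKEN" -X POST \
     -F "jenkinsfile=<Jenkinsfile" \
     "https://JENKINS_URL/pipeline-model-converter/validate"
```

Wiring this into a pre-commit hook catches structural mistakes before they reach a branch. Note that it validates declarative structure only; scripted pipelines get no such pre-flight check.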

Scripted Pipeline Fundamentals

Scripted pipelines are Groovy scripts with full programming language features. You write imperative code using node to allocate an executor and stage for logical grouping.

Here’s a scripted pipeline with conditional logic:

node('docker') {
    def appName = 'my-service'
    def dockerRegistry = 'registry.example.com'
    def version = env.BUILD_NUMBER
    
    try {
        stage('Checkout') {
            checkout scm
        }
        
        stage('Build') {
            sh 'mvn clean package -DskipTests'
        }
        
        if (env.BRANCH_NAME == 'main' || env.BRANCH_NAME?.startsWith('release/')) {
            stage('Integration Tests') {
                sh 'mvn verify -Pintegration-tests'
            }
            
            stage('Security Scan') {
            def scanResult = sh(script: "trivy image --severity HIGH,CRITICAL --exit-code 1 ${dockerRegistry}/${appName}:${version}", returnStatus: true)
                if (scanResult != 0) {
                    error "Security vulnerabilities found"
                }
            }
        }
        
        currentBuild.result = 'SUCCESS'
    } catch (Exception e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        cleanWs()
        notifyBuild(currentBuild.result)
    }
}

def notifyBuild(String buildStatus) {
    def color = buildStatus == 'SUCCESS' ? 'good' : 'danger'
    slackSend color: color, message: "Build ${env.BUILD_NUMBER}: ${buildStatus}"
}

Scripted pipelines shine when you need dynamic behavior. Here’s generating stages based on a list:

node {
    def services = ['api', 'worker', 'scheduler']
    
    stage('Checkout') {
        checkout scm
    }
    
    def buildStages = [:]
    for (service in services) {
        def serviceName = service // Closure capture
        buildStages[serviceName] = {
            stage("Build ${serviceName}") {
                sh "docker build -t ${serviceName}:${env.BUILD_NUMBER} ./${serviceName}"
            }
        }
    }
    
    parallel buildStages
}

This pattern—dynamically creating parallel stages—is difficult or impossible in pure declarative syntax.
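Declarative's closest built-in answer is the matrix directive, which handles the fixed-axis case. A sketch, reusing the same three service names (which are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build Services') {
            matrix {
                axes {
                    axis {
                        name 'SERVICE'
                        values 'api', 'worker', 'scheduler'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            // SERVICE is exposed to each matrix cell as an environment variable
                            sh 'docker build -t ${SERVICE}:${BUILD_NUMBER} ./${SERVICE}'
                        }
                    }
                }
            }
        }
    }
}
```

The catch: matrix axis values must be literals. If the list of services has to be computed at runtime, you are back to a script block or fully scripted syntax.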

Key Differences and Comparison

The fundamental difference: declarative is a DSL with constraints, scripted is unrestricted Groovy.

Syntax structure: Declarative requires specific sections in order. Scripted lets you write arbitrary code. Declarative validates your pipeline structure before running; scripted fails at runtime.

Error handling: Declarative uses post conditions. Scripted uses try-catch-finally blocks, giving finer control but requiring more code.

Here’s the same workflow in both syntaxes:

// Declarative
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
    post {
        failure {
            mail to: 'team@example.com', subject: 'Build failed'
        }
    }
}

// Scripted
node {
    try {
        stage('Build') {
            sh 'make build'
        }
        stage('Test') {
            sh 'make test'
        }
    } catch (Exception e) {
        mail to: 'team@example.com', subject: 'Build failed'
        throw e
    }
}

Performance is comparable: both syntaxes are executed by the same underlying Pipeline engine (declarative is translated into it before running). The choice is about developer experience and maintainability.

Advanced Patterns and Best Practices

The most powerful pattern: declarative pipelines with embedded script blocks for complex logic.

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    def environments = ['dev', 'staging']
                    if (env.BRANCH_NAME == 'main') {
                        environments.add('prod')
                    }
                    
                    // Do not name the loop variable 'env': that would shadow
                    // the global env object (env.BRANCH_NAME, etc.) inside the loop.
                    for (target in environments) {
                        echo "Deploying to ${target}"
                        sh "kubectl apply -f k8s/${target}/"
                    }
                }
            }
        }
    }
}

This gives you declarative’s structure with scripted’s flexibility when needed.

For code reuse, create shared libraries. In vars/buildAndDeploy.groovy:

def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh "${config.buildCommand}"
                }
            }
            stage('Deploy') {
                when {
                    expression { config.deploy }
                }
                steps {
                    sh "${config.deployCommand}"
                }
            }
        }
    }
}

Use it in your Jenkinsfile:

@Library('my-shared-library') _
buildAndDeploy(
    buildCommand: 'mvn package',
    deployCommand: 'kubectl apply -f deployment.yaml',
    deploy: env.BRANCH_NAME == 'main'
)

For credentials, always use the credentials plugin:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-registry', usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo "$PASS" | docker login -u "$USER" --password-stdin registry.example.com'
                }
            }
        }
    }
}

Never hardcode secrets in your Jenkinsfile.
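For simple username/password bindings, the environment directive is a shorter alternative: credentials() binds the credential and additionally exposes _USR and _PSW suffixed variables. A sketch (the credential ID is an assumption):

```groovy
pipeline {
    agent any
    environment {
        // Creates REGISTRY_CREDS plus REGISTRY_CREDS_USR and REGISTRY_CREDS_PSW
        REGISTRY_CREDS = credentials('docker-registry')
    }
    stages {
        stage('Login') {
            steps {
                // Single quotes on purpose: the shell expands these variables,
                // so the secret never appears in Groovy string interpolation.
                sh 'echo "$REGISTRY_CREDS_PSW" | docker login -u "$REGISTRY_CREDS_USR" --password-stdin registry.example.com'
            }
        }
    }
}
```

Either way, Jenkins masks the bound values in the console log.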

Migration and Decision Guide

Choose declarative when:

  • Your workflow fits the build-test-deploy pattern
  • You want Jenkins to validate your pipeline structure
  • Team members are less familiar with Groovy
  • You value maintainability over flexibility

Choose scripted when:

  • You need complex conditional logic across stages
  • Dynamic stage generation is required
  • You’re integrating with complex Groovy libraries
  • You need fine-grained error handling control

Converting scripted to declarative:

// Before (Scripted)
node {
    stage('Build') {
        if (env.BRANCH_NAME == 'main') {
            sh 'make release'
        } else {
            sh 'make build'
        }
    }
}

// After (Declarative)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    if (env.BRANCH_NAME == 'main') {
                        sh 'make release'
                    } else {
                        sh 'make build'
                    }
                }
            }
        }
    }
}

Common pitfalls: forgetting the steps block in declarative, not understanding closure capture in scripted loops, and mixing syntaxes incorrectly.
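The closure-capture pitfall deserves a concrete look, since it fails silently. A minimal Groovy sketch of the wrong and right versions:

```groovy
def closures = [:]
def services = ['api', 'worker']

// Wrong: the closures capture the loop variable itself, so by the time
// they run (e.g. inside parallel), each one may see the final value.
for (service in services) {
    closures[service] = { echo "building ${service}" }
}

// Right: copy into a fresh local variable on every iteration,
// so each closure captures its own binding.
for (service in services) {
    def captured = service
    closures[service] = { echo "building ${captured}" }
}
```

This is exactly why the dynamic-parallel example earlier assigns `def serviceName = service` before building each closure.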

Conclusion

Start with declarative pipelines. They cover the vast majority of CI/CD scenarios with cleaner syntax, better validation, and easier maintenance. When you hit a wall—complex conditionals, dynamic stages, intricate error handling—either embed a script block or switch to fully scripted syntax.

The Jenkins community and CloudBees are investing heavily in declarative syntax. New features typically target declarative first. Scripted isn’t deprecated, but it’s the escape hatch, not the primary path.

Master declarative syntax, understand when to use script blocks, and keep scripted pipelines in your toolkit for the 5% of cases that truly need it. Your future self—and your teammates—will thank you for the readable, maintainable pipelines.
