GitLab CI/CD: Pipeline Configuration
Key Insights
- GitLab CI/CD pipelines execute through stages defined in `.gitlab-ci.yml`, with jobs running sequentially by stage but in parallel within stages—use the `needs` keyword to create directed acyclic graphs (DAGs) for faster execution
- Docker integration is first-class: specify different images per job, cache layers effectively, and use artifacts to pass build outputs between stages rather than rebuilding
- Rules-based conditional execution (`rules:`) replaces the legacy `only`/`except` syntax and provides precise control over when jobs run based on branches, merge requests, tags, or variable values
Introduction to GitLab CI/CD Pipelines
GitLab CI/CD automates your software delivery process through pipelines defined in a `.gitlab-ci.yml` file at your repository root. When you push commits or create merge requests, GitLab reads this configuration and spawns runners—agents that execute your pipeline jobs. Each pipeline progresses through stages, executing jobs that build code, run tests, perform security scans, and deploy applications.
The pipeline configuration uses YAML syntax with a straightforward structure: define stages, then define jobs that belong to those stages. Jobs specify what commands to run, which Docker image to use, and how to handle outputs.
Here’s a minimal pipeline:
```yaml
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - gcc -o myapp main.c
  artifacts:
    paths:
      - myapp

test-job:
  stage: test
  script:
    - echo "Running tests..."
    - ./myapp --test
```
This pipeline has two stages that run sequentially. The `build-job` compiles code and saves the binary as an artifact. The `test-job` then downloads that artifact and runs tests. If any job fails, the pipeline stops.
Pipeline Stages and Jobs
Stages execute in the order you define them. All jobs within a stage run in parallel (if you have multiple runners available), but stages themselves are sequential. This default behavior works well for traditional build-test-deploy workflows.
Here’s a more realistic multi-stage pipeline:
```yaml
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - dist/

unit-tests:
  stage: test
  script:
    - npm run test:unit

integration-tests:
  stage: test
  script:
    - npm run test:integration

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production
  only:
    - main
```
Both test jobs run simultaneously, but the deploy stage won’t start until both complete successfully.
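When a single test suite is the bottleneck, the `parallel` keyword can split one job into several identical copies that run side by side. A minimal sketch—the `--shard` flag is an assumption about your test runner (Jest and Vitest accept it), not part of GitLab itself:

```yaml
unit-tests:
  stage: test
  parallel: 3   # spawns unit-tests 1/3, 2/3, 3/3 as separate jobs
  script:
    # CI_NODE_INDEX and CI_NODE_TOTAL are set automatically by GitLab;
    # the --shard flag is a Jest/Vitest convention (hypothetical here)
    - npm run test:unit -- --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
```

Each copy sees the same configuration but a different `CI_NODE_INDEX`, so the test runner decides which slice of the suite to execute.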
For faster pipelines, use the needs keyword to create a directed acyclic graph (DAG). This allows jobs to start as soon as their dependencies complete, rather than waiting for an entire stage:
```yaml
stages:
  - build
  - test
  - deploy

build-backend:
  stage: build
  script:
    - cargo build --release
  artifacts:
    paths:
      - target/release/

build-frontend:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/

test-backend:
  stage: test
  needs: [build-backend]
  script:
    - cargo test

test-frontend:
  stage: test
  needs: [build-frontend]
  script:
    - npm test

deploy:
  stage: deploy
  needs: [test-backend, test-frontend]
  script:
    - ./deploy.sh
```
Now `test-backend` starts immediately after `build-backend` finishes, without waiting for `build-frontend`. This can significantly reduce total pipeline time.
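The `needs` entries also take an expanded form with two useful refinements, assuming GitLab 13.10 or later: `artifacts: false` skips downloading a dependency's artifacts, and `optional: true` lets the job run even when the referenced job is absent from a given pipeline (the `lint` job below is hypothetical):

```yaml
test-backend:
  stage: test
  needs:
    - job: build-backend
      artifacts: true    # download this job's artifacts (the default)
    - job: lint          # hypothetical job; may not exist in every pipeline
      optional: true     # don't fail pipeline creation if lint is missing
  script:
    - cargo test
```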
Variables and Environment Configuration
GitLab provides predefined variables for every pipeline run. These include commit information, pipeline metadata, and environment details. Use them to make your pipelines dynamic:
```yaml
deploy:
  stage: deploy
  script:
    - echo "Deploying commit $CI_COMMIT_SHORT_SHA"
    - echo "Pipeline ID: $CI_PIPELINE_ID"
    - echo "Branch: $CI_COMMIT_BRANCH"
    - ./deploy.sh --version=$CI_COMMIT_TAG
```
Define custom variables at the global level or per-job:
```yaml
variables:
  DATABASE_URL: "postgres://localhost/myapp"
  NODE_ENV: "production"

build:
  stage: build
  variables:
    NODE_ENV: "development"   # overrides the global value for this job only
  script:
    - echo "Building with NODE_ENV=$NODE_ENV"
```
For sensitive data like API keys and passwords, define variables in GitLab’s UI under Settings > CI/CD > Variables. Mark them as “Protected” (only available to protected branches) and “Masked” (hidden in job logs):
```yaml
deploy:
  stage: deploy
  script:
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws s3 sync ./dist s3://my-bucket
```
Never commit credentials to `.gitlab-ci.yml`. Always use the UI-defined variables for secrets.
Docker Integration and Artifacts
When your runners use the Docker executor, every job runs inside a Docker container. Specify which image to use globally or per-job:
```yaml
default:
  image: node:18-alpine

build:
  image: node:18
  script:
    - npm install
    - npm run build

test:
  image: node:18-alpine
  script:
    - npm test
```
Building and pushing Docker images is a common pipeline task:
```yaml
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Pipe the password via stdin so it never appears in the process list
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```
The `docker:dind` (Docker-in-Docker) service allows you to run Docker commands within a job.
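Depending on your runner configuration, the job container may need TLS settings to reach the dind daemon. A minimal sketch, assuming a runner with privileged mode enabled:

```yaml
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    # Tell the dind service where to generate TLS certificates so the
    # docker CLI in the job container can connect to it securely.
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info   # fails fast if the daemon isn't reachable
```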
Artifacts pass data between jobs. Define what to save and for how long:
```yaml
build:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - build/
      - dist/
    expire_in: 1 week

test:
  stage: test
  dependencies:
    - build
  script:
    - ./build/test-runner
```
The `dependencies` keyword specifies which artifacts to download. Without it, jobs download artifacts from all previous stages.
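Jobs that need no build outputs at all can opt out entirely with an empty list, which skips the artifact download and shaves a little time off each run (the `lint` job is hypothetical):

```yaml
lint:
  stage: test
  dependencies: []   # download no artifacts from earlier stages
  script:
    - npm run lint
```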
Advanced Pipeline Features
Rules provide fine-grained control over job execution. They replace the older only/except syntax:
```yaml
build:
  stage: build
  script:
    - make build
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_TAG

deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop"

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production
  rules:
    - if: $CI_COMMIT_TAG
      when: manual
```
Reuse configuration with include and extends:
```yaml
include:
  - local: '/templates/docker-build.yml'
  - remote: 'https://example.com/ci-templates/security.yml'

.deploy-template:
  stage: deploy
  script:
    - ./deploy.sh $ENVIRONMENT
  rules:
    - if: $CI_COMMIT_BRANCH == $DEPLOY_BRANCH

deploy-staging:
  extends: .deploy-template
  variables:
    ENVIRONMENT: staging
    DEPLOY_BRANCH: develop

deploy-production:
  extends: .deploy-template
  variables:
    ENVIRONMENT: production
    DEPLOY_BRANCH: main
```
Templates (jobs whose names start with `.`) never run directly—they exist only for other jobs to inherit via `extends`.
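Beyond `extends`, the `!reference` custom YAML tag (available since GitLab 13.9) can splice individual keys from a template into another job—useful when you want a template's `script` lines without inheriting everything else. A sketch, where `smoke-test.sh` is a hypothetical script:

```yaml
.deploy-template:
  script:
    - ./deploy.sh $ENVIRONMENT

smoke-test:
  stage: deploy
  script:
    # Reuse only the template's script lines, then add our own step.
    - !reference [.deploy-template, script]
    - ./smoke-test.sh
```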
Deployment Strategies and Environments
GitLab tracks deployments to named environments, providing deployment history and rollback capabilities:
```yaml
deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production
  environment:
    name: production
    url: https://example.com
  rules:
    # when: must live inside the rule; a job-level `when` can't be
    # combined with `rules` and fails pipeline validation
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
The `when: manual` directive creates a deployment gate—the job won’t run until someone clicks “Play” in the GitLab UI.
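One subtlety worth knowing: a job-level `when: manual` defaults to `allow_failure: true`, so the pipeline continues past the job, while `when: manual` inside `rules` defaults to `allow_failure: false`, so the pipeline blocks. Setting it explicitly makes the intent unambiguous either way:

```yaml
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
      allow_failure: false   # explicit: pipeline blocks until someone runs this job
```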
Review apps create temporary environments for merge requests:
```yaml
review:
  stage: deploy
  script:
    - ./deploy-review.sh $CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://review-$CI_MERGE_REQUEST_IID.example.com
    on_stop: stop-review
  rules:
    - if: $CI_MERGE_REQUEST_IID

stop-review:
  stage: deploy
  script:
    - ./cleanup-review.sh $CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  rules:
    # when: manual goes inside the rule; a job-level `when` can't be
    # combined with `rules`
    - if: $CI_MERGE_REQUEST_IID
      when: manual
```
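Review environments can also expire on their own: `auto_stop_in` triggers the `on_stop` job automatically after a period, so stale review apps get cleaned up even when nobody clicks the stop button. A sketch extending the review job:

```yaml
review:
  stage: deploy
  script:
    - ./deploy-review.sh $CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_NAME
    on_stop: stop-review
    auto_stop_in: 3 days   # run stop-review automatically after 3 days
  rules:
    - if: $CI_MERGE_REQUEST_IID
```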
Monitoring and Optimization
Cache dependencies to avoid downloading them repeatedly:
```yaml
default:
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
      - .npm/

build:
  script:
    - npm ci --cache .npm
    - npm run build
```
Use `cache:key` strategically. Per-branch caching (`$CI_COMMIT_REF_SLUG`) works well for feature branches. For dependencies that rarely change, use a static key.
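For the rarely-changing case, keying the cache on the lockfile itself is often better than a static key—the cache is reused until dependencies actually change:

```yaml
default:
  cache:
    key:
      files:
        - package-lock.json   # cache key changes only when the lockfile does
    paths:
      - node_modules/
```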
Set timeouts and retry logic for flaky jobs:
```yaml
integration-tests:
  stage: test
  script:
    - npm run test:integration
  timeout: 15 minutes
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```
Use before_script and after_script to reduce duplication:
```yaml
default:
  before_script:
    - apt-get update -qq
    - apt-get install -y -qq make gcc

build:
  stage: build
  script:
    - make build

test:
  stage: test
  script:
    - make test
  after_script:
    - ./upload-coverage.sh
```
The `after_script` section runs even if the job fails, making it ideal for cleanup tasks or uploading test results.
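For test results specifically, `artifacts:reports:junit` surfaces failures directly in merge request widgets. A sketch that assumes your test framework is configured to emit a JUnit-format `junit.xml` (the reporter setup itself is not shown):

```yaml
test:
  stage: test
  script:
    - npm test          # assumed to write junit.xml via a JUnit reporter
  artifacts:
    when: always        # upload the report even when tests fail
    reports:
      junit: junit.xml
```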
Monitor pipeline duration in GitLab’s analytics. Look for jobs that consistently take the longest and optimize those first. Parallelization, better caching, and DAG pipelines with `needs` typically provide the biggest improvements.