When setting up a CI/CD pipeline, you need to choose the right tools. Factors to consider include your project's specific needs, your team's skills, and each tool's ease of use. In my last blog post, I recommended tools for building a CI/CD pipeline. Today, I will explain how to install and configure them with Ansible. At the end, I will point to resources that explain how to configure them to work together.
The tools we will cover are:
- GitLab: A source code repository
- Jenkins: Continuous integration software
- SonarQube: A code quality tool
Let’s start with the installation. You can install all the tools manually, but it is more effective to use a script or an infrastructure-automation tool. I love Ansible: it is a free, agentless configuration management system, and it only needs SSH access to provision a machine.
A Few Words About the Installation Process with Ansible
To install tools with Ansible, you need Playbooks. A Playbook is a YAML file that describes exactly what should be installed on a server (node). You also need inventory files that describe the nodes (their IP addresses, SSH login/password or SSH key, and so on). You can run Playbooks from your local PC; just make sure you install Ansible first.
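As a sketch, an INI-style inventory file could look like this (the host name, address and user are placeholders):

```ini
# inventory.ini: the nodes Ansible will provision
[ci]
node1 ansible_host=192.0.2.10 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa
```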
Let’s look at an example Ansible command:
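For example (the inventory and playbook file names are placeholders):

```shell
ansible-playbook -i inventory.ini --limit node1 playbook.yml
```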
The ansible-playbook command will find node1 in the inventory file, take its IP address and SSH credentials, open an SSH connection to node1 and install everything described in playbook.yml.
CI/CD Pipeline Tools Installation
So, to install everything we need on our servers (nodes), we use playbook files. We can create them ourselves or, as I show in this blog post, take existing ones and modify them to our needs if required.
Here is a nice Ansible script for installing GitLab. If you need to customize it, you can copy the sources and modify them to your needs (the MIT/BSD license allows this). After the script execution you will have an empty GitLab with only one admin user (it is recommended to change its default password immediately).
Now you need to import repositories (if you already have some).
How to import repositories: option 1
If you have a folder that contains all the repositories you want to import, you can upload them to the same server GitLab is installed on. Then, on the server, execute:
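On an Omnibus installation, this is done with the gitlab:import:repos Rake task (in older GitLab versions the task takes no argument and uses the configured repositories path instead):

```shell
sudo gitlab-rake "gitlab:import:repos['/var/opt/gitlab/git-data/repositories']"
```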
Here, /var/opt/gitlab/git-data/repositories is the path to the folder with the repositories you uploaded.
This step can also be easily automated with Ansible.
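A minimal sketch of such an Ansible task list (the paths and task names are illustrative):

```yaml
# import_repos.yml: upload bare repositories and trigger the GitLab import
- name: Copy bare repositories to the GitLab server
  copy:
    src: repositories/
    dest: /var/opt/gitlab/git-data/repositories/

- name: Import the uploaded repositories
  command: gitlab-rake "gitlab:import:repos['/var/opt/gitlab/git-data/repositories']"
  become: true
```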
But now you only have the repositories you imported. You still need to add users, configure their groups and grant access to each repository manually.
However, there is a problem with this option. Let's assume you installed GitLab this way on your server and your team has been working on it really hard: pushing code to GitLab repositories, creating merge requests, commenting on them and so on. Suddenly, your server crashes and all those changes are lost.
Sure, you can easily reinstall GitLab with Ansible on another server, but it will be in the exact same state as in the beginning, and your team will have to redo all its work. Sounds scary. That's why we need backups, stored outside the server in some durable storage like Amazon S3. Backup creation can be scheduled once per day, for example. Now, if your GitLab crashes, you can restore it from a backup with Ansible (see option 2 below).
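For example, a nightly backup can be scheduled with Ansible's cron module (the schedule is up to you; uploading the resulting archives to S3 is configured separately in gitlab.rb):

```yaml
- name: Schedule a nightly GitLab backup
  cron:
    name: "gitlab backup"
    minute: "0"
    hour: "2"
    job: "/usr/bin/gitlab-rake gitlab:backup:create CRON=1"
```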
How to import repositories: option 2
I personally love this option, because it is so easy. If you have been using GitLab before, you can back up all of your GitLab data (repositories, users, wiki pages, boards and more) to a single archive file. Then you can use this backup file in your Ansible script to import everything at once into your freshly installed GitLab.
Here is an example of how to create a backup:
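On an Omnibus installation, the backup is created with a single Rake task (newer GitLab versions also provide an equivalent gitlab-backup create command):

```shell
sudo gitlab-rake gitlab:backup:create
```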
After that, you can find all backups in the /var/opt/gitlab/backups folder.
If you need to restore your backup, just run:
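On an Omnibus installation, the restore looks roughly like this; BACKUP is the timestamp part of the archive file name (a placeholder here), and the exact service names to stop depend on your GitLab version:

```shell
# stop the services that access the database, restore, then restart
sudo gitlab-ctl stop puma
sudo gitlab-ctl stop sidekiq
sudo gitlab-rake gitlab:backup:restore BACKUP=timestamp_of_backup
sudo gitlab-ctl restart
```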
You can find more details here.
But you should not do this manually. You can copy the Ansible sources from ansible-role-gitlab into your code and update them a bit, so you not only install GitLab but also restore its state, leaving your GitLab ready for use.
Now that we have the source repository, we can take the sources and build them automatically in Jenkins. So let's automate the Jenkins installation with Ansible. You can use this link as a very good starting point: copy the sources and adjust them according to your needs.
Specify the Correct Jenkins and Jenkins Plugins Versions
When installing, you can specify particular versions of Jenkins and of its plugins. It is very important to pin these versions, because this is the only way to install exactly the same environment on each Ansible script execution.
For example, let’s say you have a server that was configured with Ansible but crashed, and you need to recreate exactly the same software and configurations on another server. To do that you can run the Ansible script you used for the previous configuration, but you need to be sure that the script will recreate the exact same environment.
This guarantees the Ansible script will work exactly the same on each installation. If versions are not specified, the script installs the latest versions, which change over time. So, the script might install different versions on each execution. This may lead to component integration failures, because some versions of one component may not work with some versions of another.
Also, your jobs might be configured with the assumption that a particular version of a particular plugin is installed. For example, the Job DSL Plugin's syntax for jobs can differ between versions. If the version is wrong, you will have issues.
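With the geerlingguy.jenkins role, for instance, versions are pinned through role variables; the exact variable names depend on the role version, so check its README:

```yaml
- hosts: ci
  roles:
    - role: geerlingguy.jenkins
      vars:
        jenkins_version: "2.346.3"   # pin the Jenkins version
        jenkins_plugins:             # plugins to install
          - git
          - workflow-aggregator
          - job-dsl
```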
How to Copy an Existing Jenkins Installation
After the Ansible script execution, we get a totally empty Jenkins: no jobs, users or access rights are configured, so those need to be automated as well.
If you already have Jenkins (maybe it was installed manually or by another tool) and want to create an Ansible script that installs Jenkins with exactly the same configuration, you can create a zip file of the /var/lib/jenkins folder. Then, your Ansible script can unpack that zip to the same location on the new server.
The zip file, which might get pretty big because it contains the Jenkins plugins, will be stored in the same repository as your Ansible scripts. It is not a good idea to store big files in a repository, because it slows down the source code checkout. To decrease the size of the zip file, you can remove the plugin binaries from it, because the Ansible script installs them anyway.
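As a sketch, the archive can be created with the plugin binaries excluded; Jenkins keeps plugins under plugins/, mostly as .jpi/.hpi packages plus unpacked jars (the exact layout may vary by Jenkins version):

```shell
cd /var/lib/jenkins
zip -r /tmp/jenkins-home.zip . -x "plugins/*.jpi" -x "plugins/*.hpi" -x "plugins/*/WEB-INF/lib/*.jar"
```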
On the same Jenkins, you may want to build many different projects, and each project may require additional software. For example, to build Java projects you need to install a particular version of the JDK, and different projects may require different JDK versions. You may also need to install Git and Maven to check out and build the code; for a NodeJS project you may need to install NodeJS, and so on.
This can pollute your Jenkins server really quickly. Moreover, if you need to run builds in parallel with end-to-end tests that start your application on some port, applications from different builds will conflict, because only one application can listen on a given port at a time. Two applications may also try to write to the same log file, which again leads to conflicts.
To kill two birds with one stone, you can use Docker containers and perform each build of each project inside a Docker container. All the software required to build a particular application is installed inside its Docker image, so all you need on the Jenkins server itself is Jenkins with the required plugins and Docker (you can use this link for the Docker installation).
Defining Your CI/CD Pipeline with the Jenkins Declarative Pipeline
The Jenkins Declarative Pipeline uses a DSL that can describe the whole CI/CD pipeline: where to take the source code from, how to build it, how to test it, how to deploy it, and so on. The Declarative Pipeline is the most modern way to define a CI/CD pipeline on Jenkins.
To use a Declarative Pipeline, all you need to do is:
- put a Jenkinsfile in the root of your project in the source repository
- create a Multibranch Pipeline project on Jenkins and specify the URL of the repository with your project.
The Multibranch Pipeline project will automatically scan the branches and create a separate job for each branch that has a Jenkinsfile in its root.
An example of a Jenkinsfile (in Declarative Pipeline)
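Here is a minimal example, close to the one in the official Jenkins documentation, that builds a Maven project inside a Docker container:

```groovy
pipeline {
    agent {
        // run every stage inside this Docker image
        docker { image 'maven:3-alpine' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}
```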
This script runs Maven inside a Docker container, using the publicly available Docker image 'maven:3-alpine', which has the JDK and Maven installed. You can learn more from the blog post “How to Use the Jenkins Declarative Pipeline”, and also from here and here.
Now let’s look at the different integrations.
CI/CD Pipeline Tools Integration
After the installation, the tools need to be configured to work together: GitLab should trigger builds on Jenkins after each push; Jenkins should have access to GitLab to check out the code and to set the build status on the built commits; and Jenkins should be configured to send code quality statistics to SonarQube.
SonarQube can be used from the Jenkins Declarative Pipeline. This script runs Maven inside a Docker container, checks out your application inside it, builds the sources with Maven, performs the SonarQube analysis and uploads the results to the SonarQube server. Get more details here.
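As a sketch, the analysis stage of such a pipeline could look like this, assuming the SonarQube Scanner plugin is installed and a server named 'sonar' is configured in Jenkins (the server name is a placeholder):

```groovy
stage('SonarQube analysis') {
    steps {
        // injects the server URL and credentials configured in Jenkins
        withSonarQubeEnv('sonar') {
            sh 'mvn -B clean verify sonar:sonar'
        }
    }
}
```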
SonarQube - IDE integration
SonarQube can be configured with the code quality rules that apply to your projects. SonarQube also has great integrations with many IDEs, so it is possible to see code violations in your IDE in real time. Learn how from here.
Jenkins - GitLab integration
When a developer changes code, it is important to know the result of the change as soon as possible. If the developer broke something, it is easier to fix it sooner rather than later, while the context of the change is fresh in their mind.
To achieve this:
- The Jenkins build should run as soon as changes are pushed to the source repository.
- The build result of each commit should be shown in GitLab, so it's clear exactly which commit broke the build.
- A merge request shouldn't be merged on GitLab if any branch in it has a broken build.
Learn how to configure this from here.
Make sure you have a performance testing tool in your pipeline for running functional and load tests every time you change your code, to ensure system performance. BlazeMeter enables you to run open-source-based performance tests with additional abilities: massive scaling, automation and advanced analytics. To try out BlazeMeter, request a live demo from one of our performance engineers.