Tired of Jenkins? Always keeping an eye on all those new kids on the block with their super cool and simple Continuous Integration Pipeline files? Here´s a guide on how to fire up a fully functional GitLab Continuous Integration/Delivery pipeline with Let´s Encrypt, Docker Container Registry and Runners in no time.
The problem with Jenkins
There are many reasons to stick with Jenkins. It´s a mature Continuous Integration server and it has a big market share. Everybody uses Jenkins. So why should you bother with something different? Well, I´m quite a Jenkins fanboy. As a consultant I´ve used it in many projects and it always felt like a good choice.
Always? Well, just until the concept of Pipeline as Code arose and the Jenkins Pipeline Plugin was proposed as the answer to that concept in Jenkins 2.x. Together with a smart colleague we set up a new Jenkins server and started to rewrite all our existing Jenkins jobs in the Jenkins Pipeline way… And it wasn´t easy! This approach was missing many things we´d already had in place and now needed to reimplement in our Jenkinsfiles, which took much more time than we had planned in the first place. At that time we had a really good standing in the project and the customer was on our side. We somehow managed to put everything together – but it didn´t feel finished. And it was way too verbose! And I don´t really know how we convinced our customer to not just scream at us about that decision (I think that was all about all the other architectural decisions that were pretty good 🙂 ).
At the same time I was heavily using Open Source projects and also started to contribute to some, including building my first own projects on GitHub. The “standard way” to do Continuous Integration there is to use TravisCI. And all you have to do to configure your pipeline is to create a simple file called .travis.yml (see an example here). Comparing these files to the big pipelines of our customer projects is of course not really fair. But the thought that everything should work in a much easier way remained.
Thinking about all of this, I went to the codecentric coffee kitchen. Well, maybe you already know what happened then. 🙂 Many colleagues were there, saying:
“Hey Jonas, you Jenkins fanboy. Check out all those cool new CI servers like Concourse, Circle CI or even GitLab CI! We don´t know why you´re still messing around with Jenkins…”
With a fresh coffee in my hand, I opened Google and found what my gut had been telling me all along: “Jenkins 2.0 tries to address this by promoting a Pipeline plugin (plus another plugin to visualize it), but it kind of misses the point.”
That also reminded me of other pain points. Ever tried to keep all those Jenkins plugins updated? Why the heck do I need all those plugins in the first place?! And why is Jenkins so hard to set up in a fully automated way that my colleague Reinhard needed to give deep-dive talks about it (I really recommend them!)?!
Now I was ready to switch my CI fanboyism to a new server! And as there are many good rumors about GitLab CI, I wanted to give it a try. And that should be no problem, right? It´s just one of those new and easy-to-set-up tools!
A GitLab CI real life setup
Installing and configuring GitLab CI isn´t always as easy as one could think in the first place. Yeah I know, there are those tutorials that show you a docker-compose up and tell you you´re already 80 % there. But in the end you´ll see that you just achieved maybe 10 %. 🙂 Why is that? Well, if we want to set up a modern CI pipeline, we for sure want to use Docker somewhere. It simplifies the effort to test, build and run our applications and also prevents us from getting into trouble with unmatched build requirements on our CI server itself: everything needed is just already there inside the matching Docker images, no matter what kind of software you´re building or what programming language you´re using! The GitLab CI docs also propose this strategy:
One of the new trends in Continuous Integration/Deployment is to:
1. Create an application image
2. Run tests against the created image
3. Push the image to a remote registry
4. Deploy to a server from the pushed image
This means we need a working Docker installation on our pipeline server as a prerequisite for the GitLab configuration. And as this post will show, there are more prerequisites. So it turns out to be a good idea to leave the simple path with docker-compose up and to shift to a much more comprehensible setup here. This also has another advantage: every step described could be used inside your company´s infrastructure and on your servers! It´s also a good idea to strive for a fully automated setup of our CI pipeline – having all the steps available in automatically executable code, checked in to version control.
To achieve a fully comprehensible setup, we use some Infrastructure-as-Code tools. The Ansible playbooks will contain every step necessary to provision a GitLab server – and they double as great documentation about what´s needed to set everything up from the ground up, even if you don´t want to use Ansible! And with the help of Vagrant we´ll define our infrastructure inside a Vagrantfile. Now we can easily fire up a server locally that is based on a certain OS. And switching to your company´s GitLab server is extremely easy: just edit the Ansible inventory file and add [yourcompany-gitlab-server] including its IP.
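Just as a rough sketch – the group name, IP and user here are placeholders, the example project ships its own inventory in the file hostsfile – such an entry could look like this:

[yourcompany-gitlab-server]
192.168.33.10 ansible_user=youradminuser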
Prerequisites
For the sake of comprehensibility, every Ansible Playbook and Vagrant file used in this post is available inside the example project on GitHub . To run this post´s setup, you need a running installation of Ansible and Vagrant together with a Virtualization provider like VirtualBox . On a Mac, this is just a few homebrew commands away:
brew install ansible
brew cask install virtualbox
brew cask install vagrant
To really achieve a comprehensible setup, we also need the vagrant-dns Plugin (we´ll talk about that in a second). Just install it with:
vagrant plugin install vagrant-dns
Now we´re ready to get our hands dirty and clone the example project github.com/jonashackt/gitlab-ci-stack. Be sure to add your domain name into the Vagrantfile. As I own the domain jonashackt.io and later want GitLab to be available on gitlab.jonashackt.io, I added the following:
config.vm.hostname = "jonashackt"
config.dns.tld = "io"
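For orientation, here´s a minimal sketch of how these lines sit inside the Vagrantfile – the box name is an assumption, check the example project for the real file; the private IP is the one we´ll see again later in this post:

Vagrant.configure("2") do |config|
  # assumption: an Ubuntu base box – the example project defines the actual one
  config.vm.box = "ubuntu/xenial64"
  config.vm.hostname = "jonashackt"
  config.dns.tld = "io"
  # private IP that gitlab.jonashackt.io will resolve to via vagrant-dns
  config.vm.network "private_network", ip: "172.16.2.15"
end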
After a vagrant dns --install, we´re ready to fire up our server! Just go right into the gitlab-ci-stack directory and fire up our Vagrant Box with the common vagrant up:
Depending on your internet connection, this can take some time – especially if the command is executed for the first time. As soon as our Vagrant Box is running, we have everything set up to run our Ansible Playbooks on. Let´s do a connection check first:
ansible gitlab-ci-stack -i hostsfile -m ping
If this returns a SUCCESS, we can move on and really execute our Ansible playbooks.
One command to install & configure a full GitLab CI platform
There are basically two options to install GitLab: the Omnibus way and from source. We´re using Omnibus here because it makes life much easier.
Everything needed to install a fully functional GitLab instance is handled by the playbook prepare-gitlab.yml. Before we execute it, we´ll need to check two things. First make sure the domain name your GitLab instance should answer on is provided inside prepare-gitlab.yml. In my case this is gitlab.jonashackt.io:
vars:
  gitlab_domain: "gitlab.jonashackt.io"
The second part depends on your preferences. If you use this setup together with the provided Vagrant Box, you´ll need API access to your DNS provider. This is because our Vagrant Box isn´t accessible from the Let´s Encrypt servers directly (we´ll also talk about the “why” in a second, I promise). For now just provide providername, providerusername and providertoken for your DNS provider´s API in the extra-vars. In some cases you also need to add your current IP (check a site like whatsmyip.org) to the DNS provider´s IP whitelist. Now we´re ready to execute our playbook:
ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"
If you don´t use the Vagrant Box of our current setup and your server is publicly accessible, you can safely ignore these extra-vars – GitLab will handle everything for you. Just execute:
ansible-playbook -i hostsfile prepare-gitlab.yml
Ansible will now install and configure a fully functional GitLab CI for you. If you don´t want to know anything else, that´s perfectly fine! Just wait for the playbook to complete, open up your browser and enter your domain name. The result should look something like this:
But feel free to read on if you want to know about the hows and whys 🙂
Five steps from zero to GitLab CI platform
As already mentioned, Ansible provides us with perfect (and up-to-date) documentation on how to install everything. So let´s have a look at the GitLab installation process. The main playbook prepare-gitlab.yml is structured into five tasks:
- hosts: all
  become: true

  vars:
    gitlab_domain: "gitlab.jonashackt.io"
    gitlab_url: "https://{{ gitlab_domain }}"
    gitlab_registry_url: "{{ gitlab_url }}:4567"

  tasks:

  - name: 1. Prepare Docker on Linux node
    include_tasks: prepare-docker-ubuntu.yml
    tags: install_docker

  - name: 2. Prepare Let´s Encrypt certificates for GitLab if we setup an internal server like Vagrant (you have to provide providername, providerusername & providertoken as extra-vars!)
    include_tasks: letsencrypt.yml
    when: providername is defined
    tags: letsencrypt

  - name: 3. Install GitLab on Linux node
    include_tasks: install-gitlab.yml
    tags: install_gitlab

  - name: 4. Configure GitLab Container Registry
    include_tasks: configure-gitlab-registry.yml
    tags: configure_registry

  - name: 5. Install & Register GitLab Runner for Docker
    include_tasks: gitlab-runner.yml
    tags: gitlab_runner
We need to (1.) install Docker on our machine and (2.) fetch proper Let´s Encrypt certificates for our not publicly accessible Vagrant Box. Everything needed for the (3.) GitLab Omnibus installation is done in the next task, followed by a playbook on how to (4.) configure the GitLab Container Registry. The fifth playbook then finally (5.) registers our GitLab Runners, which will be able to interact with the server´s Docker engine.
The full setup will look like this in the end:
logo sources: GitLab icon , Ubuntu logo , Let´s Encrypt icon, Vagrant logo , VirtualBox logo , Ansible logo , Docker logo
Install & configure Docker
The first included task list prepare-docker-ubuntu.yml simply walks through the standard guide on how to install Docker on Ubuntu. If you use a different distro, you can change the modules etc. to match your Linux version.
There´s really nothing special here – except the way we install Docker Compose. The path proposed in the docs unfortunately uses a hard-coded version number inside the required curl command. Therefore the docs need to add the following hint:
Use the latest Compose release number in the download command.
But there´s a much nicer way: the Python package manager pip always provides us with the current Docker Compose package. So all we have to do is the following:
- name: Install pip
  apt:
    name: python3-pip
    state: latest

- name: Install Docker Compose
  pip:
    name: docker-compose
Now we don´t need to mess with maintaining the Docker Compose version number and are able to use the smooth upgrade process of a package manager.
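A quick sketch of what that upgrade path could look like later on – this is not part of the playbooks, just the usual pip workflow:

# check which version is currently installed
docker-compose --version
# pull the latest release via pip whenever needed
sudo pip3 install --upgrade docker-compose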
Don´t go without HTTPS and a domain!
As mentioned before, we want to achieve a real-life GitLab CI setup here. What we therefore don´t want is to access GitLab via a URL like http://localhost:30080, which would be the standard way with Vagrant port forwarding and without HTTPS in place. A central point when using GitLab CI with Docker – including the GitLab Container Registry and the Docker Runners – is to have a valid domain name and properly configured HTTPS. Trust me. You don´t want to start without that! There will be so many error messages waiting for you. From a simple failing Git push like:
$ git push
fatal: unable to access 'https://gitlab.jonashackt.io/root/yourRepoNameHere/': SSL certificate problem: self signed certificate
to errors while trying to register GitLab Runners:
ERROR: Registering runner... failed
runner=gyy8axxP status=couldn't execute POST against https://gitlab.jonashackt.io/api/v4/runners: Post https://gitlab.jonashackt.io/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems
up to problems while trying to push into the GitLab Container Registry:
Error response from daemon: Get https://gitlab.jonashackt.io:4567/v2/: x509: certificate signed by unknown authority
ERROR: Job failed: exit status 1
I think there are many more stumbling blocks on the way to a properly configured GitLab CI Platform. To avoid most of them, let´s configure proper HTTPS!
Using domain names for Vagrant Boxes
Let´s start the journey by configuring a domain name for our Vagrant Box. After that step, we should be able to access our Box with an address like http://gitlab.jonashackt.io. Luckily this is easily achievable with the help of the vagrant-dns Plugin. Remember, I promised to tell you why you had to install that plugin earlier?! There we go 🙂
We already configured config.vm.hostname = "jonashackt" and config.dns.tld = "io" inside our Vagrantfile. Now we´re able to configure our top-level domain io on our host machine with the help of the vagrant-dns Plugin. Just execute the following:
vagrant dns --install
To check if everything went right and our top-level domain will be resolvable, we use our host´s appropriate tooling. On a Mac this is scutil --dns. Using this, we can see if the resolver is part of our DNS configuration (there are more resolvers configured, you may need to scroll down):
...

resolver #10
  domain   : io
  nameserver[0] : 127.0.0.1
  port     : 5300
  flags    : Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)

...
This looks pretty good! If you already fired up the Vagrant Box, you should vagrant halt it first. After the next startup of our Vagrant Box with a usual vagrant up we can try to reach our Box using our configured domain. Again on a Mac we can use:
dscacheutil -q host -a name gitlab.jonashackt.io
As we configured everything correctly, this should result in something like the following (containing the private IP 172.16.2.15 we configured inside the Vagrantfile):
$:gitlab-ci-stack jonashecht$ dscacheutil -q host -a name gitlab.jonashackt.io
name: gitlab.jonashackt.io
ip_address: 172.16.2.15
The last step is to get our nice domain name gitlab.jonashackt.io not only available on our host machine, but also inside our Vagrant Box. Sadly the great vagrant-dns Plugin doesn´t support propagating the host´s DNS resolver into the Vagrant Boxes themselves.
But luckily we chose VirtualBox as the virtualization provider for Vagrant, which supports propagating the host´s DNS resolver to the guest machines 🙂 All we have to do is use the host´s resolver as a DNS proxy in NAT mode, which is suggested in this serverfault answer:
# Forward DNS resolver from host (vagrant dns) to box
virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
After we restart our Vagrant Box with this configuration in place, our domain name gitlab.jonashackt.io should also be resolvable inside our Ubuntu guest machine.
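Roughly, this customization sits inside the provider block of the Vagrantfile – just a sketch, the block variable virtualbox is simply how the provider object is named here:

config.vm.provider "virtualbox" do |virtualbox|
  # Forward DNS resolver from host (vagrant dns) to box
  virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end

A quick way to verify the resolution from inside the box is something like vagrant ssh -c "getent hosts gitlab.jonashackt.io".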
HTTPS & Let´s Encrypt for GitLab on publicly accessible servers
If you don´t want to use this post´s setup with Vagrant, but have a publicly accessible server ready and a public DNS provider configured to resolve to this server, you don´t need to do much about HTTPS in GitLab:
From 10.7 we will automatically use Let's Encrypt certificates if the external_url specifies https, the certificate files are absent, and the embedded nginx will be used to terminate ssl connections.
In this case the whole HTTPS setup with Let´s Encrypt is handled by the GitLab Omnibus installation for you. And this post´s Ansible scripts will just build on top of that – just be sure to have your domain name configured in the main playbook prepare-gitlab.yml. We don´t have to worry about the process of obtaining Let´s Encrypt certificates and configuring them for GitLab. Everything is just done for you by Omnibus.
HTTPS & Let´s Encrypt for GitLab on non-publicly accessible servers
In most other scenarios the whole configuration process of GitLab CI will be much harder! If your GitLab host is not externally accessible by the Let´s Encrypt servers, you´ll need an alternative to the fully automated Omnibus Let´s Encrypt process. And this is true for our local setup with Vagrant as well as for GitLab servers that should only be accessible to internal development teams.
In both cases the Let´s Encrypt servers won´t be able to validate whether the given domain name resolves to the host from which the certificate request was issued. After all it´s just a non-public DNS configuration and the server isn´t visible to Let´s Encrypt. If you try to use the automated Omnibus process here, the GitLab installation won´t really fail. But you´d be stuck with self-signed certificates, which introduce many of the problems and errors already mentioned before. And to make matters worse, your browser (and your colleagues´ browsers) will complain in that well-known nasty way:
Because of this it would be really nice to use Let´s Encrypt all the same. Although Let´s Encrypt was designed to be used with publicly accessible websites, there are ways to create these certificates for non-public servers as well. All you need is to own a regularly registered domain. That may sound like a big issue, but it isn´t really a problem! If you don´t care about the actual top-level domain, the cheapest start would be something like yourDomainName.xyz or yourDomainName.online. Both are available starting from $1/year. Just be sure to pick one from this provider list.
You´ll need API access! Besides your regularly registered domain you´ll need API access to your DNS provider. This isn´t always included in the standard price of your domain. Be sure to check the prerequisites for API access at your respective provider.
Owning a domain and having API access to the DNS provider, we have everything in place to fetch proper Let´s Encrypt certificates for our Vagrant Box (or private server). There are many discussions and blog posts about this topic, but by far the most elegant way to get the Let´s Encrypt certificates without having to spin up another (publicly accessible) server is to use dehydrated together with lexicon and Let´s Encrypt´s dns-01 challenge. This great answer on security.stackexchange.com nails it:
Since this challenge works by provisioning DNS TXT records, you don’t ever need to point an A record at a public IP address. So your intranet does not need to be reachable from the Internet, but your domain name does need to exist in the public DNS under your control.
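If you are curious what Let´s Encrypt actually looks at, you can watch the temporary TXT record while a certificate run is in progress – a hypothetical check with dig, using my domain as an example:

dig +short TXT _acme-challenge.gitlab.jonashackt.io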
Using dehydrated and lexicon together with Let´s Encrypt´s dns-challenge
Great work has been done by the dehydrated team to create an easier-to-use Let´s Encrypt client than the official certbot. And the same is true for the lexicon team, because they standardise the way DNS records of multiple DNS providers are manipulated via their APIs. Thanks to the great post by Jason Kulatunga, the maintainer of lexicon, crafting an Ansible playbook that automatically uses dehydrated and lexicon together with Let´s Encrypt´s dns-01 challenge is really straightforward! So let´s have a look at the example project´s playbook obtain-letsencrypt-certs-dehydrated-lexicon.yml:
- name: Update apt
  apt:
    update_cache: yes

- name: Install openssl, curl, sed, grep, mktemp, git
  apt:
    name:
      - openssl
      - curl
      - sed
      - grep
      - mktemp
      - git
    state: latest

# install this neat tool https://github.com/lukas2511/dehydrated
- name: Install dehydrated
  git:
    repo: 'https://github.com/lukas2511/dehydrated.git'
    dest: /srv/dehydrated

- name: Make dehydrated executable
  file:
    path: /srv/dehydrated/dehydrated
    mode: "+x"

- name: Specify our internal domain
  shell: "echo '{{ gitlab_domain }}' > /srv/dehydrated/domains.txt"

- name: Install build-essential, python-dev, libffi-dev, python3-pip
  apt:
    name:
      - build-essential
      - python-dev
      - libffi-dev
      - libssl-dev
      - python3-pip
    state: latest

- name: Install requests[security]
  pip:
    name: "requests[security]"

# install this neat tool https://github.com/AnalogJ/lexicon
- name: Install dns-lexicon with correct provider (dns-lexicon[providernamehere])
  pip:
    name: "dns-lexicon[{{providername|lower}}]"
As we don´t use a publicly accessible server, we need to use dns-01 challenges instead of the Let´s Encrypt “standard” http-01. Therefore dehydrated needs a hook file to work with dns-01. lexicon provides such a file for us, dehydrated.default.sh, and we simply copy it inside our playbook:
- name: Configure lexicon with Dehydrated hook for dns-01 challenge
  get_url:
    url: https://raw.githubusercontent.com/AnalogJ/lexicon/master/examples/dehydrated.default.sh
    dest: /srv/dehydrated/dehydrated.default.sh
    mode: "+x"
At this point we need some private information about your DNS provider – because remember, the whole process only works if you have access to a real domain. In order to grant lexicon access to your DNS provider´s API, we set some environment variables and execute dehydrated afterwards. As you may notice, lexicon´s environment variable names are dynamic, based on the provider´s name – which is kind of tricky to configure:
- name: Generate Certificates
  shell: "/srv/dehydrated/dehydrated --cron --hook /srv/dehydrated/dehydrated.default.sh --challenge dns-01 --accept-terms"
  environment:
    - PROVIDER: "{{providername|lower}}"
    - "{'LEXICON_{{providername|upper}}_USERNAME':'{{providerusername}}'}"
    - "{'LEXICON_{{providername|upper}}_TOKEN':'{{providertoken}}'}"
  ignore_errors: true
You may need to whitelist the IP address you´re approaching the DNS provider´s API from. You can use a site like whatsmyip.org to find it, then add it to your DNS provider´s API access IP whitelist before you run the playbook.
All environment variable values depend on the --extra-vars which are passed in as providername, providerusername and providertoken:
ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"
Configure the certificates in GitLab
Please don´t get confused by this part of the docs. That´s only needed if you want to install a custom certificate authority – not for properly created Let´s Encrypt certificates, since the Let´s Encrypt authority is already trusted.
According to the docs there are two ways to configure HTTPS in GitLab: the automatic Let´s Encrypt way, which we sadly can´t use in our scenario as our Vagrant Box isn´t publicly accessible, and the manual HTTPS configuration, which is the one we need here because we acquired the Let´s Encrypt certificates ourselves.
Therefore we set the external_url via the environment variable EXTERNAL_URL: "{{gitlab_url}}" during the GitLab Omnibus installation process so that it contains an https URL. In my case, this is https://gitlab.jonashackt.io. The GitLab Omnibus installation will then look for certificates placed in /etc/gitlab/ssl/ and named gitlab.jonashackt.io.key & gitlab.jonashackt.io.crt. Note that both file names must be derived from your domain´s name.
The playbook letsencrypt.yml takes care of this and will just copy the generated certificates with the correct names to the correct location. And as this step is done right before the actual GitLab installation, we also need to create the directory /etc/gitlab/ssl/ first:
- name: Create GitLab cert import folder /etc/gitlab/trusted-certs for later GitLab Installation usage
  file:
    path: /etc/gitlab/ssl
    state: directory
  when: success

- name: Copy certificate files to GitLab cert import folder /etc/gitlab/trusted-certs
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    remote_src: yes
  with_items:
    - src: "/srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem"
      dest: "/etc/gitlab/ssl/{{ gitlab_domain }}.crt"

    - src: "/srv/dehydrated/certs/{{ gitlab_domain }}/privkey.pem"
      dest: "/etc/gitlab/ssl/{{ gitlab_domain }}.key"

  when: success
Note that we´re copying the fullchain.pem instead of the cert.pem! This is essential to prevent ourselves from getting the described errors like x509: certificate signed by unknown authority or ERROR: Registering runner... failed later. Thanks to this great comment I understood that a green indicator inside the address bar of Chrome or Firefox doesn´t mean that Docker or Ubuntu know about Let´s Encrypt´s CA at all levels.
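If you want to double-check by hand that GitLab really serves the full chain and not just the leaf certificate, something like the following could help – just a sketch using my domain, not part of the playbooks:

# more than 1 certificate in the output means the intermediate is served as well
echo | openssl s_client -connect gitlab.jonashackt.io:443 -servername gitlab.jonashackt.io -showcerts 2>/dev/null | grep -c 'BEGIN CERTIFICATE'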
If you ran the example project´s Ansible playbooks, you can use your GitLab CI instance without cryptic error messages caused by self-signed certificates:
Install GitLab itself
Now we´ve reached the point where we wanted to be in the first place: we´ll install GitLab itself right now! The playbook install-gitlab.yml will walk through the standard GitLab installation guide for Ubuntu . Just in a fully automated way:
- name: Update apt and autoremove
  apt:
    update_cache: yes
    cache_valid_time: 3600
    autoremove: yes

- name: Install curl, openssh-server, ca-certificates & postfix
  apt:
    name:
      - curl
      - openssh-server
      - ca-certificates
      - postfix
    state: latest

- name: Add the GitLab package repository
  shell: "curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash"

- name: Update apt and autoremove
  apt:
    update_cache: yes

- name: Install GitLab with Omnibus-Installer
  apt:
    name: gitlab-ce
    state: latest
  environment:
    EXTERNAL_URL: "{{gitlab_url}}"
  ignore_errors: true
  register: gitlab_install_result

- name: Gitlab Omnibus is based on Chef and will give many insights what it does in the background
  debug:
    msg:
      - "The installation process said the following: "
      - "{{gitlab_install_result.stdout_lines}}"

- name: Wait for GitLab to start up
  wait_for:
    port: 443
    delay: 10
    sleep: 5

- name: Let´s check if Gitlab is up and running
  uri:
    url: "{{gitlab_url}}"
This is one of the simplest playbooks in this setup. After the required packages are installed, the GitLab package repository is added and the GitLab Omnibus installation is started. The key point here is the environment variable EXTERNAL_URL, which is set to "{{gitlab_url}}". The variable itself is configured inside the main playbook prepare-gitlab.yml. After the GitLab installation, we wait for port 443 to become available and then check if GitLab answers on the configured URL.
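If you like to check that by hand as well, a plain request against the instance should already come back with a redirect to the login page – a quick sketch using my domain:

curl -I https://gitlab.jonashackt.io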
GitLab Container Registry
Remember the introduction? We wanted to set up a modern CI pipeline making heavy use of Docker and its advantages. For this purpose the GitLab Container Registry comes just in time. With that tool we´re not only able to configure a Docker Registry for every GitLab project. We can also leverage the power of GitLab´s user authentication system for the Docker Registry. And last but not least, we get a nice tab inside our GitLab GUI where we can scroll through all the Docker images that reside in the project´s corresponding Docker Registry:
The docs about how to configure the GitLab Container Registry domain tell us that we can either use a completely separate domain for our Registry or just use the same domain as the main GitLab instance. Our Ansible playbook configure-gitlab-registry.yml demonstrates the second way:
- name: Activate Container Registry in /etc/gitlab/gitlab.rb
  lineinfile:
    path: /etc/gitlab/gitlab.rb
    line: " registry_external_url '{{ gitlab_registry_url }}'"

- name: Reconfigure Gitlab to activate Container Registry
  shell: "gitlab-ctl reconfigure"
  register: reconfigure_result

- name: Let´s see what Omnibus/Chef does
  debug:
    msg:
      - "The reconfiguration process gave the following: "
      - "{{reconfigure_result.stdout_lines}}"
The playbook inserts the needed registry_external_url configuration into the file /etc/gitlab/gitlab.rb. With my domain, this contains https://gitlab.jonashackt.io:4567, where the port should be something other than 5000, according to the docs.
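Once the reconfiguration has run through, a quick manual smoke test of the Registry endpoint is possible from any Docker host that trusts Let´s Encrypt – just a sketch, log in with your GitLab user (or a personal access token):

docker login gitlab.jonashackt.io:4567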
As I already mentioned in the paragraph Configure the certificates in GitLab, it is essential that we use the /srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem inside our GitLab certificate configuration. By doing so, we prevent errors while using the GitLab Container Registry. And these errors are sneaky: they will not show up until you try to actually use the Container Registry inside a GitLab CI pipeline:
Error response from daemon: Get https://gitlab.jonashackt.io:5000/v2/: x509: certificate signed by unknown authority
ERROR: Job failed: exit status 1
As our certificates are named after the correct domain name, the GitLab Container Registry also uses these certificates (including the fullchain.pem). The last step inside our configure-gitlab-registry.yml shows us the output of the GitLab Omnibus reconfiguration, which is executed with the command gitlab-ctl reconfigure (you may need to scroll a bit to see it 🙂):
...

- create new file /var/opt/gitlab/nginx/conf/gitlab-registry.conf
- update content in file /var/opt/gitlab/nginx/conf/gitlab-registry.conf from none to 38ba8d
--- /var/opt/gitlab/nginx/conf/gitlab-registry.conf  2018-05-23 07:06:18.857687999 +0000
+++ /var/opt/gitlab/nginx/conf/.chef-gitlab-registry20180523-13668-614sno.conf  2018-05-23 07:06:18.857687999 +0000
@@ -1 +1,59 @@
+# This file is managed by gitlab-ctl. Manual changes will be
+# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
+# and run `sudo gitlab-ctl reconfigure`.
+
+## Lines starting with two hashes (##) are comments with information.
+## Lines starting with one hash (#) are configuration parameters that can be uncommented.
+##
+###################################
+##         configuration         ##
+###################################
+
+
+server {
+  listen *:4567 ssl;
+  server_name gitlab.jonashackt.io;
+  server_tokens off; ## Don't show the nginx version number, a security best practice
+
+  client_max_body_size 0;
+  chunked_transfer_encoding on;
+
+  ## Strong SSL Security
+  ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
+  ssl on;
+  ssl_certificate /etc/gitlab/ssl/gitlab.jonashackt.io.crt;
+  ssl_certificate_key /etc/gitlab/ssl/gitlab.jonashackt.io.key;

...
Here we see that GitLab Omnibus configured its internal Nginx with a new endpoint on port 4567 for our Container Registry and that our acquired Let´s Encrypt certificates are used. Of course you can configure this port inside the main playbook prepare-gitlab.yml.
Install GitLab Runners
Now we´ve already reached the fifth step of our main playbook: installing and registering the GitLab Runners to access the Docker engine inside our GitLab CI pipeline. GitLab Runners are needed to actually execute the steps inside a GitLab CI pipeline later. These steps are called Jobs inside GitLab.
The process can be split into two parts: first we need to install the OS service gitlab-runner. In our playbook gitlab-runner.yml we used the official docs on how to do that on Linux as a blueprint:
- name: Add the GitLab Runner package repository
  shell: "curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash"

- name: Install GitLab Runner package
  apt:
    name: gitlab-runner
    state: latest
Nothing special here. The second part of the process is a bit trickier. In order to register a GitLab Runner in the end, we´ll need to somehow automatically obtain the current registration token from our GitLab instance. And this token changes every time we start up GitLab together with our Vagrant Box or server. As we don´t want our automated GitLab installation process to stop here, we need to get this token every time we want to register a new GitLab Runner.
Sadly there´s no way to use the great GitLab REST API for that purpose right now. And this leaves us with the only thing we can do: dive into GitLab´s database directly:
- name: Extract Runner Registration Token directly from GitLab DB
  become: true
  become_user: gitlab-psql
  vars:
    ansible_ssh_pipelining: true
    query: "SELECT runners_registration_token FROM application_settings ORDER BY id DESC LIMIT 1"
    psql_exec: "/opt/gitlab/embedded/bin/psql"
    gitlab_db_name: "gitlabhq_production"
  shell: '{{ psql_exec }} -h /var/opt/gitlab/postgresql/ -d {{ gitlab_db_name }} -t -A -c "{{ query }}"'
  register: gitlab_runner_registration_token_result

- name: Extracting the Token from the Gitlab SQL query response
  set_fact:
    gitlab_runner_registration_token: "{{gitlab_runner_registration_token_result.stdout}}"

- name: And the Token is...
  debug:
    msg: "{{gitlab_runner_registration_token}}"
In order to use Docker, we need to choose one of the Executors that GitLab Runners implement to serve different scenarios. We´ll just keep it simple here and use the shell Executor:
Shell is the simplest executor to configure. All required dependencies for your builds need to be installed manually on the machine on which the Runner is installed.
And as we already decided to use and install Docker (in a fully automated way), that´s all we need right now. No manual interaction needed. 🙂 Once you´ve gained more experience with GitLab CI, you can switch to another Executor for your GitLab Runners in the future. I would be keen to hear about your experiences with different Executors in the comments!
Register GitLab Runners
Now we´re ready to register our GitLab Runners. And as our Ansible playbook should be designed idempotently – so that it can be executed once or many times without changing the result – we need to unregister potentially registered Runners first. This is naturally not relevant for the first playbook run:
- name: Unregister all previously used GitLab Runners
  shell: "sudo gitlab-runner unregister --all-runners"

- name: Add gitlab-runner user to docker group
  shell: "sudo usermod -aG docker gitlab-runner"

- name: Register Gitlab-Runners using shell executor
  shell: "gitlab-runner register --non-interactive --url '{{gitlab_url}}' --registration-token '{{gitlab_runner_registration_token}}' --description '{{ item.name }}' --executor shell"
  with_items:
    - { name: shell-runner-1 }
    - { name: shell-runner-2 }
    - { name: shell-runner-3 }
    - { name: shell-runner-4 }
    - { name: shell-runner-5 }

- name: Retrieve all registered Gitlab Runners
  shell: "gitlab-runner list"
  register: runner_result

- name: Show all registered Gitlab Runners
  debug:
    msg:
      - "{{runner_result.stderr_lines}}"
As you can see, we´re using the command gitlab-runner register together with its non-interactive mode, so that the registration process can run without user interaction inside our playbook. The with_items loop shows how many GitLab Runners we´re registering here. To achieve a setup where GitLab CI Jobs can run in parallel, we´re registering a list of five GitLab Runners.
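One thing to keep in mind – this is an assumption on my side and not handled by the playbook shown here: the gitlab-runner service also has a global concurrent setting in /etc/gitlab-runner/config.toml that defaults to 1, so even with five registered Runners only one Job would run at a time until you raise it, for example like this:

# raise the global Job concurrency of the gitlab-runner service (sketch)
sudo sed -i 's/concurrent = 1/concurrent = 5/' /etc/gitlab-runner/config.toml
sudo gitlab-runner restart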
I have to mention it again: we need to use the /srv/dehydrated/certs/{{ gitlab_domain }}/fullchain.pem inside our GitLab certificate configuration (see the paragraph Configure the certificates in GitLab) in order to be able to register our GitLab Runners properly, too. Otherwise errors like the following will occur:
ERROR: Registering runner... failed
runner=gyy8axxP status=couldn't execute POST against https://gitlab.jonashackt.io/api/v4/runners: Post https://gitlab.jonashackt.io/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems
And don´t try to work around these errors with the --tls-ca-file option. This would only fix the issue for the moment! As soon as you try to use the GitLab Container Registry inside GitLab CI, you will run into problems again.
Running an example GitLab CI pipeline
That´s all! If you executed the main playbook already, your GitLab instance should already be waiting for you. If not, that´s no problem. Just fire up Ansible now and grab yourself a coffee:
ansible-playbook -i hostsfile prepare-gitlab.yml --extra-vars "providername=yourProviderNameHere providerusername=yourUserNameHere providertoken=yourProviderTokenHere"
Your GitLab instance will be waiting for you to define a new root password:
In order to run an example GitLab CI pipeline, we need to import another example project on GitHub containing a GitLab CI pipeline definition file called .gitlab-ci.yml and an application to build. The example project is an extremely simple Spring Boot Microservice using the Java build tool Maven.
To import the project into our new GitLab instance, just set a new password for the root user first and log in with those credentials. Then head over to Create a project and click on Import Project / Repo by URL:
Now paste the example project’s Git URL https://github.com/jonashackt/restexamples.git into the Git repository URL field, change the Visibility Level to Internal and hit Create Project.
After the import you can head over to the project, open its CI / CD / Pipelines section and fire up the pipeline by running it manually. No worries: only this time we have to do this by hand, since we didn´t push anything new into our project. Every following push will automatically trigger your GitLab CI pipeline!
The pipeline should be already running right now:
The example project has a prepared .gitlab-ci.yml ready for us, which reflects the four steps of the new Continuous Integration/Deployment trend that the GitLab docs propose:
# One of the new trends in Continuous Integration/Deployment is to:
#
# 1. Create an application image
# 2. Run tests against the created image
# 3. Push image to a remote registry
# 4. Deploy to a server from the pushed image

stages:
  - build
  - test
  - push
  - deploy

# see usage of Namespaces at https://docs.gitlab.com/ee/user/group/#namespaces
variables:
  REGISTRY_GROUP_PROJECT: $CI_REGISTRY/root/restexamples

# see how to login at https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-the-gitlab-container-registry
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build-image:
  stage: build
  script:
    - docker build . --tag $REGISTRY_GROUP_PROJECT/restexamples:latest

test-image:
  stage: test
  script:
    - echo Insert fancy API test here!

push-image:
  stage: push
  script:
    - docker push $REGISTRY_GROUP_PROJECT/restexamples:latest

deploy-2-dev:
  stage: deploy
  script:
    - echo You should use Ansible here!
  environment:
    name: dev
    url: https://dev.jonashackt.io
In GitLab, every stage defines a building block inside the CI pipeline. You can have multiple Jobs inside those stages. We don´t use that in this simple example here. But if you do, you also get to know the advantage of multiple registered GitLab Runners, because Jobs inside a given stage are then able to run in parallel.
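Just to illustrate that – a hypothetical extension of the pipeline above, not part of the example project – two Jobs in the same stage would be picked up by two Runners at once:

test-image:
  stage: test
  script:
    - echo Insert fancy API test here!

smoke-test:
  stage: test
  script:
    - echo Another Job in the same stage, running in parallel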
Caution: Mind the namespaces when working with GitLab Container Registry!!!
As you may have noticed, using the GitLab Container Registry has one hidden obstacle. You have to use a correct namespace to push into the GitLab Container Registry! I can only advise the GitLab team to make this hint as prominent as possible in their docs – it just drove me nuts! It´s not enough to use the GitLab Registry URL itself to push into it. You must also use a user or group name and the project name in the following order:
gitlab.jonashackt.io:4567/UserOrGroupName/ProjectName
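Translated into plain Docker commands, pushing the example project´s image by hand would look something like this – a sketch using the root namespace, the restexamples project and the image name from the pipeline above:

docker build . --tag gitlab.jonashackt.io:4567/root/restexamples/restexamples:latest
docker push gitlab.jonashackt.io:4567/root/restexamples/restexamples:latest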
As you can see in the .gitlab-ci.yml above, I´m making heavy use of GitLab CI predefined variables alongside self-defined variables. This will make your life easier and help other people read your pipeline definitions!
Another cool GitLab CI feature is Environments. Although it´s just another view onto your pipelines, it´s really handy, as one can easily see which deployment went to which infrastructure stage. All you have to do is use the environment keyword inside your .gitlab-ci.yml files. The environment will then automatically pop up under the CI/CD / Environments tab:
GitLab CI is really great
As an old Jenkins fanboy I have to admit it: GitLab CI is a really cool tool! After all this journey I wouldn´t say everything is totally easy to install and configure in the first place. But after getting over all the small stumbling blocks – many of which only show up in private server environments – I strongly recommend giving it a try.
With GitLab CI you will be able to use the super neat YAML-style pipeline definition files you are used to inside your own projects and also behind big corporate firewalls. And what´s really cool: you don´t need to mess around with a huge bunch of plugins! And you don´t need to integrate your central Git server with the CI server using all those half-baked webhooks and plugins – they are just already integrated. Generally I really like the idea of using the best tool for the respective scenario. But GitLab CI makes it really hard not to love this fully integrated Continuous Integration platform!