
GitLab, App Service & CI/CD – Variant 1

Since I spent way more time fighting with this than expected, I thought I might write a proper recap and maybe help others get started. I also set up a repository with example code that goes along with this blog post; it should (!) be a “pull-and-run” thing once you have your ARM_ACCESS_KEY ready.

Now, let’s get started. What we want to do here is basically pretty simple, and detailed in the picture below:

This sounds easy, right? It is, if you are done fighting Azure stupidity and know a couple of things. So what do we need to do for this? We need to …

  • Create the App Service (I use terraform for this, and you can use my example repo to get started)
  • Create a GitLab repo (of course, that’s what this is all about, right? An example “hello world” flask app can also be found in the mentioned repo)
  • Configure the GitLab CI system to contain some credentials for the App Service
  • Add a .gitlab-ci.yml file to your GitLab repo to enable CI
  • … done.

That should be exactly all. Let’s get started.

Create the app service

As said – please use my example repo and terraform. That should be enough (if you prefer plain terraform commands over the Makefile, see the sketch after the list):

  • make init
  • make plan
  • make do
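
If you’d rather run terraform directly, the workflow these targets presumably wrap is the standard one – this is a guess and not verified against the repo’s Makefile:

$ terraform init
$ terraform plan -out=app-service.tfplan
$ terraform apply app-service.tfplan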

Important to know:

  • use at least “B1 | Basic” for your SKU settings
  • your “source_control” setting should probably be “LocalGit”. There seems to be a bug in either Terraform or Azure (I suspect the latter) if you use “ExternalGit”, which would be “variant 2” (a blog post to follow). If you don’t want Terraform at all, see the CLI sketch below.
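
As an alternative, a rough Azure CLI equivalent could look like the following – all names are placeholders, the commands come from the az docs rather than my repo, and the credentials query path may differ slightly between CLI versions:

$ az group create --name coolapp-rg --location westeurope
$ az appservice plan create --name coolapp-plan --resource-group coolapp-rg --sku B1
$ az webapp create --name flypenguin-coolapp-xrp --resource-group coolapp-rg --plan coolapp-plan --deployment-local-git
$ az webapp deployment list-publishing-credentials --name flypenguin-coolapp-xrp --resource-group coolapp-rg --query publishingPassword -o tsv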

Create a GitLab repo

This one should do, and if you are rightfully annoyed by Azure, give Render a try, they look cool (just found them about 2 minutes ago). To fork the repo you have to click the very unobtrusive “fork” button in the top right.

Configure GitLab CI

The only thing to do is set two environment variables for the test runner. The information you need can be found either in my terraform output or in the Azure portal (images below). Then you add two variables in GitLab CI (GitLab repo -> Settings -> CI/CD -> Variables):

  • AZ_APP_NAME (in this case this would be “flypenguin-coolapp-xrp”)
  • AZ_APP_REPO_PASS (in this case this would be “SYut….”)

There are again a few caveats:

  • Basically the whole thing is based on the app name. Your user name should be your app name prefixed by a “$” sign, which is highly annoying.
  • If you ever want to use a “$” sign in any GitLab runner environment variable, you have to escape it with another “$”.
    Example: “my$variable” should be “my$$variable” in the “value” field of GitLab.

Add .gitlab-ci.yml file to repo

Well, just add the file and push the repo.

stages:
  - push_to_azure
push_it:
  stage: push_to_azure
  only:
    - master
  allow_failure: false
  before_script:
    - git config --global user.email "some@email.address"
    - git config --global user.name "GitLab CI Pipeline"
  script:
    - export REPO_FQDN="$AZ_APP_NAME.scm.azurewebsites.net"
    - export REPO_URL="https://\$$AZ_APP_NAME:$AZ_APP_REPO_PASS@$REPO_FQDN/$AZ_APP_NAME.git"
    - git remote add azure_app_service "$REPO_URL"
    - git remote -v
    # DEEP DIVE INFORMATION, DELETE ME IF YOU WANT
    # the local branch is 'detached head' - which is fucked.
    # we can't do "git push --force azure HEAD:master" on the FIRST push.
    # we can't push into an empty repository at all, even when using
    #   git push --force azure HEAD:refs/heads/master
    # because on the first push this will still not work.
    # the actually easiest way seems to be to not do "detached head" here.
    # so let's try to "unshallow" that thing.
    # https://is.gd/t7y7LM / https://is.gd/8YE4Ua
    # https://is.gd/rZDwxy
    - git fetch --unshallow origin
    - git push --force azure_app_service HEAD:refs/heads/master

Finally – test.

Just push a change to your repo and see if it works. The first obvious change is the addition of the .gitlab-ci.yml file itself.
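
In shell terms that first test is roughly this (the app name is the example one from above, adjust it to yours):

$ git add .gitlab-ci.yml
$ git commit -m "enable CI deployment to the App Service"
$ git push origin master
# wait for the pipeline to turn green, then:
$ curl https://flypenguin-coolapp-xrp.azurewebsites.net/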

Hope that helped, hope it works 🙂


Databases with dokku

This is part 2 of a couple of blog posts about dokku, an amazing little Heroku clone.

In the previous post I showed how to set up Dokku on a DigitalOcean droplet, and deployed a little hello-world container with a single git push. The reason I wanted dokku, though, was the need for a database. As said – hosting comes cheap, databases usually come either expensive, with limited flexibility, or with just too much configuration effort.

Dokku is the perfect middle ground. Let’s see why.

For me it was the existing postgres plugin, which you can simply install and use. The whole process is incredibly easy, takes about two commands, and looks like this (let’s assume our “hello world” container uses a database):

$ sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres

That’s it, again.

$ dokku postgres:create hello-world

       Waiting for container to be ready
       Creating container database
       Securing connection to database
=====> Postgres container created: hello-world
=====> Container Information
       Config dir: /var/lib/dokku/services/postgres/hello-world/config
       Data dir: /var/lib/dokku/services/postgres/hello-world/data
       Dsn: postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world
       Exposed ports: -
       Id: 785ef252c748ed85739d1d6ad375a1e1bd66e925ac79358e9ffaa30ab852d6c0 
       Internal ip: 172.17.0.9
       Links: -
       Service root: /var/lib/dokku/services/postgres/hello-world
       Status: running
       Version: postgres:10.2

$ docker ps

CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS      NAMES
cc99cccacf2c   dokku/hello-world:latest   "/bin/sh -c 'php-fpm…"   2 minutes ago   Up 2 minutes   80/tcp     hello-world.web.1
785ef252c748   postgres:10.2              "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes   5432/tcp   dokku.postgres.hello-world
[...]

This creates a database container with postgres 10.2, as you can see. You can influence a lot of behavior by using environment variables, see the GitHub page for more info.
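
For example, pinning the postgres version is done with environment variables at creation time – variable names as documented in the plugin’s README, so double-check there:

$ export POSTGRES_IMAGE="postgres"
$ export POSTGRES_IMAGE_VERSION="10.2"
$ dokku postgres:create hello-world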

Then you link the container to the running app:

$ dokku postgres:link hello-world hello-world
-----> Setting config vars
       DATABASE_URL: postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world
-----> Restarting app hello-world
-----> Releasing hello-world (dokku/hello-world:latest)...
-----> Deploying hello-world (dokku/hello-world:latest)...
-----> Attempting to run scripts.dokku.predeploy from app.json (if defined)
-----> No Procfile found in app image
-----> DOKKU_SCALE file found (/home/dokku/hello-world/DOKKU_SCALE)
=====> web=1
-----> Attempting pre-flight checks 
       For more efficient zero downtime deployments, create a file CHECKS. 
       See http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ for examples 
       CHECKS file not found in container: Running simple container check...
-----> Waiting for 10 seconds ...
-----> Default container check successful!
-----> Running post-deploy
-----> Configuring hello-world.my-paas.for-myself.com...(using built-in template)
-----> Creating http nginx.conf
-----> Running nginx-pre-reload
       Reloading nginx
-----> Setting config vars
       DOKKU_APP_RESTORE:  1
-----> Found previous container(s) (14c349cb496d) named hello-world.web.1
=====> Renaming container (14c349cb496d) hello-world.web.1 to hello-world.web.1.1535285386
=====> Renaming container (cc99cccacf2c) serene_bassi to hello-world.web.1
-----> Attempting to run scripts.dokku.postdeploy from app.json (if defined)
-----> Shutting down old containers in 60 seconds
=====> 14c349cb496d95cc4be1833f2e7f6ef2bef099a37c2a22cd4dcdb542f09bea0f
=====> Application deployed:
       http://hello-world.my-paas.for-myself.com

And done.

What happened? You now have the environment variable $DATABASE_URL set in the hello-world app; that’s why the restart was necessary (you can postpone it if you want, but you probably need the variable right now, right?).

Let’s check:

$ docker exec -ti hello-world.web.1 /bin/sh 

[now in the container]

# env | grep DATABASE 
DATABASE_URL=postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world 

That’s it. Super easy. Now if you’re using Django, you could use kennethreitz/dj-database-url to automatically parse and use it, and you’re done. (Probably every framework has something similar, so just have a look).


Build your own PaaS with Dokku

I was looking for some “play” deployment method for a couple of things I want to try out. Most of them require a database. And it should be cheap, cause I don’t have any load on them and don’t earn any money with them, so I want to spend basically no money if possible. The usual suspects are too expensive – AWS, Heroku, etc.

So I looked around and found Dokku.

Dokku is a set of – hang on – shell scripts – which basically emulate Heroku on a machine of your own. It’s integrated with Digital Ocean droplets out of the box, if you want it. And the whole thing is 5 € / month, which is perfect. It also integrates with a Dockerfile based deployment, so you do git push and everything just works.

It’s amazing.

This is how you get started. But before you can get started, you need a domain you control, either on AWS or any other hoster. This is for routing traffic to your deployments later. You also need a public SSH key, or better a public / private key pair. Once you have both you can …

  1. create a Digital Ocean account, and …
  2. add your SSH public key to your account, and …
  3. in that account, create a new droplet with a “Dokku” image preinstalled.
  4. Wait until the droplet has finished provisioning.

While the droplet is being created, you can also create a project locally to test it later:

$ mkdir dokku-test
$ cd dokku-test
$ git init
$ echo "FROM tutum/hello-world" > Dockerfile
$ git add Dockerfile
$ git commit -m "Initial commit"

In this little test project we only create a Dockerfile from a hello-world image that displays “Hello world” in the browser, so we can verify it worked.

Once the droplet is done, you can start setting up your personal little PaaS. First, you have to configure your DNS. We will set up a wildcard entry for our deployments, and a non-wildcard entry for git. Let’s assume your domain is for-myself.com, then you would add …

  • my-paas.for-myself.com , type “A” (or “AAAA” if you are on IPv6), pointing to your droplet IP
  • *.my-paas.for-myself.com just the same (you can verify both records with dig, see below)
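
Once the records have propagated you can check both with dig (the IP below is obviously a placeholder for your droplet IP):

$ dig +short my-paas.for-myself.com
203.0.113.10
$ dig +short anything.my-paas.for-myself.com
203.0.113.10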

Then you SSH into your droplet, and create your dokku project. (This is something you have to do for every project). All you have to do for this is:

$ ssh root@DROPLET_IP
~# dokku apps:create hello-world
-----> Creating hello-world... done
~# _

Done.

Now you configure a git remote URL for your project, and push it:

$ git remote add dokku dokku@my-paas.for-myself.com:hello-world

Again – done. If you push your project now (assuming DNS is already set), everything should happen automagically:

$ git push --set-upstream dokku master
X11 forwarding request failed
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 241 bytes | 241.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
-----> Cleaning up...
-----> Building hello-world from dockerfile...
remote: build context to Docker daemon  2.048kB
Step 1/1 : FROM tutum/hello-world
latest: Pulling from tutum/hello-world
658bc4dc7069: Pulling fs layer
[... TRUNCATED ...]
983d35417974: Pull complete
Digest: sha256:0d57def8055178aafb4c7669cbc25ec17f0acdab97cc587f30150802da8f8d85
Status: Downloaded newer image for tutum/hello-world:latest
 ---> 31e17b0746e4
Successfully built 31e17b0746e4
Successfully tagged dokku/hello-world:latest
-----> Setting config vars
       DOKKU_DOCKERFILE_PORTS:  80/tcp
-----> Releasing hello-world (dokku/hello-world:latest)...
-----> Deploying hello-world (dokku/hello-world:latest)...
-----> Attempting to run scripts.dokku.predeploy from app.json (if defined)
-----> No Procfile found in app image
-----> DOKKU_SCALE file not found in app image. Generating one based on Procfile...
-----> New DOKKU_SCALE file generated
=====> web=1
-----> Attempting pre-flight checks
       For more efficient zero downtime deployments, create a file CHECKS.
       See http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ for examples
       CHECKS file not found in container: Running simple container check...
-----> Waiting for 10 seconds ...
-----> Default container check successful!
-----> Running post-deploy
-----> Creating new /home/dokku/hello-world/VHOST...
-----> Setting config vars
       DOKKU_PROXY_PORT_MAP:  http:80:80
-----> Configuring hello-world.my-paas.for-myself.com...(using built-in template)
-----> Creating http nginx.conf
-----> Running nginx-pre-reload
       Reloading nginx
-----> Setting config vars
       DOKKU_APP_RESTORE:  1
=====> Renaming container (14c349cb496d) amazing_snyder to hello-world.web.1
-----> Attempting to run scripts.dokku.postdeploy from app.json (if defined)
=====> Application deployed:
       http://hello-world.my-paas.for-myself.com

To my-paas.for-myself.com:hello-world
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'dokku'.

And if you open your URL now (which is hello-world.my-paas.for-myself.com) you should see this image:

Now, for 5 € / month you get:

  • A heroku-like, no-nonsense, fully automated, git-based deployment platform
  • A server which you control (and have to maintain, okay, but on which you can deploy …)
  • A database (or many of them – dokku provides great integration for databases btw; more on that in another post)
  • Publicly reachable deployments (for customers, testing, whatever)
  • Let’s Encrypt certificates (dokku provides support for these as well, again more in a later post)
  • And for 1 € more (it’s always 20% of the base price) you get backups of your system

That’s absolutely incredible. Oh, and did I mention that the maintainers are not only friendly, but also super responsive and incredibly helpful on Slack?


CI / CD solutions

Everyone wants free candy. Or a CI/CD solution, that …

  • auto-deploys container-based services
  • auto-updates (roll-forward, roll-back) those services on keypress and “triggers”
  • has one-click-deployment of services.

My definition of “service” here is “a set of containers working together in a certain way, automatically load balanced where needed”. Example: n worker nodes, load balanced from a web endpoint, and a database container. All deployed at the same time. Including one-click deployment of environments (“Oh, I’d like to test this revision again, let’s deploy it quickly” …). Note that this is mostly CD (continuous deployment), cause CI has been done for a while now with – mostly – Jenkins and other tools.

What I have found so far that seems to satisfy those requirements:

And the service-only solutions without a tool stack which you can deploy locally:

This is kinda it. I would love to evaluate all those tools, but most of them are not really AWS-deploy-friendly, and in the Shippable and Tectonic case they are paid full-stack services without local (cloud-owned) deployment anyway. And most are in beta. But the scenery is becoming interesting …

I will try to post my findings here, as well as the final choice I made for my current client, along with the reasons.

And for now: Mesosphere and Rancher look really cool. And I mean “look” – the UI is just pleasing (which is the most important selection criterion, I know 😉).

Update 2015-12-10: Added Vamp, Kubernetes


My take at a CI infrastructure, Pt.3

All right, back again. Much text here. Let’s talk about …

Containerizing The Binaries

We are done with the build, now we have a binary. I went for something simple: Who knows best how to put this into a container? The dev guy. Cause he knows what he needs, where he needs it, and where it can be found after the build.

But containerizing it should not be hard, given a moderately complex piece of software with a couple of well-thought-out build scripts. So I went for this:

/my_code/
   |--- docker/
   |      |--- prepare.sh     # optional
   |      |--- Dockerfile     # required ;)
   |--- main.c
   |--- SET_BUILD_CONTAINER
   |--- build/                # created by the build
          |--- ...

Now it gets straightforward: the build scripts in TeamCity …

  • look for the docker directory, change into it,
  • execute the “prepare.sh” script if found,
  • build a container from the Dockerfile,
  • tag the container and
  • push it into the registry (which is configured centrally in TeamCity) – see the sketch below
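
In shell terms the whole thing boils down to something like this – a sketch of the idea, not the actual TeamCity build step; the registry URL and image names are placeholders, and BUILD_NUMBER would come from TeamCity:

GIT_SHORT_SHA="$(git rev-parse --short HEAD)"
TAG="my_registry_url:5000/one/two:${GIT_SHORT_SHA}-${BUILD_NUMBER}"

cd docker
if [ -x ./prepare.sh ]; then ./prepare.sh; fi
docker build -t "$TAG" .
docker tag "$TAG" "my_registry_url:5000/one/two:latest"
docker push "$TAG"
docker push "my_registry_url:5000/one/two:latest"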

Tagging the containers

A docker container is referenced like this:

some/thing:and_a_tag

How do we choose how to name the container we just built? Two versions.

For projects which contain nothing but a Dockerfile (which we have, cause our build containers are also versioned, of course), I enforce this combination:

Repository name: docker-one-two
... will yield:  one/two:1234abc9-321 (as container repo/name:tag)

The build script enforces the scheme “docker-one-two”, and takes “one” and “two” automatically as names for the container. Then “1234abc9” is the git commit id (short), and “321” is the build number.

Why not just the git commit ID? Because the same commit is not guaranteed to produce the same result when built again: if the build contains an “apt-get update”, two builds a week apart will not result in the same contents.

For “simple” or “pure” code builds I use the following scheme:

Repository name: some-thing
... will yield:  some/thing:1234abc9

Same reasoning.

In both cases a container “some/thing:latest” is also tagged and pushed.

Now when we run a software container, we can see

  • with which container it was built (by looking at “SET_BUILD_CONTAINER”),
  • which base container was used to build the software container (by looking at “docker/Dockerfile”)
  • and we can do all this cause the tag gives us the exact git commit ID.

For each base container (or “pure” Dockerfile projects), we extend this with a build number.

Done.

So this is my state so far. If anyone reads this, I would be interested in comments or feedback.

 


My take at a CI infrastructure, Pt.2

For CI I want the classics – a check-in (push) to the repo should be caught by TeamCity and trigger …

  • a build of the artifact, once
  • running of unit tests
  • containerizing the artifact
  • uploading it to a private Docker registry

The question was: How?

This post deals with building the code.

Building Code

When I build code I am faced with a simple question: Which library versions do I use?

When I have multiple projects, the question becomes complex. Which version do I install on which build agent? How do I assign build tasks to agents? What if some software cannot be installed? How can I do a rollback? Or try with another lib version quickly?

The solution: build containers. I am sure I have read about this somewhere – it is in no part an invention of my own – but I just can’t find an article explaining it.

It basically goes like this: we have a docker container which contains all necessary build libs in their development form, plus the build tools. We pull the container, mount our checked-out code dir into the container, and run the build in its controlled environment. We want a different set of libs? We re-build the container with them, and use that container to build the project. Doesn’t work? Go back to the previous one.

The prerequisite of this is a build process that does not change, or at least does not change for a set of projects. We use CMake, so it’s the same build commands over and over: “cmake .”, “make”, “make test”. That’s it. My first working build container looks like this:

FROM 10.10.10.11:5000/runner/boost:9a443273-2
MAINTAINER Fly Penguin <fly@flypenguin.de>
RUN \
     dnf -y update && dnf -y upgrade \
  && dnf -y install cmake make gcc-c++ boost-test boost-devel \
  && dnf clean all \
  && mkdir /build
WORKDIR /build

Building the code now is super easy:

git clone ssh://... my_code
cd my_code
docker run --rm -v $(pwd):/build builder/boost:1234 cmake .
docker run --rm -v $(pwd):/build builder/boost:1234 make
docker run --rm -v $(pwd):/build builder/boost:1234 make test

Done.

… or? One question remains: How do I select the build container?

There are two possibilities: in the build system configuration (read: TeamCity), or in the code. I went for the code. The reason is pretty simple: if I check out a specific revision of the code, I know which container it was built with. From there I can work my way up:

./mycode
   |--- main.c
   |--- SET_BUILD_CONTAINER

Guess what’s in “SET_BUILD_CONTAINER”? Correct. Something like this:

my_registry_url:5000/builder/boost:abcdef98-210

The build configuration in TeamCity reads the file and acts accordingly. Later I will talk more about those tags, and in the next post I talk about containerizing the binaries.
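
As an aside: conceptually, “reads the file and acts accordingly” is little more than this (a sketch, not the real TeamCity configuration):

BUILD_CONTAINER="$(cat SET_BUILD_CONTAINER)"
docker pull "$BUILD_CONTAINER"
docker run --rm -v "$(pwd):/build" "$BUILD_CONTAINER" cmake .
docker run --rm -v "$(pwd):/build" "$BUILD_CONTAINER" make
docker run --rm -v "$(pwd):/build" "$BUILD_CONTAINER" make test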


My take at a CI infrastructure, Pt.1

… so far.

It might be crappy, but I’ll share it, cause it’s working. (Well, it started working today 😉). But enough preamble, let’s jump in.

The Situation

I am in a new project. Those people have nothing but a deadline, and when I say nothing I mean it. Not even code. They asked me what I would do, and I said “go cloud, use everything you can from other people, so you don’t have to do it, and you stay in tune with the rest of the universe” (read: avoid NIH syndrome). They agreed, and hired me.

The Starting Point

They really want the JetBrains toolchain, the devs use CLion. They also want YouTrack for ticketing (which doesn’t blow my mind so far, but it’s ok). Naturally they want to use TeamCity, which is the Jenkins alternative from JetBrains, and pretty all right from what I can see so far.

The code is probably 95%+ C++, and creates a stateless REST endpoint in the cloud (but load balanced). That’s a really simple setup to start with, just perfect.

Source code hosting was initially planned to be either in-house or in the bought cloud, not with a hoster. Up to now they were using git, but without a graphical frontend, which meant manual creation (by the – part-time – admin) of every git repo.

The Cloud Environment

That’s just practical stuff now, and has nothing – yet – to do with CI/CD. Skip it if you’re just interested in that. Read it if you want to read my brain.

I looked around for full-stack hosted CI/CD systems, notably found only Shippable, and thought that they don’t fully match the requirements (even when we move source code hosting out). So I went to AWS and tried Elastic Beanstalk. This is quite cool; unfortunately scaling takes about 3-5 minutes for a new host to come up (tested with my little load dummy tool in a simple setup, which I actually didn’t save, which was stupid).

Anyway, before deploying services, CI (the compilation & build stuff) must work. So my first goal was to get something up and running ASAP, and that’s bold and capitalized. Fully automated, of course.

For any Kubernetes/CoreOS/… layout I lack the experience to make it available quickly, and – really – all the AWS “click here to deploy” images of those tools didn’t work out of the box. So I started fairly conventionally with a simple CloudFormation template spawning three hosts: TeamCity server, TeamCity agent, Docker registry, and – really important – GitLab. Since then GitLab has been replaced by a paid GitHub account, all the better.

Setting the hosts up I used Puppet (oh wonder, being a Puppet “Expert”). Most of the time went in writing a TeamCity puppet module. A quirk is that the agents must download their ZIP distribution image from a running master only, which is kinda annoying to do right in puppet. For now TeamCity is also set up conventionally (without docker), which I might change soon, at least for the server. The postgres database runs in a container, though, which is super-super simple to set up (please donate a bit if you use it, even 1€ / 1$ helps, that guy did a great job!). Same went for gitlab (same guy), and redis (again). I also used the anti-pattern of configuring the hosts based on their IP addresses.

I also wanted to automate host bootstrapping, so I did this in the CloudFormation template for each host. The archive downloaded in this script contains 3 more scripts – a distribution-dependent one which is called first, have a look to see the details. Basically it’s just a way to download a snapshot of our current puppet setup (encrypted), and initialize it so puppet can take over. I also use “at” in those scripts to perform a reboot and an action afterwards, which is highly convenient (a minimal sketch below).
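
The “at” trick is really just scheduling the follow-up step before rebooting; at jobs are persisted to disk, so atd picks them up again once the machine is back. The script path here is made up for illustration:

$ echo "bash /root/bootstrap/resume-after-reboot.sh" | at now + 2 minutes
$ reboot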

CI (finally)

… in the next post 😉