Updates from November, 2015

  • penguin 14:46 on 2015-11-29 Permalink | Reply

    iTerm & keyboard 

    … cause I remember having searched for this before, with a lot less useful results.

  • penguin 19:55 on 2015-11-27 Permalink | Reply

    My take at a CI infrastructure, Pt.3 

    All right, back again. Much text here. Let’s talk about …

    Containerizing The Binaries

    We are done with the build; now we have a binary. I went for something simple: who knows best how to put this into a container? The dev guy. Cause he knows what he needs, where he needs it, and where it can be found after the build.

    But containerizing it should not be hard, given a moderately complex piece of software with a couple of well-thought-out build scripts. So I went for this:

       |--- docker/
       |      |--- prepare.sh     # optional
       |      |--- Dockerfile     # required ;)
       |--- main.c
       |--- build/                # created by the build
              |--- ...

    Now it gets straightforward: the build scripts in TeamCity …

    • look for the docker directory, change into it,
    • execute the “prepare.sh” script if found,
    • build a container from the Dockerfile,
    • tag the container and
    • push it into the registry (which is configured centrally in TeamCity)
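
    Sketched as a shell script, those steps could look roughly like this. Note that registry.example.com, the image name and the tag are placeholder values, and `echo` stands in for actually calling docker, so this is a dry run only, not the real TeamCity build step:

```shell
# Dry-run sketch of the steps above; registry.example.com, one/two and the
# tag are placeholders, and "echo" stands in for real docker invocations.
REGISTRY="registry.example.com"   # configured centrally in TeamCity
IMAGE="one/two"
TAG="1234abc9-321"

containerize() {
  [ -d docker ] || return 0                     # no docker dir, nothing to do
  ( cd docker || exit 1
    [ -x prepare.sh ] && echo "./prepare.sh"    # optional prepare step
    echo "docker build -t $REGISTRY/$IMAGE:$TAG ."
    echo "docker tag $REGISTRY/$IMAGE:$TAG $REGISTRY/$IMAGE:latest"
    echo "docker push $REGISTRY/$IMAGE:$TAG"
    echo "docker push $REGISTRY/$IMAGE:latest" )
}

mkdir -p docker && containerize
```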

    Tagging the containers

    A docker container is referenced like this:

       registry-host[:port]/repository/name:tag

    (the registry part is optional and defaults to the Docker Hub). How do we choose the name for the container we just built? Two schemes.

    For projects which contain nothing but a Dockerfile (which we have, cause our build containers are also versioned, of course), I enforce this combination:

    Repository name: docker-one-two
    ... will yield:  one/two:1234abc9-321 (as container repo/name:tag)

    The build script enforces the scheme “docker-one-two”, and takes “one” and “two” automatically as names for the container. Then “1234abc9” is the git commit id (short), and “321” is the build number.

    Why not only the git commit ID? Because running the same build twice is not guaranteed to produce the same result. If building the container includes an “apt-get update”, two builds a week apart will not result in the same contents.

    For “simple” or “pure” code builds I use the following scheme:

    Repository name: some-thing
    ... will yield:  some/thing:1234abc9

    Same reasoning.
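
    As a sketch, the two schemes boil down to a bit of string surgery on the repository name (this is a hypothetical helper for illustration, not the actual TeamCity build script):

```shell
# Hypothetical helper sketching the two naming schemes described above;
# arguments: repository name, short git commit id, build number.
derive_tag() {
  repo="$1"; commit="$2"; build="$3"
  case "$repo" in
    docker-*)
      # pure Dockerfile project: strip the "docker-" prefix, turn
      # "one-two" into "one/two", append the build number to the tag
      name=$(printf '%s' "${repo#docker-}" | sed 's/-/\//')
      printf '%s:%s-%s\n' "$name" "$commit" "$build"
      ;;
    *)
      # plain code build: the tag is the git commit id only
      name=$(printf '%s' "$repo" | sed 's/-/\//')
      printf '%s:%s\n' "$name" "$commit"
      ;;
  esac
}

derive_tag docker-one-two 1234abc9 321   # prints one/two:1234abc9-321
derive_tag some-thing 1234abc9 321       # prints some/thing:1234abc9
```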

    In both cases a container “some/thing:latest” is also tagged and pushed.

    Now when we run a software container, we can see

    • with which container it was built (by looking at “SET_BUILD_CONTAINER”),
    • which base container was used to build the software container (by looking at “docker/Dockerfile”)
    • and we can do this cause we know the git commit ID.

    For each base container (or “pure” Dockerfile projects), we extend this with a build number.


    So this is my state so far. If anyone reads this, I would be interested in comments or feedback.


    • Tom Trahan 21:49 on 2015-12-02 Permalink | Reply

      hi @flypenguin – Nice journey through setting up CI/CD, and thanks for checking out Shippable. I’m with Shippable, and we recently launched a beta for integrating with private git instances and for deploying your containers automatically, with rollback, to Amazon EC2 Container Service or Elastic Beanstalk. This essentially enables a fully automated pipeline from code change through multiple test environments and, ultimately, production. This will GA soon, along with additional functionality that I think will fit the pipeline you’ve described well, with less effort and lower costs. I’d be happy to walk you through it and answer any questions. Just drop me an email.


  • penguin 19:32 on 2015-11-27 Permalink | Reply

    My take at a CI infrastructure, Pt.2 

    For CI I want the classics – a check-in (push) to the repo should be caught by TeamCity, and trigger …

    • a build of the artifact, once
    • running of unit tests
    • containerizing the artifact
    • uploading it to a private Docker registry

    The question was: How?

    This post deals with building the code.

    Building Code

    When I build code I am faced with a simple question: Which library versions do I use?

    When I have multiple projects, the question becomes complex. Which version do I install on which build agent? How do I assign build tasks to agents? What if some software cannot be installed? How can I do a rollback? Or try with another lib version quickly?

    The solution: Build containers. I am sure I have read about it somewhere, this is in no part an invention of my own, but I just can’t find an article explaining it.

    It basically goes like this. We have a docker container, which contains all necessary build libs in their development form and the build tools to build something. We pull the container, mount our checked out code dir in the container, and run the build in the controlled environment of it. We want a different set of libs? We re-build the container with them, and use the other container to build the project. Doesn’t work? Go back to the previous one.

    The prerequisite of this is a build process that does not change, or at least does not change for a set of projects. We use CMake, so it’s the same build commands over and over: “cmake .”, “make”, “make test”. That’s it. My first working build container looks like this:

    # Fedora base image (anything with dnf works)
    FROM fedora
    MAINTAINER Fly Penguin <fly@flypenguin.de>
    RUN \
         dnf -y update && dnf -y upgrade \
      && dnf -y install cmake make gcc-c++ boost-test boost-devel \
      && dnf clean all \
      && mkdir /build
    WORKDIR /build

    Building the code now is super easy:

    git clone ssh://... my_code
    cd my_code
    docker run --rm -v $(pwd):/build builder/boost:1234 cmake .
    docker run --rm -v $(pwd):/build builder/boost:1234 make
    docker run --rm -v $(pwd):/build builder/boost:1234 make test


    … or? One question remains: How do I select the build container?

    There are two possibilities: In the build system configuration (read: TeamCity), or in the code. I went for the code. The reason is pretty simple: I check out a specific revision of the code, I know which container it was built with. From there I can work my way up:

       |--- SET_BUILD_CONTAINER   # contains the build container reference
       |--- main.c

    Guess what’s in “SET_BUILD_CONTAINER”? Correct. Something like this:

       builder/boost:1234

    The build configuration in TeamCity reads the file and acts accordingly. Later I will talk more about those tags, and in the next post I will talk about containerizing the binaries.
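
    What that build step boils down to can be sketched like this (the file content mirrors the builder/boost:1234 example above; `echo` stands in for actually invoking docker):

```shell
# Sketch of the TeamCity build step: read the build container reference
# from the checkout, then run each build command inside that container
# ("echo" stands in for the real docker invocation).
printf 'builder/boost:1234\n' > SET_BUILD_CONTAINER   # as committed by the dev

BUILD_CONTAINER=$(cat SET_BUILD_CONTAINER)
for step in "cmake ." "make" "make test"; do
  echo "docker run --rm -v \$(pwd):/build $BUILD_CONTAINER $step"
done
```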

  • penguin 19:09 on 2015-11-27 Permalink | Reply

    My take at a CI infrastructure, Pt.1 

    … so far.

    It might be crappy, but I’ll share it, cause it’s working. (Well, today it started doing this 😉 ). But enough preamble, let’s jump in.

    The Situation

    I am in a new project. Those people have nothing but a deadline, and when I say nothing I mean it. Not even code. They asked me what I would do, and I said “go cloud, use everything you can from other people, so you don’t have to do it, and you stay in tune with the rest of the universe” (read: avoid NIH syndrome). They agreed, and hired me.

    The Starting Point

    They really want the JetBrains toolchain, the devs use CLion. They also want YouTrack for ticketing (which doesn’t blow my mind so far, but it’s ok). Naturally they want to use TeamCity, which is the Jenkins alternative from JetBrains, and pretty all right from what I can see so far.

    The code is probably 95%+ C++, and creates a stateless REST endpoint in the cloud (but load balanced). That’s a really simple setup to start with, just perfect.

    Source code hosting was initially planned to be either in-house or in a rented cloud, not with a hosting provider. Up to now they were using git, but without a graphical frontend, so every git repo had to be created manually (by the – part time – admin).

    The Cloud Environment

    That’s just practical stuff now, and has nothing – yet – to do with CI/CD. Skip it if you’re just interested in that. Read it if you want to read my brain.

    I looked around for full-stack hosted CI/CD systems, notably found only Shippable, and thought that they don’t fully match the requirements (even when we move source code hosting out). So I went to AWS, and tried ElasticBeanstalk. This is quite cool, unfortunately scaling takes about 3-5 minutes for the new host to come up (tested with my little load dummy tool in a simple setup, which I actually didn’t save, which was stupid).

    Anyway, before deploying services, CI (the compilation & build stuff) must work. So my first goal was to get something up and running ASAP, and that’s bold and capitalized. Fully automated, of course.

    For any kubernetes/CoreOS/… layout I lack the experience to make it available quickly, and – really – all the AWS “click here to deploy” images of those tools didn’t work out-of-the-box. So I started fairly conventional with a simple CloudFormation template spawning three hosts: TeamCity server, TeamCity agent, Docker registry, and – really important – GitLab. Since then GitLab was replaced by a paid GitHub account, all the better.

    To set up the hosts I used Puppet (oh wonder, me being a Puppet “Expert”). Most of the time went into writing a TeamCity puppet module. A quirk is that the agents must download their ZIP distribution image from a running master only, which is kinda annoying to get right in puppet. For now TeamCity is also set up conventionally (without docker), which I might change soon, at least for the server. The postgres database runs in a container, though, which is super-super simple to set up (please donate a bit if you use it, even 1€ / 1$ helps, that guy did a great job!). The same goes for gitlab (same guy), and redis (again). I also used the anti-pattern of configuring the hosts based on their IP addresses.

    I also wanted to automate host bootstrapping, so I did this in the cloudformation template for each host. The archive downloaded in this script contains 3 more scripts – a distribution-dependent one is called first; have a look to see the details. Basically it’s just a way to download a snapshot of our current puppet setup (encrypted) and initialize it so puppet can take over. I also use “at” in those scripts to perform a reboot and then an action afterwards, which is highly convenient.

    CI (finally)

    … in the next post 😉

  • penguin 16:06 on 2015-11-26 Permalink | Reply
    Tags: s3

    Docker registry, S3 and permissions 

    There are a couple of bazillion blog posts saying “yah just did my docker registry on S3”.

    It’s not so easy, though. Cause what if you want to limit access to a certain IAM user? Yup, you need to go deep (well, a bit) into the policy thing of Amazon. Which sounds simple, but isn’t.

    I got “HTTP 500” errors from the docker registry when I first deployed. My configuration, which was wrong, looked like this:

    "RegistryIAMUser" : {
      "Type" : "AWS::IAM::User"
    },
    "RegistryIAMUserAccessKey" : {
      "Type" : "AWS::IAM::AccessKey",
      "Properties" : { "UserName" : { "Ref" : "RegistryIAMUser" } }
    },
    "Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : { "BucketName" : "flypenguin.docker-registry" }
    },
    "RegistryPrivateAccess" : {
      "Type" : "AWS::S3::BucketPolicy",
      "Properties" : {
        "Bucket" : { "Ref" : "Bucket" },
        "PolicyDocument" : {
          "Statement" : [{
            "Effect" : "Allow",
            "Action" : [ "s3:*" ],
            "Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" }, "/*" ]]},
            "Principal" : { "AWS" : { "Fn::GetAtt" : ["RegistryIAMUser", "Arn"] } }
          }]
        }
      }
    }
    Since this didn’t work really well, I googled my a** off and found a little post which used a user policy (instead of a bucket policy, which is basically the same thing from the other side), but did one thing differently. My working configuration is now … (let’s see if you can spot the difference):

    [... same as above ...]
    "UserPolicyRegistryPrivateAccess" : {
      "Type" : "AWS::IAM::Policy",
      "Properties" : {
        "PolicyName" : "AccessToDockerBucket",
        "Users" : [ { "Ref" : "RegistryIAMUser" } ],
        "PolicyDocument" : {
          "Version" : "2012-10-17",
          "Statement" : [{
            "Effect" : "Allow",
            "Action" : [ "s3:*" ],
            "Resource" : [
              { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" }, "/*" ]]},
              { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" } ]]}
            ]
          }]
        }
      }
    }
    See it?

    It’s the two resources now. You need not only “resource/*” as a target, you also need “resource” itself: object-level actions like GetObject and PutObject apply to the objects under “arn:aws:s3:::bucket/*”, while bucket-level actions like ListBucket apply to the bucket ARN itself. Which makes sense if you know it and think about it. If you don’t … it’s a bit annoying. And time-consuming.

  • penguin 09:50 on 2015-11-19 Permalink | Reply  

    InsufficientCapabilities on AWS 

    New project. I can play around as much as I want, as long as on day X I am done.

    Really frightening, and really cool.

    Anyway, first operation: Create a bunch of S3 buckets and IAM roles to interface with them. Which is kinda not-so-easy.

    Because when you create IAM resources with CloudFormation, you get this error:

    {
        "CapabilitiesReason": "The following resource(s) require capabilities: [AWS::IAM::AccessKey, AWS::IAM::User]",
        "Capabilities": [
            "CAPABILITY_IAM"
        ],
        "Parameters": []
    }

    … which is a fancy way of saying “do this”:

    # aws cloudformation create-stack \
        --template-body file://env.json \
        --capabilities CAPABILITY_IAM

    … which you don’t really find easily with google. Or everybody knows, but me. Gnaah.
