Updates from April, 2017

  • penguin 19:03 on 2017-04-10

    The state of things – management 

    Yep, this is the kind of challenge that made me convert from a freelancer (which I still prefer as a working model) to a “normal” employed person. I am a “manager” now. Well, I just have team responsibility. And it is crazy. This brings *so* many challenges which are so amazing (cause they’re new) and exhausting (cause I need to deal with them in a completely different way).

    Here they are.

    Challenge one – team spirit. That is the thing I am most happy with, because our spirit is pretty high, I think. I shamelessly take some credit for that, but it is also deeply connected to my “leading persona”, whatever that is. And that one is far from perfect.

    Challenge two – training the team. I think I know some stuff, and I try to keep in touch with what’s current. And I want to learn new things. But now I have to deal with maintenance shit all day, and yet I want to try out new toys and stuff. This is quite complicated: on the technical side I now have to think about a way in which people can learn the most, while making sure a fuckup cannot break everything. (Which it did – once, and badly.) Also I have to ensure that people learn, and have fun doing it. Which is surprisingly hard, but also surprisingly cool when you see it actually working.

    Challenge three – job interviews. I suck at them, period. I have started asking technical questions now, because before I was under the assumption that every applicant could do the job, and it was just about how they would fit in. Bad mistake. Now I am learning that personal markers matter just as much. What is the next thing I need in the team, personality-wise? Am I sure about that? And does the next candidate have it? Cra-zy.

    Challenge four – managing the big picture. Or simply put: how do I make sure that the team is always up to date on priorities, talks to people enough, and has a good sense of when something should be “done”? And a good sense of driving it there, btw. Which is pretty much the same as

    challenge five – processes. Which process do we choose? We tried Scrum, which didn’t really work that well for us, so we changed it after a couple of iterations. Now we are trying (some sort of) Kanban, and I am already seeing transparency risks, and I need metrics. Also, you often read that Kanban needs analogue ticket boards (paper, wall) – not some fancy JIRA tooling. Now what if the company policy is “log your time in tickets”? And, even more important – how do I manage myself? And the team with me? And how do I prioritize features when they hit me like “oh, this must be done in two weeks, and sorry, it came in just today”?

    For me all of this is hard. I guess I am getting there, but it is a great challenge, and I love it. But soon I will need a break. And I hope some stuff is done by then.

    Update 2017-04-14: changed challenges 1&2 a bit.

     
  • penguin 18:54 on 2017-04-10
    Tags: current state

    The state of things – technology 

    It’s been a while. I am currently pretty burned out, and the work keeps piling up. This is bad. But let’s talk about some challenges right now. So this is an overview of our …

    Technical state

    We’re still using Rancher. Rancher is super cool, but it has the annoying habit of completely crashing about once every two months, leading to a full cluster outage for anything between one and three hours, usually about two. I still love it, but our needs have matured, and maybe Rancher needs time to catch up (cause our needs are sometimes a bit “special”). But the Rancher team is making great progress in the right direction, and I am fully confident that Rancher will take a place in the orchestration space. Still, we’re thinking about moving to K8s, simply because so much is already there.

    We’re using Prometheus for monitoring now. Rocks. Period.

    We’re still using AWS. Many of our customers would prefer Azure Germany. If you didn’t know – Azure in Germany advertises a “Data Custodian” or “Data Trustee” model (not sure how to translate it, and too lazy to look it up). It means that the German data centers run the real Azure stack, but are actually fully operated by Deutsche Telekom.

    Advantage, you ask? Easy. When the DOJ sends one of those super-secret letters to Microsoft saying “give me your data”, Microsoft simply forwards it to Deutsche Telekom. They will probably frame it on a wall somewhere, but I don’t think they will actually hand out the data. Problem solved. (We all hope :))

    We are almost done with setting up the whole cloud using Terraform. It has become a really mature project over the last year, and we are super happy with the progress it’s making. Also, with Azure in the works for us (some customers …), this is a cool way to manage everything with the same tooling. Infrastructure as code, eh.

    We are trying to migrate away from TeamCity to Jenkins. We haven’t succeeded yet. Too little manpower.

    But the more interesting thing is in the next post, for me at least 😉

     
  • penguin 19:55 on 2015-11-27

    My take at a CI infrastructure, Pt.3 

    All right, back again. Much text here. Let’s talk about …

    Containerizing The Binaries

    We are done with the build, now we have a binary. I went for something simple: Who knows best how to put this into a container? The dev guy. Cause he knows what he needs, where he needs it, and where it can be found after the build.

    But containerizing it should not be hard, given a moderately complex piece of software with a couple of well-thought-out build scripts. So I went for this:
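
    Roughly this layout (only the docker/ directory, prepare.sh and Dockerfile are the actual convention; everything else here is made up for illustration):

        my-service/
        ├── src/                    # the project’s actual code
        ├── SET_BUILD_CONTAINER     # which container builds this project (see Pt.2)
        └── docker/
            ├── prepare.sh          # optional: stages the built binary and assets
            └── Dockerfile          # packages the binary into the runtime container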

    Now it gets straightforward: the build scripts in TeamCity … (a rough sketch follows the list)

    • look for the docker directory, change into it,
    • execute the “prepare.sh” script if found,
    • build a container from the Dockerfile,
    • tag the container and
    • push it into the registry (which is configured centrally in TeamCity)
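
    As a rough sketch, not our actual script – registry host and variable names are placeholders:

        #!/usr/bin/env bash
        # sketch of the "containerize" build step
        set -euo pipefail

        cd docker
        if [ -x ./prepare.sh ]; then ./prepare.sh; fi   # let the project stage its binary

        IMAGE="registry.example.com/${PROJECT}:${GIT_COMMIT_SHORT}-${BUILD_NUMBER}"
        docker build -t "${IMAGE}" .
        docker tag "${IMAGE}" "registry.example.com/${PROJECT}:latest"
        docker push "${IMAGE}"
        docker push "registry.example.com/${PROJECT}:latest"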

    Tagging the containers

    A Docker container image is referenced like this:
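
    Schematically – the example host is made up:

        [registry-host[:port]/]name[:tag]
        # e.g. registry.example.com:5000/some/thing:latest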

    How do we choose how to name the container we just built? Two versions.

    For projects which contain nothing but a Dockerfile (which we have, cause our build containers are also versioned, of course), I enforce this combination:
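
    A made-up example to illustrate (the separator between commit ID and build number is my guess):

        git repository:  docker-one-two
        container image: one/two:1234abc9-321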

    The build script enforces the repository naming scheme “docker-one-two” and automatically uses “one” and “two” as the name for the container. “1234abc9” is then the (short) git commit ID, and “321” is the build number.

    Why not only the git commit ID? Because executing the same build twice does not guarantee the same result. If the container build contains an “apt-get update”, two builds a week apart will not produce the same contents.

    For “simple” or “pure” code builds I use the following scheme:
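
    Again a made-up illustration, assuming the image name comes from the project and the tag stays commit ID plus build number:

        git repository:  some-thing
        container image: some/thing:1234abc9-321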

    Same reasoning.

    In both cases a container “some/thing:latest” is also tagged and pushed.

    Now when we run a software container, we can see

    • with which container it was built (by looking at “SET_BUILD_CONTAINER”),
    • which base container was used to build the software container (by looking at “docker/Dockerfile”)
    • and we can do this cause we know the git commit ID.

    For each base container (or “pure” Dockerfile projects), we extend this with a build number.

    Done.

    So this is my state so far. If anyone reads this, I would be interested in comments or feedback.

     

     
    • Tom Trahan 21:49 on 2015-12-02

      hi @flypenguin – Nice journey through setting up CI/CD, and thanks for checking out Shippable. I’m with Shippable, and we recently launched a beta for integrating with private git instances and for deploying your containers automatically, with rollback, to Amazon EC2 Container Service or Elastic Beanstalk. This essentially enables a fully automated pipeline from code change through multiple test environments and, ultimately, production. This will GA soon, along with additional functionality that I think you’ll find a great fit for the pipeline you’ve described, with less effort and lower cost. I’d be happy to walk you through it and answer any questions. Just drop me an email.

      Tom

  • penguin 19:32 on 2015-11-27

    My take at a CI infrastructure, Pt.2 

    For CI I want the classics – a check-in (push) to the repo should be caught by TeamCity and trigger …

    • a build of the artifact, once
    • running of unit tests
    • containerizing the artifact
    • uploading it to a private Docker registry

    The question was: How?

    This post deals with building the code.

    Building Code

    When I build code I am faced with a simple question: Which library versions do I use?

    When I have multiple projects, the question becomes complex. Which version do I install on which build agent? How do I assign build tasks to agents? What if some software cannot be installed? How can I do a rollback? Or try with another lib version quickly?

    The solution: build containers. I am sure I have read about this somewhere – it is in no way an invention of my own – but I just can’t find an article explaining it.

    It basically goes like this: we have a Docker container which contains all the necessary build libs in their development form, plus the build tools. We pull the container, mount our checked-out code directory into it, and run the build in its controlled environment. We want a different set of libs? We rebuild the container with them and use that one to build the project. Doesn’t work? Go back to the previous one.

    The prerequisite of this is a build process that does not change, or at least does not change for a set of projects. We use CMake, so it’s the same build commands over and over: “cmake .”, “make”, “make test”. That’s it. My first working build container looks like this:
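
    A minimal sketch of such a build container – base image and package list are assumptions, not our actual one:

        FROM ubuntu:14.04
        # everything needed to run "cmake . && make && make test"
        RUN apt-get update && apt-get install -y \
                build-essential \
                cmake \
                git \
            && rm -rf /var/lib/apt/lists/*
        WORKDIR /build
        CMD ["sh", "-c", "cmake . && make && make test"]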

    Building the code now is super easy:
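
    Something along these lines (the image name is a placeholder):

        cd /path/to/checked-out/project
        docker run --rm \
            -v "$(pwd):/build" -w /build \
            registry.example.com/build/cpp:1234abc9-321 \
            sh -c "cmake . && make && make test"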

    Done.

    … or is it? One question remains: how do I select the build container?

    There are two possibilities: in the build system configuration (read: TeamCity), or in the code. I went for the code. The reason is pretty simple: when I check out a specific revision of the code, I know which container it was built with. From there I can work my way up:
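
    Meaning roughly this chain:

        git revision of the project
          └─> SET_BUILD_CONTAINER in the repo root  → the exact build container that was used
                └─> that container’s own Dockerfile project → the base image and libs it was built from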

    Guess what’s in “SET_BUILD_CONTAINER”? Correct. Something like this:
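
    For illustration – registry host and image name are made up:

        registry.example.com/build/cpp:1234abc9-321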

    The build configuration in TeamCity reads the file and acts accordingly. Later I will talk more about those tags, and in the next post I will talk about containerizing the binaries.

     
  • penguin 19:09 on 2015-11-27

    My take at a CI infrastructure, Pt.1 

    … so far.

    It might be crappy, but I’ll share it, cause it’s working. (Well, today it started doing this 😉 ). But enough preamble, let’s jump in.

    The Situation

    I am in a new project. Those people have nothing but a deadline, and when I say nothing I mean it. Not even code. They asked me what I would do, and I said “go cloud, use everything you can from other people, so you don’t have to do it, and you stay in tune with the rest of the universe” (read: avoid NIH syndrome). They agreed, and hired me.

    The Starting Point

    They really want the JetBrains toolchain; the devs use CLion. They also want YouTrack for ticketing (which doesn’t blow my mind so far, but it’s ok). Naturally they want to use TeamCity, which is the Jenkins alternative from JetBrains, and it is pretty all right from what I can see so far.

    The code is probably 95%+ C++, and creates a stateless REST endpoint in the cloud (but load balanced). That’s a really simple setup to start with, just perfect.

    Source code hosting was initially planned to be either in-house or in the cloud we pay for, not with a hosting provider. Up to now they were using git, but without a graphical frontend, which meant every git repo had to be created manually (by the part-time admin).

    The Cloud Environment

    That’s just practical stuff now, and has nothing – yet – to do with CI/CD. Skip it if you’re just interested in that. Read it if you want to read my brain.

    I looked around for full-stack hosted CI/CD systems, notably found only Shippable, and decided they don’t fully match the requirements (even if we move source code hosting out). So I went to AWS and tried Elastic Beanstalk. It is quite cool; unfortunately, scaling takes about 3-5 minutes for a new host to come up (tested with my little load dummy tool in a simple setup, which I stupidly didn’t save).

    Anyway, before deploying services, CI (the compilation & build stuff) must work. So my first goal was to get something up and running ASAP – and that’s bold and capitalized. Fully automated, of course.

    For any Kubernetes/CoreOS/… layout I lack the experience to get it available quickly, and – really – none of the AWS “click here to deploy” images of those tools worked out of the box. So I started fairly conventionally with a simple CloudFormation template spawning a few hosts: TeamCity server, TeamCity agent, Docker registry, and – really important – GitLab. Since then GitLab has been replaced by a paid GitHub account, all the better.

    To set the hosts up I used Puppet (no surprise, me being a Puppet “Expert”). Most of the time went into writing a TeamCity Puppet module. A quirk is that the agents can only download their ZIP distribution image from a running master, which is kinda annoying to get right in Puppet. For now TeamCity is also set up conventionally (without Docker), which I might change soon, at least for the server. The Postgres database runs in a container, though, which is super simple to set up (please donate a bit if you use that image, even 1€ / $1 helps – that guy did a great job!). Same goes for GitLab (same guy), and Redis (again). I also used the anti-pattern of configuring the hosts based on their IP addresses.
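
    Not the image from the post, but as a generic sketch, running Postgres in a container boils down to something like this:

        docker run -d --name postgres \
            -e POSTGRES_PASSWORD=changeme \
            -v /srv/postgres:/var/lib/postgresql/data \
            -p 5432:5432 \
            postgres:9.4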

    I also wanted to automate host bootstrapping, so I did that in the CloudFormation template for each host. The archive downloaded in that bootstrap script contains three more scripts – a distribution-dependent one is called first; have a look to see the details. Basically it is just a way to download a snapshot of our current Puppet setup (encrypted) and initialize it so Puppet can take over. I also use “at” in those scripts to perform a reboot and an action afterwards, which is highly convenient.
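
    The “at” trick, sketched – the post-reboot command is just a placeholder:

        # atd keeps the job across the reboot and runs it once the host is back up
        echo "puppet agent --test" | at now + 5 minutes
        reboot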

    CI (finally)

    … in the next post 😉

     