Tagged: docker

  • penguin 12:18 on 2018-08-26 Permalink | Reply
    Tags: docker, postgres   

    Databases with dokku 

    This is part 2 of a couple of blog posts about dokku, an amazing little Heroku clone.

    In the previous post I showed how to set up Dokku on a DigitalOcean droplet and deployed a little hello-world container with a single git push. The reason I wanted dokku, though, was the need for a database. As said – hosting comes cheap; databases usually come either expensive, with limited flexibility, or with just too much configuration effort.

    Dokku is the perfect middle ground. Let’s see why.

    For me it was the existing postgres plugin, which you can simply install and use. The whole process is incredibly easy, takes about two commands, and looks like this (let’s assume our “hello world” container uses a database):
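A sketch of those two commands, assuming the official dokku-postgres plugin and an illustrative database name:

```shell
# on the dokku host, as root: install the postgres plugin (only needed once)
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
# create a database container; "hello-world-db" is an assumed name
dokku postgres:create hello-world-db
```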

    That’s it, again.

    This creates a database container with postgres 10.2. You can influence a lot of the behavior by using environment variables; see the GitHub page for more info.

    Then you link the container to the running app:
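A sketch, with the app and database names assumed from above:

```shell
# link the database container to the app; this restarts the app
dokku postgres:link hello-world-db hello-world
```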

    And done.

    What happened? The environment variable $DATABASE_URL is now set in the hello-world app – that’s why the restart was necessary (you can postpone it if you want, but you probably need it now, right?).

    Let’s check:
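For example, using dokku’s config commands (output omitted):

```shell
# show all environment variables of the app
dokku config hello-world
# or just the one we care about
dokku config:get hello-world DATABASE_URL
```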

    That’s it. Super easy. Now if you’re using Django, you could use kennethreitz/dj-database-url to automatically parse and use it, and you’re done. (Probably every framework has something similar, so just have a look).

  • penguin 18:10 on 2018-08-25 Permalink | Reply
    Tags: digitalocean, docker, heroku, howto   

    Build your own PaaS with Dokku 

    I was looking for some “play” deployment method for a couple of things I want to try out. Most of them require a database. And it should be cheap: there is no load on them and they don’t earn any money, so I want to spend basically no money if possible. The usual suspects – AWS, Heroku, etc. – are too expensive.

    So I looked around and found Dokku.

    Dokku is a set of – hang on – shell scripts – which basically emulate Heroku on a machine of your own. It’s integrated with DigitalOcean droplets out of the box, if you want that. And the whole thing costs 5 € / month, which is perfect. It also supports Dockerfile-based deployments, so you do git push and everything just works.

    It’s amazing.

    This is how you get started. But before you can get started, you need a domain you control, either on AWS or any other hoster. This is for routing traffic to your deployments later. You also need a public SSH key, or better a public / private key pair. Once you have both you can …

    1. create a Digital Ocean account, and …
    2. add your SSH public key to your account, and …
    3. in that account, create a new droplet with a “Dokku” image preinstalled.
    4. Wait until the droplet has finished provisioning.

    While the droplet is being created, you can also create a project locally to test it later:
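A minimal sketch of such a test project; the image name is an assumption (any web hello-world image works, `crccheck/hello-world` serves a “Hello World” page):

```shell
# create a tiny project containing nothing but a Dockerfile
mkdir hello-world && cd hello-world
git init
# the whole "application": a hello-world web image (assumed image name)
echo "FROM crccheck/hello-world" > Dockerfile
git add Dockerfile
git -c user.name=test -c user.email=test@example.com commit -m "hello world test project"
```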

    In this little test project we only create a Dockerfile from a hello-world image which displays “Hello world” in a browser, so we can verify it worked.

    Once the droplet is done, you can start setting up your personal little PaaS. First, you have to configure your DNS. We will set up a wildcard entry for our deployments, and a non-wildcard entry for git. Let’s assume your domain is for-myself.com, then you would add …

    • my-paas.for-myself.com, type “A” (or “AAAA” if you use IPv6), pointing to your droplet IP
    • *.my-paas.for-myself.com just the same

    Then you SSH into your droplet and create your dokku project (this is something you have to do for every project). All you have to do for this is:
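A single command, with the app name assumed:

```shell
# on the droplet
dokku apps:create hello-world
```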


    Now you configure a git remote URL for your project, and push it:
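Something like this, with the git host taken from the DNS setup above:

```shell
# locally, inside the hello-world project directory
git remote add dokku dokku@my-paas.for-myself.com:hello-world
git push dokku master
```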

    Again – done. If you push your project now (assuming DNS is already set up), everything should happen automagically.

    And if you open your URL now (which is hello-world.my-paas.for-myself.com) you should see the “Hello world” page.

    Now, for 5 € / month you get:

    • A heroku-like, no-nonsense, fully automated, git-based deployment platform
    • A server which you control (and have to maintain, okay, but on which you can deploy …)
    • A database (or many of them – dokku provides great integration for databases btw; more on that in another post)
    • Publicly reachable deployments (for customers, testing, whatever)
    • Let’s Encrypt certificates (dokku provides support for these as well, again more in a later post)
    • And for 1 € more (it’s always 20% of the base price) you get backups of your system

    That’s absolutely incredible. Oh, and did I mention that the maintainers are not only friendly, but also super responsive and incredibly helpful on Slack?

  • penguin 13:31 on 2017-01-12 Permalink | Reply
    Tags: docker, logging, ops   

    Logs with docker and logstash 

    It would be nice to have all container logs from a docker cluster sent to … let’s say, an ELK stack. Right?


    So we did:

    • on each host in the cluster, we use the GELF log driver to send all logs to a logstash instance
    • the logstash instance clones each request using type “ELK”
    • to the “ELK” clone, it adds the token for the external ELK service
    • the “ELK” clone goes out to the external ELK cluster
    • the original event goes to S3.

    Here’s how.
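A sketch of that pipeline as a logstash config; the port, token field, and output details are assumptions (the docker side is the GELF log driver pointed at this logstash instance):

```conf
input {
  # receives everything the docker GELF log driver sends
  gelf { port => 12201 }
}
filter {
  # duplicate each event; the copy gets type "ELK"
  clone { clones => ["ELK"] }
  if [type] == "ELK" {
    # the clone carries the token for the external ELK service
    mutate { add_field => { "token" => "<external-elk-token>" } }
  }
}
output {
  if [type] == "ELK" {
    # the clone goes out to the external ELK cluster
    tcp { host => "elk.example.com" port => 5000 codec => json_lines }
  } else {
    # the original event is archived in S3
    s3 { bucket => "my-log-archive" region => "eu-west-1" }
  }
}
```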

    (More …)

    • David Sanftenberg 09:30 on 2017-07-04 Permalink | Reply

      Multiline gelf filters are no longer supported in 5.x of Logstash it seems. I’m considering downgrading to 4.x for this, as we use a lot of microservices and many JSONs are logged simultaneously, really messing up our logs. Thanks for the writeup.

  • penguin 16:22 on 2016-06-28 Permalink | Reply
    Tags: docker   

    Testing logstash configs with Docker 

    Now this is really not rocket science, but since I might do this more often, I don’t want to google it every time.

    Prepare your directories
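Something like this, with an assumed layout – one directory for the config, one for test input:

```shell
# working directories for the test setup (names are an assumption)
mkdir -p conf input
```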

    Prepare your logstash config
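A minimal test config, reading from stdin and printing parsed events (file path assumed):

```conf
# conf/logstash.conf – minimal pipeline for testing filters
input { stdin { } }
filter {
  # put the filters under test here
}
output { stdout { codec => rubydebug } }
```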

    Run logstash
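Using the official logstash image; the mount path and config filename are assumptions:

```shell
# mount the config dir and point logstash at it
docker run --rm -it -v "$PWD/conf:/config-dir" logstash -f /config-dir/logstash.conf
```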


    Done. 🙂

  • penguin 19:55 on 2015-11-27 Permalink | Reply
    Tags: , docker   

    My take at a CI infrastructure, Pt.3 

    All right, back again. Much text here. Let’s talk about …

    Containerizing The Binaries

    We are done with the build; now we have a binary. I went for something simple: Who knows best how to put this into a container? The dev guy. Cause he knows what he needs, where he needs it, and where it can be found after the build.

    But containerizing it should not be hard, given moderately complex software with a couple of well-thought-out build scripts. So I went for this:
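The convention looks roughly like this (directory and file names reconstructed from the build steps below, so treat them as illustrative):

```conf
my-project/
├── src/            # the software itself
└── docker/
    ├── Dockerfile  # written by the dev, knows where the binary ends up
    └── prepare.sh  # optional: collects build artifacts before "docker build"
```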

    Now it gets straightforward: The build scripts in TeamCity …

    • look for the docker directory, change into it,
    • execute the “prepare.sh” script if found,
    • build a container from the Dockerfile,
    • tag the container and
    • push it into the registry (which is configured centrally in TeamCity)

    Tagging the containers

    A docker container is referenced like this:
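In general form (the registry host is optional and assumed here):

```conf
[registry-host[:port]/]namespace/name:tag
```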

    How do we choose how to name the container we just built? Two versions.

    For projects which contain nothing but a Dockerfile (which we have, cause our build containers are also versioned, of course), I enforce this combination:
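The scheme, reconstructed from the description below with illustrative values (the exact separator is an assumption):

```conf
# repo name: docker-one-two  →  image name: one/two
# tag: <short-commit-id>-<build-number>
one/two:1234abc9-321
```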

    The build script enforces the repo naming scheme “docker-one-two” and automatically takes “one” and “two” as names for the container. Then “1234abc9” is the short git commit ID, and “321” is the build number.

    Why not only the git commit ID? Because executing the same build twice is not guaranteed to produce the same result. If you build a container, and the build contains an “apt-get update”, two builds a week apart will not result in the same contents.

    For “simple” or “pure” code builds I use the following scheme:
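A sketch, assuming it follows the same pattern, with the repo name providing the container name:

```conf
some/thing:<short-commit-id>-<build-number>
```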

    Same reasoning.

    In both cases a container “some/thing:latest” is also tagged and pushed.

    Now when we run a software container, we can see

    • with which container it was built (by looking at “SET_BUILD_CONTAINER”),
    • which base container was used to build the software container (by looking at “docker/Dockerfile”)
    • and we can do this cause we know the git commit ID.

    For each base container (or “pure” Dockerfile projects), we extend this with a build number.


    So this is my state so far. If anyone reads this, I would be interested in comments or feedback.


    • Tom Trahan 21:49 on 2015-12-02 Permalink | Reply

      Hi @flypenguin – nice journey through setting up CI/CD, and thanks for checking out Shippable. I’m with Shippable, and we recently launched a beta for integrating with private git instances and for deploying your containers automatically, with rollback, to Amazon EC2 Container Service or Elastic Beanstalk. This essentially enables a fully automated pipeline from code change through multiple test environments and, ultimately, production. This will GA soon along with additional functionality that I think you’ll find a great fit with the pipeline you’ve described, with less effort and lower costs. I’d be happy to walk you through it and answer any questions. Just drop me an email.


  • penguin 19:32 on 2015-11-27 Permalink | Reply
    Tags: , docker   

    My take at a CI infrastructure, Pt.2 

    For CI I want the classics – a check-in (push) to the repo should be caught by TeamCity, and trigger …

    • a build of the artifact, once
    • running of unit tests
    • containerizing the artifact
    • uploading it to a private Docker registry

    The question was: How?

    This post deals with building the code.

    Building Code

    When I build code I am faced with a simple question: Which library versions do I use?

    When I have multiple projects, the question becomes complex. Which version do I install on which build agent? How do I assign build tasks to agents? What if some software cannot be installed? How can I do a rollback? Or try with another lib version quickly?

    The solution: build containers. I am sure I have read about this somewhere – it is in no part an invention of my own – but I just can’t find the article explaining it.

    It basically goes like this: we have a docker container which contains all necessary build libs in their development form, plus the build tools. We pull the container, mount our checked-out code dir into it, and run the build in its controlled environment. We want a different set of libs? We rebuild the container with them and use that container to build the project. Doesn’t work? Go back to the previous one.

    The prerequisite of this is a build process that does not change, or at least does not change for a set of projects. We use CMake, so it’s the same build commands over and over: “cmake .”, “make”, “make test”. That’s it. My first working build container looks like this:
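A sketch of such a build container; the base image and library selection are assumptions:

```conf
# build container: all build deps in development form, plus the toolchain
FROM debian:8
RUN apt-get update && apt-get install -y \
      build-essential \
      cmake \
      libboost-all-dev
# the checked-out code gets mounted here at run time
WORKDIR /src
# the fixed build procedure: cmake, make, make test
CMD cmake . && make && make test
```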

    Building the code now is super easy:
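Roughly like this (the container name is an assumption):

```shell
# mount the checked-out code into the build container and run the build
docker run --rm -v "$(pwd):/src" build/cpp-base:latest
```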


    … or? One question remains: How do I select the build container?

    There are two possibilities: in the build system configuration (read: TeamCity), or in the code. I went for the code. The reason is pretty simple: if I check out a specific revision of the code, I know which container it was built with. From there I can work my way up:

    Guess what’s in “SET_BUILD_CONTAINER”? Correct. Something like this:
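Something like this – a single image reference, with illustrative names following the tagging scheme from the next post:

```conf
# contents of SET_BUILD_CONTAINER: the build container image for this revision
build/cpp-base:1234abc9-321
```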

    The build configuration in TeamCity reads the file, and acts accordingly. Later I will talk more on those tags, and in the next post I talk about containerizing the binaries.

  • penguin 13:59 on 2015-07-15 Permalink | Reply
    Tags: docker   

    Docker and proxies 

    … so I don’t forget.

    “docker pull” will not use the HTTP_PROXY variable. Why? Because “docker” is just the CLI program which tells the daemon what to do, and the daemon does not know about the variable if it is only set in your terminal.

    So, what to do to make docker use it is described pretty well here: https://docs.docker.com/articles/systemd/#http-proxy
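The gist of the linked page is a systemd drop-in for the docker service (proxy address is a placeholder):

```conf
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
```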

    Next thing: don’t forget to run “systemctl daemon-reload”, because otherwise the change will not be effective, even with “systemctl restart docker”.


  • penguin 06:48 on 2015-04-15 Permalink | Reply
    Tags: docker, fedora, general, rhel, ssl   

    Fedora, docker and self-signed SSL certs 

    I am behind a company firewall with a man-in-the-middle SSL certificate for secure connections. Can’t have viruses over SSL, can we?

    But apps which actually verify SSL connections (which is all of the apps using standard SSL/TLS libs) do not like this. And rightfully so. But then we’re left with a problem: docker pull fails because it cannot verify the registry’s certificate.

    Now, to solve this on Fedora we do the following (all as root):

    • get a file with the signing certificate as PEM or DER format
    • place this file under /etc/pki/ca-trust/source/anchors
    • run “update-ca-trust extract”
    • restart docker (“systemctl restart docker.service”)
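As commands (all as root; the certificate filename is an assumption):

```shell
# trust the company MITM CA system-wide, then restart docker
cp company-ca.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
systemctl restart docker.service
```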

    A “man update-ca-trust” is also helpful to understand what’s happening.

