Tagged: docker

  • penguin 13:31 on 2017-01-12 Permalink | Reply
    Tags: docker, logging, ops

    Logs with docker and logstash 

    It would be nice to have all container logs from a docker cluster sent to … let’s say, an ELK stack. Right?

    Right.

    So we did:

    • on each host in the cluster, we use the GELF log driver to send all logs to a logstash instance
    • the logstash instance clones each request using type “ELK”
    • to the “ELK” clone, it adds the token for the external ELK service
    • the “ELK” clone goes out to the external ELK cluster
    • the original event goes to S3.

    Here’s how.
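    In outline, and with placeholder values (host names, ports, the S3 bucket and the field carrying the ELK token are not the real ones), it looks roughly like this.

    On each docker host, containers are started with the GELF log driver pointing at the logstash instance:

        docker run -d \
          --log-driver=gelf \
          --log-opt gelf-address=udp://logstash.internal:12201 \
          my/app:latest

    And on the logstash side, a pipeline that clones every event, decorates the clone and routes by type:

        input {
          gelf {
            port => 12201
          }
        }

        filter {
          # the clone filter sets "type" to the clone's name, here "ELK"
          clone {
            clones => ["ELK"]
          }
          if [type] == "ELK" {
            mutate {
              # field name and token value for the external ELK service
              # are placeholders
              add_field => { "token" => "EXTERNAL-ELK-TOKEN" }
            }
          }
        }

        output {
          if [type] == "ELK" {
            tcp {
              host  => "elk.example.com"
              port  => 5000
              codec => json_lines
            }
          } else {
            # credentials omitted
            s3 {
              bucket => "my-log-archive"
              region => "eu-west-1"
              codec  => json_lines
            }
          }
        }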

    (More …)

     
    • David Sanftenberg 09:30 on 2017-07-04 Permalink | Reply

      Multiline gelf filters are no longer supported in 5.x of Logstash it seems. I’m considering downgrading to 4.x for this, as we use a lot of microservices and many JSONs are logged simultaneously, really messing up our logs. Thanks for the writeup.

  • penguin 16:22 on 2016-06-28 Permalink | Reply
    Tags: docker

    Testing logstash configs with Docker 

    Now this is really not rocket science, but since I might do this more often, I don’t want to google every time.

    Prepare your directories
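    Something like this is enough (the paths are just an example):

        mkdir -p ~/logstash-test/config
        cd ~/logstash-test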

    Prepare your logstash config
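    For poking at filters, a minimal pipeline that reads stdin and prints parsed events to stdout does the trick, e.g. as config/test.conf:

        input { stdin { } }

        filter {
          # the filter(s) you actually want to test go here
        }

        output { stdout { codec => rubydebug } }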

    Run logstash
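    Mount the config directory into the official logstash image (image tag is whatever version you are testing against) and point it at the test config:

        docker run --rm -it \
          -v "$(pwd)/config":/config \
          logstash:5 \
          logstash -f /config/test.conf

    Type a test line, check the rubydebug output, Ctrl-D to exit.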

    Done. 🙂

     
  • penguin 19:55 on 2015-11-27 Permalink | Reply
    Tags: docker

    My take at a CI infrastructure, Pt.3 

    All right, back again. Much text here. Let’s talk about …

    Containerizing The Binaries

    We are done with the build, now we have a binary. I went for something simple: Who knows best how to put this into a container? The dev guy. Cause he knows what he needs, where he needs it, and where it can be found after the build.

    But containerizing it should not be hard, given moderately complex software with a couple of well-thought-out build scripts. So I went for this:
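    In short: every project carries a “docker” directory that the dev team owns. Roughly like this (everything except the docker dir, the Dockerfile and prepare.sh is just an example):

        my-project/
        ├── CMakeLists.txt
        ├── src/ …
        ├── SET_BUILD_CONTAINER
        └── docker/
            ├── Dockerfile    # how the built binary goes into a container
            └── prepare.sh    # optional: stage build artifacts for the Dockerfile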

    Now it gets straightforward: The build scripts in TeamCity …

    • look for the docker directory, change into it,
    • execute the “prepare.sh” script if found,
    • build a container from the Dockerfile,
    • tag the container and
    • push it into the registry (which is configured centrally in TeamCity)
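    In shell terms the steps are roughly this (registry, image name and tag come from the build configuration; the variable names here are made up):

        cd docker
        if [ -x ./prepare.sh ]; then ./prepare.sh; fi
        docker build -t "$REGISTRY/$IMAGE:$TAG" .
        docker tag "$REGISTRY/$IMAGE:$TAG" "$REGISTRY/$IMAGE:latest"
        docker push "$REGISTRY/$IMAGE:$TAG"
        docker push "$REGISTRY/$IMAGE:latest"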

    Tagging the containers

    A docker container is referenced like this:
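    That is: registry host, image name (optionally with a namespace), and a tag, for example:

        registry.example.com:5000/some/thing:tag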

    How do we choose the name for the container we just built? There are two schemes.

    For projects which contain nothing but a Dockerfile (which we have, cause our build containers are also versioned, of course), I enforce this combination:
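    For example, something like this (the exact way commit ID and build number are combined in the tag is a detail):

        git repository:  docker-one-two
        container:       one/two:1234abc9-321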

    The build script enforces the scheme “docker-one-two”, and takes “one” and “two” automatically as names for the container. Then “1234abc9” is the git commit id (short), and “321” is the build number.

    Why not only the git commit ID? Because running the same build again is not guaranteed to produce the same result. If the build contains an “apt-get update”, two builds a week apart will not result in the same contents.

    For “simple” or “pure” code builds I use the following scheme:

    Same reasoning.

    In both cases a container “some/thing:latest” is also tagged and pushed.

    Now when we run a software container, we can see

    • with which container it was built (by looking at “SET_BUILD_CONTAINER”),
    • which base container was used to build the software container (by looking at “docker/Dockerfile”)
    • and we can do this cause we know the git commit ID.

    For each base container (or “pure” Dockerfile projects), we extend this with a build number.

    Done.

    So this is my state so far. If anyone reads this, I would be interested in comments or feedback.

     

     
    • Tom Trahan 21:49 on 2015-12-02 Permalink | Reply

      hi @flypenguin – Nice journey through setting up CI/CD and thanks for checking out Shippable. I’m with Shippable and we recently launched a beta for integrating with private git instances and for deploying your containers automatically, with rollback, to Amazon EC2 Container Service or Elastic Beanstalk. This essentially enables a fully automated pipeline from code change through multiple test environments and, ultimately production. This will GA soon along with additional functionality that I think you’ll find a great fit with the pipeline you’ve described with less effort and lower costs. I’d be happy to walk you through it and answer any questions. Just drop me an email.

      Tom

  • penguin 19:32 on 2015-11-27 Permalink | Reply
    Tags: docker

    My take at a CI infrastructure, Pt.2 

    For CI I want the classics – a check-in (push) to the repo should be caught by TeamCity, and trigger …

    • a build of the artifact, once
    • running of unit tests
    • containerizing the artifact
    • uploading it to a private Docker registry

    The question was: How?

    This post deals with building the code.

    Building Code

    When I build code I am faced with a simple question: Which library versions do I use?

    When I have multiple projects, the question becomes complex. Which version do I install on which build agent? How do I assign build tasks to agents? What if some software cannot be installed? How can I do a rollback? Or try with another lib version quickly?

    The solution: Build containers. I am sure I have read about it somewhere (this is in no way an invention of my own), but I just can’t find an article explaining it.

    It basically goes like this: We have a docker container which contains all necessary build libs in their development form, plus the tools to build the project. We pull the container, mount our checked-out code dir into it, and run the build in its controlled environment. We want a different set of libs? We rebuild the container with them and use that one to build the project. Doesn’t work? Go back to the previous one.

    The prerequisite of this is a build process that does not change, or at least does not change for a set of projects. We use CMake, so it’s the same build commands over and over: “cmake .”, “make”, “make test”. That’s it. My first working build container looks like this:
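    In spirit it is roughly this (base image and package list are placeholders for whatever the projects actually need):

        # build environment for CMake-based projects
        FROM debian:jessie

        # compiler, cmake, plus the required -dev packages
        RUN apt-get update \
            && apt-get install -y build-essential cmake \
            && rm -rf /var/lib/apt/lists/*

        # the checked-out source tree gets mounted here
        WORKDIR /build

        CMD cmake . && make && make test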

    Building the code now is super easy:
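    Something along these lines (the image name is whatever the build container is called in the registry):

        cd my-project
        docker run --rm -v "$(pwd)":/build registry.example.com/build/cpp:latest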

    Done.

    … or is it? One question remains: How do I select the build container?

    There are two possibilities: in the build system configuration (read: TeamCity), or in the code. I went for the code. The reason is pretty simple: if I check out a specific revision of the code, I know which container it was built with. From there I can work my way up.

    Guess what’s in “SET_BUILD_CONTAINER”? Correct. Something like this:
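    A single line naming a build container in the private registry, for example (registry and image name are placeholders):

        registry.example.com/build/cpp:1234abc9-321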

    The build configuration in TeamCity reads the file, and acts accordingly. Later I will talk more on those tags, and in the next post I talk about containerizing the binaries.

     
  • penguin 13:59 on 2015-07-15 Permalink | Reply
    Tags: docker   

    Docker and proxies 

    … so I don’t forget.

    “docker pull” will not use the HTTP_PROXY variable. Why? Because “docker” is just the CLI program which tells the daemon what to do. And the daemon probably does not know about the variable if it is only set in your terminal.

    So, what to do to make docker use it is described pretty well here: https://docs.docker.com/articles/systemd/#http-proxy
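    In short: a systemd drop-in that hands the variable to the daemon (the proxy address is a placeholder, obviously):

        # /etc/systemd/system/docker.service.d/http-proxy.conf
        [Service]
        Environment="HTTP_PROXY=http://proxy.example.com:3128"
        Environment="NO_PROXY=localhost,127.0.0.1"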

    Next thing: Don’t forget to go “systemctl daemon-reload”, because otherwise this will not be effective, even with “systemctl restart docker”.

    Done.

     
  • penguin 06:48 on 2015-04-15 Permalink | Reply
    Tags: docker, fedora, general, rhel, ssl   

    Fedora, docker and self-signed SSL certs 

    I am behind a company firewall with a man-in-the-middle SSL certificate for secure connections. Can’t have viruses over SSL, can we?

    But apps which actually verify SSL connections (which is all of the apps using standard SSL/TLS/whatnot libs) do not like this. And rightfully so. But then we’re left with the following problem:
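    With docker this typically surfaces as a failing pull (exact wording varies with the docker version), along the lines of:

        $ docker pull fedora
        ... x509: certificate signed by unknown authority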

    Now, to solve this on Fedora we do the following (all as root):

    • get a file with the signing certificate as PEM or DER format
    • place this file under /etc/pki/ca-trust/source/anchors
    • run “update-ca-trust extract”
    • restart docker (“systemctl restart docker.service”)
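    Or, as commands (the certificate file name is just an example):

        cp corporate-ca.pem /etc/pki/ca-trust/source/anchors/
        update-ca-trust extract
        systemctl restart docker.service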

    A “man update-ca-trust” is also helpful to understand what’s happening.


     