Updates from January, 2020

  • penguin 21:28 on 2020-01-25 Permalink | Reply
    Tags: check_mk

    Check MK container/k8s deployment 

    In the company everybody seems to love Check MK. Me? Not so much, but a better alternative costs time and effort, both resources we don’t really have right now. Yet there’s a positive thing about it – there’s an official docker container. Since I already coded a helm chart for stateful single-container software (which I personally find super useful), I just wrote a Check MK YAML and installed it on my K8S cluster.

    And then nothing worked. Turns out, Apache – which is used in that very strange “Open Monitoring Distribution” which Check MK seems to have been at one point – has a slightly sub-optimal configuration for running in a container behind a load balancer using cert-manager.

    In short, you connect to the load balancer using “cmk.my.domain”, and it redirects you to the container port, i.e. to “https://cmk.my.domain:5000/”, which is just wrong. Which brings me to the question whether anybody has ever tried to run the Check MK container in a k8s cluster or behind a load balancer, which brings me to the thought that I’d rather use software which actively embraces that, which brings me to the question WHICH ONE?!?, which brings us back to “no resources, no time”.

    So, bad luck, Check MK it is. But what about the bug? Reporting it you get an email “DONT CALL US – WE CALL YOU (and we probably won’t)“, with a ticket ID but no link. So probably no help here. So I “forked” the container, fooled around with it, and found a solution. The “fixed” container is now available on docker hub (sources on GitHub) and running nicely in our internal cluster. Let’s see which hidden bugs I have introduced 😉 . The stasico-Helm-YAML file I used to deploy Check MK in K8S is also available.
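
    For reference, deploying it is then just a standard helm invocation. This is only a sketch – chart path, release name, values file and namespace below are placeholders for whatever you cloned (helm 3 syntax; with helm 2 you would pass --name instead):

    # chart path, values file and namespace are placeholders
    helm install checkmk ./stasico-single-container \
      --namespace monitoring \
      -f checkmk-values.yaml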

    TL;DR
     
  • penguin 12:18 on 2018-08-26 Permalink | Reply
    Tags: postgres

    Databases with dokku 

    This is part 2 of a couple of blog posts about dokku, an amazing little Heroku clone.

    In the previous post I showed how to set up Dokku on a DigitalOcean droplet, and deployed a little hello-world container with a single git push. The reason why I wanted dokku, though, was that I needed a database. As said – hosting comes cheap; databases usually come either expensive, with limited flexibility, or with just too much configuration effort.

    Dokku is the perfect middle ground. Let’s see why.

    For me it was the existing postgres plugin which you can simply install and use. The whole process is incredibly easy, takes about two commands, and looks like this (let’s assume our “hello world” container uses a database):

    $ sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres

    That’s it, again.

    $ dokku postgres:create hello-world
    
           Waiting for container to be ready
           Creating container database
           Securing connection to database
    =====> Postgres container created: hello-world
    =====> Container Information
           Config dir: /var/lib/dokku/services/postgres/hello-world/config
           Data dir: /var/lib/dokku/services/postgres/hello-world/data
           Dsn: postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world
           Exposed ports: -
           Id: 785ef252c748ed85739d1d6ad375a1e1bd66e925ac79358e9ffaa30ab852d6c0 
           Internal ip: 172.17.0.9
           Links: -
           Service root: /var/lib/dokku/services/postgres/hello-world
           Status: running
           Version: postgres:10.2
    
    $ docker ps
    
    CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS      NAMES
    cc99cccacf2c   dokku/hello-world:latest   "/bin/sh -c 'php-fpm…"   2 minutes ago   Up 2 minutes   80/tcp     hello-world.web.1
    785ef252c748   postgres:10.2              "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes   5432/tcp   dokku.postgres.hello-world
    [...]
    

    This creates a database container with postgres 10.2, as you can see. You can influence a lot of behavior by using environment variables, see the GitHub page for more info.
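
    For example, to pin the image and version the plugin uses, you set the corresponding environment variables before running the create command. The variable names below are the ones documented in the dokku-postgres README at the time of writing, so double-check there:

    # variable names as per the dokku-postgres README
    export POSTGRES_IMAGE="postgres"
    export POSTGRES_IMAGE_VERSION="10.2"
    dokku postgres:create hello-world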

    Then you link the container to the running app:

    $ dokku postgres:link hello-world hello-world
    -----> Setting config vars
           DATABASE_URL: postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world
    -----> Restarting app hello-world
    -----> Releasing hello-world (dokku/hello-world:latest)...
    -----> Deploying hello-world (dokku/hello-world:latest)...
    -----> Attempting to run scripts.dokku.predeploy from app.json (if defined)
    -----> No Procfile found in app image
    -----> DOKKU_SCALE file found (/home/dokku/hello-world/DOKKU_SCALE)
    =====> web=1
    -----> Attempting pre-flight checks 
           For more efficient zero downtime deployments, create a file CHECKS. 
           See http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ for examples 
           CHECKS file not found in container: Running simple container check...
    -----> Waiting for 10 seconds ...
    -----> Default container check successful!
    -----> Running post-deploy
    -----> Configuring hello-world.my-paas.for-myself.com...(using built-in template)
    -----> Creating http nginx.conf
    -----> Running nginx-pre-reload
           Reloading nginx
    -----> Setting config vars
           DOKKU_APP_RESTORE: 1
    -----> Found previous container(s) (14c349cb496d) named hello-world.web.1
    =====> Renaming container (14c349cb496d) hello-world.web.1 to hello-world.web.1.1535285386
    =====> Renaming container (cc99cccacf2c) serene_bassi to hello-world.web.1
    -----> Attempting to run scripts.dokku.postdeploy from app.json (if defined)
    -----> Shutting down old containers in 60 seconds
    =====> 14c349cb496d95cc4be1833f2e7f6ef2bef099a37c2a22cd4dcdb542f09bea0f
    =====> Application deployed:
           http://hello-world.my-paas.for-myself.com

    And done.

    What happened? You now have the environment variable $DATABASE_URL set in the hello-world app, which is why the restart was necessary (you can postpone it if you want, but you probably need the variable right away, right?).

    Let’s check:

    $ docker exec -ti hello-world.web.1 /bin/sh 
    
    [now in the container]
    
    # env | grep DATABASE 
    DATABASE_URL=postgres://postgres:bd6b0725d710bb5a662bb628eee787b1@dokku-postgres-hello-world:5432/hello_world 
    

    That’s it. Super easy. Now if you’re using Django, you could use kennethreitz/dj-database-url to automatically parse and use it, and you’re done. (Probably every framework has something similar, so just have a look).
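
    Independent of the framework, you can also get at the database from the shell; the plugin ships a connect command, and the DSN is just a regular config value. A quick sketch (command names as per the dokku and dokku-postgres docs):

    # psql shell inside the postgres service container
    dokku postgres:connect hello-world

    # or read the DSN from the app's config and use it with any client
    dokku config:get hello-world DATABASE_URL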

     
  • penguin 18:10 on 2018-08-25 Permalink | Reply
    Tags: digitalocean, heroku, howto

    Build your own PaaS with Dokku 

    I was looking for some “play” deployment method for a couple of things I want to try out. Most of them require a database. And it should be cheap, cause I don’t have any load on them and don’t earn any money with them, so I want to spend basically no money if possible. The usual suspects are too expensive – AWS, Heroku, etc.

    So I looked around and found Dokku.

    Dokku is a set of – hang on – shell scripts – which basically emulate Heroku on a machine of your own. It’s integrated with Digital Ocean droplets out of the box, if you want it. And the whole thing is 5 € / month, which is perfect. It also integrates with a Dockerfile based deployment, so you do git push and everything just works.

    It’s amazing.

    This is how you get started. But before you can get started, you need a domain you control, either on AWS or any other hoster. This is for routing traffic to your deployments later. You also need a public SSH key, or better a public / private key pair. Once you have both you can …

    1. create a Digital Ocean account, and …
    2. add your SSH public key to your account, and …
    3. in that account, create a new droplet with a “Dokku” image preinstalled.
    4. Wait until the droplet has finished provisioning.

    While the droplet is being created, you can also create a project locally to test it later:

    $ mkdir dokku-test
    $ cd dokku-test
    $ git init
    $ echo "FROM tutum/hello-world" > Dockerfile
    $ git add Dockerfile
    $ git commit -m "Initial commit"
    

    In this little test project we only create a Dockerfile based on a hello-world image which displays “Hello world” in the browser, so we can verify it worked.

    Once the droplet is done, you can start setting up your personal little PaaS. First, you have to configure your DNS. We will set up a wildcard entry for our deployments, and a non-wildcard entry for git. Let’s assume your domain is for-myself.com, then you would add …

    • my-paas.for-myself.com, type “A” (or “AAAA” if you use IPv6), pointing to your droplet IP
    • *.my-paas.for-myself.com, just the same (a Route 53 sketch follows below, in case that’s where your domain lives)
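
    For example, if the domain happens to be hosted in Route 53, both records could be created with the aws CLI roughly like this (hosted zone ID and droplet IP are placeholders; any other DNS provider works just as well):

    cat > records.json <<'EOF'
    {
      "Changes": [
        { "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "my-paas.for-myself.com", "Type": "A", "TTL": 300,
            "ResourceRecords": [ { "Value": "DROPLET_IP" } ] } },
        { "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "*.my-paas.for-myself.com", "Type": "A", "TTL": 300,
            "ResourceRecords": [ { "Value": "DROPLET_IP" } ] } }
      ]
    }
    EOF
    aws route53 change-resource-record-sets \
      --hosted-zone-id YOUR_ZONE_ID \
      --change-batch file://records.json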

    Then you SSH into your droplet, and create your dokku project. (This is something you have to do for every project). All you have to do for this is:

    $ ssh root@DROPLET_IP
    ~# dokku apps:create hello-world
    -----> Creating hello-world... done
    ~# _

    Done.

    Now you configure a git remote URL for your project, and push it:

    $ git remote add dokku dokku@my-paas.for-myself.com:hello-world
    

    Again – done. If you push your project now (assuming DNS is already set), everything should happen automagically:

    $ git push --set-upstream dokku master
    X11 forwarding request failed
    Enumerating objects: 3, done.
    Counting objects: 100% (3/3), done.
    Writing objects: 100% (3/3), 241 bytes | 241.00 KiB/s, done.
    Total 3 (delta 0), reused 0 (delta 0)
    -----> Cleaning up...
    -----> Building hello-world from dockerfile...
    remote: Sending build context to Docker daemon  2.048kB
    Step 1/1 : FROM tutum/hello-world
    latest: Pulling from tutum/hello-world
    658bc4dc7069: Pulling fs layer
    [... TRUNCATED ...]
    983d35417974: Pull complete
    Digest: sha256:0d57def8055178aafb4c7669cbc25ec17f0acdab97cc587f30150802da8f8d85
    Status: Downloaded newer image for tutum/hello-world:latest
     ---> 31e17b0746e4
    Successfully built 31e17b0746e4
    Successfully tagged dokku/hello-world:latest
    -----> Setting config vars
           DOKKU_DOCKERFILE_PORTS:  80/tcp
    -----> Releasing hello-world (dokku/hello-world:latest)...
    -----> Deploying hello-world (dokku/hello-world:latest)...
    -----> Attempting to run scripts.dokku.predeploy from app.json (if defined)
    -----> No Procfile found in app image
    -----> DOKKU_SCALE file not found in app image. Generating one based on Procfile...
    -----> New DOKKU_SCALE file generated
    =====> web=1
    -----> Attempting pre-flight checks
           For more efficient zero downtime deployments, create a file CHECKS.
           See http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/ for examples
           CHECKS file not found in container: Running simple container check...
    -----> Waiting for 10 seconds ...
    -----> Default container check successful!
    -----> Running post-deploy
    -----> Creating new /home/dokku/hello-world/VHOST...
    -----> Setting config vars
           DOKKU_PROXY_PORT_MAP:  http:80:80
    -----> Configuring hello-world.my-paas.for-myself.com...(using built-in template)
    -----> Creating http nginx.conf
    -----> Running nginx-pre-reload
           Reloading nginx
    -----> Setting config vars
           DOKKU_APP_RESTORE:  1
    =====> Renaming container (14c349cb496d) amazing_snyder to hello-world.web.1
    -----> Attempting to run scripts.dokku.postdeploy from app.json (if defined)
    =====> Application deployed:
           http://hello-world.my-paas.for-myself.com
    
    To my-paas.for-myself.com:hello-world
     * [new branch]      master -> master
    Branch 'master' set up to track remote branch 'master' from 'dokku'.

    And if you open your URL now (which is hello-world.my-paas.for-myself.com) you should see the tutum/hello-world page.
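
    If you prefer the command line, a quick check works as well (as far as I remember the tutum/hello-world image serves a page containing “Hello world”):

    $ curl -s http://hello-world.my-paas.for-myself.com | grep -i "hello world"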

    Now, for 5 € / month you get:

    • A heroku-like, no-nonsense, fully automated, git-based deployment platform
    • A server which you control (and have to maintain, okay, but on which you can deploy …)
    • A database (or many of them – dokku provides great integration for databases btw; more on that in another post)
    • Publicly reachable deployments (for customers, testing, whatever)
    • Let’s Encrypt certificates (dokku provides support for these as well; a quick teaser follows below, more in a later post)
    • And for 1 € more (it’s always 20% of the base price) you get backups of your system
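
    The Let’s Encrypt teaser, as a sketch only – this uses the dokku-letsencrypt plugin, and the exact command names differ slightly between plugin versions, so check its README:

    dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
    dokku config:set --no-restart hello-world DOKKU_LETSENCRYPT_EMAIL=you@example.com
    dokku letsencrypt hello-world    # newer plugin versions: dokku letsencrypt:enable hello-world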

    That’s absolutely incredible. Oh, and did I mention that the maintainers are not only friendly, but also super responsive and incredibly helpful on Slack?

     
  • penguin 09:50 on 2018-05-25 Permalink | Reply
    Tags: rbac

    Helm in a kops cluster with RBAC 

    I created a K8S cluster on AWS with kops.

    I ran helm init to install tiller in the cluster.

    I ran helm list  to see if it worked.

    I got this:

    Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" \ 
        cannot list configmaps in the namespace "kube-system"

    That sucked. And google proved … reluctant. What I could figure out is:

    Causes

    • kops sets up the cluster with RBAC enabled (which is good)
    • helm (well, tiller) uses a standard role for doing things (which might be ok, at least it was with my stackpoint cluster), but in that case (for whatever reason) it did not have sufficient privileges
    • so we need to prepare some cluster admin roles for helm to use

    Fixes

    Just do exactly as it says in the helm docs 🙂 :

    • apply the RBAC YAML file which creates the kube-system/tiller service account and binds it to the cluster-admin role (see the sketch below)
    • install helm with: helm init --service-account tiller
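
    For reference, a kubectl-only equivalent of applying that YAML looks roughly like this (the binding name is arbitrary):

    kubectl -n kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller-cluster-admin \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account tiller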

    Is that secure? Not so much. With helm you can still do anything at all to the cluster. I might get to this in a later post.

     
  • penguin 16:14 on 2017-04-13 Permalink | Reply
    Tags: elastic beanstalk

    Elastic Beanstalk with Docker using Terraform 

    I just investigated AWS Elastic Beanstalk, and I want to use terraform for this. This is what I’ve done, and how I got it running. I basically wrote this up because the docs for this are either super-long (and still missing critical points) or super-short (and also missing critical points), at least those I’ve found.

    This should get you up and running in very little time. You can also get all the code from a demo github repository.

    General principles

    The Architectural Overview is a good page to read to get an idea of what you’re about to do. It’s not that long.

    In short, Elastic Beanstalk runs a version of an application in an environment. So the process is: declare an application, define a couple of versions and environments, and then combine one specific version with one specific environment of an app to create an actually running deployment.

    The environment is just a set of hosts configured in a special way (autoscaling & triggers, subnets, roles, etc.), whereas the application version is the info about how to deploy the containers on that environment (ports, env variables, etc.). Naturally, you think of having a DEV environment which runs “latest”, and a PROD environment which runs “stable” or so. Go crazy.

    Prerequisites & Preparation

    For the example here you need a couple of things & facts:

    • An AWS account
    • In that account, you need:
      • an S3 bucket to save your app versions
      • a VPC ID
      • subnet IDs for the instance networks
      • an IAM role for the hosts
      • an IAM service role for Elastic Beanstalk (see bottom for how to create that)
    • Terraform 🙂
    • The aws command line client

    Get started

    The files in the repository have way more parameters, but this is the basic set which should get you running (I tried once, then added all that stuff). The main.tf  file below will create the application and an environment associated with it.

    # file: main.tf
    
    resource "aws_elastic_beanstalk_application" "test" {
      name        = "ebs-test"
      description = "Test of beanstalk deployment"
    }
    
    resource "aws_elastic_beanstalk_environment" "test_env" {
      name                = "ebs-test-env"
      application         = "ebs-test"
      cname_prefix        = "mytest"
    
      # the next line IS NOT RANDOM, see "final notes" at the bottom
      solution_stack_name = "64bit Amazon Linux 2016.09 v2.5.2 running Docker 1.12.6"
    
      # There are a LOT of settings, see here for the basic list:
      # https://is.gd/vfB51g
      # This should be the minimally required set for Docker.
    
      setting {
        namespace = "aws:ec2:vpc"
        name      = "VPCId"
        value     = "${var.vpc_id}"
      }
      setting {
        namespace = "aws:ec2:vpc"
        name      = "Subnets"
        value     = "${join(",", var.subnets)}"
      }
      setting {
        namespace = "aws:autoscaling:launchconfiguration"
        name      = "IamInstanceProfile"
        value     = "${var.instance_role}"
      }
      setting {
        namespace = "aws:elasticbeanstalk:environment"
        name      = "ServiceRole"
        value     = "${var.ebs_service_role}"
      }
    
    }
    

    If you run this, at least one host and one ELB should appear in the defined subnets. Still, this is an empty environment, there’s no app running in it. If you ask yourself, “where’s the version he talked about?” – well, it’s not in there. We didn’t create one yet. This is just the very basic platform you need to run a version of an app.

    In my source repo you can now just use the script app_config_create_and_upload.sh , followed by deploy.sh . You should be able to figure out how to use them, and they should work out of the box. But we’re here to explain, so this is what happens behind the scenes if you do this:

    1. create a file “Dockerrun.aws.json” with the information about the service (Docker image, etc.) to deploy
    2. upload that file into an S3 bucket, packed into a ZIP file (see “final notes” below)
    3. tell Elastic Beanstalk to create a new app version using the info from that file (on S3) – roughly as sketched below
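
    In plain CLI terms, those three steps are roughly this (bucket name, app name and version label match what is used elsewhere in this post; the actual script does a bit more):

    zip latest.zip Dockerrun.aws.json
    aws s3 cp latest.zip s3://my-test-bucket-for-ebs/latest.zip
    aws elasticbeanstalk create-application-version \
      --application-name ebs-test \
      --version-label latest \
      --source-bundle S3Bucket=my-test-bucket-for-ebs,S3Key=latest.zip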

    That obviously was app_config_create_and_upload.sh . The next script, deploy.sh , does this:

    1. tell EBS to actually deploy that configuration using the AWS cli.

    This is the Dockerrun.aws.json  file which describes our single-container test application:

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "flypenguin/test:latest",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": "5000"
        }
      ],
      "Volumes": [],
      "Logging": "/var/log/flypenguin-test"
    }
    

    See “final notes” for the “ContainerPort” directive.

    I also guess you know how to upload a file to S3, so I’ll skip that. If not, look in the script. The Terraform declaration to add the version to Elastic Beanstalk looks like this (if you used my script, a file called app_version_<VERSION>.tf was created for you automatically with pretty much this content):

    # define elastic beanstalk app version "latest"
    resource "aws_elastic_beanstalk_application_version" "latest" {
      name        = "latest"
      application = "${aws_elastic_beanstalk_application.test_app.name}"
      description = "Version latest of app ${aws_elastic_beanstalk_application.test_app.name}"
      bucket      = "my-test-bucket-for-ebs"
      key         = "latest.zip"
    }
    

    Finally, deploying this using the AWS cli:

    $ aws elasticbeanstalk update-environment \
      --application-name ebs-test \
      --version-label latest \
      --environment-name ebs-test-env 
    

    If all was done correctly, this should be it, and you should be able to access your app under your configured address.

    Wrap up & reasoning

    My repo works, at least for me (I hope for you as well). I did not yet figure out the autoscaling, for which I didn’t have time. I will catch up in a 2nd blog post once I figured that out. First tests gave pretty weird results 🙂 .

    The reason why I did this (when I have Rancher available for me) is the auto-scaling and the host management: I don’t want to manage more hosts, Docker versions and Rancher deployments just to run a super-simple, CPU-intensive, scaling production workload which relies on very stable (even pretty conservative) components. Also, I learned something.

    Finally, after reading a lot of postings and way too much AWS documentation, I am surprised how easy this thing actually is. It certainly doesn’t look that way when you start reading up on it. I tried to capture the essence of the whole process in this blog post.

    Final notes & troubleshooting

    1. I have no idea what the aws_elastic_beanstalk_configuration_template  Terraform resource is for. I would like to understand it, but the documentation is rather … sparse.
    2. The solution stack name has semantic meaning. You must set something that AWS understands. This can be found out by using the following command:
      $ aws elasticbeanstalk list-available-solution-stacks 
      … or on the AWS documentation. Whatever is to your liking.
    3. If you don’t specify a security group (aws:autoscaling:launchconfiguration  – “SecurityGroups “) one will be created for you automatically. That might not be convenient because this means that on “terraform destroy” this group might not be destroyed automatically. (which is just a guess, I didn’t test this)
    4. The same goes for the auto scaling group scaling rules.
    5. When trying the minimal example, be extra careful when you can’t access the service after everything is there. The standard settings seem to be: Same subnet for ELB and hosts (obviously), and public ELB (with public IPv4 address). Now, placing a public-facing ELB into an internal-only subnet does not work, right? 🙂
    6. The ZIP file: According to the docs you can only upload the JSON file (or the Dockerfile file if you build the container in the process) to S3. But the docs are not extremely clear, and Terraform did not mention this. So I am using ZIPs which works just fine.
    7. The ContainerPort is always the port the application listens on inside the container; it is not the port which is opened to the outside. That one always seems to be 80 (at least for single-container deployments).

    Appendix I: Create ServiceRole IAM role

    For some reason this did not seem to be necessary on the first test run; on all subsequent runs it was, though. This is how to create the role. Sorry that I couldn’t figure out how to do this with Terraform (a CLI-based sketch follows after the list).

    • open AWS IAM console
    • click “Create new role”
    • Step 1 – select role type: choose “AWS service role”, and under that “AWS Elastic Beanstalk”
    • Step 2 – establish trust: is skipped by the wizard after this
    • Step 3 – Attach policy: Check both policies in the table (should be “AWSElasticBeanstalkEnhancedHealth”, and “AWSElasticBeanstalkService”)
    • Step 4 – Set role name and review: Enter a role name (e.g. “aws-elasticbeanstalk-service-role”), and hit “Create role”

    Now you can use (if you chose that name) “aws-elasticbeanstalk-service-role” as your ServiceRole parameter.
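
    If you prefer the CLI over the console, the same role can presumably be created like this. This is only a sketch: the trust policy mirrors what the wizard sets up, and the two policy ARNs are the managed policies from step 3 – verify both against the current AWS docs:

    cat > eb-trust.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "elasticbeanstalk.amazonaws.com" },
        "Action": "sts:AssumeRole",
        "Condition": { "StringEquals": { "sts:ExternalId": "elasticbeanstalk" } }
      }]
    }
    EOF
    aws iam create-role \
      --role-name aws-elasticbeanstalk-service-role \
      --assume-role-policy-document file://eb-trust.json
    aws iam attach-role-policy \
      --role-name aws-elasticbeanstalk-service-role \
      --policy-arn arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth
    aws iam attach-role-policy \
      --role-name aws-elasticbeanstalk-service-role \
      --policy-arn arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkService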

    Appendix II: Sources

     
  • penguin 13:31 on 2017-01-12 Permalink | Reply
    Tags: logging

    Logs with docker and logstash 

    It would be nice to have all container logs from a docker cluster sent to … let’s say, an ELK stack. Right?

    Right.

    So we did:

    • on each host in the cluster, we use the GELF log driver to send all logs to a logstash instance (see the sketch after this list)
    • the logstash instance clones each event using type “ELK”
    • to the “ELK” clone, it adds the token for the external ELK service
    • the “ELK” clone goes out to the external ELK cluster
    • the original event goes to S3.
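
    The host side of the first step looks roughly like this; the logstash address is a placeholder, and 12201 is the usual GELF port:

    # daemon-wide: /etc/docker/daemon.json, then restart the docker daemon
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "gelf",
      "log-opts": { "gelf-address": "udp://logstash.internal.net:12201" }
    }
    EOF
    systemctl restart docker

    # or per container:
    docker run --log-driver gelf \
      --log-opt gelf-address=udp://logstash.internal.net:12201 \
      my-image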

    Here’s how.

    (More …)

     
    • David Sanftenberg 09:30 on 2017-07-04 Permalink | Reply

      Multiline gelf filters are no longer supported in 5.x of Logstash it seems. I’m considering downgrading to 4.x for this, as we use a lot of microservices and many JSONs are logged simultaneously, really messing up our logs. Thanks for the writeup.

  • penguin 16:22 on 2016-06-28 Permalink | Reply

    Testing logstash configs with Docker 

    Now this is really not rocket science, but since I might do this more often, I don’t want to google every time.

    Prepare your directories

    ./tmp                   # THIS IS YOUR WORKING DIRECTORY
      |- patterns/          # optional
      |   |- patternfile1   # optional
      |   |- patternfile2   # optional
      |- logs.log
      |- logstash.conf

    Prepare your logstash config

    # logstash.conf
    input {
      file {
        path           => '/stash/logs.log'
        # read the existing file from the start on every run
        start_position => 'beginning'
        sincedb_path   => '/dev/null'
      }
    }
    
    filter {
      # whatever config you want to test
      grok {
        match        => [ "message", "%{WHATEVER}" ]
        patterns_dir => '/stash/patterns'              # optional :)
      }
    }
    
    output {
      stdout { codec => rubydebug }
    }

    Run logstash

    docker run --rm -ti -v $(pwd):/stash logstash logstash -f /stash/logstash.conf

    Done. 🙂

     
  • penguin 11:53 on 2016-06-22 Permalink | Reply
    Tags: jumpcloud, ldap, teamcity   

    TeamCity LDAP authentication with JumpCloud 

    JumpCloud looks like a great service to use LDAP without running LDAP yourself. And I have just managed to find an error in the documentation, precisely in the file “ldap-config.properties.dist”.

    The working configuration is:

    # basic jumpcloud url
    java.naming.provider.url=ldap://ldap.jumpcloud.com:389/
    
    # search user for jumpcloud
    java.naming.security.principal=uid=BIND_USER_NAME,ou=Users,o=ORG_ID,dc=jumpcloud,dc=com
    java.naming.security.credentials=BIND_USER_PASSWORD
    
    # unix ldap seems to use uid as username - see https://is.gd/dBPegr
    teamcity.users.login.filter=(uid=$capturedLogin$)
    teamcity.users.username=uid
    teamcity.users.base=ou=Users,o=ORG_ID,dc=jumpcloud,dc=com

    Seems to work nicely, now comes the finetuning.

     
  • penguin 15:01 on 2016-05-30 Permalink | Reply

    Migrate Rancher database from container to external 

    I wanted to switch from an in-container database setup to an external database setup. I didn’t know what happens when you just lose all database contents, and I figured that with Docker and some tweaking, finding out shouldn’t be necessary. So I just migrated the databases. Here’s what I did, for those interested:

    • stop rancher
    • use a container (sameersbn/mysql) to mount the rancher database content and do a mysqldump
    • import the dump into the external database (AWS RDS instance)
    • start rancher up with different parameters (use external database, as described in the official docs)

    And now the actual command lines:

    # create socket directory
    $ cd RANCHER_MYSQL_MOUNT
    $ mkdir sockets
    
    # start sameersbn/mysql to have a mysql container for dumping everything
    $ docker run -d --name temp-mysql -v $(pwd)/sockets:/var/run/mysqld -v $(pwd):/var/lib/mysql sameersbn/mysql
    
    # dump the database
    $ mysqldump -S ./sockets/mysqld.sock --add-drop-database --add-drop-table --add-drop-trigger --routines --triggers cattle > cattle.sql
    
    # restore the database in AWS / whatever
    $ mysql -u USERNAME -p -h DB_HOST DB_NAME < cattle.sql

    (Don’t forget to stop the sameersbn container once you’re done.) I have configured puppet to start rancher. The final configuration in puppet looks like this:

    ::docker::run { 'rancher-master':
      image   => 'rancher/server',
      ports   => "${rancher_port}:8080",
      volumes => [],
      env     =>  [
        "CATTLE_DB_CATTLE_MYSQL_HOST=${db_host}",
        "CATTLE_DB_CATTLE_MYSQL_NAME=${db_name}",
        "CATTLE_DB_CATTLE_MYSQL_PORT=${db_port}",
        "CATTLE_DB_CATTLE_MYSQL_USERNAME=${db_user}",
        "CATTLE_DB_CATTLE_MYSQL_PASSWORD=${db_pass}",
      ],
    }
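
    If you don’t use puppet, the equivalent plain docker run is roughly this (values are placeholders; the env variable names are exactly the ones from above):

    docker run -d --name rancher-master \
      -p 8080:8080 \
      -e CATTLE_DB_CATTLE_MYSQL_HOST=DB_HOST \
      -e CATTLE_DB_CATTLE_MYSQL_NAME=DB_NAME \
      -e CATTLE_DB_CATTLE_MYSQL_PORT=3306 \
      -e CATTLE_DB_CATTLE_MYSQL_USERNAME=DB_USER \
      -e CATTLE_DB_CATTLE_MYSQL_PASSWORD=DB_PASS \
      rancher/server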

    Restart, and it seems to be working just fine. To check go to http://RANCHER_URL/admin/ha (yes, we still use HTTP internally, it will change), and you should see this:

    [Screenshot: the Rancher /admin/ha status page]

    Nice.

    #rancher

     
  • penguin 20:38 on 2016-03-08 Permalink | Reply
    Tags: prometheus

    Host monitoring with Prometheus 

    I needed monitoring. The plan was to go for an external service – if our environment breaks down, the monitoring is still functional (at least as far as the remaining environment goes). I started to evaluate sysdig cloud, which comes somewhat recommended from “the internet”.

    But then I was kinda unsatisfied (to be honest – most probably unjustified) with the service, because I really didn’t like the UI, and then one metric which was displayed was just wrong. So I got back to prometheus, which we use for metrics gathering of our running services anyway, and used it for host metric monitoring, too.

    That’s my setup. (sorry for the crappy graphic, WordPress does not support SVG … ?!?)

    [Diagram: monitoring setup]

    Because I have consul running on every host and puppet deploying everything, I can use puppet to register the exporter services in consul, and consul to configure prometheus, which has native consul support.
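
    What puppet drops on each host is essentially just one consul service definition per exporter, roughly like this (file path and service name are assumptions; 9100 is the node exporter’s default port):

    # e.g. /etc/consul.d/node-exporter.json, then reload the agent
    cat > /etc/consul.d/node-exporter.json <<'EOF'
    {
      "service": {
        "name": "node-exporter",
        "port": 9100
      }
    }
    EOF
    consul reload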

    The prometheus configuration to pull all this is pretty simple actually, once it works:

    global:
      scrape_interval: 10s
      scrape_timeout: 3s
      evaluation_interval: 10s
    scrape_configs:
      - job_name: consul
        consul_sd_configs:
          - server: consul.internal.net:8500
            services: [prom-pushgateway, cadvisor, node-exporter]
        relabel_configs:
          - source_labels:  ['__meta_consul_node']
            regex:          '^(.*)$'
            target_label:   node
            replacement:    '$1'
          - source_labels:  ['__meta_consul_service']
            regex:          '^(.*)$'
            target_label:   job
            replacement:    '$1'
        metric_relabel_configs:
          - source_labels:  ['id']
            regex:          '/([^/]+)/.*'
            target_label:   item_type
            replacement:    '$1'
          - source_labels:  ['id']
            regex:          '/[^/]+/(.*)'
            target_label:   item
            replacement:    '$1'
          - source_labels:  ['id']
            regex:          '/docker/(.{8}).*'
            target_label:   item
            replacement:    '$1'

    Some caveats:

    • Prometheus will not tell you why a relabeling does not work. It will just not do it.
    • Prometheus will not tell you that a regex is faulty on SIGHUP, only on restart.
    • The difference between “metric_relabel_configs” and “relabel_configs” seems to be that the former must be applied to scraped metrics, while the latter can only be applied to metrics which are “already present”, which seems to be only the “__*”-meta labels (for example “__meta_consul_service”)

    Then it works like a charm.

    And the final bonbon: Directly after I had it running I discovered a problem:

    [Screenshot: the problem the new monitoring uncovered right away]

    Yippieh 😀

    #consul, #monitoring, #prometheus, #puppet

     