Recent Updates

  • penguin 14:58 on 2017-04-20 Permalink | Reply
    Tags: fonts

    Linux font rendering sucks, a.k.a. “Where is Boohomil?” 

    For some reason, the maintainer behind the “*-infinality” packages in Arch Linux “has gone missing” for a while.

    Why is that important to me? Because infinality is a patch set for a bunch of font and rendering packages which makes fonts under Linux look SO much better than the default setup. (Yes, there are still a couple of things in which Linux just absolutely cannot compete with Mac and/or Windows, and font rendering is one of them. Ubuntu does a reasonable job here; every other distro just sucks.)

    Except when you were using infinality. And now it’s defunct.

    Anyway, after experiencing the unbelievably ugly phenomenon described here, I tried this guide, and it seems to fix it.

     
  • penguin 16:14 on 2017-04-13 Permalink | Reply
    Tags: elastic beanstalk

    Elastic Beanstalk with Docker using Terraform 

    I have just been investigating AWS Elastic Beanstalk, and I want to use Terraform for it. This is what I’ve done, and how I’ve got it running. I basically write this because the docs for this are either super-long (and still missing critical points) or super-short (and also missing critical points), at least that’s what I’ve found.

    This should get you up and running in very little time. You can also get all the code from a demo github repository.

    General principles

    The Architectural Overview is a good page to read to get an idea of what you’re about to do. It’s not that long.

    In short, Elastic Beanstalk runs a version of an application in an environment. So the process is: declare an application, define a couple of versions and environments, and then combine one specific version with one specific environment of an app to create an actually running deployment.

    The environment is just a set of hosts configured in a special way (autoscaling & triggers, subnets, roles, etc.), whereas the application version is the info about how to deploy the containers on that environment (ports, env variables, etc.). Naturally, you think of having a DEV environment which runs “latest”, and a PROD environment which runs “stable” or so. Go crazy.

    Prerequisites & Preparation

    For the example here you need a couple of things & facts:

    • An AWS account
    • In that account, you need:
      • an S3 bucket to save your app versions
      • a VPC ID
      • subnet IDs for the instance networks
      • an IAM role for the hosts
      • an IAM service role for Elastic Beanstalk (see the appendix at the bottom for how to create that)
    • Terraform 🙂
    • The aws command line client

    Get started

    The files in the repository have way more parameters, but this is the basic set which should get you running (I started with this, then added all that other stuff). The main.tf file below will create the application and an environment associated with it.
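
    A minimal sketch of what such a main.tf can look like (the repository has the full version with more parameters; region, IDs, role names and the solution stack name here are placeholders you have to replace, see the final notes for the stack name):

      # main.tf -- minimal sketch; IDs, names and the solution stack are placeholders
      provider "aws" {
        region = "eu-central-1"
      }

      resource "aws_elastic_beanstalk_application" "app" {
        name        = "my-app"
        description = "Single-container Docker demo"
      }

      resource "aws_elastic_beanstalk_environment" "dev" {
        name                = "my-app-dev"
        application         = "${aws_elastic_beanstalk_application.app.name}"
        solution_stack_name = "64bit Amazon Linux 2016.09 v2.5.2 running Docker 1.12.6"

        setting {
          namespace = "aws:ec2:vpc"
          name      = "VPCId"
          value     = "vpc-xxxxxxxx"
        }

        setting {
          namespace = "aws:ec2:vpc"
          name      = "Subnets"
          value     = "subnet-xxxxxxxx"
        }

        setting {
          namespace = "aws:autoscaling:launchconfiguration"
          name      = "IamInstanceProfile"
          value     = "my-ec2-instance-profile"
        }

        setting {
          namespace = "aws:elasticbeanstalk:environment"
          name      = "ServiceRole"
          value     = "aws-elasticbeanstalk-service-role"
        }
      }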

    If you run this, at least one host and one ELB should appear in the defined subnets. Still, this is an empty environment; there’s no app running in it. If you ask yourself, “where’s the version he talked about?” – well, it’s not in there. We didn’t create one yet. This is just the very basic platform you need to run a version of an app.

    In my source repo you can now just use the script app_config_create_and_upload.sh, followed by deploy.sh. You should be able to figure out how to use them, and they should work out of the box. But we’re here to explain, so this is what happens behind the scenes if you do this:

    1. create a file “Dockerrun.aws.json” with the information about the service (Docker image, etc.) to deploy
    2. upload that file into an S3 bucket, packed into a ZIP file (see “final notes” below; a rough shell sketch of these steps follows this list)
    3. tell Elastic Beanstalk to create a new app version using the info from that file (on S3)
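
    A rough sketch of those three steps (not the actual script from the repo; bucket name, version label and file names are placeholders, my script derives them from its parameters):

      #!/usr/bin/env bash
      # sketch of app_config_create_and_upload.sh -- names and version are placeholders
      set -e

      VERSION="0.0.1"
      BUCKET="my-eb-app-versions"   # the S3 bucket from the prerequisites

      # 1. Dockerrun.aws.json is created/templated here (see the file further below)

      # 2. pack it into a ZIP and upload it to S3
      zip "app_${VERSION}.zip" Dockerrun.aws.json
      aws s3 cp "app_${VERSION}.zip" "s3://${BUCKET}/app_${VERSION}.zip"

      # 3. register the new application version -- in my setup this happens by
      #    generating an app_version_${VERSION}.tf file (see the Terraform below)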

    That obviously was app_config_create_and_upload.sh. The next script, deploy.sh, does this:

    1. tell Elastic Beanstalk to actually deploy that configuration, using the AWS CLI.

    This is the Dockerrun.aws.json file which describes our single-container test application:
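
    Roughly like this (the image name and port here are placeholders, not the actual test app; version 1 of the file format is for single-container deployments):

      {
        "AWSEBDockerrunVersion": "1",
        "Image": {
          "Name": "nginx:latest",
          "Update": "true"
        },
        "Ports": [
          {
            "ContainerPort": "80"
          }
        ]
      }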

    See “final notes” for the “ContainerPort” directive.

    I also guess you know how to upload a file to S3, so I’ll skip that. If not, look in the script. The Terraform declaration to add the version to Elastic Beanstalk looks like this (if you used my script, a file called app_version_<VERSION>.tf was created for you automatically with pretty much this content):
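
    A sketch of that declaration (version label, bucket and key are placeholders and have to match the upload above):

      # app_version_0-0-1.tf -- one of these per version
      resource "aws_elastic_beanstalk_application_version" "v0-0-1" {
        name        = "0.0.1"
        application = "${aws_elastic_beanstalk_application.app.name}"
        bucket      = "my-eb-app-versions"
        key         = "app_0.0.1.zip"
      }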

    Finally, deploying this using the AWS cli:
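
    Which is essentially one call (environment name and version label are placeholders):

      # point the environment at the new application version
      aws elasticbeanstalk update-environment \
          --environment-name my-app-dev \
          --version-label 0.0.1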

    If all was done correctly, this should be it, and you should be able to access your app now under your configured address.

    Wrap up & reasoning

    My repo works, at least for me (I hope for you as well). I have not yet figured out the autoscaling part, simply because I didn’t have the time; I will catch up on that in a second blog post once I have. First tests gave pretty weird results 🙂 .

    The reason why I did this (when I have Rancher available to me) is the auto-scaling and the host management: this way I don’t need to manage yet more hosts, Docker versions and Rancher deployments just to deploy a super-simple, CPU-intensive, scaling production workload which relies on very stable (even pretty conservative) components. Also, I learned something.

    Finally, after reading a lot of postings and way too many AWS docs, I am surprised how easy this thing actually is. It certainly doesn’t look that way when you start reading up on it. I tried to catch the essence of the whole process in this blog post.

    Final notes & troubleshooting

    1. I have no idea what the aws_elastic_beanstalk_configuration_template Terraform resource is for. I would like to understand it, but the documentation is rather … sparse.
    2. The solution stack name has semantic meaning. You must set something that AWS understands. This can be found out by using the following command:
      $ aws elasticbeanstalk list-available-solution-stacks 
      … or in the AWS documentation. Whatever you prefer.
    3. If you don’t specify a security group (aws:autoscaling:launchconfiguration – “SecurityGroups”), one will be created for you automatically. That might not be convenient, because it means that on “terraform destroy” this group might not be destroyed automatically (which is just a guess, I didn’t test this).
    4. The same goes for the auto scaling group scaling rules.
    5. When trying the minimal example, be extra careful if you can’t access the service after everything is up. The standard settings seem to be: same subnet for ELB and hosts (obviously), and a public ELB (with a public IPv4 address). Now, placing a public-facing ELB into an internal-only subnet does not work, right? 🙂
    6. The ZIP file: according to the docs you can also upload just the JSON file (or the Dockerfile, if you build the container in the process) to S3. But the docs are not extremely clear, and Terraform did not mention this. So I am using ZIPs, which works just fine.
    7. The ContainerPort is always the port the application listens on inside the container; it is not the port which is opened to the outside. That always seems to be 80 (at least for single-container deployments).

    Appendix I: Create ServiceRole IAM role

    For some reason this did not seem to be necessary on the first test run. On all subsequent runs it was, though. This is how to create it. Sorry that I couldn’t figure out how to do this with Terraform.

    • open AWS IAM console
    • click “Create new role”
    • Step 1 – select role type: choose “AWS service role”, and under that “AWS Elastic Beanstalk”
    • Step 2 – establish trust: is skipped by the wizard after this
    • Step 3 – Attach policy: Check both policies in the table (should be “AWSElasticBeanstalkEnhancedHealth”, and “AWSElasticBeanstalkService”)
    • Step 4 – Set role name and review: Enter a role name (e.g. “aws-elasticbeanstalk-service-role”), and hit “Create role”

    Now you can use (if you chose that name) “aws-elasticbeanstalk-service-role” as your ServiceRole parameter.


     
  • penguin 19:03 on 2017-04-10 Permalink | Reply
    Tags:   

    The state of things – management 

    Yep, this is the challenge that came with converting from a freelancer (which I still prefer as a working model) to a “normal” employed person. I am a “manager” now. Well, I just have team responsibility. And it is crazy. This brings *so* many challenges which are so amazing (cause they’re new) and exhausting (cause I need to deal with them in a completely different way).

    Here they are.

    Challenge one – team spirit. That is something I am most happy with, because our spirit is pretty high, I think. And I take credit for that, shamelessly, but it is also something which is deeply connected to my “leading persona”, whatever that is. And I think that one is far from perfect.

    Challenge two – training the team. I think I know some stuff, and I keep in contact with things. And I want to learn new things. Now I have to deal with maintenance shit all day, and yet I want to try out new toys and stuff. This is quite complicated: on the technical side I now have to think about a way in which people can learn the most, while making sure a fuckup cannot break everything (which it did – once, and badly). Also I have to ensure that people learn, and have fun doing it. Which is surprisingly hard, but also surprisingly cool when you see it actually working.

    Challenge three – employee interviews. I suck at them, period. I have started to ask technical questions now, because before I was under the assumption that every applicant can do the job, and it’s just about how he fits in. Bad mistake. Now I am learning that personal markers are also important. What is the next thing that I need in the team, personality-wise? Am I sure of this? And does the next candidate have it? Cra-zy.

    Challenge four – managing the big picture. Or simply put – how do I make sure that the team is always up to date on priorities, talks to people enough, and has a good sense of when something should be “done”? And a good sense of driving it there, btw. Which is pretty much the same as

    challenge five – processes. Which process do we choose? We tried Scrum, it didn’t really work that well, so we changed it after a couple of iterations. Now we try (some sort of) Kanban, and already I am seeing transparency risks, and I need metrics. Also, you often read that Kanban needs analogue ticket boards (paper, wall) – not some fancy JIRA tooling shit or so. Now what if the company policy is “log your time in tickets”? And, most importantly – how do I manage myself? And the team with it? And how do I prioritize features when they hit me like “oh, in two weeks this must be done, and sorry, this came in just today”?

    For me all of this is hard. I guess I am getting there, but it is a great challenge, and I love it. But soon I will need a break. And I hope some stuff is done by then.

    Update 2017-04-14: changed challenges 1&2 a bit.

     
  • penguin 18:54 on 2017-04-10 Permalink | Reply
    Tags: current state

    The state of things – technology 

    It’s been a while. I am currently pretty burned out, and the work keeps piling up. This is bad. But let’s talk about some challenges right now. So this is an overview of our …

    Technical state

    We’re still using Rancher. Rancher is super cool, but has the annoying habit of completely crashing about once every two months, leading to a full cluster outage of anything between one and three hours. Usually about two. I still love it, but we have matured in our needs, and maybe Rancher needs time to catch up (cause our needs are sometimes a bit “special”). But the Rancher team is making great progress in the right directions, and I am fully confident that Rancher will take its place in the orchestration space. Still, we’re thinking about moving to K8s, simply because so much is already there.

    We’re using Prometheus for monitoring now. Rocks. Period.

    We’re still using AWS. Many of our customers would prefer Azure Germany. If you didn’t know – Azure in Germany advertises a “Data Custodian” or “Data Trustee” model; not sure how to translate it, and too lazy to look it up. It means that the data centers in Germany run the true Azure stack, but are actually fully operated by Deutsche Telekom.

    Advantage, you ask? Easy. When the DOJ sends one of those super secure letters to Microsoft saying “give me your data”, Microsoft simply forwards it to Deutsche Telekom. They will probably frame it on a wall somewhere, but I don’t think they will actually give out the data. Problem solved. (We all hope :))

    We are almost done with setting the whole cloud up using Terraform. It became a really mature project over the last year, and we are super happy with the progress it’s making. Also, with Azure in the works for us (some customers …), this is a cool way to just manage it all with the same tooling. Infrastructure as code, eh.

    We are trying to migrate away from TeamCity to Jenkins. We haven’t succeeded yet. Too little manpower.

    But the more interesting thing is in the next post, for me at least 😉

     
  • penguin 10:52 on 2017-01-21 Permalink | Reply
    Tags: blog, code, wordpress   

    Syntax highlighting with wordpress 

    This is just a test for syntax highlighting. Which I really really really wanted to have. Even if it’s WordPress and not something cool like hugo.

    So, let’s try:

    That doesn’t look so bad, right?

    How to do this

    • install the Crayon Syntax Highlighter plugin
    • when writing posts, your toolbar will have a new icon looking like this: <>
    • press it, and an “add code” dialog will open
    • do your thing
    • save
    • done

    Like.

    Dislikes

    • I don’t know which JS engine this thing uses (if any public one in particular)
    • I like highlight.js

    (So, none really)

    Alternative plugins (untested)

    To be honest, the only real contender in installation base and features seems to be Syntax Highlighter Evolved. I have not tried it, but if you don’t like Crayon, that looks like the one to go for.

     
  • penguin 14:45 on 2017-01-17 Permalink | Reply
    Tags: fix, pycharm

    PyCharm, Arch Linux & Python 3.6 

    Love Python. Love PyCharm. Love Arch Linux.

    Unfortunately Arch sneakily updated Python to 3.6. Cool, new version … but hey, why don’t my debug runs in PyCharm work any more??

    Yup, pretty confusing. It seems PyCharm is unable to find the shared Python 3.5 library. Well. After some cursing, it turns out the solution is pretty simple (if you know what to do):

    • get pyenv
    • use pyenv to install Python 3.5.2, but with the --enable-shared option set
    • use this python version for PyCharm projects (it does not matter if it’s in a virtualenv or not)

    Like this:
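
    A sketch of those steps (the exact paths are up to you; in PyCharm you then just select the resulting interpreter in the project settings):

      # with pyenv installed: build a Python 3.5.2 that also produces libpython
      # as a shared library
      env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.5.2

      # the resulting interpreter -- use it directly or put a virtualenv on top
      ~/.pyenv/versions/3.5.2/bin/python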

    That solved it for me 🙂

     
  • penguin 13:31 on 2017-01-12 Permalink | Reply
    Tags: logging, ops

    Logs with docker and logstash 

    It would be nice to have all container logs from a docker cluster sent to … let’s say, an ELK stack. Right?

    Right.

    So we did:

    • on each host in the cluster, we use the GELF log driver to send all logs to a logstash instance (sketched below)
    • the logstash instance clones each request using type “ELK”
    • to the “ELK” clone, it adds the token for the external ELK service
    • the “ELK” clone goes out to the external ELK cluster
    • the original event goes to S3.
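
    For the first bullet, the relevant bit on each host is just the Docker log driver configuration; a sketch, assuming our logstash instance listens for GELF on UDP port 12201 (hostname and port are placeholders):

      # daemon-wide default, when starting the Docker daemon ...
      dockerd --log-driver=gelf --log-opt gelf-address=udp://logstash.internal:12201

      # ... or per container
      docker run --log-driver=gelf \
          --log-opt gelf-address=udp://logstash.internal:12201 my-image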

    Here’s how.

    (More …)

     
  • penguin 10:49 on 2017-01-09 Permalink | Reply
    Tags:

    Logstash, clone filter & add_field mysteries 

    That’s a really great piece of documentation, the logstash clone filter docs. This does not work:
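
    Roughly this (the token value is a placeholder, of course):

      filter {
        # looks plausible, does nothing: without "clones" no clone is created,
        # so the token field is never added either
        clone {
          add_field => { "token" => "OUR-ELK-TOKEN" }
        }
      }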

    Why? Because the clone filter will not clone anything. And the documentation is super unclear on this. If you know it, you can read it – if you don’t know this, you’ll … google.

    For it to actually clone anything you have to specify the ‘clones => [“one”, …]’ parameter. Then it will clone, and add the token field as expected. Like this:
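
    Again with a placeholder token:

      filter {
        clone {
          # creates one copy per entry in "clones", with its "type" set to that
          # name; add_field is applied to the clones only, not to the original
          clones    => ["ELK"]
          add_field => { "token" => "OUR-ELK-TOKEN" }
        }
      }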

    The reason why I don’t just add the field to every event is that this is the access token for our externally hosted ELK service. It should only be there on the external path, and not be put into S3 in parallel.

     
  • penguin 16:55 on 2016-07-06 Permalink | Reply
    Tags:   

    Quick puppet debugging snippet for Atom 

    Not sure how I could have lived without this until now (I had it before in Sublime, never bothered porting it, stooooopid as I realize now 😉 ):
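
    A minimal sketch of such a snippet for ~/.atom/snippets.cson (the scope assumes the language-puppet package; prefix and body are to taste):

      # ~/.atom/snippets.cson
      '.source.puppet':
        'Debug notify':
          'prefix': 'pdebug'
          'body': 'notify { "DEBUG -- reached this point": }'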

     
  • penguin 16:22 on 2016-06-28 Permalink | Reply
    Tags:

    Testing logstash configs with Docker 

    Now this is really not rocket science, but since I might do this more often, I don’t want to google every time.

    Prepare your directories
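
    For example (the directory names are just my convention):

      # a scratch directory whose config/ subdirectory gets mounted into the container
      mkdir -p logstash-test/config
      cd logstash-test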

    Prepare your logstash config
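
    A minimal config that just echoes stdin back is enough to check syntax and play with filters (the file name is my choice):

      # config/logstash.conf -- read from stdin, dump parsed events to stdout
      input {
        stdin { }
      }
      output {
        stdout { codec => rubydebug }
      }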

    Run logstash
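
    Using the official logstash image (the tag is whatever version you want to test against):

      # mount the config read-only and start logstash with it;
      # type a line, see the parsed event, Ctrl-D to stop
      docker run --rm -it \
          -v "$PWD/config:/config:ro" \
          logstash:2.4 \
          logstash -f /config/logstash.conf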

    Done.

    Done. 🙂

     