Recent Updates

  • penguin 12:18 on 2018-08-26
    Tags: postgres

    Databases with dokku 

    This is part 2 of a couple of blog posts about dokku, an amazing little Heroku clone.

    In the previous post I showed how to set up Dokku on a DigitalOcean droplet and deployed a little hello-world container with a single git push. The reason I wanted dokku, though, was that I needed a database. As said – hosting comes cheap, but databases usually come either expensive, with limited flexibility, or with annoying configuration effort.

    Dokku is the perfect middle ground. Let’s see why.

    For me the reason was the existing postgres plugin, which you can simply install and use. The whole process is incredibly easy, takes about two commands, and looks like this (let’s assume our “hello world” container uses a database):
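
    (A sketch – the install line is from the dokku-postgres README, and hello-world-db is just an example name.)

        # on the dokku host: install the postgres plugin (once)
        sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres

        # create a database container
        dokku postgres:create hello-world-db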

    That’s it, again.

    This creates a database container with postgres 10.2. You can influence a lot of the behavior by using environment variables – see the plugin’s GitHub page for more info.

    Then you link the container to the running app:
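
    (Assuming the app from the previous post is called hello-world.)

        dokku postgres:link hello-world-db hello-world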

    And done.

    What happened? You now have the environment variable $DATABASE_URL set in the hello-world app – that’s why the restart was necessary (you can postpone it if you want, but you probably need the database now, right?).

    Let’s check:
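
    (For instance by looking at the app’s environment – the exact output will differ.)

        dokku config hello-world
        # the output should now contain DATABASE_URL pointing at the new container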

    That’s it. Super easy. Now if you’re using Django, you could use kennethreitz/dj-database-url to automatically parse and use it, and you’re done. (Probably every framework has something similar, so just have a look).
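
    For Django that boils down to something like this in your settings.py (a sketch, assuming dj-database-url is installed):

        # settings.py
        import dj_database_url

        # parses $DATABASE_URL into a Django DATABASES entry
        DATABASES = {
            "default": dj_database_url.config(conn_max_age=600),
        }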

     
  • penguin 18:10 on 2018-08-25
    Tags: digitalocean, heroku, howto

    Build your own PaaS with Dokku 

    I was looking for some “play” deployment method for a couple of things I want to try out. Most of them require a database. And it should be cheap, because there is no load on them and they don’t earn any money, so I want to spend basically no money if possible. The usual suspects are too expensive – AWS, Heroku, etc.

    So I looked around and found Dokku.

    Dokku is a set of – hang on – shell scripts which basically emulates Heroku on a machine of your own. It’s integrated with DigitalOcean droplets out of the box, if you want that. And the whole thing is 5 € / month, which is perfect. It also supports Dockerfile-based deployment, so you do git push and everything just works.

    It’s amazing.

    This is how you get started. But before you can get started, you need a domain you control, either on AWS or any other hoster – this is for routing traffic to your deployments later. You also need an SSH key pair, of which the public key will be uploaded. Once you have both you can …

    1. create a Digital Ocean account, and …
    2. add your SSH public key to your account, and …
    3. in that account, create a new droplet with a “Dokku” image preinstalled.
    4. wait until the droplet has finished provisioning.

    While the droplet is being created, you can also create a project locally to test it later:
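
    Something along these lines – the base image is just an example, any image that serves a web page on port 80 will do:

        mkdir hello-world && cd hello-world
        git init

        # the image is just an example – anything serving a page on port 80 works
        printf 'FROM nginxdemos/hello\nEXPOSE 80\n' > Dockerfile

        git add Dockerfile
        git commit -m "hello world test project"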

    In this little test project we only create a Dockerfile from a hello-world image which displays “Hello world” in a browser, so we can verify it worked.

    Once the droplet is done, you can start setting up your personal little PaaS. First, you have to configure your DNS. We will set up a wildcard entry for our deployments, and a non-wildcard entry for git. Let’s assume your domain is for-myself.com, then you would add …

    • my-paas.for-myself.com, type “A” (or “AAAA” if you use IPv6), pointing to your droplet IP
    • *.my-paas.for-myself.com just the same

    Then you SSH into your droplet, and create your dokku project. (This is something you have to do for every project). All you have to do for this is:
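
    (Sticking with the hello-world example; the droplet IP is a placeholder.)

        ssh root@<droplet-ip>
        dokku apps:create hello-world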

    Done.

    Now you configure a git remote URL for your project, so you can push to it:
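
    (In the local hello-world project; the remote name dokku is just a convention.)

        git remote add dokku dokku@my-paas.for-myself.com:hello-world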

    Again – done. If you push your project now (assuming DNS is already set), everything should happen automagically:
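
    (Dokku builds the Dockerfile and deploys it on the server side.)

        git push dokku master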

    And if you open your URL now (which is hello-world.my-paas.for-myself.com) you should see the hello-world page in your browser.

    Now, for 5 € / month you get:

    • A heroku-like, no-nonsense, fully automated, git-based deployment platform
    • A server which you control (and have to maintain, okay, but on which you can deploy …)
    • A database (or many of them – dokku provides great integration for databases btw; more on that in another post)
    • Publicly reachable deployments (for customers, testing, whatever)
    • Let’s Encrypt certificates (dokku provides support for these as well, again more in a later post)
    • And for 1 € more (it’s always 20 % of the base price) you get backups of your system

    That’s absolutely incredible. Oh, and did I mention that the maintainers are not only friendly, but also super responsive and incredibly helpful on Slack?

     
  • penguin 09:50 on 2018-05-25
    Tags: helm, kubernetes, rbac   

    Helm in a kops cluster with RBAC 

    I created a K8S cluster on AWS with kops.

    I ran helm init to install tiller in the cluster.

    I ran helm list  to see if it worked.

    I got a permissions error instead.

    That sucked. And google proved … reluctant. What I could figure out is:

    Causes

    • kops sets up the cluster with RBAC enabled (which is good)
    • helm (well, tiller) uses a standard role for doing things (which might be ok, at least it was with my stackpoint cluster), but in that case (for whatever reason) it did not have sufficient privileges
    • so we need to prepare some cluster admin roles for helm to use

    Fixes

    Just do exactly as it says in the helm docs 🙂 :

    • apply the RBAC YAML file which creates the tiller service account in kube-system and binds it to the cluster-admin role (see the sketch below),
    • install helm with: helm init --service-account tiller
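
    That RBAC file looks roughly like this (the file name is arbitrary):

        # rbac-tiller.yaml
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: tiller
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: tiller
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
          - kind: ServiceAccount
            name: tiller
            namespace: kube-system

    Then:

        kubectl apply -f rbac-tiller.yaml
        helm init --service-account tiller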

    Is that secure? Not so much. With helm you can still do anything at all to the cluster. I might get to this in a later post.

     
  • penguin 18:10 on 2018-04-24
    Tags: gitlab

    GitLab spot runners & Puppet 

    We are on AWS with GitLab. For ease of use, and because our build hosts degenerate for some reason (network issues), we decided to use spot instances for our GitLab runners.

    The journey was anything but easy. Here’s why.

    GitLab Runner configuration complaints

    First: The process

    To configure GitLab runner, you have to …

    • install GitLab,
    • write down the runner registration token,
    • start a runner,
    • manually run a registration command using the above token (a sketch follows below).
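
    Such a registration call looks roughly like this – URL, token, and image are placeholders:

        gitlab-runner register \
          --non-interactive \
          --url https://gitlab.example.com/ \
          --registration-token "$REGISTRATION_TOKEN" \
          --executor docker \
          --docker-image alpine:latest \
          --description my-spot-runner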

    That registration command will then modify the config file of the runner. That is important because you can’t just write a static, read-only config file and start the runner. This is not possible for two reasons:

    • when you execute the registration command, the runner wants to modify the config file to add yet another token (its “personal” token, not the general registration secret), so it must not be read-only
    • the runner has to be registered, so just starting it will do … nothing.

    That is in my eyes a huge design flaw, which undoubtedly has its reasons, but it still – sorry – sucks IMHO.

    Second: The configuration

    You can configure pretty much everything in the config file. But once the runner registers, the registration process for some reason appends a completely new config to any existing config file, so that … the state is weird. It works, but it looks fucked, and feels fucked.

    You can also set all configuration file entries using the gitlab-runner register  command. Well, not all: The global parameters (like, for example, log_level  or concurrent ) cannot be set. Those have to be in a pre-existing config file, so you need both – the file and the registration command, which will look super ugly in a very short time.
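
    The pre-existing file then contains only those globals – something like this (the values are examples):

        # /etc/gitlab-runner/config.toml – only the global part
        concurrent = 4
        check_interval = 0
        log_level = "info"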

    Especially if you still use Puppet to manage the runners, because then you can’t simply restart the runner once the config file changes – and it will always change, for the reasons above.

    Third: The AWS permission documentation

    Another thing is that the list of AWS permissions the runner needs in order to create spot instances is nowhere to be found. Hint: EC2FullAccess and S3FullAccess are not enough. We are using admin permissions right now, until we figure it out. Not nice.

    Our solution

    For this we’re still using Puppet (our K8S migration is still ongoing), and our solution so far looks like this:

    • Create a config file with puppet next to the designated config file location,
      • containing only global parameters.
      • The file has a puppet hook which triggers an exec that deletes the “final” config file if the puppet-created one has changed.
    • Start the GitLab runner.
    • Perform a “docker exec” which registers the runner in GitLab.
      • The “unless” contains a check that skips execution if the final config file is present.
      • The register command sets all configuration values except the global ones. As said above, the command appends all non-global config settings to any existing config file.

    Some code
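
    A heavily simplified sketch of the Puppet side – paths, resource names, and the copy of the global file into place are assumptions, and starting the runner container itself is omitted:

        # global-only config, maintained by puppet next to the real one
        file { '/etc/gitlab-runner/config-global.toml':
          content => "concurrent = 4\nlog_level = \"info\"\n",
          notify  => Exec['reset-runner-config'],
        }

        # throw away the "final" config whenever the global part changes
        exec { 'reset-runner-config':
          command     => '/bin/rm -f /etc/gitlab-runner/config.toml',
          refreshonly => true,
        }

        # register the runner inside the already running container;
        # skipped as long as the final config file is present
        exec { 'register-gitlab-runner':
          command  => '/bin/cp /etc/gitlab-runner/config-global.toml /etc/gitlab-runner/config.toml && /usr/bin/docker exec gitlab-runner gitlab-runner register --non-interactive --url https://gitlab.example.com/ --registration-token SECRET --executor docker+machine',
          unless   => '/usr/bin/test -f /etc/gitlab-runner/config.toml',
          provider => shell,
          require  => [File['/etc/gitlab-runner/config-global.toml'], Exec['reset-runner-config']],
        }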

    Does this look ugly? You bet.

    Should this be a puppet module? Most probably.

    Did I foresee this? Nope.

    Am I completely fed up? Yes.

    Is this stuff I want to do? No.

    Does it work?

    Yes (at least … 🙂 )

    Remarks

    If you wonder what all those create::THING entries are – it’s this:
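
    Roughly this pattern (a sketch – our real defined types and hiera keys differ):

        # create::exec – thin wrapper so resources can be defined in hiera YAML
        class create::exec {
          $entries = lookup('create::exec', Hash, 'deep', {})
          create_resources('exec', $entries)
        }

    … and then, in the config YAMLs:

        create::exec:
          register-my-runner:
            command: /usr/local/bin/register-runner.sh
            unless: /usr/bin/test -f /etc/gitlab-runner/config.toml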

    We have an awful lot of those, because then we can do a lot of stuff in the config YAMLs and don’t need to touch Puppet DSL code.

     
  • penguin 18:55 on 2018-04-18
    Tags: osx

    Mac three finger gestures in browsers 

    Well, I *love* my three-finger-jump-to-top gesture in Firefox. It was gone.

    It took me about 2h to get it back by googling. And I blog so I don’t forget.

    Here’s where I should look next time straight away: System Preferences – Trackpad – rightmost tab – first setting.

     
  • penguin 08:26 on 2018-04-18

    Arch followup actions 

    Once you’ve installed Arch Linux, a couple of things are … nice.

    Packages

     

    Configurations

    For NetworkManager, I prefer dnsmasq as the resolver of choice, especially when using VPN connections:
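
    (A one-line drop-in; the file name under conf.d is arbitrary.)

        # /etc/NetworkManager/conf.d/dns.conf
        [main]
        dns=dnsmasq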

    Enable services
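
    For example (which units you need depends on what you installed):

        systemctl enable --now NetworkManager.service
        systemctl enable --now sshd.service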

    To-be-updated

    … from time to time 😉

     
  • penguin 18:27 on 2018-04-12 Permalink | Reply
    Tags: , vscode   

    Python & Visual Studio code 

    The official python plugin claims that the interpreters of Pipenv are automatically found.

    They are not.

    At least not on my machine.

    Here’s how you set them.
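
    (At the time of writing that means python.pythonPath in the workspace settings; the path below is an example – pipenv --py prints the right one for your project.)

        // .vscode/settings.json
        {
            "python.pythonPath": "/home/me/.virtualenvs/myproject-AbC123dE/bin/python"
        }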

     
  • penguin 13:18 on 2018-03-28
    Tags: yubikey

    Arch linux + yubikeys 

    To use “ykman” for arch linux, you do this:
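
    (Roughly this – the package name and the pcscd requirement were the parts I had to google.)

        sudo pacman -S yubikey-manager
        sudo systemctl enable --now pcscd.service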

    Sounds easy? Still had to google the things.

     
  • penguin 08:07 on 2018-02-23
    Tags: font

    Ugly ligatures in Linux 

    Unfortunately boohomil went off grid. I still haven’t replicated his config fully. And it still sucks.

    One more step was fixing those super-ugly ligatures in Linux. Works at least in LibreOffice (just restart the app to see changes).
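
    One way to do that is a fontconfig snippet which switches the ligature features off – a sketch; drop it somewhere under ~/.config/fontconfig/conf.d/:

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
          <match target="font">
            <edit name="fontfeatures" mode="append">
              <string>liga off</string>
              <string>dlig off</string>
            </edit>
          </match>
        </fontconfig>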

     
  • penguin 08:55 on 2018-02-01
    Tags: commandline

    crontab and nano 

    Ever used update-alternatives to switch everything to vim and … crontab -e still used nano?

    Well, I had this. I found the answer:
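
    The gist: crontab -e honours $VISUAL / $EDITOR first, and on Debian/Ubuntu it otherwise falls back to sensible-editor, which caches your choice in ~/.selected_editor. So either of these helps:

        # re-run the editor selection (rewrites ~/.selected_editor)
        select-editor

        # or set the environment explicitly, e.g. in ~/.bashrc
        export VISUAL=vim
        export EDITOR=vim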

     

     