Django, psql & “permission denied” on migrate

I got this error:

psycopg2.ProgrammingError: permission denied for relation django_migrations

… when I ran “python manage.py migrate”. This post had the solution. In short: you have to change the owner of the tables to the database user specified in the Django configuration.

This is how my script looks:

#!/usr/bin/env bash
echo "ALTER TABLE public.django_admin_log OWNER TO <new_owner>;" | psql -U <current_owner> <database>
# ...
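The one-liner above only fixes a single table, but in practice every Django-owned table needs the new owner. A hedged sketch of a generalized version, which generates one ALTER TABLE statement per table and pipes the lot into psql in a single call. The table list, the owner name “djangouser” and the psql parameters are made-up placeholders, adjust them to your setup:

```shell
#!/usr/bin/env bash
# Hypothetical generalisation of the script above: emit one ALTER TABLE
# statement per table, then pipe everything into psql at once.
# "djangouser" and the table list are placeholders.
NEW_OWNER="djangouser"
TABLES="django_migrations django_admin_log django_content_type auth_user"

generate_alter_statements() {
  for tbl in $TABLES; do
    echo "ALTER TABLE public.${tbl} OWNER TO ${NEW_OWNER};"
  done
}

# Pipe it into psql as the current table owner, e.g.:
#   generate_alter_statements | psql -U <current_owner> <database>
generate_alter_statements
```

Instead of hardcoding the table list you could also query it, e.g. with psql -qAt -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public';".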



Docker registry, S3 and permissions

There are a couple of bazillion blog posts saying “yah just did my docker registry on S3”.

It’s not so easy, though. Because what if you want to limit access to a certain IAM user? Yup, you need to go deep (well, a bit) into Amazon’s policy language. Which sounds simple, but isn’t.

I got “HTTP 500” errors from the docker registry when I first deployed. My configuration, which was wrong, looked like this:

"RegistryIAMUser" : {
  "Type" : "AWS::IAM::User"
},
"RegistryIAMUserAccessKey" : {
  "Type" : "AWS::IAM::AccessKey",
  "Properties" : { "UserName" : { "Ref" : "RegistryIAMUser" } }
},
"Bucket" : {
  "Type" : "AWS::S3::Bucket",
  "Properties" : { "BucketName" : "flypenguin.docker-registry" }
},
"RegistryPrivateAccess" : {
  "Type" : "AWS::S3::BucketPolicy",
  "Properties" : {
    "Bucket" : { "Ref" : "Bucket" },
    "PolicyDocument" : {
      "Statement" : [{
        "Effect" : "Allow",
        "Action" : [ "s3:*" ],
        "Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" }, "/*" ]]},
        "Principal" : { "AWS" : { "Fn::GetAtt" : ["RegistryIAMUser", "Arn"] } }
      }]
    }
  }
}

Since this didn’t work really well, I googled my a** off and found a little post which used a user policy instead of a bucket policy (which is basically the same thing from the other direction), but did one thing differently. My working configuration is now … (let’s see if you can spot the difference):

[... same as above ...]

"UserPolicyRegistryPrivateAccess" : {
  "Type" : "AWS::IAM::Policy",
  "Properties" : {
    "PolicyName" : "AccessToDockerBucket",
    "Users" : [ { "Ref" : "RegistryIAMUser" } ],
    "PolicyDocument" : {
      "Version" : "2012-10-17",
      "Statement" : [{
        "Effect" : "Allow",
        "Action" : [ "s3:*" ],
        "Resource" : [
          { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" }, "/*" ]]},
          { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "Bucket" } ]]}
        ]
      }]
    }
  }
}

See it?

It’s the two resources now. You need not only “resource/*” as a target, you also need “resource” itself, because bucket-level operations like s3:ListBucket act on the bucket ARN, while object-level operations act on “resource/*”. Which makes sense once you know it and think about it. If you don’t … it’s a bit annoying. And time-consuming.
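Spelled out, the effective IAM policy document therefore has to list both ARNs. A minimal sketch of the resulting policy as plain JSON, using the bucket name from above:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [ "s3:*" ],
    "Resource": [
      "arn:aws:s3:::flypenguin.docker-registry",
      "arn:aws:s3:::flypenguin.docker-registry/*"
    ]
  }]
}
```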


The limits of PuppetDB a.k.a. etcd with puppet

Okay. Maybe this is the completely wrong way to do this, but I found no other. Comments appreciated. What I wanted:

  • A DNS server cluster, meaning pacemaker managing an IP failover if one of the servers fails.
  • The cluster IP (the one switching over) should be the IP used for DNS lookups.
  • Hosts should automatically find ‘their’ DNS server. The DNS server should register itself somewhere so that clients are set up automatically during their first puppet run.

So far, so good. Unfortunately there are a few problems with it. My first approach was something like this:


# on every DNS server (*)
@@resolv_conf_entry { $my_cluster_ip :
    tag => $domain,
}

# on every other machine:
Resolv_conf_entry <<| tag == $domain |>>

Unfortunately this will lead to a “duplicate declaration” on the client side – because all of the clustered DNS servers export this resource, and $my_cluster_ip is identical on all DNS server hosts.

The solution I came up with is etcd. A project from the girls and guys of CoreOS, which is basically a REST-powered key-value store, trimmed down as much as possible: a single binary of about 3 MB, clusterable, super-super-super simple to set up, and standalone. The whole setup is like this:

  • etcd is installed on every puppet master
  • I wrote an etcd module(**) providing the functions etcd_set_value($etcd_url, $key_name, $key_value) and etcd_get_value($etcd_url, $key_name)
  • The $etcd_url is entered into hiera

Now, we can do the following:

# on each DNS server:
$etcd_url = hiera('etcd_url')
etcd_set_value( $etcd_url, "/puppet/dns_server_for/${domain}", $my_cluster_ip )

# on each other server:
$etcd_url = hiera('etcd_url')
$dns_server = etcd_get_value( $etcd_url, "/puppet/dns_server_for/${domain}" )
resolv_conf_entry { $dns_server : }

Works beautifully.
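For the curious: under the hood, the two module functions presumably boil down to nothing more than plain HTTP calls against etcd. A hedged shell sketch, assuming etcd’s v2 keys API on its default endpoint – the URL, port and key layout here are assumptions, not taken from the actual module:

```shell
#!/usr/bin/env bash
# Sketch of what etcd_set_value / etcd_get_value might do internally:
# simple HTTP PUT/GET requests against etcd's key-value HTTP API.
# Endpoint and key paths are assumptions.

ETCD_URL="http://127.0.0.1:4001"   # assumed etcd endpoint

etcd_set_value() {   # $1 = key path, $2 = value
  curl -s -L -X PUT "${ETCD_URL}/v2/keys${1}" -d "value=${2}"
}

etcd_get_value() {   # $1 = key path
  curl -s -L "${ETCD_URL}/v2/keys${1}"
}

# On a DNS server:
#   etcd_set_value "/puppet/dns_server_for/example.com" "10.0.0.53"
# On any other host:
#   etcd_get_value "/puppet/dns_server_for/example.com"
```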

Now go read the etcd docs and try to think of some other applications. etcd is a great tool (CoreOS is not too bad either, we’re evaluating it right now and are pretty amazed), and go use it. It is really much simpler than puppet.

(*) Note that resolv_conf_entry{} is just a placeholder and does not exist (unless you write it).

(**) I cannot – unfortunately – publish the module (yet), because the customer I wrote it for has not yet allowed me to.