Recent Posts

by John M Costa, III

Installing Redis on Docker

I’m currently employed by dotCloud and had an opportunity to play around with our open-sourced Linux container runtime project, Docker.

You’ll need a functional installation of Docker to follow these steps. I’ve included an overview of my installation notes; you can find additional installation instructions at the Docker website.

Introduction to Docker

If you’ve already worked with docker, you probably have it installed and are running your own containers, so you can skip this part. If you haven’t, read on for a general overview of a handful of docker commands.

I’m working on a MacBook Air, so I ran through the MacOS instructions, which are repeated below. They require that you already have VirtualBox and Vagrant installed. If you don’t have these, you can find the getting started docs here.

First, clone the repo and cd into the cloned directory:

$ git clone && cd docker

Now, a quick vagrant up and vagrant ssh and I was already issuing docker commands.

Also note: I’ve intentionally left out the vagrant output, as there’s nothing too important there. It took about a minute to complete.

$ vagrant up
$ vagrant ssh

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker ps

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker images
REPOSITORY          TAG                 ID                  CREATED             PARENT

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker version
Git Commit:

So far so good! Now let’s run a shell within a docker container.

docker run -i -t base /bin/bash
Image base not found, trying to pull it from registry.
Pulling repository base
Pulling tag base:ubuntu-quantl
Pulling b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc metadata
Pulling b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc fs layer
10240/10240 (100%)
Pulling 27cf784147099545 metadata
Pulling 27cf784147099545 fs layer
94863360/94863360 (100%)
Pulling tag base:latest
Pulling tag base:ubuntu-12.10
Pulling tag base:ubuntu-quantal

So, what have we done here? We’ve called run, which runs our command in a new container, and passed a few docker-specific parameters: -i to keep stdin open, and -t to allocate a pseudo-tty. Finally, the command we’re running is /bin/bash, which gives us a bash shell.

An interesting side effect is that we now have the docker base image locally. We can see this when we run docker images.

$ docker images
REPOSITORY          TAG                 ID                  CREATED             PARENT
base                latest              b750fe79269d        12 days ago         27cf78414709
base                ubuntu-12.10        b750fe79269d        12 days ago         27cf78414709
base                ubuntu-quantal      b750fe79269d        12 days ago         27cf78414709
base                ubuntu-quantl       b750fe79269d        12 days ago         27cf78414709
<none>              <none>              27cf78414709        12 days ago

Lastly, let’s exit out of our docker container, and you should see the following:

SIGINT received

Let’s check the status of our docker container:

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker ps
ID             IMAGE         COMMAND      CREATED          STATUS          COMMENT
9468f9c097f7   base:latest   /bin/bash    25 minutes ago   Up 25 minutes

It looks like it’s still running… OK, let’s stop it:

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker stop 9468f9c097f7

Let’s make sure that it’s really gone:

$ docker ps

Installing and running Redis within a docker container

Now that we have a notion of what’s going on with docker commands and installation, let’s start loading a container up with the tools we’ll need for running a redis server within a docker container.

Start up a new container using the base image.

$ docker run -i -t base /bin/bash

Let’s update our system packages from what’s included in our base image:

root@b9859484e68f:/# apt-get update
Ign quantal InRelease
Hit quantal Release.gpg
Hit quantal Release
Hit quantal/main amd64 Packages
Get:1 quantal/universe amd64 Packages [5274 kB]
Get:2 quantal/multiverse amd64 Packages [131 kB]
Get:3 quantal/main Translation-en [660 kB]
Get:4 quantal/multiverse Translation-en [100 kB]
Get:5 quantal/universe Translation-en [3648 kB]
Fetched 9813 kB in 17s (557 kB/s)
Reading package lists... Done

Now install telnet and our redis-server:

root@b9859484e68f:/# apt-get install telnet redis-server
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
  libidn11 libjemalloc1
The following NEW packages will be installed:
  libidn11 libjemalloc1 redis-server telnet wget
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 784 kB of archives.
After this operation, 1968 kB of additional disk space will be used.
Get:1 quantal/main libidn11 amd64 1.25-2 [119 kB]
Get:2 quantal/universe libjemalloc1 amd64 3.0.0-3 [85.9 kB]
Get:3 quantal/main telnet amd64 0.17-36build2 [67.1 kB]
Get:4 quantal/main wget amd64 1.13.4-3ubuntu1 [280 kB]
Get:5 quantal/universe redis-server amd64 2:2.4.15-1 [233 kB]
Fetched 784 kB in 2s (334 kB/s)
dpkg-preconfigure: unable to re-open stdin: No such file or directory
Selecting previously unselected package libidn11:amd64.
(Reading database ... 9893 files and directories currently installed.)
Unpacking libidn11:amd64 (from .../libidn11_1.25-2_amd64.deb) ...
Selecting previously unselected package libjemalloc1.
Unpacking libjemalloc1 (from .../libjemalloc1_3.0.0-3_amd64.deb) ...
Selecting previously unselected package telnet.
Unpacking telnet (from .../telnet_0.17-36build2_amd64.deb) ...
Selecting previously unselected package wget.
Unpacking wget (from .../wget_1.13.4-3ubuntu1_amd64.deb) ...
Selecting previously unselected package redis-server.
Unpacking redis-server (from .../redis-server_2%3a2.4.15-1_amd64.deb) ...
Processing triggers for ureadahead ...
Setting up libidn11:amd64 (1.25-2) ...
Setting up libjemalloc1 (3.0.0-3) ...
Setting up telnet (0.17-36build2) ...
update-alternatives: using /usr/bin/telnet.netkit to provide /usr/bin/telnet (telnet) in auto mode
Setting up wget (1.13.4-3ubuntu1) ...
Setting up redis-server (2:2.4.15-1) ...
Starting redis-server: redis-server.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Processing triggers for ureadahead ...

A quick double check to make sure that our redis-server is now installed:

root@b9859484e68f:/# which redis-server

root@b9859484e68f:/# redis-server --version
Redis server version 2.4.15 (00000000:0)

The image that we’ve created is clean, and I want to keep it before going much further, so I’m going to open up another terminal and get back to our vagrant VM. Here we can commit and store the filesystem state.

$ docker login
Username (): johncosta
Email ():
Login Succeeded

$ docker commit b9859484e68f johncosta/redis

$ docker push johncosta/redis
Pushing repository johncosta/redis (1 tags)
Pushing tag johncosta/redis:latest
Pushing 3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4 metadata
Pushing 3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4 fs layer
21975040/21975040 (100%)
Pushing b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc metadata
Pushing 27cf784147099545 metadata
Registering tag johncosta/redis:latest


I forgot to capture the commit command when grabbing the terminal output. Not to worry! I was able to ssh into my vagrant VM and check the dockerd logs using: vagrant@vagrant-ubuntu-12:/opt/go/src/$ less /var/log/dockerd

Let’s check what we’ve done and exit out of our container:

root@b9859484e68f:/# exit

It looks like our docker container isn’t running anymore!

$ docker ps

Let’s start up a new container, but this time use the image we just created and committed, which provides my redis install, as the image to run.

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker run -i -t johncosta/redis /bin/bash

root@61507c28cd67:/# /etc/init.d/redis-server start
Starting redis-server: redis-server.
root@61507c28cd67:/# ps faux
root         1  0.0  0.5  18068  2016 ?        S    15:58   0:00 /bin/bash
redis       14  0.0  0.4  36624  1656 ?        Ssl  16:01   0:00 /usr/bin/redis-
root        17  0.0  0.3  15524  1108 ?        R    16:01   0:00 ps faux

Let’s make sure we can connect to it and interact with redis (remember we installed telnet!).

root@61507c28cd67:/# telnet 6379
Connected to
Escape character is '^]'.
+1365177842.076283 "MONITOR"
+1365177870.147208 "set" "docker" "awesome"
+1365177874.927280 "get" "docker"
Connection closed by foreign host.

Ok, we know we can connect to redis and interact with it. But we’re inside the container; let’s see how to connect to it from outside the container! First, let’s inspect our container:

$ docker inspect 61507c28cd67
{
    "Id": "61507c28cd673ea4464248a8c2b936807bf951d6dc82d0f872b02586c5681139",
    "Created": "2013-04-05T08:58:33.711054-07:00",
    "Path": "/bin/bash",
    "Args": [],
    "Config": {
        "Hostname": "61507c28cd67",
        "User": "",
        "Memory": 0,
        "MemorySwap": 0,
        "AttachStdin": true,
        "AttachStdout": true,
        "AttachStderr": true,
        "Ports": null,
        "Tty": true,
        "OpenStdin": true,
        "StdinOnce": true,
        "Env": null,
        "Cmd": ["/bin/bash"],
        "Image": "johncosta/redis"
    },
    "State": {
        "Running": true,
        "Pid": 6052,
        "ExitCode": 0,
        "StartedAt": "2013-04-05T09:09:19.733633-07:00"
    },
    "Image": "3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4",
    "NetworkSettings": {
        "IpAddress": "",
        "IpPrefixLen": 24,
        "Gateway": "",
        "PortMapping": {}
    },
    "SysInitPath": "/opt/go/bin/docker"
}

Hmm, it looks like we don’t have a port that we can connect to. Looking at the run command, there’s something we missed: the -p option, which maps a network port to the container. Let’s try this with the following:

docker run -p 6379 -i -t johncosta/redis /usr/bin/redis-server

Much better. We can now see that we’ve allocated port 6379 and mapped it to the external port 49153.

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker ps
ID             IMAGE                    COMMAND      CREATED         STATUS         COMMENT
0be92ce8581e   johncosta/redis:latest   /bin/bash    3 minutes ago   Up 3 minutes
vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker port 0be92ce8581e 6379

Note: We don’t need to inspect the container and parse the entire container information set to get the mapped port. We can use the convenience command docker port.
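Since docker inspect emits plain JSON, anything docker port tells you (and more) can also be pulled out programmatically. Below is a minimal Python sketch; the trimmed-down inspect result is a hypothetical sample for a container started with -p 6379, and the field names follow this early Docker version:

```python
import json

# Hypothetical, trimmed-down sample of `docker inspect` output for a
# container started with -p 6379 (field names per this Docker version).
inspect_json = """
{
    "Id": "61507c28cd673ea4464248a8c2b936807bf951d6dc82d0f872b02586c5681139",
    "Path": "/bin/bash",
    "State": {"Running": true, "Pid": 6052, "ExitCode": 0},
    "NetworkSettings": {"IpPrefixLen": 24, "PortMapping": {"6379": "49153"}}
}
"""

container = json.loads(inspect_json)
running = container["State"]["Running"]
# PortMapping maps the container port to the externally visible port.
mapping = container["NetworkSettings"]["PortMapping"]
print(running, mapping.get("6379"))
```

For just the port, docker port remains the shorter path; this is only useful when you need several fields at once.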

OK! We’re almost there. Now terminate that docker process, and start a new container running our redis server within docker in daemon mode. Then test the results with a telnet session and a redis-cli session external to the docker container.

docker run -d -p 6379 -i -t johncosta/redis /usr/bin/redis-server

vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker ps
ID             IMAGE                    COMMAND                CREATED         STATUS         COMMENT
c0f7e48cafcf   johncosta/redis:latest   /usr/bin/redis-serve   4 minutes ago   Up 4 minutes
vagrant@vagrant-ubuntu-12:/opt/go/src/$ docker port c0f7e48cafcf 6379

vagrant@vagrant-ubuntu-12:/opt/go/src/$ telnet 49174
Connected to
Escape character is '^]'.
+1365194060.897490 "monitor"
set docker awesome
+1365194071.640199 "set" "docker" "awesome"
get docker
+1365194073.519484 "get" "docker"
Connection closed by foreign host.

vagrant@vagrant-ubuntu-12:/opt/go/src/$ redis-cli -h -p 49174
redis> get docker
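Incidentally, the set docker awesome line typed into the telnet session above uses Redis’s human-friendly inline command format; client libraries instead frame the same commands using the RESP protocol. A quick sketch of what that framing looks like on the wire (illustrative only, not any particular client library’s API):

```python
def encode_resp(*parts):
    """Encode a command as a RESP multi-bulk array, the framing
    redis client libraries use on the wire."""
    out = ["*%d\r\n" % len(parts)]
    for part in parts:
        # Each argument is sent as a length-prefixed bulk string.
        out.append("$%d\r\n%s\r\n" % (len(part), part))
    return "".join(out)

print(repr(encode_resp("SET", "docker", "awesome")))
```

Either form works against the server, which is why typing raw commands into telnet is such a handy debugging trick.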

Update 5/6/2013:

It’s now possible to save images with their configuration options! I added one additional commit to do this:

docker commit -run '{"Cmd": ["/usr/bin/redis-server"], "PortSpecs": [":6379"]}' b9859484e68f johncosta/redis

Now running an image is as easy as:

Get the image: docker pull johncosta/redis

Run the image: docker run johncosta/redis

Run in daemon mode: docker run -d johncosta/redis

Also, check out the docker index.

la fin

by John M Costa, III

Django view decorators

I recently worked on a project that required a standard account and profile system. django-userena is usually my go-to project for this, due to its ease of setup and its extensibility. There’s a subtle nuance to using this project’s default URL patterns: the majority of them require passing the user’s username in the URL. The username is then used in the view to find the user, since usernames are unique.

For this particular project, I wanted to hide the username from the url path and came up with the following decorator that would allow us to use all the existing functionality of django-userena.

from functools import wraps

from django.core.urlresolvers import reverse
from django.conf import settings
from django.http import HttpResponseRedirect
from django.utils.decorators import available_attrs

LOGIN_URL = getattr(settings, 'LOGIN_URL')

def user_to_view(view_func):
    """This view decorator wraps views that require a username,
    injecting the username, pulled from the request, into the view.
    """
    def _wrapped_view(request, *args, **kwargs):
        if not request or not request.user.is_authenticated():
            return HttpResponseRedirect(reverse(LOGIN_URL))
        kwargs['username'] = request.user.username
        return view_func(request, *args, **kwargs)
    return wraps(view_func, assigned=available_attrs(view_func))(_wrapped_view)

Now, for each URL pattern you want to modify, redefine it in your urls.py, wrapping the view you’re looking to modify.

urlpatterns += patterns('',
    url(r'^edit/$', user_to_view(userena_views.profile_edit),
        {'edit_profile_form': ProfileFormExtra}, name='userena_profile_edit'),)
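To see the injection pattern in isolation, here is a framework-free sketch with stub request and user objects (the stubs are mine, just for illustration; the real decorator above additionally redirects unauthenticated users to the login page):

```python
from functools import wraps


def user_to_view(view_func):
    """Framework-free version of the decorator: pull the username off
    the request and inject it into the view as a keyword argument."""
    @wraps(view_func)
    def _wrapped_view(request, *args, **kwargs):
        kwargs["username"] = request.user.username
        return view_func(request, *args, **kwargs)
    return _wrapped_view


# Stand-ins for Django's request/user objects, just for this demo.
class User:
    def __init__(self, username):
        self.username = username


class Request:
    def __init__(self, user):
        self.user = user


@user_to_view
def profile_edit(request, username=None):
    # A view that expects `username`, even though the URL no longer
    # captures it.
    return "editing profile for %s" % username


print(profile_edit(Request(User("johncosta"))))
```

The view never sees the URL; it only sees the username the decorator injected, which is exactly what lets the userena views keep working once the username is dropped from the URL path.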
by John M Costa, III

Converting my blog to Octopress

Recently I started looking into migrating my blog to something that would be a little easier to maintain. My Django-powered blog was nice, but there were a lot of moving parts and a lot of resource overhead (apache, mysql, django, etc…). I enjoy exploring new technologies, so I started looking into static site generators.

What I was looking for:

  • Easy to use and learn
  • Straightforward development to live process
  • Somewhat customizable

A quick google search for static site generators pulls up quite a few. It even turns up a [github][1] repo from one contributor who maintains a [list][0] of them. I wasn’t really sure where to start, and my acceptance criteria weren’t very restrictive, so I picked the first one that [seemed interesting][2]. This happened to be [Octopress][3].

Some of the features Octopress touts:

  • A [semantic HTML5][4] template
  • A Mobile first [responsive layout][5]
  • Built in 3rd party support for Twitter, Google Plus One, Disqus Comments, Pinboard, Delicious, and Google Analytics
  • An easy deployment strategy
  • Built in support for POW and Rack servers
  • Easy theming with [Compass][6] and [Sass][7]
  • A Beautiful Solarized syntax highlighting

“Octopress is a blogging framework for hackers.”

It was incredibly straightforward to get Octopress up and running. The [setup documentation][8] was easy to find and follow.

I didn’t have the mentioned version of Ruby installed, so I followed the instructions for the [RVM installation][9].

I then followed the instructions to clone Octopress, install the dependencies, and install the default theme.

git clone git:// octopress
cd octopress
gem install bundler
bundle install
rake install

I now had everything in place for a barebones, uncustomized Octopress blog. Just to make sure things were working, I then tried the local development server:

rake generate
rake preview

Hit the local url (localhost:4000) in the browser and there it was!



So, I mentioned that I wanted to have some ability to configure the blog, meaning adding a few bells and whistles (like some social links, Disqus comments, and some customized css).

Just like setting up the framework, customizations are also super easy. One can find the out of the box configuration points within the _config.yml file.

Changing the url, title, subtitle are the first things to configure at the top of the file.

title: My Octopress Blog
subtitle: A blogging framework for hackers.
author: Your Name

Plugin configurations are next. You can change the structure of how the links are constructed, pagination, etc. Also, anything listed in the sidebar can be modified by changing the list of included files in the default_asides setting.

# If publishing to a subdirectory as in set 'root: /project'
root: /
permalink: /blog/:year/:month/:day/:title/
source: source
destination: public
plugins: plugins
code_dir: downloads/code
category_dir: blog/categories
markdown: rdiscount
pygments: false # default python pygments have been replaced by pygments.rb

paginate: 10          # Posts per page on the blog index
pagination_dir: blog  # Directory base for pagination URLs eg. /blog/page/2/
recent_posts: 5       # Posts in the sidebar Recent Posts section
excerpt_link: "Read on &rarr;"  # "Continue reading" link text at the bottom of excerpted articles

titlecase: true       # Converts page and post titles to titlecase

# list each of the sidebar modules you want to include, in the order you want them to appear.
# To add custom asides, create files in /source/_includes/custom/asides/ and add them to the list like 'custom/asides/custom_aside_name.html'
default_asides: [asides/recent_posts.html, asides/github.html, asides/twitter.html, asides/delicious.html, asides/pinboard.html, asides/googleplus.html]

# Each layout uses the default asides, but they can have their own asides instead. Simply uncomment the lines below
# and add an array with the asides you want to use.
# blog_index_asides:
# post_asides:
# page_asides:
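To make the permalink setting concrete: a pattern like /blog/:year/:month/:day/:title/ is expanded by substituting the post’s date parts and a slugified title. A rough illustration of that mapping (my own helper, not Octopress code):

```python
import datetime


def permalink(pattern, title, date):
    """Expand an Octopress-style permalink pattern.
    Illustrative helper only, not part of Octopress itself."""
    slug = title.lower().replace(" ", "-")
    return (pattern
            .replace(":year", "%04d" % date.year)
            .replace(":month", "%02d" % date.month)
            .replace(":day", "%02d" % date.day)
            .replace(":title", slug))


print(permalink("/blog/:year/:month/:day/:title/",
                "Converting my blog to Octopress",
                datetime.date(2013, 4, 5)))
```

Keeping this pattern identical to your old blog's URL scheme is what makes the later migration step of preserving existing links possible.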

Any fun widgets like github repos, or social links are configured from within the 3rd Party plugin section.

# Github repositories
github_repo_count: 0
github_show_profile_link: true
github_skip_forks: true

# Twitter
twitter_tweet_count: 4
twitter_show_replies: false
twitter_follow_button: true
twitter_show_follower_count: false
twitter_tweet_button: true

# Google +1
google_plus_one: false
google_plus_one_size: medium

# Google Plus Profile
# Hidden: No visible button, just add author information to search results
googleplus_hidden: false

# Pinboard
pinboard_count: 3

# Delicious
delicious_count: 3

# Disqus Comments
disqus_show_comment_count: false

# Google Analytics

# Facebook Like
facebook_like: false

Lastly, I modified some css to personalize the look and feel. I changed the background color and added an image (I’m not a designer, so this is always magic to me). It was straightforward to add a custom.css stylesheet to the source/stylesheets directory and then link to it in the source/_includes/custom/header.html file.

All the changes can be viewed in my personal fork of octopress: [][10]

Porting The Existing Data

I didn’t have a lot of blog entries, so I manually moved all my data over. I had articles written in HTML and reStructuredText, so most of it ported over almost directly. I made a few adjustments to make sure that the urls matched the existing urls so that any links carried over. I’m sure I could have written a script to extract the data and format it into a post file, but this was just as easy.

rake new_post['the title of the article']

Then it was a matter of cutting and pasting in the previous text and then change the name of the post file to match the url that the article was previously hosted at.

Rinse and repeat.

[0]:
[1]:
[2]:
[3]:
[4]:
[5]:
[6]:
[7]:
[8]:
[9]:
[10]:

by John M Costa, III

Presentation Notes from CashStar Developer Sprint

It's tough to talk about documentation:

  • Can seem overly judgmental
  • Boring
  • We already know how to do it
  • We never have time to do it

Why choose a sprint on ReadTheDocs and documentation?

  • I want to learn best documentation practice (or really just better practice)
  • Explore how to make it easier


The goals for the sprint:

  • Consider why we document
  • Consider where we put that documentation
  • Introduce team to `Sphinx `_
  • Introduce team to `reStructuredText `_
  • Introduce team to `CashStar's ReadTheDocs Server `_

Why do you document code?

It's a simple question... though it doesn't appear to have a simple answer. Scouring various resources, I found numerous lists of reasons why to document, how to document, where in your code to document, how to get people to document... and so on. Here are some of my favorites:

  • Not all code is obvious; complex algorithms are not readable by everyone
  • Finding out details takes a long time, and that is a waste of business money
  • When you understand the function of each component, you can answer business questions
  • Not all developers have the same IQ: you want everyone to get it, not only smart John
  • You’re asked to change or update a piece of code that you wrote six months ago. You didn’t comment your code, and now you can’t remember why the heck you did what you did!
  • Don’t put yourself or anyone else in the position of having to guess how a piece of code works

Other lists (some of the items above are from these):

  • ` `_
  • ` `_

What does this boil down to?

  • comment your code to make other people’s lives easier
  • comment your code to make your life easier

My belief is in value

This `Slashdot Thread `_ has a lot of interesting points about getting developers to document, the how and why.

I think Tom (822) hits the nail on the head:

Who is it valuable to?

  It's an investment into the future. If you need to pick this project up again one, two or five years down the road, and do any non-trivial changes to it, good (and that means correct, short and to the point, not extensive and theoretical) documentation will save you valuable time.

If it’s throwaway code, don’t waste time and effort on documentation. If you plan to use it for some time, chances are very high it will need fixes, updates and changes, and documentation will make those a lot easier, faster and cheaper.

  Decisions are made in the present, and if resources are tight in the present, things of potential value in the future are discounted further.

Why do we document code?

I think this answer is simple:

We document code to create additional value for ourselves, our peers, and effectively the company or project we are working for/on.

How do you document code?

What does typical code documentation look like?

Below is a bit of sample code that could use a little bit of work. Some of the code has been snipped for brevity so that we can focus on the method at a higher level.

What could we improve here?

  1. We don't know what's being passed in for objects.
  2. What is the intention of the method?
  3. There's a lot going on in this method, can it be simplified?

Our sample... but reworked (somewhat):

Other improvements to consider

  1. Further refactor into even smaller bits of code
  2. Unit tests documenting the use of the functions

Additional references

  1. StackOverflow (Mil, moonshadow):
  2. The Art of Code Documentation (Drew Sikora):>
  3. CodeAsDocumentation (martinfowler):
  4. Golden rule of documenting code (Jeff Davis):
  5. How not to write python code:
by John M Costa, III

Configuring an internal ReadTheDocs

Project Overview

  • ReadTheDocs application to serve project documentation
  • Simple and Straightforward, minimal overhead
  • Modified to point to our domain, not readthedocs
  • Restricted Public Access

Technology Overview

ReadTheDocs comes with the following technology stack:

  • Varnish
  • Nginx
  • gunicorn
  • postgres
  • python/django
  • solr (haystack search)
  • Chef

In an effort to align with technologies I have some experience with, I modified the stack slightly; it’s now as follows:

  • supervisor
  • gunicorn
  • memcached
  • nginx
  • python/django
  • mysql
  • whoosh (haystack search)
  • fabric

Key Functionality Overview

  • Built and versioned documentation
  • Search

Setup Steps

Provision a server:

  • Provision an ubuntu 11.10 instance (I used rackspace, other versions have not been tested)

Clone and setup the project locally:

  • git clone
  • mkvirtualenv --distribute readthedocs
  • pip install -r pip_requirements.txt
  • modify the file by changing the server ip and root password to the values returned by your instance provisioner
  • run fab -f stage_rtd

Post Installation Steps:

  • Try
  • Change the root password to mysql!!
  • Change the test user password!!
  • Configure IP Tables to be as restrictive as you need
  • Enable email via django settings
  • Upload a test project (test/test)
  • Modify the nginx settings to support your project name:

Some Gotchas

  • If builds fail, information on why they fail is sparse
by John M Costa, III

My Notes On Uploading a Package to PyPI

These are my notes for uploading a package to PyPI. Additionally, I've included some useful links that provide a lot of background.

  1. Register at PyPI

    You can do so here: Register at PyPI

  2. Create a .pypirc file in your home directory

        vi .pypirc

        [distutils]
        index-servers = pypi

        [pypi]
        username: <username>
        password: <password>

  3. Upload your package to PyPI

        cd <package root>
        python setup.py register sdist upload
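The .pypirc file is a standard INI file, so one quick sanity check is to parse it with Python's stdlib config parser. A sketch, assuming the standard two-section layout (the placeholder values are mine):

```python
import configparser
import io

# A sample .pypirc in the standard layout (placeholder values).
sample = """\
[distutils]
index-servers = pypi

[pypi]
username = <username>
password = <password>
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(sample))
print(config.get("pypi", "username"))
```

If the parser chokes or a section is missing, the upload tooling will too, so this catches typos before you hit the index.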
by John M Costa, III

New Relic's Python App Public Beta

I recently made the trek to Portland, OR for #djangocon. Demo'd there was New Relic's Real-Time Performance tool, complete with a new implementation for Python apps! This seemed like some fantastic software, but I was skeptical as to how easy it would be to install. As an experiment, I used their public beta invite on this blog.

I'd like to first point out that the documentation to configure the app was excellent and abundant. The software was bundled with an install file located in the root of their distribution (easy to find) and straightforward (easy to follow). I don't have in my notes exactly where I downloaded the installation package. However, I did so and received version "newrelic-"

Because I use Virtualenv and Pip for dependency management, I added the following line to my requirements file.

This was the most challenging part of the installation process. It wasn't clear where this endpoint was located. While the documentation listed "http://host/path/to/newrelic-python-A.B.C.D.tar.gz" as the location, I had to ferret out the app version. The agent download page for the python agent listed a package that didn't appear to exist yet (confusing, because I had already downloaded it). With a few curls, I was able to find the version listed above and carried on merrily.

Per the Install file, the next step was to create a newrelic.ini file. I copied the example file from software bundle into the root of my project. Again per the instructions, I added my settings for license_key, app_name, and log file location.

The final change I made was to my index.wsgi file. Here I added the following lines:

# configure new relic
import newrelic.agent #new
newrelic.agent.initialize('/path/to/blog/newrelic.ini')  #new

# wsgi
import django.core.handlers.wsgi # old
application = django.core.handlers.wsgi.WSGIHandler() #old
application = newrelic.agent.wsgi_application()(application) #new

The values commented as old already existed in my WSGI file. Those listed as new were required to initialize and start the New Relic agent. A restart of the application is then required.
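The application = ... rebinding works because WSGI middleware is just a callable wrapping another callable. A framework-free sketch of the same shape (illustrative only; this is not New Relic's actual agent internals):

```python
def timing_middleware(application):
    """A do-nothing middleware with the same shape as
    newrelic.agent.wsgi_application()(application): a callable that
    wraps another callable with the (environ, start_response)
    signature. Illustrative only."""
    def wrapped(environ, start_response):
        # A real agent would start a timer here and record the
        # transaction when the response completes.
        return application(environ, start_response)
    return wrapped


def application(environ, start_response):
    # Minimal stand-in for the Django WSGI handler in index.wsgi.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


# The same rebinding trick as in index.wsgi above.
application = timing_middleware(application)
```

Because the wrapped object still honors the WSGI calling convention, the server running index.wsgi never knows the difference.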

Now what? Start checking out the cool reports! I've shown a few examples below. Depending on the current site traffic, there may not be any data.

by John M Costa, III

My experience with python-gnupg

I was working through some usage of python-gnupg with a co-worker and, in the hope of helping out others (or my future self), am posting my shell and bpython notes here. As time permits, I'll clean up the notes.

I've broken out my notes into 4 parts:

  1. Manual Key Creation
  2. Sample File Creation
  3. Checking your keys & Writing your file
  4. Validating that it works

Manual Key Creation

I created some keys manually with gpg so that I would have a baseline to work with. If you don't have gpg installed, you can get it here

Once you have gpg installed, you can start the process of generating your public key. Kick off the gpg generate-key command. For my use, the default selections were good enough.

Johns-MacBook-Air:~ jcosta$ gpg --gen-key
gpg (GnuPG/MacGPG2) 2.0.17; Copyright (C) 2011 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)

Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048
Requested keysize is 2048 bits
Please specify how long the key should be valid.

0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years

Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
GnuPG needs to construct a user ID to identify your key.
Real name: John Costa
Email address:
Comment:
You selected this USER-ID:
    "John Costa"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /Users/jcosta/.gnupg/trustdb.gpg: trustdb created
gpg: key 6FE30238 marked as ultimately trusted
public and secret key created and signed.
gpg: checking the trustdb

Sample File Creation

When attempting automation, I usually try to validate that I can complete the steps manually. In this case, to validate that encryption/decryption is working, and that I haven't botched key creation, I created a sample file called "test.txt". I placed a bit of text in the file which can be double-checked when decrypted.

Johns-MacBook-Air:Documents jcosta$ echo "test" > test.txt
Johns-MacBook-Air:Documents jcosta$ cat test.txt

Before encrypting the file, it will be useful to know the id of the key just installed. Use the "list keys" function to display your keys.

Johns-MacBook-Air:Documents jcosta$ gpg --list-keys
pub   2048R/C4ECDCDC 2011-09-09
uid                  John Costa 
sub   2048R/8149FB83 2011-09-09
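Since the automation below needs the key id, it's worth noting that the listing above is easy to scrape. A small regex sketch of mine against the format shown (python-gnupg's list_keys(), used later, is the more robust route):

```python
import re

# The `gpg --list-keys` output shown above, as a string.
listing = """pub   2048R/C4ECDCDC 2011-09-09
uid                  John Costa
sub   2048R/8149FB83 2011-09-09"""

# Pull the short key ids off the pub/sub lines.
key_ids = re.findall(r"^(?:pub|sub)\s+\d+R/([0-9A-F]+)", listing, re.M)
print(key_ids)
```

This is only a convenience for eyeballing; the bpython session later uses the library's own key listing instead.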

Now encrypt the file, outputting the encrypted file as "test.gpg". Use the public key id listed to encrypt the file.

Johns-MacBook-Air:Documents jcosta$ gpg --output test.gpg --armor --encrypt test.txt
You did not specify a user ID. (you may use "-r")

Current recipients:

Enter the user ID.  End with an empty line: John Costa 

Current recipients:
2048R/8149FB83 2011-09-09 "John Costa "

Enter the user ID.  End with an empty line:
Johns-MacBook-Air:Documents jcosta$ ls -ltr
-rw-r--r--  1 jcosta  staff    5 Sep 13 06:01 test.txt
-rw-r--r--  1 jcosta  staff  609 Sep 13 06:32 test.gpg

Now that we've encrypted a file, let's decrypt it!

Johns-MacBook-Air:Documents jcosta$ gpg --armor --output decrypt.txt --decrypt test.gpg

You need a passphrase to unlock the secret key for
user: "John Costa "
2048-bit RSA key, ID 8149FB83, created 2011-09-09 (main key ID C4ECDCDC)

gpg: encrypted with 2048-bit RSA key, ID 8149FB83, created 2011-09-09
      "John Costa "
Johns-MacBook-Air:Documents jcosta$ ls -ltr
total 24
-rw-r--r--  1 jcosta  staff    5 Sep 13 06:01 test.txt
-rw-r--r--  1 jcosta  staff  609 Sep 13 06:32 test.gpg
-rw-r--r--  1 jcosta  staff    5 Sep 13 06:42 decrypt.txt
Johns-MacBook-Air:Documents jcosta$ cat decrypt.txt
test

Checking your keys & Writing your file

I then fired up a bpython session:

Johns-MacBook-Air:~ jcosta$ workon example-gpg
(example-gpg)Johns-MacBook-Air:~ jcosta$ bpython

>>> import gnupg
>>> gpg = gnupg.GPG(gnupghome="/Users/jcosta/.gnupg")
>>> gpg.list_keys()
[{'dummy': u'', 'keyid': u'059FF24CC4ECDCDC', 'expires': u'', 'length': u'2048', 'ownertrust': u'u', 'algo': u'1', 'fingerprint': u'0F379C3E410B6924C2502E26059FF24CC4ECDCDC', 'date': u'1315609511', 'trust': u'u', 'type': u'pub', 'uids': [u'John Costa']}]
>>> stream = open('/Users/jcosta/Documents/test.txt', "rb")
>>> encrypted_ascii_data = gpg.encrypt_file(stream, "059FF24CC4ECDCDC")
>>> encrypted_ascii_data.status
'encryption ok'
>>>
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG/MacGPG2 v2.0.17 (Darwin)\nComment: GPGTools - \n\nhQEMAxqnnNGBSfuDAQf/est1PAn3sI4ZhPTHmcVe80wKlIcSu6N9BZqPykkBso9S\nfHGkcljtdJ0ICs3W38gn0qLG88UqzjNKWWCIgedAO0Pe12v38c8Ro3kNSpJ+2hgo\nWUpn1JxuunThHyfDK8UxmNXperlO1PjKhMlFsQwSFWHhC5u7CH4/hCaVNKOKQc0K\nkktXyoXM1D/CM1vlYCqDRbWyBdLg/W8VEOFy6zZHunDo4YxEWDmLEEKj9kbdGTkq\ndsEL6/Y6Zykx17RMonGVCZU1X7DEyLUCuVfDGCHrlSFi8NjxFR1CBPOhJWNadzlG\nh7L8PJnWjcb/T2Mko5ZP5XWl4qN8hZljyg45x0PGzNI7AZLLnIOyzAt3TAcyFZaJ\nhq8qxoJAvJ7tNjt4BCb1hXOav/hJ64Xyp7IpgTL1PUiC9hK7nCYwBvv3QUg=\n=Vc4H\n-----END PGP MESSAGE-----\n'
>>> out.write(
>>> out.close()

The files aren’t exactly the same size, but they should be close.

Johns-MacBook-Air:Documents jcosta$ ls -ltr
-rw-r--r--  1 jcosta  staff   15 Sep  9 15:02 test.txt
-rw-r--r--  1 jcosta  staff  592 Sep  9 16:28
-rw-r--r--  1 jcosta  staff  609 Sep  9 16:33 test.gpg
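The bpython session above can be distilled into a small helper. This is a minimal sketch, assuming the third-party python-gnupg package is installed and a gpg binary with at least one key is available; `encrypt_to_file` and its arguments are names I've made up for illustration.

```python
try:
    import gnupg  # third-party: pip install python-gnupg
except ImportError:
    gnupg = None  # allow the module to load even without the package

def encrypt_to_file(gpg, src_path, dest_path, key_id):
    """Encrypt src_path for key_id and write the ASCII-armored result to dest_path."""
    with open(src_path, "rb") as stream:
        result = gpg.encrypt_file(stream, key_id, armor=True)
    if result.status != "encryption ok":
        raise RuntimeError("encryption failed: %s" % result.status)
    with open(dest_path, "w") as out:
        out.write(str(result))

# Usage (requires a key in ~/.gnupg, as in the session above):
#   gpg = gnupg.GPG(gnupghome="/Users/jcosta/.gnupg")
#   encrypt_to_file(gpg, "test.txt", "test.gpg", "059FF24CC4ECDCDC")
```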


by John M Costa, III

Removing MySQL from OSX Lion

Recently I’ve had to remove a version of MySQL 5.5 from my Macbook so that I could go back to a 5.1 version. However it appears that there isn’t an automatic way to remove and install an older version. A few google searches revealed a bulk of the removal process, but additional searching revealed a few more steps.

sudo rm /usr/local/mysql
sudo rm -rf /usr/local/mysql*
sudo rm -rf /Library/StartupItems/MySQLCOM
sudo rm -rf /Library/PreferencePanes/My*
rm -rf ~/Library/PreferencePanes/My*
sudo rm -rf /Library/Receipts/mysql*
sudo rm -rf /Library/Receipts/MySQL*
sudo rm -rf /var/db/receipts/com.mysql.*

# Edit the following file, removing the line `MYSQLCOM=-YES-`.
# you may need sudo for write privileges to edit the file
# TIP: when using vim, use `dd` to delete the line and then `:wq` to save
#      the file
sudo vim /etc/hostconfig   # remove the line MYSQLCOM=-YES-
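If you'd rather not edit /etc/hostconfig by hand, the same change can be scripted. A minimal sketch (the function name is mine; try it on a copy of the file first, and you'll need sudo privileges to run it against the real /etc/hostconfig):

```python
def remove_mysqlcom(path):
    """Rewrite the file at path, dropping any MYSQLCOM= line.

    Returns the number of lines removed.
    """
    with open(path) as f:
        lines = f.readlines()
    kept = [line for line in lines if not line.startswith("MYSQLCOM=")]
    with open(path, "w") as f:
        f.writelines(kept)
    return len(lines) - len(kept)
```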

9/28/2011 - added comment on last line. Thanks Justin for pointing this out!
8/16/2013 - removed html line breaks. Added additional notes on vim from Tom Jacobs. Thanks!


Migrating a Mercurial Repository

When I first started playing with Python and Django, I was introduced to Mercurial. I had used Subversion for a while and once familiar with Mercurial, there was no going back (well...when I had the choice ;-) ). I've posted before that I use WebFaction as a host for my personal projects. This hosting also included setting up my own Hg server. I was happy, until Ken Cochrane turned me on to BitBucket.

I've been using BitBucket off and on for about a year now. My old projects have remained in my WebFaction repository, but my new projects have been going into BitBucket. No complaints. It has been solid and reliable. I can even setup SSH public keys for all my machines accessing the account. A plus when compared with my personal hosting.

So it occurred to me: how do I convert all my projects over to BitBucket? As with most tasks I haven't yet encountered, I looked to my friend Google to see if someone had already solved it in some trivial way. I should have realized how simple it was, but I'm glad I checked.

  • Create a project on BitBucket
  • Clone a repo from the old repository. Ensure everything is up to date with latest code and tags.
  • Change the .hg/hgrc file to point to the new BitBucket repository

    The old .hg/hgrc file

    ```
    [paths]
    default =
    ```

    The new .hg/hgrc file

    ```
    [paths]
    default = ssh:// account/project name
    ```
  • hg push

Yep, that's all it took. I love Hg.
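The hgrc edit in step three can be automated as well. Here's a minimal sketch using Python's configparser, which handles the simple [paths] section of an hgrc fine; the function name and the BitBucket URL in the usage comment are hypothetical placeholders.

```python
import configparser

def point_at_new_remote(hgrc_path, new_url):
    """Rewrite the [paths] default entry in an .hg/hgrc file to new_url."""
    config = configparser.ConfigParser()
    config.read(hgrc_path)
    if not config.has_section("paths"):
        config.add_section("paths")
    config.set("paths", "default", new_url)
    with open(hgrc_path, "w") as f:
        config.write(f)

# Usage with a hypothetical BitBucket SSH URL:
#   point_at_new_remote(".hg/hgrc", "ssh://hg@bitbucket.org/account/project")
```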

Inspired by Andrew Frayling's post Bitbucket Import