Recent Posts

by John M Costa, III

How do I set up multi-domain GitHub Pages?

Credit goes to this Stack Overflow answer. Note that it’s not the accepted answer, but the one currently just below it.

  1. Create an extra repository for your domain. I used the name of the domain as the repository name. See https://github.com/johncosta/johnmcostaiii.net.

  2. Create an index.html file in the root of the project.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Redirecting to https://johnmcostaiii.com</title>
    <meta http-equiv="refresh" content="0; URL=https://johnmcostaiii.com">
    <link rel="canonical" href="https://johnmcostaiii.com">
  </head>
</html>
  3. Create a CNAME file in the root of the project.
johnmcostaiii.com
  4. Set up the DNS for the domain to point to the GitHub Pages servers. See this write-up for how it should look: https://johnmcostaiii.com/posts/2023-11-10-new-blog-hosting/
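For reference, the records generally end up looking something like this. This is a sketch assuming an apex domain; the four A records are the GitHub Pages IPs (the same ones that show up in the dig output in that post), and the www host is a CNAME to your GitHub Pages domain:

johnmcostaiii.net.      A      185.199.108.153
johnmcostaiii.net.      A      185.199.109.153
johnmcostaiii.net.      A      185.199.110.153
johnmcostaiii.net.      A      185.199.111.153
www.johnmcostaiii.net.  CNAME  johncosta.github.io.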
by John M Costa, III

New Blog Hosting

It was recently suggested by a mentor that I get back into blogging. I’ll create a dedicated entry on this topic, but the byproduct of that discussion inspired me to resurface and re-host the blog I started over 10 years ago.

Choosing the Static Site Generator

Given that I already had content formatted in Markdown and the old site used a version of Hugo, I didn’t spend much time reconsidering which static site generator should drive it.

I did take a few moments to see what was out there and found this list of Awesome Static Generators. I also peeked at Reddit to see if there was any consensus, but as expected there was little, and what there was leaned heavily on opinion.

GitLab has a write-up suggesting an approach to choosing a static site generator, which was a little closer to what I was hoping to read, but they didn’t draw any conclusions. This wasn’t unexpected, as they probably can’t back one over another when they could host any of them.

To summarize the article, see the following table:

| Generator | Language | Templating Engine | Features | Community and Support |
| --- | --- | --- | --- | --- |
| Hugo | Go | Markdown | Cross-platform, statically compiled Go binary | Thriving community, prebuilt themes, and starter repositories |
| Zola | Rust | Tera | Strongly opinionated, prebuilt binary, fast setup | Limited plugin ecosystem, content-driven focus |
| Jekyll | Ruby | Liquid | Inspired static sites, Liquid templating language, vast plugin ecosystem | Beginner-friendly, over 200 plugins, themes, and resources |
| Hexo | NodeJS | Nunjucks | NodeJS-based, built-in support for Markdown, front matter, and tag plugins | Specializes in markup-driven blogs, supports multiple templating engines |
| GatsbyJS | React | GraphQL | React-based, optimized for speed, extensive plugin library, supports data pulling from multiple sources | “Content mesh” philosophy, 2000+ community-contributed plugins |
| Astro | JavaScript | Varies | Bring Your Own Framework (BYOF), no package dependencies, supports partial hydration | Flexibility, future-proof for migrations, online playground for trying features |

Setup

I’m a little embarrassed to admit this, but I’ve been late to the party in using GitHub Pages. Instead, I had a container running the site on a droplet on DigitalOcean. One of the best parts of the move is that I’ll be able to save a little on hosting costs. And by save a little, I mean I can start another project for a similar cost. :)

Here’s some of the steps I needed to take to move it over:

  1. Create a new GitHub repository. So that I can find it more easily later on, I used the domain as the repository name. See the repository here: https://github.com/johncosta/johnmcostaiii.com

  2. I looked through the Hugo theme site for a theme that I wanted: https://themes.gohugo.io/

  3. Following the Hugo quickstart guide posted here, I then created a new Hugo site with the following commands:

    hugo new site quickstart
    cd quickstart
    git init
    git submodule add <theme repository URL> themes/<theme name>
    echo "theme = '<theme name>'" >> hugo.toml
    hugo server
    

    NOTE: The guide uses the ananke theme, but I wanted something different.

  4. Move the generated content out of quickstart and into the root.

    NOTE: I did this to avoid the complexity of a nested directory. Now everything can run from the root.

  5. Copy all my content into the content directory.

  6. Test the site with hugo server.

NOTE: I created a Makefile to start encapsulating the raw commands.
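For example, a minimal sketch of such a Makefile might look like the following (the target names are illustrative, not the exact file; recipe lines must be indented with tabs):

serve:
	hugo server

build:
	hugo --gc --minify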

GitHub Actions Workflows

  1. Copy and paste the action workflow into the project

.github/workflows/hugo.yml

# Sample workflow for building and deploying a Hugo site to GitHub Pages
name: Deploy Hugo site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches:
      - main

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

# Default to bash
defaults:
  run:
    shell: bash

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.120.2
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb          
      - name: Install Dart Sass
        run: sudo snap install dart-sass
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v3
      - name: Install Node.js dependencies
        run: "[[ -f package-lock.json || -f npm-shrinkwrap.json ]] && npm ci || true"
      - name: Build with Hugo
        env:
          # For maximum backward compatibility with Hugo modules
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        run: |
          hugo \
            --gc \
            --minify \
            --baseURL "${{ steps.pages.outputs.base_url }}/"          
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          path: ./public

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2

Deployment

GitHub has a guide for setting up static sites which can be found here: https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site

  1. Set up your domain registrar. Mine points to DigitalOcean, as I manage projects through them.

[Screenshot: GoDaddy settings]

  2. Get the IP values for your GitHub Pages site. Mine is johncosta.github.io:
% dig johncosta.github.io

; <<>> DiG 9.10.6 <<>> johncosta.github.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12535
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;johncosta.github.io.		IN	A

;; ANSWER SECTION:
johncosta.github.io.	3600	IN	A	185.199.111.153
johncosta.github.io.	3600	IN	A	185.199.108.153
johncosta.github.io.	3600	IN	A	185.199.110.153
johncosta.github.io.	3600	IN	A	185.199.109.153

;; Query time: 40 msec
;; SERVER: 192.168.87.16#53(192.168.87.16)
;; WHEN: Fri Nov 10 18:50:54 EST 2023
;; MSG SIZE  rcvd: 112
  3. Set up DigitalOcean to point to GitHub.

[Screenshot: DigitalOcean settings]

  4. Set the custom domain in the GitHub Pages Settings section of the repository:

[Screenshot: GitHub Pages settings]

by John M Costa, III

Installing New Relic Server Monitoring within Docker Containers

The inspiration for this post is a recent Stack Overflow question I answered, where I found that the selected answer could be improved upon. You can find it here.

I ran into a problem recently when working with Docker and New Relic Server Monitoring together. Using the directions found in the New Relic docs for Ubuntu/Debian, the Dockerfile additions I first came up with looked as follows:

FROM stackbrew/ubuntu:12.04
MAINTAINER John Costa (john.costa@gmail.com)

RUN apt-get update
RUN apt-get -y install wget

# install new relic server monitoring
RUN echo deb http://apt.newrelic.com/debian/ newrelic non-free >> /etc/apt/sources.list.d/newrelic.list
RUN wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
RUN apt-get update
RUN apt-get -y install newrelic-sysmond
RUN nrsysmond-config --set license_key=YOUR_LICENSE_KEY

CMD ["/etc/init.d/newrelic-sysmond", "start"]

This results in an error when trying to wget from download.newrelic.com:

--2014-02-21 23:19:33--  https://download.newrelic.com/548C16BF.gpg
Resolving download.newrelic.com (download.newrelic.com)... 50.31.164.159
Connecting to download.newrelic.com (download.newrelic.com)|50.31.164.159|:443... connected.
ERROR: cannot verify download.newrelic.com's certificate, issued by `/C=US/O=GeoTrust, Inc./CN=GeoTrust SSL CA':
  Unable to locally verify the issuer's authority.
To connect to download.newrelic.com insecurely, use `--no-check-certificate'.
gpg: no valid OpenPGP data found.

The error suggests a tempting workaround, especially because it works: adding --no-check-certificate to your wget command. This avoids the error, but it also bypasses the protection that SSL provides.

The fix is straightforward, but not obvious if you’re not familiar with apt. By installing the ca-certificates package as part of your Dockerfile, you can use wget and still validate the certificate.

The following is a working sample:

FROM stackbrew/ubuntu:12.04
MAINTAINER John Costa (john.costa@gmail.com)

RUN apt-get update
RUN apt-get -y install ca-certificates wget  # <-- updated line

# install new relic server monitoring
RUN echo deb http://apt.newrelic.com/debian/ newrelic non-free >> /etc/apt/sources.list.d/newrelic.list
RUN wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
RUN apt-get update
RUN apt-get -y install newrelic-sysmond
RUN nrsysmond-config --set license_key=YOUR_LICENSE_KEY

CMD ["/etc/init.d/newrelic-sysmond", "start"]

Some caveats:

  • This container is really short-lived and will exit almost immediately. The example is for illustrative use.

  • Don’t forget to put your actual license key in place of “YOUR_LICENSE_KEY” or else you’ll get the following error: Error: invalid license key - must be 40 characters exactly

  • This is a working example, but I realize that most won’t want to use the single /etc/init.d/newrelic-sysmond start command to run their container. You’ll most likely have some sort of init.sh script and can place this command there.

  • You might not want to install the server monitoring in your development environments. To work around this, in the same init.sh script above, you could check for an environment variable that you inject when the container is first started. Your init file might look as follows (including the start command):

# Conditionally install our key only in production and staging
if [ "${MY_ENV}" == "production" ] || [ "${MY_ENV}" == "staging" ] ; then
    nrsysmond-config --set license_key=YOUR_LICENSE_KEY
fi

# The New Relic daemon likes to manage itself. Start it here.
/etc/init.d/newrelic-sysmond start
by John M Costa, III

Django Projects to Django Apps: Converting the Unit Tests

Recently I went through the process of breaking a large Django project into smaller installable applications. Each smaller component could be reused from within any number of Django projects, but wasn’t a Django project itself. One of the issues I encountered was “What do I do with the unit tests?” Using the standard ./manage.py test no longer worked for me because my settings were in the master project.

I had heard of py.test, so this seemed like an opportunity to see if some of the py.test magic would work for me. Admittedly, I didn’t do a large amount of searching around for additional testing frameworks or processes…this was an excuse to try out the project. :)

Installation

Installing py.test is easy. Because I wanted some additional features (the DJANGO_SETTINGS_MODULE environment variable, specifically), I opted for the pytest-django package instead of the base pytest project.

pip install pytest-django

Configuration

To get my unit tests running, I needed to add a few additional things:

  • a test settings file
  • a conftest.py file
  • a pytest.ini file
  • a small amount of test package cleanup

test settings file

I created a very light settings file containing only my database configuration:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

conftest.py

This was required to fix an issue with my settings file location.

import os
import sys

sys.path.append(os.path.dirname(__file__))

pytest.ini file

As a convenience, instead of passing parameters on the command line each time, py.test uses a pytest.ini file to pass these arguments to the test runner.

[pytest]
DJANGO_SETTINGS_MODULE = tests.pytest_settings

test package cleanup

py.test has smarter test resolution. To take advantage of these features, I did the following:

  • Removed statements like from mytests import * from the __init__.py files
  • Changed the names of my tests to match the test* format
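With those changes in place, running the suite is just a matter of invoking the test runner from the package root (assuming the pytest.ini above sits in that directory, so the settings module is picked up automatically):

py.test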

Wrap-up

Hopefully this post helps future me and others to quickly get up and running with py.test and pytest-django.

by John M Costa, III

Installing Redis on Docker

I’m currently employed by dotCloud and had an opportunity to play around with our open-sourced Linux container runtime project called Docker.

You’ll need a functional version of Docker to follow these steps. I’ve included an overview of my installation notes; however, you can find additional installation instructions at the Docker website.

Introduction to Docker

If you’ve already worked with Docker, you can skip this part; you already have Docker installed and are probably running your own containers. If you haven’t, here’s a general overview of a handful of Docker commands. Please read on.

I’m working on a MacBook Air, so I ran through the macOS instructions, which are repeated below. They require that you have VirtualBox and Vagrant already installed. If you don’t have these, you can find the getting-started docs here.

First clone the repo and cd into the cloned repository:

$ git clone https://github.com/dotcloud/docker.git && cd docker

Now, a quick vagrant up and vagrant ssh and I was already issuing docker commands.

Also note: I’ve intentionally left out the vagrant output as there’s nothing too important there. It took about 1 minute to complete.

$ vagrant up
$ vagrant ssh

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker ps
ID          IMAGE       COMMAND     CREATED     STATUS      COMMENT

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker images
REPOSITORY          TAG                 ID                  CREATED             PARENT

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker version
Version:0.1.2
Git Commit:

So far so good! Now let’s run a shell within a Docker container.

docker run -i -t base /bin/bash
Image base not found, trying to pull it from registry.
Pulling repository base
Pulling tag base:ubuntu-quantl
Pulling b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc metadata
Pulling b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc fs layer
10240/10240 (100%)
Pulling 27cf784147099545 metadata
Pulling 27cf784147099545 fs layer
94863360/94863360 (100%)
Pulling tag base:latest
Pulling tag base:ubuntu-12.10
Pulling tag base:ubuntu-quantal

So, what have we done here? We’ve called run, which runs our command in a new container, and passed a few Docker-specific parameters: -i to keep stdin open, and -t to allocate a pseudo-tty. Finally, the command we’re running is /bin/bash, which gives us a bash shell.

An interesting side effect is that we now have the base image stored locally. We can see this when we run docker images.

$ docker images
REPOSITORY          TAG                 ID                  CREATED             PARENT
base                latest              b750fe79269d        12 days ago         27cf78414709
base                ubuntu-12.10        b750fe79269d        12 days ago         27cf78414709
base                ubuntu-quantal      b750fe79269d        12 days ago         27cf78414709
base                ubuntu-quantl       b750fe79269d        12 days ago         27cf78414709
<none>              <none>              27cf78414709        12 days ago

Lastly, let’s exit out of our Docker container, and you should see the following:

SIGINT received

Let’s check the status of our Docker container:

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker ps
ID             IMAGE         COMMAND      CREATED          STATUS          COMMENT
9468f9c097f7   base:latest   /bin/bash    25 minutes ago   Up 25 minutes

It looks like it’s still running… OK, let’s stop it:

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker stop 9468f9c097f7
9468f9c097f7

Let’s make sure that it’s really gone:

$ docker ps
ID          IMAGE       COMMAND     CREATED     STATUS      COMMENT

Installing and running Redis within a Docker container

Now that we have a notion of what’s going on with Docker commands and installation, let’s start loading a container up with the tools we’ll need to run a Redis server.

Start up a new container using the base image.

$ docker run -i -t base /bin/bash
root@b9859484e68f:/#

Let’s update our system packages from what’s included in our base image:

root@b9859484e68f:/# apt-get update
Ign http://archive.ubuntu.com quantal InRelease
Hit http://archive.ubuntu.com quantal Release.gpg
Hit http://archive.ubuntu.com quantal Release
Hit http://archive.ubuntu.com quantal/main amd64 Packages
Get:1 http://archive.ubuntu.com quantal/universe amd64 Packages [5274 kB]
Get:2 http://archive.ubuntu.com quantal/multiverse amd64 Packages [131 kB]
Get:3 http://archive.ubuntu.com quantal/main Translation-en [660 kB]
Get:4 http://archive.ubuntu.com quantal/multiverse Translation-en [100 kB]
Get:5 http://archive.ubuntu.com quantal/universe Translation-en [3648 kB]
Fetched 9813 kB in 17s (557 kB/s)
Reading package lists... Done
root@b9859484e68f:/#

Now install telnet and our redis-server:

root@b9859484e68f:/# apt-get install telnet redis-server
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
  libidn11 libjemalloc1
The following NEW packages will be installed:
  libidn11 libjemalloc1 redis-server telnet wget
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 784 kB of archives.
After this operation, 1968 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ quantal/main libidn11 amd64 1.25-2 [119 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ quantal/universe libjemalloc1 amd64 3.0.0-3 [85.9 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ quantal/main telnet amd64 0.17-36build2 [67.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu/ quantal/main wget amd64 1.13.4-3ubuntu1 [280 kB]
Get:5 http://archive.ubuntu.com/ubuntu/ quantal/universe redis-server amd64 2:2.4.15-1 [233 kB]
Fetched 784 kB in 2s (334 kB/s)
dpkg-preconfigure: unable to re-open stdin: No such file or directory
Selecting previously unselected package libidn11:amd64.
(Reading database ... 9893 files and directories currently installed.)
Unpacking libidn11:amd64 (from .../libidn11_1.25-2_amd64.deb) ...
Selecting previously unselected package libjemalloc1.
Unpacking libjemalloc1 (from .../libjemalloc1_3.0.0-3_amd64.deb) ...
Selecting previously unselected package telnet.
Unpacking telnet (from .../telnet_0.17-36build2_amd64.deb) ...
Selecting previously unselected package wget.
Unpacking wget (from .../wget_1.13.4-3ubuntu1_amd64.deb) ...
Selecting previously unselected package redis-server.
Unpacking redis-server (from .../redis-server_2%3a2.4.15-1_amd64.deb) ...
Processing triggers for ureadahead ...
Setting up libidn11:amd64 (1.25-2) ...
Setting up libjemalloc1 (3.0.0-3) ...
Setting up telnet (0.17-36build2) ...
update-alternatives: using /usr/bin/telnet.netkit to provide /usr/bin/telnet (telnet) in auto mode
Setting up wget (1.13.4-3ubuntu1) ...
Setting up redis-server (2:2.4.15-1) ...
Starting redis-server: redis-server.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Processing triggers for ureadahead ...

A quick double-check to make sure that our redis-server is now installed:

root@b9859484e68f:/# which redis-server
/usr/bin/redis-server

root@b9859484e68f:/# redis-server --version
Redis server version 2.4.15 (00000000:0)

The image we’ve created is clean, and I want to keep it before going much further, so I’m going to open another terminal and get back to our Vagrant VM. Here we can commit and store the filesystem state.

$ docker login
Username (): johncosta
Password:
Email (): john.costa@gmail.com
Login Succeeded

$ docker commit b9859484e68f johncosta/redis

$ docker push johncosta/redis
Pushing repository johncosta/redis (1 tags)
Pushing tag johncosta/redis:latest
Pushing 3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4 metadata
Pushing 3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4 fs layer
21975040/21975040 (100%)
Pushing b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc metadata
Pushing 27cf784147099545 metadata
Registering tag johncosta/redis:latest

Update:

I forgot to capture the commit command when grabbing the terminal output. Not to worry! I was able to ssh into my Vagrant VM and check the dockerd logs using: vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ less /var/log/dockerd


Let’s check what we’ve done and exit out of our container:

root@b9859484e68f:/# exit
exit

It looks like our Docker container isn’t running anymore!

$ docker ps
ID          IMAGE       COMMAND     CREATED     STATUS      COMMENT

Let’s start up a new container, but this time use the image we just created and committed, johncosta/redis, as the image to run.

docker run -i -t johncosta/redis /bin/bash

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker run -i -t johncosta/redis /bin/bash


root@61507c28cd67:/# /etc/init.d/redis-server start
Starting redis-server: redis-server.
root@61507c28cd67:/# ps faux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.5  18068  2016 ?        S    15:58   0:00 /bin/bash
redis       14  0.0  0.4  36624  1656 ?        Ssl  16:01   0:00 /usr/bin/redis-
root        17  0.0  0.3  15524  1108 ?        R    16:01   0:00 ps faux

Let’s make sure we can connect to it and interact with Redis (remember, we installed telnet!).

root@61507c28cd67:/# telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
+OK
+1365177842.076283 "MONITOR"
+OK
+1365177870.147208 "set" "docker" "awesome"
$7
awesome
+1365177874.927280 "get" "docker"
+OK
Connection closed by foreign host.

OK, we know we can connect to Redis and interact with it. But we’re inside the container; let’s see how to connect to it from outside the container! First, let’s inspect our container:

$ docker inspect 61507c28cd67
{
   "Id": "61507c28cd673ea4464248a8c2b936807bf951d6dc82d0f872b02586c5681139",
   "Created": "2013-04-05T08:58:33.711054-07:00",
   "Path": "/bin/bash",
   "Args": [],
   "Config": {
       "Hostname": "61507c28cd67",
       "User": "",
       "Memory": 0,
       "MemorySwap": 0,
       "AttachStdin": true,
       "AttachStdout": true,
       "AttachStderr": true,
       "Ports": null,
       "Tty": true,
       "OpenStdin": true,
       "StdinOnce": true,
       "Env": null,
       "Cmd": [
           "/bin/bash"
       ],
       "Image": "johncosta/redis"
   },
   "State": {
       "Running": true,
       "Pid": 6052,
       "ExitCode": 0,
       "StartedAt": "2013-04-05T09:09:19.733633-07:00"
   },
   "Image": "3e7b84670ea1c7d4b5df8095a3f2051ac2fb4e34fed101d553ad919c4bd923e4",
   "NetworkSettings": {
       "IpAddress": "10.0.3.8",
       "IpPrefixLen": 24,
       "Gateway": "10.0.3.1",
       "PortMapping": {}
   },
   "SysInitPath": "/opt/go/bin/docker"
}

Hmm, it looks like we don’t have a port we can connect to. Looking at the run command, there’s something we missed: the -p option, which maps a network port to the container. Let’s try again with the following:

docker run -p 6379 -i -t johncosta/redis /usr/bin/redis-server

Much better: we can now see that we’ve allocated port 6379 and mapped it to the external port 49153.

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker ps
ID             IMAGE                    COMMAND      CREATED         STATUS         COMMENT
0be92ce8581e   johncosta/redis:latest   /bin/bash    3 minutes ago   Up 3 minutes
vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker port 0be92ce8581e 6379
49153

Note: We don’t need to inspect the container and parse the entire container information set to get the mapped port. We can use the convenience command docker port.

OK! We’re almost there. Now terminate that container and run a new command to start our Redis server within Docker in daemon mode. Test the results with a telnet session and a redis-cli session external to the Docker container.

docker run -d -p 6379 -i -t johncosta/redis /usr/bin/redis-server

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker ps
ID             IMAGE                    COMMAND                CREATED         STATUS         COMMENT
c0f7e48cafcf   johncosta/redis:latest   /usr/bin/redis-serve   4 minutes ago   Up 4 minutes
vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ docker port c0f7e48cafcf 6379
49174

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ telnet 10.0.3.30 49174
Trying 10.0.3.30...
Connected to 10.0.3.30.
Escape character is '^]'.
monitor
+OK
+1365194060.897490 "monitor"
set docker awesome
+OK
+1365194071.640199 "set" "docker" "awesome"
get docker
$7
awesome
+1365194073.519484 "get" "docker"
quit
+OK
Connection closed by foreign host.

vagrant@vagrant-ubuntu-12:/opt/go/src/github.com/dotcloud/docker$ redis-cli -h 10.0.3.30 -p 49174
redis 10.0.3.30:49174> get docker
"awesome"

Update 5/6/2013:

It’s now possible to save images with their configuration options! I added one additional commit to do this:

docker commit -run '{"Cmd": ["/usr/bin/redis-server"], "PortSpecs": [":6379"]}' b9859484e68f johncosta/redis

Now to run an image it’s as easy as:

Get the image: docker pull johncosta/redis

Run the image: docker run johncosta/redis

Run in daemon mode: docker run -d johncosta/redis

Also, check out the Docker index.


la fin

by John M Costa, III

Django view decorators

I recently worked on a project that required a standard account and profile system. django-userena is usually my go-to project for this due to its ease of setup and its extensibility. There’s a subtle nuance to using this project’s default URL patterns: the majority of them require passing the user’s username in the URL. The username is then used in the view to look up the user, since usernames are unique.

For this particular project, I wanted to hide the username from the URL path, and came up with the following decorator that would allow us to use all the existing functionality of django-userena.

from functools import wraps

from django.core.urlresolvers import reverse
from django.conf import settings
from django.http import HttpResponseRedirect
from django.utils.decorators import available_attrs

LOGIN_URL = getattr(settings, 'LOGIN_URL')


def user_to_view(view_func):
    """ This view decorator is used to wrap views that require a user name,
    injecting the username, pulled from the request, into the view.
    """
    def _wrapped_view(request, *args, **kwargs):
        # Anonymous users have no username to inject, so send them to login.
        if not request.user.is_authenticated():
            return HttpResponseRedirect(reverse(LOGIN_URL))
        username = request.user.username
        kwargs.update(dict(username=username))
        return view_func(request, *args, **kwargs)
    return wraps(view_func, assigned=available_attrs(view_func))(_wrapped_view)

Now, for each URL pattern you want to modify, redefine it in your urls.py file, wrapping the view you’re looking to modify.

urlpatterns += patterns('',
    url(r'^edit/$', user_to_view(userena_views.profile_edit),
        {'edit_profile_form': ProfileFormExtra}, name='userena_profile_edit'),)
by John M Costa, III

Converting my blog to Octopress

Recently I started looking into migrating my blog to something that would be a little easier to maintain. My Django-powered blog was nice, but there were a lot of moving parts and a lot of resource overhead (Apache, MySQL, Django, etc.). I enjoy exploring new technologies, so I started looking into static site generators.

What I was looking for:

  • Easy to use and learn
  • Straightforward development to live process
  • Somewhat customizable

A quick Google search for static site generators pulls up quite a few. It even turns up a [GitHub][1] repo from one contributor who maintains a [list][0] of them. I wasn’t really sure where to start, and my acceptance criteria weren’t super restrictive, so I picked the first one that [seemed interesting][2]. This happened to be [Octopress][3].

Some of the features Octopress touts:

  • A [semantic HTML5][4] template
  • A Mobile first [responsive layout][5]
  • Built in 3rd party support for Twitter, Google Plus One, Disqus Comments, Pinboard, Delicious, and Google Analytics
  • An easy deployment strategy
  • Built in support for POW and Rack servers
  • Easy theming with [Compass][6] and [Sass][7]
  • A Beautiful Solarized syntax highlighting

“Octopress is a blogging framework for hackers.”

It was incredibly straight forward to get Octopress up and running. The [setup documentation][8] was easy to find and follow.

I didn’t have the mentioned version of Ruby installed, so I followed the instructions for the [RVM installation][9].

I then followed the instructions to clone Octopress, installed the dependencies and install the default theme.

git clone git://github.com/imathis/octopress.git octopress
cd octopress
gem install bundler
bundle install
rake install

I now had everything in place for a bare-bones, uncustomized Octopress blog. Just to make sure things were working, I then tried the local development server:

rake generate
rake preview

Hit the local URL (localhost:4000) in the browser and there it was!

[Screenshot: the bare-bones default Octopress site]

Customization

So, I mentioned that I wanted some ability to configure the blog, meaning a few bells and whistles (like social links, Disqus comments, and some customized CSS).

Just like setting up the framework, customizations are also super easy. One can find the out-of-the-box configuration points within the _config.yml file.

The URL, title, and subtitle are the first things to configure at the top of the file.

url: http://yoursite.com
title: My Octopress Blog
subtitle: A blogging framework for hackers.
author: Your Name
simple_search: http://google.com/search
description:

Plugin configurations are next. You can change the structure of how the links are constructed, pagination, etc. Also, anything listed in the sidebar can be modified by changing the list of included files in the default_asides setting.

# If publishing to a subdirectory as in http://site.com/project set 'root: /project'
root: /
permalink: /blog/:year/:month/:day/:title/
source: source
destination: public
plugins: plugins
code_dir: downloads/code
category_dir: blog/categories
markdown: rdiscount
pygments: false # default python pygments have been replaced by pygments.rb

paginate: 10          # Posts per page on the blog index
pagination_dir: blog  # Directory base for pagination URLs eg. /blog/page/2/
recent_posts: 5       # Posts in the sidebar Recent Posts section
excerpt_link: "Read on &rarr;"  # "Continue reading" link text at the bottom of excerpted articles

titlecase: true       # Converts page and post titles to titlecase

# list each of the sidebar modules you want to include, in the order you want them to appear.
# To add custom asides, create files in /source/_includes/custom/asides/ and add them to the list like 'custom/asides/custom_aside_name.html'
default_asides: [asides/recent_posts.html, asides/github.html, asides/twitter.html, asides/delicious.html, asides/pinboard.html, asides/googleplus.html]

# Each layout uses the default asides, but they can have their own asides instead. Simply uncomment the lines below
# and add an array with the asides you want to use.
# blog_index_asides:
# post_asides:
# page_asides:

Any fun widgets, like GitHub repos or social links, are configured from within the 3rd-party plugin section.

# Github repositories
github_user:
github_repo_count: 0
github_show_profile_link: true
github_skip_forks: true

# Twitter
twitter_user:
twitter_tweet_count: 4
twitter_show_replies: false
twitter_follow_button: true
twitter_show_follower_count: false
twitter_tweet_button: true

# Google +1
google_plus_one: false
google_plus_one_size: medium

# Google Plus Profile
# Hidden: No visible button, just add author information to search results
googleplus_user:
googleplus_hidden: false

# Pinboard
pinboard_user:
pinboard_count: 3

# Delicious
delicious_user:
delicious_count: 3

# Disqus Comments
disqus_short_name:
disqus_show_comment_count: false

# Google Analytics
google_analytics_tracking_id:

# Facebook Like
facebook_like: false

Lastly, I modified some CSS to personalize the look and feel. I changed the background color and added an image (I’m not a designer, so this is always magic to me :). It was straightforward to add a custom.css stylesheet to the source/stylesheets directory and then link to it in the source/_includes/custom/header.html file.
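The link itself is just an extra stylesheet tag in that custom header file; something like the following, assuming the stylesheet is served at /stylesheets/custom.css after generation:

<link href="/stylesheets/custom.css" media="screen, projection" rel="stylesheet" type="text/css">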

All the changes can be viewed in my personal fork of octopress: [https://github.com/johncosta/octopress][10]

Porting The Existing Data

I didn’t have a lot of blog entries, so I manually moved all my data over. I had articles written in HTML and reStructuredText, so most of it ported over almost directly. I made a few adjustments to make sure that the URLs matched the existing URLs so that any links carried over. I’m sure I could have written a script to extract the data and format it into a post file, but this was just as easy.

rake new_post['the title of the article']

Then it was a matter of cutting and pasting in the previous text and changing the name of the post file to match the URL the article was previously hosted at.

Rinse and repeat.

[0]: https://github.com/jaspervdj/static-site-generator-comparison
[1]: https://github.com
[2]: http://siliconangle.com/blog/2012/03/20/5-minimalist-static-html-blog-generators-to-check-out/
[3]: http://octopress.org/
[4]: http://en.wikipedia.org/wiki/Semantic_HTML
[5]: http://en.wikipedia.org/wiki/Responsive_web_design
[6]: http://www.webresourcesdepot.com/compass-a-powerful-stylesheet-framework/
[7]: http://sass-lang.com/
[8]: http://octopress.org/docs/setup/
[9]: http://octopress.org/docs/setup/rvm/
[10]: https://github.com/johncosta/octopress

by John M Costa, III

Presentation Notes from CashStar Developer Sprint

It’s tough to talk about documentation:

  • Can seem overly judgmental
  • Boring
  • We already know how to do it
  • We never have time to do it

Why choose a sprint on ReadTheDocs and documentation?

  • I want to learn best documentation practice (or really just better practice)
  • Explore how to make it easier

Overview:

  • Consider why we document
  • Consider where we put that documentation
  • Introduce the team to Sphinx
  • Introduce the team to reStructuredText
  • Introduce the team to CashStar’s ReadTheDocs server

Why do you document code

It's a simple question... though it doesn't appear to have a simple answer. Through scouring various resources, I found numerous lists of reasons why to document, how to document, where in your code to document, how to get people to document... and so on. There are quite a few lists detailing all these things; here are some of my favorites:

  • Not all code is obvious; complex algorithms are not readable by everyone
  • Finding out details takes a long time, and that is a waste of business money
  • When you understand the function of each component, you can answer business questions
  • Not all developers have the same IQ; you want everyone to get it, not only smart John
  • You’re asked to change or update a piece of code that you wrote six months ago. You didn’t comment your code, and now you can’t remember why the heck you did what you did!
  • Don’t put yourself or anyone else in the position of having to guess how a piece of code works.

Other lists (some of the items above are from these):

  • http://programmers.stackexchange.com/questions/121775/why-should-you-document-code
  • http://programmers.stackexchange.com/questions/10857/should-you-document-everything-or-just-most

What does this boil down to?

  • comment your code to make other people’s lives easier
  • comment your code to make your life easier

My belief is in value

This Slashdot thread has a lot of interesting points about getting developers to document code: the how and why.

I think Tom (822) hits the nail on the head:

Who is it valuable to?

  It's an investment into the future. If you need to pick this project up again one, two or five years down the road, and do any non-trivial changes to it, good (and that means correct, short and to the point, not extensive and theoretical) documentation will save you valuable time.

If it’s throwaway code, don’t waste time and effort on documentation. If you plan to use it for some time, chances are very high it will need fixes, updates and changes, and documentation will make those a lot easier, faster and cheaper.

  Decisions are made in the present, and if resources are tight in the present, things of potential value in the future are discounted further.

Why do we document code?

I think this answer is simple:

We document code to create additional value for ourselves, our peers, and effectively the company or project we are working for.

How do you document code?

What does typical code documentation look like?

Below is a bit of sample code that could use a little bit of work. Some of the code has been snipped for brevity so that we can focus on the method at a higher level.
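The code block below is a hypothetical stand-in for the original sample (the names and logic are illustrative only): an undocumented method with opaque inputs that does several things at once.

def process(objects, flag):
    results = []
    for obj in objects:
        if flag:
            obj.total = sum(item.price for item in obj.items)
            if obj.total > 100:
                obj.discount = 0.1
        results.append((obj.id, obj.total * (1 - getattr(obj, 'discount', 0))))
    return results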

What could we improve here?

  1. We don't know what's being passed in for objects.
  2. What is the intention of the method?
  3. There's a lot going on in this method, can it be simplified?

Our sample... but reworked (somewhat):
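Again as a hypothetical stand-in, here is the same logic reworked: the intent is named, the inputs are documented, and the pricing rule is split out into its own function.

def apply_bulk_discount(order):
    """Give a 10% discount to orders totaling more than $100."""
    order.total = sum(item.price for item in order.items)
    order.discount = 0.1 if order.total > 100 else 0.0


def amounts_due(orders, apply_discounts=False):
    """Return (order id, amount due) pairs for the given orders.

    ``orders`` is an iterable of order-like objects with ``id``, ``items``,
    and ``total`` attributes. When ``apply_discounts`` is True, the bulk
    discount is recalculated before the amount due is computed.
    """
    results = []
    for order in orders:
        if apply_discounts:
            apply_bulk_discount(order)
        due = order.total * (1 - getattr(order, 'discount', 0))
        results.append((order.id, due))
    return results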

Other improvements to consider

  1. Further refactor into even smaller bits of code
  2. Unit tests documenting the use of the functions

Additional references

  1. StackOverflow (Mil, moonshadow): http://stackoverflow.com/questions/167498/what-is-less-annoying-no-source-code-documentation-or-bad-code-documentation
  2. The Art of Code Documentation (Drew Sikora): http://www.gamedev.net/page/resources/_/technical/general-programming/the-art-of-code-documentation-r1218
  3. CodeAsDocumentation (martinfowler): http://martinfowler.com/bliki/CodeAsDocumentation.html
  4. Golden rule of documenting code (Jeff Davis): http://www.techrepublic.com/article/the-golden-rule-of-documenting-code/1032951
  5. How not to write python code: http://eikke.com/how-not-to-write-python-code
by John M Costa, III

Configuring an internal ReadTheDocs

Project Overview

  • ReadTheDocs application to serve project documentation
  • Simple and Straightforward, minimal overhead
  • Modified to point to our domain, not readthedocs
  • Restricted Public Access

Technology Overview

ReadTheDocs comes with the following technology stack:

  • Varnish
  • Nginx
  • gunicorn
  • postgres
  • python/django
  • solr (haystack search)
  • Chef

In an effort to align with technologies I have some experience with, I modified the technology stack slightly; it’s now as follows:

  • supervisor
  • gunicorn
  • memcached
  • nginx
  • python/django
  • mysql
  • whoosh (haystack search)
  • fabric

Key Functionality Overview

  • Built and versioned documentation (http://50.57.69.212/)
  • Search

Setup Steps

Provision a server:

  • Provision an Ubuntu 11.10 instance (I used Rackspace; other versions have not been tested)

Clone and setup the project locally:

  • git clone git@github.com:johncosta/readthedocs.org.git
  • mkvirtualenv --distribute readthedocs
  • pip install -r pip_requirements.txt
  • modify the fabfile-ubuntu.py file by changing the server IP and root password to the values returned by your instance provisioner
  • run fab -f fabfile-ubuntu.py stage_rtd

Post Installation Steps:

  • Try http://50.57.69.212/
  • Change the MySQL root password!!
  • Change the test user password!!
  • Configure IP Tables to be as restrictive as you need
  • Enable email via django settings
  • Upload a test project (test/test)
  • Modify the nginx settings to support (project name).domain.com subdomains (see the sketch below): http://50.57.69.212/docs/readthedocsexample/en/latest/py-modindex.html
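A minimal sketch of that nginx change, assuming a wildcard DNS entry and a filesystem layout where each project's built docs live under a per-project directory (both the server_name pattern and the root path here are placeholders, not the actual ReadTheDocs layout):

server {
    listen 80;
    # Capture the project name from the subdomain.
    server_name ~^(?<project>.+)\.domain\.com$;

    location / {
        root /var/www/docs/$project/en/latest;
        index index.html;
    }
}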

Some Gotchas

  • If builds fail, information on why they fail is sparse
by John M Costa, III

My Notes On Uploading a Package to PyPI

These are my notes for uploading to PyPI. Additionally, I've included some useful links that provide a lot of background.

http://diveintopython3.ep.io/packaging.html

http://wiki.python.org/moin/CheeseShopTutorial

http://packages.python.org/an_example_pypi_project/setuptools.html

  1. Register at PyPI

     You can do so here: Register at PyPI

  2. Create a .pypirc file in your home directory

        vi .pypirc

        [distutils]
        index-servers = pypi

        [pypi]
        username: < username >
        password: < password >

  3. Upload your package to PyPI

        cd < package root >
        python setup.py register sdist upload