Categories
GNU/Linux Free Software & Open Source Programming & Web Development

My Python & Django deployment workflow and tools

Yellow python

If you’re new to Python web development with Django, there are some things that tutorials don’t teach. Deploying to a server when you finish your local development can be a frustrating task. Here is a rundown of the tools and workflow I use for deploying a Django based website.

A few days ago, when I launched the new version of Notasbit.com, I ended up having a discussion with a friend about deployment methods for Ruby on Rails and Django websites. My friend is used to Rails development and deployment, something I haven't been familiar with since Rails 1.8 (a looong time ago by now), and he insisted that the Ruby Gems method is way easier while Python deployment is a hassle. Yes, for people not familiar with the right tools it can be. Different Python versions and library dependencies can make maintaining a Django website a nightmare.

Here is what I use for deploying Django websites on a live server:

The tools

Git

You cannot be serious about programming these days if you’re not using a version control system. In Linus Torvalds’ words: “if you’re not using Git, you’re an idiot”. I keep all my projects under version control, even if I’m the only developer and even when the code is for personal use only. Not only is it a good practice to keep, it also works as your backup. When my computers were stolen, I didn’t lose any project data because it was all backed up in several Git repositories at different places. For deployment tasks, you need to install Git on your server. The server you’re deploying to will serve as a backup as well.

Virtualenv

All that version and dependency hell can be isolated using Python’s virtualenv tool. It creates an encapsulated environment with a given Python version and all the libraries you want to use in a project, without messing with the system’s main versions. This way you can have an old Django 1.4 project running on the same server as another project running a later Django 1.8 or 1.9, each with its own dependencies.
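For example, creating and using an isolated environment for a project looks roughly like this (just a sketch; the env name, Django version and paths are placeholders matching the layout shown below):

cd mainproject.com/
virtualenv env                                 # create the isolated environment
source env/bin/activate                        # use it for the current shell session
pip install "Django==1.8"                      # install the exact versions this project needs
pip freeze > django_project/requirements.txt   # record them for deployment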

Fabric

The only reason I still haven’t started using Python 3 on my projects is Fabric. This tool automates all the project maintenance tasks you need, and that includes the deployment itself. It is similar to a Makefile, but you write your tasks in Python, so you don’t need to learn a whole new syntax.
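Just to give an idea before the full example further down, a Fabric task is simply a Python function in a fabfile.py (the task name here is made up):

# fabfile.py -- minimal sketch
from fabric.api import local

def hello():
    """Trivial example task; run it with: fab hello"""
    local("echo 'Hello from Fabric'")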

Now to the implementation details…

On your local development machine the project structure will look like this:

The idea is to have the virtualenv outside the version-controlled directory. You can also use virtualenvwrapper and keep all your virtualenvs in a separate location; for now I like to have each one contained in the same folder as its project.

mainproject.com/
 |
 +-- env/ (virtualenv)
 |
 +-- django_project/
     |
     +-- manage.py
     +-- .git/
     +-- fabfile.py
     +-- deploy_tools/
     +-- (all other Django project files & dirs)

Deployment scripts

In the deploy_tools directory we’ll place the files needed to configure Apache, Nginx, Gunicorn, uWSGI or whatever other server configuration scripts the deployment requires.

Here’s an example of the Nginx configuration file I use:

upstream mywebsite {  #the upstream component nginx needs to connect to
    server unix:///tmp/mywebsite.sock; # for a file socket
}

server {    
    listen      80;   # the port your site will be served on
    server_name www.mywebsite.com;   # the domain name it will serve for
    charset     utf-8;

    client_max_body_size 250M;   # max upload size, adjust to taste

    location /static {
        alias /var/www/mywebsite/static; # your Django project's static files - amend as required
    }

    location / {     # Finally, send all non-media requests to the Django server.
        uwsgi_pass  mywebsite;
        uwsgi_param QUERY_STRING $query_string;
        uwsgi_param REQUEST_METHOD $request_method;
        uwsgi_param CONTENT_TYPE $content_type;
        uwsgi_param CONTENT_LENGTH $content_length;

        uwsgi_param REQUEST_URI $request_uri;
        uwsgi_param PATH_INFO $document_uri;
        uwsgi_param DOCUMENT_ROOT $document_root;
        uwsgi_param SERVER_PROTOCOL $server_protocol;
        uwsgi_param HTTPS $https if_not_empty;

        uwsgi_param REMOTE_ADDR $remote_addr;
        uwsgi_param REMOTE_PORT $remote_port;
        uwsgi_param SERVER_PORT $server_port;
        uwsgi_param SERVER_NAME $server_name;
    }
}

Settings handling

There are many ways to solve the problem of settings file management in Django applications. I like to have a general settings file plus a local one for each environment. To achieve this, add a local_settings.py file and add it to the Git ignore list.

Then add the following at the end of your general settings.py file:

# Import local settings
try:
    from local_settings import *
except ImportError:
    pass

On your remote server, create a local_settings.py file and add DEBUG = False or override any other setting you need specifically for that server. You can have different settings for local, staging, testing, production or any other server you need.
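For instance, a minimal local_settings.py for a production server could look something like this (the values are only examples, reusing the placeholder names from the snippets in this post):

# local_settings.py -- kept out of version control
DEBUG = False
ALLOWED_HOSTS = ['www.mywebsite.com']

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'MY-DATABASE-NAME',
        'USER': 'MY-DB-USER',
        'PASSWORD': 'MY-SERVER-PASSWORD',
        'HOST': 'localhost',
        'PORT': '',
    }
}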

Fabric tasks

For example, you might need to download a copy of your production database to use for development tests. Instead of typing the same mysqldump command every time, you can automate it like this:

def backup_db():
    """
    Gets a database dump from the remote server.
    (Relies on os, time and the fabric.api imports shown in the
    fabfile example below.)
    """
    production1()  # a task that points env.hosts at the production server
    date = time.strftime('%Y-%m-%d-%H%M%S')
    dbname = 'MY-DATABASE-NAME'
    path = os.path.join(os.path.dirname(__file__), 'db_backups')
    fname = "{dbname}_backup_{date}.sql.gz".format(date=date, dbname=dbname)

    run("mysqldump -u {dbuser} -p'{password}' --add-drop-table -B {database} | gzip -9 > {filename}".format(
        database=dbname,
        dbuser='MY-DB-USER',
        password='MY-SERVER-PASSWORD',
        filename=os.path.join('/tmp', fname))
    )
    get(remote_path=os.path.join('/tmp', fname),
        local_path=os.path.join(path, fname))
    run("rm {filename}".format(filename=os.path.join('/tmp', fname)))

Likewise, you can create a deploy command to get everything updated on the server. Here’s a fabfile.py example:

from fabric.api import *
from fabric.colors import green, red
import os
import sys
import time
from fabric.contrib import django
import datetime

sys.path.append(os.path.join(os.path.dirname(__file__),
                             'mysite'))

django.settings_module('mysite.settings')
from django.conf import settings

# Hosts
production = '[email protected]'

# Branch to pull from
env.branch = 'master'

@hosts(production)
def deploy():
    """Uploads files and runs deployment actions to a given server"""
    # path to the directory on the server where your vhost is set up
    path = "/var/www/mysite"
    # name of the application process
    process = "uwsgi"

    print green("Beginning Deploy:")
    with cd(path):
        run("pwd")
        print green("Pulling master from Git server...")
        run("git pull origin %s" % env.branch)
        # use server's virtualenv for commands
        with prefix("source %s/env/bin/activate" % path):
            print green("Installing requirements...")
            run("pip install -r requirements.txt")
            print green("Collecting static files...")
            run("python mysite/manage.py collectstatic --noinput")
            print green("Migrating the database...")
            run("python mysite/manage.py migrate")
        print green("Restart the uwsgi process")
        sudo("service %s restart" % process)
    print green("DONE!")

With these tools, all you need to do when deploying new code to the server is run one Fabric command:

fab deploy

And you’re done.

I took most of these ideas from the book Test-Driven Development with Django. You can read it online for free to check out more details or the parts I didn’t use in this example.

This is not a perfect solution for every case, but I hope it gives you some ideas for a workflow that fits yours. Share your deployment workflow in the comments, or any suggestions to improve this one.

Categories
Programming & Web Development Tutorials & Tips

Using Git with Subversion repository subdirectory

Git logo

Making a local Git repository interact with a Subversion one has been very useful and is very common on old projects. The way to do it is by using the git svn commands. But sometimes there is one large repository with several projects as subfolders in that repo.

Using the standard git-svn cloning command:

git svn clone http://my_svn_repo_server.com/repository

will check out the whole SVN repository (all subfolders, hence all projects) onto your local machine. This can be very large if the codebase and history are big, and very slow to interact with, since updating your local repository involves getting changes from all the other projects.

To make Git clone only a subdirectory of an SVN repository, use the following:

git svn init http://my_svn_server.com/repository/path/to/directory/of/project
git svn fetch

This way you not only clone just that subdirectory, you also get updates from only that folder, making pulls from and pushes to the central SVN repository faster.
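If you prefer a single step, git svn clone combines the init and the first fetch (same URL as above):

git svn clone http://my_svn_server.com/repository/path/to/directory/of/project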

While I consider it a very bad practice to have one large Subversion repository with several projects inside it as subfolders, I’ve come across such setups several times, and it drove me crazy to have to check out the whole thing. Hope this helps out.

Categories
Emacs GNU/Linux Free Software & Open Source Tutorials & Tips

Post to WordPress blogs with Emacs & Org-mode

Recently I discovered Org2blog, an Emacs mode to write your blog posts locally using org-mode and post them to your WordPress blog in a very fast and easy way.

I’ve written before on how to write your blog posts and publish them using Emacs. Previously, my method of choice was using Weblogger mode. I even wrote some enhancements to it.

The problem I found with this method is that it uses message-mode as its base mode, so you’re basically writing an email. The shortcoming was that whenever I wanted to write links, bold text, or any custom formatting generally done through HTML tags, I had to either type out the HTML or temporarily switch to html-mode. That sometimes gave me problems converting the HTML code into entities, and I ended up with a mess to fix in the WordPress editing textarea.

Org-mode (included in Emacs since about version 22.1), if you haven’t heard about it already, is a very good way to take notes and organize your tasks, among other day-to-day useful things. You also get some basic formatting like bold text, italics and links. Nowadays I find myself typing things into Org files constantly throughout my day, and with its long list of qualities it became a more suitable way for me to write blog posts.

Org2blog provides a way to post your Org files, or a subsection of a file, with a few keystrokes. All you need to do is clone the repository into a directory on your load path:

git clone http://github.com/punchagan/org2blog.git

Then, add this to your .emacs file:

  (setq load-path (cons "~/.emacs.d/org2blog/" load-path))
  (require 'org2blog-autoloads)

Finally, set up your blog(s) settings in your .emacs file:

     (setq org2blog/wp-blog-alist
           '(("wordpress"
              :url "http://username.wordpress.com/xmlrpc.php"
              :username "username"   
              :default-title "Hello World"
              :default-categories ("org2blog" "emacs")
              :tags-as-categories nil)
             ("my-blog"             
              :url "http://username.server.com/xmlrpc.php"
              :username "admin")))

To start writing a new post, you can now use:
M-x org2blog/wp-new-entry

Or, as I do more frequently, post a subtree of an existing Org file using:
M-x org2blog/wp-post-subtree

I hope you enjoy writing and posting your blog posts from within Emacs and Org-mode. I certainly do, and it has turned out to be a very fast way to quickly draft a post and later (even offline) elaborate on the details in a comfortable editing environment. You also get the added benefit of having a local copy (backup) of your blog posts as Org files.

Categories
Emacs GNU/Linux Free Software & Open Source

3 methods on how to backup your Emacs file

Data dump by swanksalot on flickr
The Emacs personalization file (dotemacs) is a very important resource for every Emacs user. Typically found at ~/.emacs, this file contains the Elisp code for all the personalization of Emacs to accommodate each user. It’s so important that it basically represents your Emacs “personality”.

Losing your .emacs file can mean losing a lot of hours of tweaking and personalizing GNU Emacs through a bunch of snippets collected over time. So, being a very valuable asset, having a good method to back it up is a must.

Here are 3 common methods people use to keep their Emacs file safe:

Simple backup

The simplest thing to do is to manually make copies of the file in a different directory, another partition on the same hard drive, an external hard drive, or a USB key. This also works well when you have multiple computers and copy the same .emacs file to each of them. Using rsync to back it up periodically is a good idea, and it can be used to back up all the other Elisp code for the modes you use (typically in ~/.emacs.d/) too.
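For example, a periodic backup can be as simple as this rsync one-liner (the destination path is just an example):

rsync -av ~/.emacs ~/.emacs.d /media/backup-drive/emacs-backup/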

A good option would be to back it up to an online storage service like Drop.io or even Amazon S3.

Version control

The standard and most common way to store your Emacs customizations is to save them in a file named .emacs placed in your home folder. But this is difficult to set up under a version control system, since version control systems track things under directories. It would mean version controlling your whole home folder, which wouldn’t be a bad idea in some cases but in others would be a mess to maintain.

Fortunately there’s another way: at startup, when the typical ~/.emacs file is not found, Emacs also looks for a file called init.el in a hidden folder named .emacs.d/ in your home folder. This way, you can easily set your preferred version control system to track changes in that folder. This has the advantage that any other Emacs modes or code you have can be stored and tracked too, so whenever you do a clean install, your Emacs setup and modes are just a checkout away.
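Putting the whole directory under Git, for instance, only takes a couple of commands (assuming your configuration already lives in ~/.emacs.d/init.el):

cd ~/.emacs.d
git init
git add init.el
git commit -m "Initial Emacs configuration"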

On some setups, tracking changes on the whole ~/.emacs.d/ directory may not be a good option. In that case you can track only your init file by moving init.el into a folder inside the Emacs directory, ending up with ~/.emacs.d/dotemacs/init.el, and making a symbolic link to it in ~/.emacs.d/. This way I can version control the “dotemacs” directory very easily, as shown below.
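The setup for that looks roughly like this (a sketch; adjust the paths to your own layout):

mkdir ~/.emacs.d/dotemacs
mv ~/.emacs.d/init.el ~/.emacs.d/dotemacs/
ln -s ~/.emacs.d/dotemacs/init.el ~/.emacs.d/init.el
# then initialize your VCS of choice inside ~/.emacs.d/dotemacs/ only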

Distributed version control

Many people use SVN as their preferred version control system, which backs up your data to a central location. But using a distributed version control system like Git, Mercurial or Bazaar is a better option. DVCSs let you set up multiple locations to back your code repository up to, so you don’t have a single point of failure. You can version control your dotemacs file and back up its change history in many places like GitHub, Gitorious, Launchpad or any other code hosting service, plus several other remote locations like multiple machines, a NAS or external drives, all with the complete history of your changes.
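With Git, for instance, adding extra backup locations is just a matter of adding remotes and pushing to them (the URLs here are made up):

git remote add github git@github.com:youruser/dotemacs.git
git remote add nas ssh://nas.local/home/you/repos/dotemacs.git
git push github master
git push nas master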

Do you know other methods? How do you keep from losing your dotemacs file?

Data dump image by swanksalot on Flickr
Categories
Events GNU/Linux Free Software & Open Source

CONSOL 2009

I participated in CONSOL 2009 and had the opportunity to give 4 talks this time and meet with "the software libre community".

The talks were great, with very interesting subjects, and there seemed to be a lot of interest in virtualization this time. Rolman gave his talk about virtualization with KVM, covering the basics of virtualization technology and how it all works. Then Gunnar Wolf gave his talk about virtualization techniques and recommendations.

I gave an introductory talk about Git and, for the first time, talked about Emacs. I also had an Emacs vs. Vi debate with Rolman, and it really went well. It turned out to be a very civilized talk, with no flames going on at any time. I think people were somewhat disappointed that it went so well.

At CONSOL 2009

The new KDE Mexico team, or part of it, got together to catch up; unfortunately Guillermo Amaral had a plane to catch just before the party began.

For the first time I tried the famous Duff beer from The Simpsons. This is a Mexican brand that makes it a reality.
At CONSOL 2009

And Gladys showed up with an ethernet cable as an improvised belt.
Girl with ethernet cable belt

I had a great time and it was a very nice experience. Hope to see everyone again soon.

Categories
GNU/Linux Free Software & Open Source Programming & Web Development Tutorials & Tips

How to install latest Git on Ubuntu

Git Logo

Git is a distributed version control system. I won’t go into much detail about what Git is or why to use Git instead of other VC systems. There are plenty of other sites where you can check that information.

I love Git, but there’s a slight problem with Ubuntu’s repositories (feisty, gutsy): it’s an old version.

The Git version in the repositories is 1.5.2.5. It’s an old version and it lacks many of the new cool features like git stash, git citool and many others. So to get the latest version with all the cool features, you have to compile from source.

To do that, you will need the following packages:

First, install all the basic tools for compiling from source:

sudo aptitude install build-essential

Then install the libraries and tools Git needs:

sudo aptitude install libc6 libcurl3-gnutls libexpat1 zlib1g perl-modules liberror-perl libdigest-sha1-perl cpio openssh patch gettext curl tk8.4 tcl8.4

Download the tarball from http://git.or.cz and uncompress it.

$ tar xvzf git*.tar.gz

Then, run the compilation steps and install:

$ ./configure
$ make
$ sudo make install

And there it is! Run the following to check your version.

$ git --version

The only thing I still don’t know how to get is git command autocompletion in bash. If you install from the repositories first and then install from source, you’ll have it all.
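One possible workaround (assuming the tarball you downloaded ships the script under contrib/completion/) is to source Git's own bash completion script from your ~/.bashrc:

source /path/to/git-source/contrib/completion/git-completion.bash   # adjust to where you unpacked the tarball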