Posts for webapp

Continuous Integration for a Javascript-heavy Django Site

by Sebastien Mirolo on Sat, 25 May 2013

In the popular series on how technical decisions are made, today we will look at why fortylines picked django-jenkins, phantomjs, casperjs and django-casper to build its continuous integration infrastructure.

Continuous integration for Django projects

Python is a great language. It makes it quick to get working code into production. Unfortunately, like many dynamic programming languages, issues arise when you start to add features and rework the code base. With a statically type-checked language like C++, the compiler would catch stupid mistakes like a variable name that wasn't updated in all the places it is used, or a function parameter that used to be an integer and is now a list of integers.
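To make this concrete, here is the kind of stale call site a C++ compiler would flag at compile time but python only catches at runtime (a toy example, all names made up):

```python
def total(prices):
    # 'prices' used to be a single integer; it is now a list of integers.
    return sum(prices)

print(total([1, 2, 3]))     # updated call site works fine

try:
    total(5)                # stale call site: only fails when executed
except TypeError as exc:
    print("caught at runtime:", exc)
```

Only a test that actually exercises the stale call site will catch this before production, which is the whole argument for extensive testing below.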

Welcome to the world of extensive testing. If you are looking to deploy code changes at the pace required to run a webapp, continuous integration rapidly becomes an important aspect of the development cycle.

django-jenkins is a perfect start if, like us, you are running django and jenkins. The only unfortunate hiccup so far is the absence of support for behave. That is unfortunate because we chose behave over lettuce for various reasons.

Since behave accepts a --junit command line flag, it is still possible to integrate behave and jenkins directly, as a subsequent command.

$ python jenkins
$ behave --junit --junit-directory ../reports

SIDE NOTE: There is a django-behave project. Unfortunately, using it will remove the ability to run standard django testsuites - or more accurately, from reading comments in the code, mixing django-behave and tests based on unittest hasn't been implemented yet. There has been no update to the repository in eight months as of writing this post.

Javascript content

After starting out with the Amazon Payment System, then integrating with Paypal, we finally settled on stripe.

Stripe is great from a developer's perspective. The APIs and documentation are excellent. The one feature that is advertised as a benefit to developers is also the feature that threw off our django/behave/jenkins initiative: Javascript.

Until now we used mechanize to simulate a browser and check the generated HTML. With the introduction of Javascript in our stack, it was no longer possible to rely on mechanize alone. We needed to integrate a javascript engine into our test infrastructure.

Our first intuition was to use the python bindings to selenium, a browser automation framework.

$ pip install selenium --upgrade
# Download the webdriver for your browser
$ unzip ~/Download/
$ mv chromedriver /usr/local/bin
$ export PATH=$PATH:"/Applications/Google"

Selenium is quite heavyweight. It requires launching the browser executable, which triggers GUI windows popping up on your screen. You might be able to install all the required packages to run an X11 server with no display attached on your continuous integration virtual machine, but that seems like overkill and a potential rat hole of package dependencies.

Welcome to phantomjs, the headless webkit engine.

Browsing around for phantomjs and BDD, it wasn't long before I stumbled on jasmine and djangojs. Jasmine is wonderful for unit testing javascript code and djangojs helps integrate a javascript-heavy webui into a django site. Both projects deliver as promised. That is where the subtlety lies. We needed something to drive end-to-end system tests, something that would help write tests at the level of "open url", "fill form", "click button", "check text in html page", etc.

We thus reverted our first attempt of using phantomjs with jasmine and djangojs and started to look again for a better suited solution. That is how, a few searches later, we ended up on casperjs and django-casper. By itself casperjs generates junit xml output. You can thus use casperjs straight from the command line in your jenkins shell.

$ cat hellotest.js
casper.test.comment('this is an hello test');
casper.test.assertTrue(true, "YEP");
$ casperjs test --xunit=./test-result.xml hellotest.js

Once integrated into django through a django-casper wrapper, your tests look and behave like regular django tests. Hence they integrate perfectly with the test management command and django-jenkins. Excellent!
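Because the reports are plain junit xml, any script on the jenkins side can also consume them directly. A minimal sketch in python, using only the standard library; the sample report layout is an assumption based on typical xunit output:

```python
import xml.etree.ElementTree as ET

def tally(xunit_xml):
    """Count tests and failures in a junit/xunit XML report string."""
    root = ET.fromstring(xunit_xml)
    tests = failures = 0
    for suite in root.iter('testsuite'):
        tests += int(suite.get('tests', 0))
        failures += int(suite.get('failures', 0))
    return tests, failures

sample = """<testsuites>
  <testsuite name="hellotest" tests="1" failures="0">
    <testcase classname="hellotest" name="YEP"/>
  </testsuite>
</testsuites>"""

print(tally(sample))
```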

How we picked d3js to draw SaaS metrics

by Sebastien Mirolo on Fri, 1 Mar 2013

There are only a few webapps that can do without displaying nice looking charts. This is even more so when you are running a Software-as-a-Service (SaaS) website. If you believe we are living in a knowledge economy, as I previously described in Open source business models, it means we must search for, and are bound to find, already made solutions.

This post started as the hunt for an open source solution to draw nice looking charts within the fortylines django webapp, but after much googling and experimenting, it was better re-written as an insight into how technical decisions are made. I hope you find the journey interesting.

First and foremost, the fortylines business model requires that its entire SaaS solution can be deployed on an air-gapped network. Most of fortylines' bigger clients prefer to pay the extra cost and retain physical control of the cluster machines. This is an important requirement that ruled out many of the Google Chart API wrappers out there.

For consistency and to avoid many headaches, we also favor projects with BSD-like licenses and written in Python or Javascript (the two languages we picked for server-side and client-side code respectively). These were the guidelines when we started the search. Beyond picking a specific open source project to build on, two open questions had to be decided:

  • Should we do the rendering server-side or client-side?
  • Which format should the graphics be rendered as (PNG, SVG, Canvas)?

The server-side way

First, if we did all the rendering server-side, it would be a lot easier to serve charts through different mediums. Not only could we put the charts inside a web page, but we could also embed them in a pdf, an email, etc.

As far as image processing is concerned, charts fall more into the graphics cluster than the photography cluster, so it made sense to focus on producing a vector format (SVG) over a pixel format (PNG).

Ideally we are looking for python code that would transform a data model into a nice looking SVG file we can later send to a web browser. Of course, browser SVG support being what it is, it is conceivable that in practice we will have to resort to sending PNG images in the end.

All python solutions seem to either rely on the Python Imaging Library (PIL) or PyCairo, both of which are mostly bindings to a native C implementation.
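To give an idea of what such a data-model-to-SVG transform could look like, here is a bare-bones sketch in pure python, no PIL or PyCairo; the function and its parameters are our own invention:

```python
def bar_chart_svg(values, width=400, height=200, gap=4):
    """Render a list of numbers as a simple SVG bar chart string."""
    peak = max(values) or 1
    bar_w = width // len(values) - gap
    bars = []
    for i, value in enumerate(values):
        bar_h = int(height * value / peak)
        x = i * (bar_w + gap)
        # bars grow upward from the bottom edge of the viewport
        bars.append('<rect x="%d" y="%d" width="%d" height="%d" fill="steelblue"/>'
                    % (x, height - bar_h, bar_w, bar_h))
    return ('<svg xmlns="" width="%d" height="%d">%s</svg>'
            % (width, height, ''.join(bars)))

print(bar_chart_svg([3, 1, 4]))
```

Anything production-quality (axes, labels, ticks) grows quickly from there, which is why the search for an existing library matters.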

django charts, pycha (tutorial)

Both pycha and BeautifulCharts are available through pip. A pip search for charts also shows svg.charts, an MIT-style licensed package which looks promising, though I couldn't figure out the prerequisites it uses for drawing the charts.

Since our search did not turn up any pure-python solution, it is not far-fetched to look for chart applications that can be invoked from a command-line shell. We would serialize the python data model, then make an os.system call. If the quality of the charts is a lot better than the C/Python implementations, that might be worth it, and it won't introduce more Python-to-native dependencies than we would otherwise have. Suddenly something like ChartSVG, a collection of XSLT scripts that creates SVG charts from XML files, could fit the bill.

The client-side way

With the Google Chart API out of the equation, we were looking for full javascript libraries here. There is surprisingly an amazing pool of full-featured chart libraries written in javascript, though most of them have a commercial license with different restrictions on how you can use them for free.

amCharts and HighCharts have both been packaged with fanstatic, a python framework to manage javascript dependencies, if that matters at some point. FusionCharts also looks really good.

d3js is not technically a chart library but it appears in many related searches. D3js deals with the much broader scope of data visualization (see here for pointers). Making charts using d3js can be quite complex, but a gallery of examples exists and d3js is released under a BSD license.

The choice

The visual quality of the charts produced by client-side javascript libraries appears to be a lot better than their server-side python counterparts. If we remain bent on generating the charts server-side, because we care about caching, older browser support, or simply using the same code to output monthly report PDFs, we will have to think about introducing nodejs in our back-end stack. Visual quality matters.

Fortylines builds a trace visualization tool not unlike GTKWave, though it runs in a web browser and supports the iPad touch interface. That is only the beginning, as richer and more interactive trace analysis tools will make their way into the web product. So sooner or later, we are bound to introduce an interactive data visualization library into our stack.

If we need a data visualization library at some point and all the best charting libraries come with restrictions, we might as well pick d3js. A side advantage is that we add a single dependency and only need to learn one API.

That is how we picked d3js, an unlikely candidate, to draw charts for fortylines SaaS webapp. Later we found a chart library based on d3js - just amazing.

Nginx, Gunicorn and Django

by Sebastien Mirolo on Fri, 22 Jun 2012

I decided today to bring up a new web stack consisting of nginx, gunicorn and django on a fedora 17 system. We are also throwing django-registration into the mix since the service requires authentication.

First things first, we need to install the packages on the local system.

$ yum install nginx python-gunicorn Django django-registration

We are developing a webapp written in an interpreted language (python), so a straightforward rsync should deploy the code to production; otherwise it weakens the rationale for using python for the job. Though production will run nginx, gunicorn and django, we still want to be able to debug the code on development machines with a simple runserver command. Hence thinking about file paths in advance is quite important. The following setup supports a flexible dev/prod approach.

*siteTop*/app                 # django project python code
*siteTop*/htdocs              # root for static html pages served by nginx
*siteTop*/htdocs/static       # root for static files served by nginx and django

The nginx configuration is simple and straightforward. Nginx redirects all pages to https and serves static content from htdocs.

upstream proxy_*domain* {
}

server {
          listen          80;
          server_name     *domain*;

          location / {
                  rewrite ^/(.*)$ https://$http_host/$1 redirect;
          }
}

server {
        listen       443;
        server_name  *domain*;

        client_max_body_size 4G;
        keepalive_timeout 5;

        ssl                  on;
        ssl_certificate      /etc/ssl/certs/*domain*.pem;
        ssl_certificate_key  /etc/ssl/private/*domain*.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        ssl_prefer_server_ciphers   on;

        # path for static files
        root /var/www/*domain*/htdocs;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @forward_to_app;
        }

        location @forward_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            # proxy_redirect default;
            proxy_redirect off;

            proxy_pass      http://proxy_*domain*;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www/*domain*/htdocs;
        }
}
The django settings are also straightforward. The only interesting bits are figuring out the APP_ROOT and the paths to static files.

$ diff -u prev
+import os.path
+APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

+    APP_ROOT + '/htdocs/static',
+    APP_ROOT + '/app/templates'

Getting gunicorn hooked-up with django and behaving as expected was a lot more challenging.

First I tried to run gunicorn_django. I might have misread the documentation, but I tried to pass on the command-line the directory where the settings file is located. Running in daemon mode, I saw both the gunicorn master and worker up through ps, no error in the log files, and yet I couldn't fetch a page. Only when I finally decided to run in non-daemon mode did it become obvious that gunicorn was running in an infinite loop.

Error: Can't find 'app' in your PYTHONPATH.
Error: Can't find 'app' in your PYTHONPATH.

Everything started to look fine when I passed the actual settings file on the command-line; well, at least on Fedora 17. When I decided to run the same command on OSX, I got

Could not import settings 'app/' (Is it on sys.path?)

That is a weird error, especially since ls shows the file is there and definitely in the PYTHONPATH. Digging through the django code, I found an ImportError was caught and re-written as this error message somewhere in django/conf/. As it turns out, my OSX python complains that importlib cannot import a module by filename.

I thus decided to use the second way of running gunicorn I saw advertised, through the run_gunicorn management command.

$ pip install gunicorn
$ diff -u prev
+    'gunicorn',
$ python ./ run_gunicorn

That worked fine; still, I couldn't seem to change the gunicorn process name despite all attempts. As it turns out: no error, no warning, just a silent failure because setproctitle wasn't installed on my system.

$ yum install python-setproctitle

From that point on we could run the webapp, both in production through nginx, gunicorn and django, and directly through runserver in development.

Django and PayPal payment processing

by Sebastien Mirolo on Thu, 3 May 2012

I gave another shot at the paypal API today. Since I am most interested in encrypted web payments, after signing up with a business account, I went through the steps of generating a private key and corresponding public certificate.

$ openssl genrsa -out hostname-paypal-priv.pem 1024
$ openssl req -new -key hostname-paypal-priv.pem -x509 \
  		  -days 365 -out hostname-paypal-pubcert.pem

# Useful command to check certificate expiration
$ openssl x509 -in hostname-paypal-pubcert.pem -noout -enddate

I then went to my paypal account, followed Merchant Services > My Account > Profile > My Selling Tools > Encrypted Payment Settings > Add, and uploaded hostname-paypal-pubcert.pem. Write down the CERT_ID; you will need it later on to create payment buttons.

The paypal API comes in many flavors, but so far it is only important to understand PayPal Payments Standard vs. PayPal Payments Pro. With the first one, PayPal Payments Standard, a visitor is redirected from your website to paypal's website for payment processing. With the second one, PayPal Payments Pro, a visitor can enter payment information directly on your site.
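Concretely, a PayPal Payments Standard integration boils down to an html form posting hidden fields to paypal. A sketch of building those fields in python; the field names follow the Standard "Buy Now" button variables, while the helper itself and its parameters are our own invention:

```python
def buy_now_fields(business_email, item_name, amount,
                   notify_url, return_url, cancel_url):
    """Hidden input values for a PayPal Payments Standard 'Buy Now' form."""
    return {
        'cmd': '_xclick',          # unencrypted button ('_s-xclick' once encrypted)
        'business': business_email,
        'item_name': item_name,
        'amount': '%.2f' % amount,
        'currency_code': 'USD',
        'notify_url': notify_url,  # where paypal posts the IPN (see below)
        'return': return_url,
        'cancel_return': cancel_url,
    }
```

With encrypted web payments, this whole block of fields gets signed and encrypted with the keys generated above, so visitors cannot tamper with the amount.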

Paypal provides a sandbox to be used to develop and debug your code. Unfortunately the sandbox is quite broken. Some critical links, like "Merchant Services", branch out of the sandbox into the live paypal site. That makes it impossible to upload certificates in the sandbox and thus to test your code there.


After uploading a certificate, I searched through django packages for an integrated payment solution. django-merchant supports multiple payment processors including paypal. The django-merchant paypal setup documentation deals with PayPal Payments Pro. I am not quite sure django-merchant supports PayPal Payments Standard. Either way, since it is mostly a wrapper around django-paypal as far as paypal support is concerned, I started there and configured django-paypal itself first.

Through the source code of django-paypal, there is a reference to a paypal-with-django post using the m2crypto library for encryption.

# install prerequisites
$ apt-get install python-virtualenv python-m2crypto
$ virtualenv ~/payment
$ source ~/payment/bin/activate
$ pip install Django django-registration django-paypal django-merchant

# create a django example project 
$ django-admin startproject example
$ django-admin startapp charge
$ diff -u prev

+# django-paypal
+PAYPAL_NOTIFY_URL = "URL_ROOT/charge/difficult_to_guess"
+PAYPAL_RETURN_URL = "URL_ROOT/charge/return/"
+PAYPAL_CANCEL_URL = "URL_ROOT/charge/cancel/"
+# These are credentials and should be protected accordingly.
+# path to Paypal's own certificate
+# code which Paypal assign to the certificate when you upload it

$ diff -u prev
urlpatterns = patterns('',
+    # The order of the url patterns matters here.
+    (r'^charge/difficult_to_guess/',
+     include('paypal.standard.ipn.urls')),
+    (r'^charge/cancel/', 'charge.views.payment_cancel'),
+    (r'^charge/return/', 'charge.views.payment_return'),
+    (r'^charge/', 'charge.views.paypal_charge'),

$ python syncdb

# Running the http server
$ python runserver
$ wget

Testing IPNs

For each payment processing request, paypal asynchronously calls back your web server with the status of that request. That is the second part of the payment pipeline that needs to be tested before going live.
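That asynchronous call is a plain form-encoded POST. Decoding one in python shows what the handler receives; payment_status, invoice, mc_gross and custom are standard IPN variable names, while the values here are made up:

```python
from urllib.parse import parse_qs

# a toy IPN request body, as paypal would POST it to notify_url
raw = b"payment_status=Completed&invoice=INV-42&mc_gross=9.99&custom=user-7"

params = {key: vals[0] for key, vals in parse_qs(raw.decode('ascii')).items()}
print(params)
```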

I decided to give a second chance to the Paypal Sandbox for IPN testing. I went through Test Account > Create a pre-configured account > Business.

"Test Tools > Instant Payment Notification (IPN) simulator" looked like a promising candidate, so I went ahead and entered my site's url for the ipn handler, selected "Express Checkout", left all default values and clicked "Send IPN". Result:

IPN delivery failed. Unable to connect to the specified URL. Please verify the URL and try again.

As it turns out, paypal will not connect to your web server over a plain text connection. The error message is just very cryptic. I proxied the django test server through Apache to support https connections.

$ cd /etc/apache2/mods-enabled
$ ln -s ../mods-available/proxy.load
$ ln -s ../mods-available/proxy_http.load
$ ln -s ../mods-available/proxy.conf
$ diff -u prev proxy.conf
- 	   ProxyRequests Off
+      ProxyRequests On

        <Proxy *>
                AddDefaultCharset off
                Order deny,allow
                Deny from all
+               Allow from

+       ProxyVia On

$ diff -u prev ../sites-available/default-ssl
+       ProxyPass /charge/
+       ProxyPassReverse /charge/

+<Location /charge/>
+  Order allow,deny
+  Allow from all

That worked and I could see the paypal requests in my apache and django logs. Though now I hit the following error:

IPN delivery failed. HTTP error code 403: Forbidden

A classic django error related to the csrf middleware; a little bit of csrf_exempt magic does the trick.

$ diff -u prev /usr/lib/python/site-packages/paypal/standard/ipn/
+from django.views.decorators.csrf import csrf_exempt

+@csrf_exempt
 def ipn(request, item_check_callable=None):

The IPN simulator is now showing a success.

Further notes

At some point I encountered HTTP 500 return codes from django without any log showing up. That happened because an import statement could not be resolved. The longest time I spent was figuring out how to display the cause of the error. I finally did it like this.

$ diff -u prev
    'handlers': {
+        'logfile':{
+            'level':'DEBUG',
+            'class':'logging.handlers.WatchedFileHandler',
+            'filename': '/var/log/django.log',
+            'formatter': 'simple'
+        },
    'loggers': {
        'django.request': {
+        # Might as well log any errors anywhere else in Django
+        'django': {
+            'handlers': ['logfile'],
+            'level': 'ERROR',
+            'propagate': False,
+        },

I was interested to find out how django-paypal verifies the IPN is actually coming from paypal. Looking through the source code, I traced the answer from paypal/standard/ to paypal/standard/ipn/: django-paypal posts the IPN back to paypal and checks the return code. Wow! I'd better trust the DNS server I am using.
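The post-back protocol itself is simple: echo the full notification back with cmd=_notify-validate and expect the literal string VERIFIED. A transport-agnostic sketch; verify_ipn and post_back are our own names, with post_back standing for whatever https client you use:

```python
def verify_ipn(params, post_back):
    """Return True when paypal confirms the notification.

    params is the dict of IPN POST variables, unmodified; post_back is any
    callable(dict) -> response body, e.g. an https client posting to paypal.
    """
    payload = dict(params)
    payload['cmd'] = '_notify-validate'  # paypal's validation command
    return post_back(payload) == 'VERIFIED'
```

Injecting the transport this way also makes the check testable without hitting paypal at all.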

django-paypal uses django signals to trigger other code that should run on an IPN notification. It can be set up as follows:

$ diff -u prev charge/
+from paypal.standard.ipn.signals import payment_was_successful

+def paypal_payment_was_successful(sender, **kwargs):
+    logging.error("!!! payment_was_successful for invoice %s", sender.invoice)
+
+payment_was_successful.connect(paypal_payment_was_successful)

Such code needs to be imported/executed before an IPN notification is triggered, otherwise the signal handler is never set. That is usually not a problem when you trigger the payment pipeline urls in order (charge, ipn). It is something to be aware of though when starting django and directly running the paypal IPN simulator: signals won't be registered and thus never triggered. Because of the csrf_exempt patch and the signal setup issue, it might be better to add a wrapper around paypal.standard.ipn.views.ipn inside the charge django app.
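The pitfall is easy to reproduce with a toy dispatcher: a notification sent before the handler module has been imported simply goes nowhere. A pure-python illustration, not django's actual signal machinery:

```python
handlers = []
seen = []

def connect(handler):
    handlers.append(handler)

def send(**kwargs):
    # deliver the notification to whoever is registered *right now*
    for handler in handlers:
        handler(**kwargs)

send(invoice='INV-1')   # too early: no handler registered, silently lost
connect(lambda **kwargs: seen.append(kwargs))
send(invoice='INV-1')   # now the handler fires
print(seen)
```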

Some interesting documentation from Record Keeping with Pass-through Variables: you should note that the following variables are passed through paypal back to your website:

  • custom
  • item_number or item_number_X
  • invoice

Originally, before using django-paypal, I looked through the Paypal Java SDK. The setup requires downloading a crypto package from bouncycastle and exporting private keys in pkcs12 format.

# Compiling the code sample
$ curl -O
$ tar zxf crypto-145.tar.gz
$ export JAVA_CLASSPATH=~/crypto-145/jars/bcprov-jdk16-145.jar
$ export JAVA_CLASSPATH=$JAVA_CLASSPATH:~/crypto-145/jars/bcpg-jdk16-145.jar
$ export JAVA_CLASSPATH=$JAVA_CLASSPATH:~/crypto-145/jars/bctest-jdk16-145.jar
$ export JAVA_CLASSPATH=$JAVA_CLASSPATH:~/crypto-145/jars/bcmail-jdk16-145.jar
$ javac -g -classpath $JAVA_CLASSPATH \

# Converting the private key (remember password for next command)
$ openssl pkcs12 -export -inkey hostname-paypal-priv.pem \
  		  -in hostname-paypal-pubcert.pem \
		  -out hostname-paypal-priv.p12

# Encrypting a paypal button
$ cat testbutton.txt
cert_id=Given when key uploaded to paypal website
item_name=Handheld Computer  
address1=123 Main St
$ java -classpath $JAVA_CLASSPATH ButtonEncryption \
  	   hostname-paypal-pubcert.pem \
	   hostname-paypal-priv.p12 \
	   paypal_cert_pem.txt \
	   pkcs12_password \
	   testbutton.txt testbutton.html

I have not completed this work yet, but here are the initial notes I currently have on using crypto++ to interface with the paypal processing system. Some background articles that turned out to be useful are Cryptographic Interoperability: Keys, Applied Crypto++: Block Ciphers, crypto++ CBC Mode and crypto++ key formats.

# Private key that can be loaded through crypto++
openssl pkcs8 -nocrypt -topk8 -in hostname-paypal-priv.pem \
		-out hostname-paypal-priv.der -outform DER

Redmine plugins

by Sebastien Mirolo on Tue, 15 Nov 2011

Today I browsed through the redmine plugins directory and selected a few that might be fun to use in our projects.

Installing redmine plugins is usually straightforward.

$ /etc/init.d/thin stop
$ cd /var/www/redmine
$ pushd vendor/plugins
$ git clone git://
$ git clone git://
$ wget
$ unzip
$ rm
$ gem install ri_cal
$ git clone git://
$ git clone git://
$ git clone git://
$ popd
$ rake db:migrate_plugins RAILS_ENV=production
$ /etc/init.d/thin start

A Chat button is now present in the bottom right of the window. I checked "Send diff email" in the My account page. Then for a project, I checked Hudson and Meetings in the page Project > Settings > Modules. The Meetings menu tab appears but no Hudson. I logged in as the admin and went to the page Administration > Plugins > Hudson plugin configure, but it did not seem to have any useful settings there. Nonetheless a Hudson menu tab showed up for the project. I took the time to configure the other plugins in Administration > Plugins, notably the startup page:

Controller: projects
Action: projectname
Id: activity

I also wanted the "wiki edits" to be viewable by default in Activity view so I went ahead and edited the redmine ruby code directly.

Redmine is very slow

by Sebastien Mirolo on Mon, 14 Nov 2011

I recently setup redmine, run through thin behind nginx, on a rackspace cloud machine. The interface is great and it seems like a very useful application, if only it was not so slow to respond. Simple http requests take forever, even on a machine that experiences only minor traffic.

Apparently, I am not the only one with slowness issues (see here and here). Reading through the posts, I guessed my current issue is not so much with redmine itself but with ruby web servers in general.

I thus decided to run a single thin server instance and use a port connection instead of a socket (see thin usage), just to validate the theory.

# We have to stop thin before doing any modifs to redmine.yml
$ thin stop --all /etc/thin
$ diff -u prev /etc/thin/redmine.yml
-servers: 4
-socket: /tmp/thin.sock
+servers: 1
+port: 5000

$ diff -u prev /etc/nginx/sites-available/redmine.conf
 upstream thin_cluster {
-    server unix:/tmp/thin.0.sock;
-    server unix:/tmp/thin.1.sock;
-    server unix:/tmp/thin.2.sock;
-    server unix:/tmp/thin.3.sock;
+    server;

# Taking the opportunity to install thin in /etc/init.d
$ thin install
$ /usr/sbin/update-rc.d -f thin defaults
$ /etc/init.d/thin start
$ /etc/init.d/nginx restart

Redmine is a lot more responsive now. A little more gain would be great, but I think that might require tinkering with the ruby interpreter and/or the postgres connection at this point.

Nginx, Jetty, Lift and Scala

by Sebastien Mirolo on Wed, 2 Nov 2011

After setting up a php stack, a python stack and a ruby stack for web applications in the last couple weeks, I decided to go with nginx / jetty / Lift / scala next.

The good thing about Scala is that it compiles to JVM bytecode. A lot more effort has been put over the years into running JVM bytecode extremely fast than, say, into the ruby virtual machine. Because Scala code is compiled, it is also statically checked, and you will catch a lot more spelling and type errors sooner than, say, in python or ruby. I thus have big hopes of getting productivity enhancements without sacrificing runtime performance by programming in Scala. A presentation of Scala use at twitter can also be informative.

Nginx and Jetty

Jetty is an HTTP server that supports webapps as .war Java archives. So the first thing is to set up nginx as the front-end server and proxy dynamic requests to jetty; classic.

$ apt-get install nginx jetty
$ cat /etc/nginx/sites-available/domainname.conf
server {
          listen          80;
          server_name     domainname;
          location /app {
		      proxy_pass http://localhost:8080/app;
          }
}

$ find /usr -name '*jetty*'
$ ls /usr/share/jetty/webapps /etc/init.d/jetty /etc/jetty
$ diff -u prev /etc/default/jetty

-#JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true"
+JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true -XX:PermSize=64m -XX:MaxPermSize=128m"

$ /etc/init.d/jetty restart
$ /etc/init.d/nginx restart

Later you will want to add "-Drun.mode=production" to the JAVA_OPTIONS in /etc/default/jetty but for now let's keep running in development mode.

After running jetty for a while, you might get a blank page and see java.lang.OutOfMemoryError: PermGen space exceptions in the log file. Adding "-XX:PermSize=64m -XX:MaxPermSize=128m" to the JAVA_OPTIONS most times solves the issue.

The latest versions of jetty can be downloaded from the eclipse foundation. Installation is as simple as unpacking the file and updating the different files in /etc/init.d/jetty and /etc/jetty. It is also interesting to see how to embed jetty into your application, a method also shown on the Lift download page.

Installing Lift "helloworld"

The dependencies for the helloworld are quite a few versions behind, and at some point you will want to update the pom.xml file in order to pull newer Lift and Scala versions. It is especially useful to check the versions listed in the pom.xml in order to read the appropriate online documentation.

For now we are just interested in getting the development cycle started, so let's go.

$ sudo apt-get install maven2
$ mvn archetype:generate -U -DarchetypeGroupId=net.liftweb \
    -DarchetypeArtifactId=lift-archetype-blank \
    -DarchetypeVersion=1.0 \
    -DremoteRepositories= \
    -DgroupId=demo.helloworld -DartifactId=helloworld
$ cd helloworld
$ mvn package
$ cp ./target/helloworld-1.0-SNAPSHOT.war /usr/share/jetty/webapps
$ wget http://hostname/app/helloworld-1.0-SNAPSHOT/

Even though you can use Maven, of course, Scala comes with its own build tool, sbt. I never had so many issues related to differences between tool versions than with sbt. Nonetheless, I have recently learned that a lot of people keep trying to push Scala builds through sbt because of the incremental compiler feature.


Lift is a web application framework that does not use the embed-code-inside-html-templates implementation. Instead the templates are clean xml/html documents with code decorators, i.e. nodes that will be replaced by a DOM element generated by a mycode.scala object.

It is important to understand that historically Lift was heavily relying on well-formed xhtml. As html5 picked up a lot more steam and is now prevalent, Lift adapted. Still you will have to make sure to add the following code into your Boot.scala to switch Lift's default behavior. Otherwise you might be up for a ride figuring out why mycode never gets called and lift just outputs the template node unchanged.

LiftRules.htmlProperties.default.set((r: Req) =>
  new Html5Properties(r.userAgent))

Exploring Lift and Simply Lift are two different books useful for starting with Lift. Later you will want to go directly to the reference APIs (just make sure you read through the reference matching the Lift version specified in the pom.xml).

Printf debugging

A lot of times, the easiest way to get into a new framework and understand new tools is to print text into a log file. The following code will do, and you should see "Creating MyService at" popping up in the jetty log.

class MyService {
  val logger = Logger(classOf[MyService])"Creating MyService at %s".format(new
}

Accessing CGI parameters

There are a lot of fancy and powerful ways to bind Scala variables and html form parameters within Lift. Nonetheless, if some of your website is running outside Lift, you will want to use the tried and simple way.

class MyService {
  def render(in: NodeSeq): NodeSeq = {
    for {
      r <- S.request if r.post_? // make sure it's a post
      name <- S.param("name")
    } {
      S.notice("Name: " + name)
    }
    in
  }
}

Sending emails

Exploring Lift - Annex F is pretty useful to get started, and the Mailer reference API will help solve inconsistencies. There is also a really cool article on emailing and texting with Lift.


Under the title How to use Container Managed Security, you will find a very good article on single sign on.

Relative paths

In many cases, part of the site is implemented using the Lift framework (great!) and part relies on other apps (modx, redmine) that come with their own web frameworks written in a variety of programming languages. If you are trying to keep sane while maintaining a consistent look-and-feel, you might decide to put the css, javascript and images in a central place, most likely directly accessible through the front-end web server (nginx here). As HTTP requests and responses go through the whole stack, absolute paths have a tendency to be rewritten, and an html file with code like:

<link type="text/css" rel="stylesheet" href="/css/style.css">

ends up looking something like this when finally making it to the browser:

<link type="text/css" rel="stylesheet" href="/app/css/style.css">

The solution that seems to work in all cases with all frameworks is to always use relative paths instead of absolute references, for example:

<link type="text/css" rel="stylesheet" href="../css/style.css">
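Computing that relative reference from the page's own path is mechanical. A sketch with python's posixpath (the function name and paths are examples):

```python
import posixpath

def relative_href(page_path, asset_path):
    """Rewrite an absolute asset path relative to the page that references it."""
    return posixpath.relpath(asset_path, posixpath.dirname(page_path))

# a page served under /app/ referencing the shared stylesheet
print(relative_href('/app/index.html', '/css/style.css'))  # ../css/style.css
```

Because the reference no longer starts at the root, it survives whatever path prefix each proxy in the stack decides to add.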


Once you are done copy/pasting code around and have your basic Lift application working, you will want to invest more time in understanding Scala itself. There is a good introduction here. If you are using Lift and Scala for web applications, you will most likely need to understand XML support at some point. There are also some useful resources for becoming quickly productive: scala collections for the easily bored, Scala/Lift.

A lot of the Scala testing frameworks build upon the Java ones in one way or another. You can check ScalaCheck and ScalaTest for unit testing, scct for code coverage. Developers on large Scala code bases recommend shying away from Specs because of the number of class files it produces, turning testing into a data management problem.

When the time comes to seriously write Scala code, there is no alternative but to install the emacs scala-mode. On my MacBook:

$ port install scala29
$ /opt/local/share/scala-2.9/misc/scala-tool-support/emacs
$ diff -u prev ~/.emacs
+(add-to-list 'load-path "/opt/local/share/scala-2.9/misc/scala-tool-support/emacs")
+(require 'scala-mode-auto)

The last piece of advice I gleaned from experienced Scala developers was to use tail recursion (optimizer hint: @tailrec) instead of closures whenever possible.

Setting-up Modx CMS

by Sebastien Mirolo on Sun, 16 Oct 2011

Modx is a CMS system written in PHP. As a result, unless you install the not-yet-released PHP 5.4, you will need a PHP-enabled front web server. If you planned to use nginx, you will have to do so through FastCGI (remember, no built-in http server before PHP 5.4) which, if out of luck you are running a PHP version below 5.3, will require a patch in the PHP source tree. Modx supports mysql as a database backend but there is no mention of postgresql. As a result, I have stuck with a "traditional" LAMP stack for now.

$ apt-get install php5 php5-mysql mysql-server
# download modx revolution from
$ cd /var/www
$ unzip /home/ubuntu/
$ ln -s modx-2.1.3-pl modx
$ diff -u php.prev /etc/php5/apache2/php.ini
-;date.timezone =
+date.timezone = America/Los_Angeles
$ pushd /var/www/modx-2.1.3-pl
$ touch core/config/
$ mkdir -p assets/components
$ mkdir -p core/components
$ chown -R www-data:www-data \
      core/cache      \
      core/export     \
      core/packages   \
      assets/         \
      core/components
$ /etc/init.d/apache2 restart
# go to http://domainname/modx/setup/ and fill the forms...
Installation type:        New
New folder permissions:   0755
New file permissions:     0644

Database type:            mysql
Database host:            localhost
Database login name:      root
Database password:        **********
Database name:            modx
Table prefix:             modx_

Connection character set: utf8
Collation:                utf8_general_ci
# for security remove the setup directory when setup is done.
$ rm -rf /var/www/modx-2.1.3-pl/setup/

Issues during installation

In case your browser tries to download the setup page (with type application/x-httpd-php) instead of running the php application, most likely mod_php is not loaded through the apache configuration. It might just be as simple as restarting the apache2 server.

$ /etc/init.d/apache2 restart

If you get a mysql connection error related to PDO such as

Connecting to database server:  
MODX requires the pdo_mysql driver when native PDO is being used 
and it does not appear to be loaded.

most likely the pdo_mysql shared object (under /usr/lib/php5/20090626/) is not loaded and it might be as simple as installing the php5-mysql package.

$ apt-get install php5-mysql
$ /etc/init.d/apache2 restart

Moving the modx directory

I then decided to move the modx installation to a different path. As a direct result, I stared at a blank page when trying to access the manager's page. With a little bit of grepping around the filesystem, I managed to get it back with the following shell code.

$ for f in `grep -r '/var/www/' . \
      | grep -v 'logs' | cut -d ':' -f 1 | uniq` ; do
    mv $f $f.prev && \
	sed -e 's,prevpath,newpath,g' $f.prev > $f
  done
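The same move can be scripted in Python, which makes it easier to skip binary files and keep .prev backups; a sketch, with prevpath/newpath standing in for the actual directories just as in the shell version:

```python
import os
import shutil

def rewrite_paths(root, prevpath, newpath, skip_dirs=("logs",)):
    """Replace prevpath with newpath in every text file below root,
    keeping a .prev backup of each file actually modified."""
    changed = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune directories we do not want to touch (e.g. logs)
        dirnames[:] = [d for d in dirnames if d not in skip_dirs]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # binary or unreadable file, skip it
            if prevpath in text:
                shutil.copy(path, path + ".prev")
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(prevpath, newpath))
                changed.append(path)
    return changed
```

It returns the list of files it rewrote, which is handy for a sanity check before restarting the web server.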

Unfortunately, when I tried to download extras, it still tried to copy files into the old place! I browsed around the mysql database but couldn't easily spot any pointers to the old paths. I finally resorted to making a symlink out of the previous path.

$ tail ./core/cache/logs/error.log
$ sudo mysql -u root -p
mysql> USE modx;
mysql> SELECT * FROM modx_site_templates;
mysql> SELECT * FROM modx_system_settings;
mysql> UPDATE modx_system_settings SET value = 'hostname' WHERE modx_system_settings.key = 'site_name';
$ ln -s newpath prevpath

I got the manager back but I was still staring at a blank page on the site itself. The BaseTemplate is blank and my Home resource was using it. Through the interface I navigated to the extra packages, downloaded a template, and installed it.

> System 
> Package Management 
> FrontEndTemplate 
> Templates
> your favorite
> Download

> Package Management 
> Packages
> Install

> Resources
> Home
> Uses Template
> your favorite
> Save

I now had a homepage but the content listed in resources under home was not showing up. I set "Page Settings > Container" with no luck. Following this wonderful article, I started to get a clue. The following page listed a few other extras which, it seems, should be in the default install.

Installing getResources

> System 
> Package Management 
> Search "getResources"
> Download

Here I got confronted with a nasty html overlay problem. The readme/license page was too long for my screen resolution. As a modal overlay it would not scroll, yet I was supposed to click on "next" to move forward. Luckily, hiding the dock on my MacBook Pro gave just enough space for the button to display.

Back to the tutorial on modx dynamic content, I created an articleTpl chunk (Elements > New Chunk), then added the following into the template associated with the "Home" resource.


That barely shows dynamically generated content on the homepage. There is a lot more tweaking to do before the site is functional.

Setting-up Redmine

by Sebastien Mirolo on Sun, 9 Oct 2011

After a few days battling with trac, I decided to give redmine a shot.

Redmine is a ruby-on-rails wiki engine for software development that supports ldap authentication out-of-the-box, one of the main features I struggled with trying to setup trac. It also has a great theme framework.

I decided to get redmine working in a thin ruby webapp and use nginx for the main webserver and load balancer.

Web server and webapp container

First let's setup nginx as a proxy to thin and get them to communicate through unix sockets.

$ cat /etc/nginx/proxy.include
    proxy_set_header   Host $http_host;
    proxy_set_header   X-Real-IP $remote_addr;

    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;

    client_max_body_size       10m;
    client_body_buffer_size    128k;

    proxy_connect_timeout      90;
    proxy_send_timeout         90;
    proxy_read_timeout         90;

    proxy_buffer_size          4k;
    proxy_buffers              4 32k;
    proxy_busy_buffers_size    64k;
    proxy_temp_file_write_size 64k;

# 2. Configuration for nginx
$ apt-get install nginx
$ cat /etc/nginx/sites-available/domainname.conf
upstream thin_cluster {
    server unix:/tmp/thin.0.sock;
    server unix:/tmp/thin.1.sock;
    server unix:/tmp/thin.2.sock;
    server unix:/tmp/thin.3.sock;
}

server {
          listen          80;
          server_name     domainname;
          location / {
              rewrite         ^/(.*)$ https://domainname/$1 redirect;
          }
}

server {
        listen       443;
        server_name  domainname;

        ssl                  on;
        ssl_certificate      /etc/ssl/certs/domainname.pem;
        ssl_certificate_key  /etc/ssl/private/domainname.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        ssl_prefer_server_ciphers   on;

        include proxy.include;
        root /var/www/redmine;
        proxy_redirect off;

        location / {
            try_files $uri/index.html $uri.html $uri @cluster;
        }

        location @cluster {
            proxy_pass http://thin_cluster;
        }
}
$ cd /etc/nginx/sites-enabled/
$ ln -fs ../sites-available/domainname.conf 000-domainname
$ /etc/init.d/nginx restart

# 3. Configuration for thin
$ apt-get install thin libi18n-ruby rails
$ cat /etc/thin/redmine.yml
pid: tmp/pids/
group: www-data
timeout: 30
log: log/thin.log
max_conns: 1024
require: []

environment: production
max_persistent_conns: 512
servers: 4
daemonize: true
user: redmine
socket: /tmp/thin.sock
chdir: /var/www/redmine

$ chgrp www-data /usr/bin/thin
$ chmod g+s /usr/bin/thin

# 4. Reminder, creating the self-signed SSL key and certificate
$ openssl genrsa -rand /swap -passout "pass:" \
     -out /etc/ssl/private/domainname.key 1024
$ openssl req -new -sha1 -subj "/CN=domainname" \
     -key /etc/ssl/private/domainname.key \
	 -out domainname.csr
$ openssl x509 -req -days 365 -outform PEM \
     -in domainname.csr \
	 -signkey /etc/ssl/private/domainname.key \
	 -out /etc/ssl/certs/domainname.pem

Content database

I picked postgresql for the database back-end.

$ apt-get install postgresql
$ /etc/init.d/postgresql initdb
$ /etc/init.d/postgresql start
$ sudo -u postgres psql
   		  NOINHERIT VALID UNTIL 'infinity';

It seems redmine will try to authenticate with the postgresql database using an IP connection. For some reason, even with what seems like correct login/password in config/database.yml, I kept getting an authentication error.

FATAL:  Ident authentication failed for user "redmine"

Google mostly returned noise about the problem so I decided to switch the postgresql settings from ident to trust and rely on the firewall to prevent connection from outside the box.

$ diff -u pg_hba.prev pg_hba.conf
 # TYPE  DATABASE        USER            CIDR-ADDRESS            METHOD
 # "local" is for Unix domain socket connections only
-local   all             all                                     ident
+local   all             all                                     trust

 # IPv4 local connections:
-host    all             all             127.0.0.1/32            ident
+host    all             all             127.0.0.1/32            trust
 # IPv6 local connections:
-host    all             all             ::1/128                 ident
+host    all             all             ::1/128                 trust

$ /etc/init.d/postgresql restart

Redmine webapp

Finally it is time to deploy the actual redmine ruby webapp.

$ apt-get install git-core rake postgresql-server-dev-8.4
$ mkdir -p /var/www/redmine
$ cd /var/www
$ chgrp www-data redmine
$ git clone git://
$ cd redmine
$ cp config/database.yml.example config/database.yml
$ diff -u config/database.yml.example config/database.yml
-  adapter: mysql
+  adapter: postgresql
   database: redmine
   host: localhost
-  username: root
-  password:
+  username: redmine
+  password: redmine_password

$ gem install -v=0.4.2 i18n
$ gem install -v=2.3.11 rails
$ gem install activerecord-jdbcpostgresql-adapter
$ gem install postgres
$ diff -u config/environment.rb.prev config/environment.rb
   config.gem 'rubytree', :lib => 'tree'
   config.gem 'coderay', :version => '~>0.9.7'
+  config.gem "postgres"

$ rake --trace generate_session_store
$ RAILS_ENV=production rake --trace db:migrate
$ RAILS_ENV=production rake --trace redmine:load_default_data

# Starting redmine
$ /usr/bin/thin start -D --all /etc/thin
$ cat log/thin.0.log

After this point most of the configuration is done through the web interface. Login as admin and use the default password (admin). Go to my account, update your first and last name, email address and, oh yes, change your password.

Click through the links "Administration > Settings > Authentication" and change the settings as follows:

Authentication required: yes
Self-registration:       disabled

# LDAP Authentication
Host:                     localhost
port:                     389
BaseDN:                   dc=domain,dc=com
On-the-fly user creation: yes
Login:                    uid
First name:               givenName
Last name:                sn
Email:                    mail

Patch to update LDAP password from within redmine

Most of my users only have access to the website (no shell access to the machine). Since redmine is also the only application deployed at this point, it just made sense to let users change their LDAP password from within redmine. At best this is currently work-in-progress and does not come packaged in the official redmine distribution. I found the following discussion and patch. There were only a few modifications I had to make to the patch in order for it to work with the repository head.

diff --git a/app/controllers/my_controller.rb \
index a4a4b51..d921412 100644
--- a/app/controllers/my_controller.rb
+++ b/app/controllers/my_controller.rb
@@ -78,10 +78,19 @@ class MyController < ApplicationController
       if @user.check_password?(params[:password])
-        @user.password, @user.password_confirmation \
		 	= params[:new_password], params[:new_password_confirmation]
-        if
-          flash[:notice] = l(:notice_account_password_updated)
-          redirect_to :action => 'account'
+        if @user.isExternal?
+          if @user.changeExternalPassword(params[:password],
		   	   params[:new_password], params[:new_password_confirmation])
+            flash[:notice] = l(:notice_account_password_updated)
+            redirect_to :action => 'account'
+          else
+            flash[:error] = l(:notice_external_password_error)
+          end
+        else
+          @user.password, @user.password_confirmation \
		      = params[:new_password], params[:new_password_confirmation]
+          if
+            flash[:notice] = l(:notice_account_password_updated)
+            redirect_to :action => 'account'
+	  end
         flash[:error] = l(:notice_account_wrong_password)
diff --git a/app/helpers/auth_sources_helper.rb \
index 90f5954..b5ad791 100644
--- a/app/helpers/auth_sources_helper.rb
+++ b/app/helpers/auth_sources_helper.rb
@@ -16,4 +16,11 @@
 module AuthSourcesHelper
+  module Encryption
+    # Return an array of password encryptions
+    def self.encryptiontypes
+      ["MD5","SSHA","CLEAR"]
+    end
+  end
diff --git a/app/models/auth_source_ldap.rb \
index b7ab0cf..d42d01d 100644
--- a/app/models/auth_source_ldap.rb
+++ b/app/models/auth_source_ldap.rb
@@ -17,6 +17,8 @@
 require 'net/ldap'
 require 'iconv'
+require 'digest'
+require 'base64'
 class AuthSourceLdap < AuthSource
   validates_presence_of :host, :port, :attr_login
@@ -55,6 +57,50 @@ class AuthSourceLdap < AuthSource
+  def allow_password_changes?
+    return self.enabled_passwd
+  end
+  def encode_password(clear_password)
+    chars = ("a".."z").to_a + ("A".."Z").to_a + ("0".."9").to_a
+    salt = ''
+    10.times { |i| salt << chars[rand(chars.size)] }
+    if self.password_encryption == "MD5"
+      logger.debug "Encode as md5"
+      return "{MD5}"+Base64.encode64(Digest::MD5.digest(clear_password)).chomp!
+    end
+    if self.password_encryption == "SSHA"
+       logger.debug "Encode as ssha"
+      return "{SSHA}"+Base64.encode64(Digest::SHA1.digest(clear_password+salt)+salt).chomp!
+    end
+    if self.password_encryption == "CLEAR"
+       logger.debug "Encode as cleartype"
+      return clear_password
+    end
+  end
+  # change password
+  def change_password(login,password,newPassword)
+    begin
+      attrs = get_user_dn(login)
+      if attrs
+        if self.account.blank? || self.account_password.blank?
+          logger.debug "Binding with user account"
+          ldap_con = initialize_ldap_con(attrs[:dn], password)
+        else
+          logger.debug "Binding with administrator account"
+          ldap_con = initialize_ldap_con(self.account, self.account_password)
+        end
+        return ldap_con.replace_attribute attrs[:dn], :userPassword, encode_password(newPassword)
+      end
+    rescue
+      return false
+    end
+    return false
+  end
   def strip_ldap_attributes
diff --git a/app/models/user.rb \
index b362202..4499360 100644
--- a/app/models/user.rb
+++ b/app/models/user.rb
@@ -545,6 +545,19 @@ class User < Principal
+  def isExternal?
+    return auth_source_id.present?
+  end
+  def changeExternalPassword(password,newPassword,newPasswordConfirm)
+    return false if newPassword == "" || newPassword.length < 4
+    return false if newPassword != newPasswordConfirm
+    if (self.isExternal?)
+      return self.auth_source.change_password(self.login,password,newPassword)
+    end
+    return false
+  end
   def validate_password_length
diff --git a/app/views/ldap_auth_sources/_form.html.erb \
index 9ffffaf..e125a54 100644
--- a/app/views/ldap_auth_sources/_form.html.erb
+++ b/app/views/ldap_auth_sources/_form.html.erb
@@ -25,6 +25,8 @@
 <p><label for="auth_source_onthefly_register"><%=l(:field_onthefly)%></label>
 <%= check_box 'auth_source', 'onthefly_register' %></p>
+<p><label for="auth_source_enabled_passwd"><%=l(:field_enabled_passwd)%></label>
+<%= check_box 'auth_source', 'enabled_passwd' %></p>
 <fieldset class="box"><legend><%=l(:label_attribute_plural)%></legend>
@@ -39,6 +41,9 @@
 <p><label for="auth_source_attr_mail"><%=l(:field_mail)%></label>
 <%= text_field 'auth_source', 'attr_mail', :size => 20  %></p>
+<p><label for="auth_source_password_encryption"><%=l(:field_password_encryption)%></label>
+<%= select 'auth_source', 'password_encryption', AuthSourcesHelper::Encryption.encryptiontypes %>
diff --git a/config/locales/en.yml b/config/locales/en.yml
index f8a1c25..dae359b 100644
--- a/config/locales/en.yml
+++ b/config/locales/en.yml
@@ -140,6 +140,7 @@ en:
   general_pdf_encoding: UTF-8
   general_first_day_of_week: '7'
+  notice_external_password_error: External password changing goes wrong
   notice_account_updated: Account was successfully updated.
   notice_account_invalid_creditentials: Invalid user or password
   notice_account_password_updated: Password was successfully updated.
@@ -270,6 +271,8 @@ en:
   field_attr_lastname: Lastname attribute
   field_attr_mail: Email attribute
   field_onthefly: On-the-fly user creation
+  field_password_encryption: Encryption
+  field_enabled_passwd: Enabled password changing
   field_start_date: Start date
   field_done_ratio: "% Done"
   field_auth_source: Authentication mode
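For reference, the {SSHA} value built by encode_password in the patch above follows the RFC 2307 convention: base64(SHA1(password + salt) + salt). A minimal Python sketch of encoding and verification (function names are mine, not part of redmine or the patch):

```python
import base64
import hashlib
import os

def ssha_encode(clear_password):
    """Build an RFC 2307-style {SSHA} userPassword value:
    base64(SHA1(password + salt) + salt) with a fresh random salt."""
    salt = os.urandom(10)
    digest = hashlib.sha1(clear_password.encode("utf-8") + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

def ssha_verify(clear_password, encoded):
    """Check a password by re-hashing with the salt recovered from
    the stored value (SHA-1 digests are always 20 bytes)."""
    raw = base64.b64decode(encoded[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]
    return hashlib.sha1(clear_password.encode("utf-8") + salt).digest() == digest
```

Because the salt is random, two encodings of the same password differ, yet both verify; that is why the LDAP server stores the salt alongside the digest.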

Still learning about redmine, I had to look up the error in production.log and figure out I just had to run db:migrate to make everything work.

$ tail -200 log/production.log
$ RAILS_ENV=production rake db:migrate

That's it: users can now enjoy redmine, login using LDAP authentication and change their password from within redmine.

Setting-up Trac

by Sebastien Mirolo on Sun, 2 Oct 2011

I needed to setup a forum for developers on a recent project. That included a source control repository (git), a wiki, a blog, a buildbot and an issue tracking system. To provide the last components I decided to setup trac and a few trac plug-ins.


I realized later that initializing a trac environment often sets valid defaults for all plug-ins already present on the local system. Some plug-ins are available, like trac itself, through the package manager (aptitude search 'trac-.*') while others can be installed through python setup tools (easy_install) or from a source archive. I installed the following plug-ins:

# Base servers
$ apt-get install trac nginx
# Authentication
$ apt-get install trac-accountmanager
$ find /usr/lib -name '*acct_mgr*'
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracNoAnonymous-2.4-py2.7.egg
$ apt-get install pwauth
$ curl -O TracPwAuth-1.0.tar.gz
$ tar zxvf TracPwAuth-1.0.tar.gz
$ cd TracPwAuth-1.0 && python ./ install
Installed /usr/local/lib/python2.7/dist-packages/TracPwAuth-1.0-py2.7.egg
# Source repository
$ apt-get install trac-git
$ easy_install
Installing bitten-slave script to /usr/local/bin
Installed /usr/local/lib/python2.7/dist-packages/Bitten-0.6b2-py2.7.egg
$ wget
$ tar zxvf 0.3.tar.gz
$ cd codetags-0.3-fb76322 && python ./ install
Installed /usr/local/lib/python2.7/dist-packages/codetags-0.3-py2.7.egg
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracRevtreePlugin-0.6.3dev_r5601-py2.7.egg
$ easy_install
Installing update-index script to /usr/local/bin
Installed /usr/local/lib/python2.7/dist-packages/tracreposearch-0.2-py2.7.egg
# tracking metrics
$ easy_install
$ easy_install --always-unzip
$ git clone
$ cd tracstats && python ./ install
Installed /usr/local/lib/python2.7/dist-packages/TracStats-0.4-py2.7.egg
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/icalview-0.4-py2.7.egg
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/timingandestimationplugin-0.9.8-py2.7.egg
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracBurndown-1.9.2-py2.7.egg
$ easy_install
# Custom design
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracThemeEngine-2.0.1-py2.7.egg
$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracRandomInclude-0.1-py2.7.egg

Some of these commands installed trac 0.12 in /usr/local/bin/trac-admin and /usr/local/bin/tracd. That created a lot of incompatibilities and problems later on so I deleted them in order to stick with the Ubuntu 11.04 packaged trac 0.11 version.

Create repository

$ sudo -u www-data vi
$ less 
$ sudo -u www-data git add 
$ sudo -u www-data git commit -m 'FIXME codetag'

It is now time to create the trac environment.

man trac-admin
trac-admin help
mkdir /var/www/trac
trac-admin /var/www/trac initenv testproj sqlite:db/trac.db git /var/www/reps/testproj/.git

And turn on logging in order to debug configuration issues along the way.

diff -u trac.ini.prev trac.ini
log_level = DEBUG
- log_type = none
+ log_type = file


Since the information on the trac site is confidential, I decided to setup it behind https. A lot of the documentation dealing with trac and authentication rely on web server authentication. That is unfortunate because that pops up an authentication dialog box. Most sites that require authentication today land on a login page and I wanted the same functionality for the trac site. Fortunately there is the wonderful AccountManager plug-in to do that. I still had to prevent unauthenticated access to the trac site and the NoAnonymous plug-in enabled that requirement.

Last, I needed to choose a password store to check username/password against. I picked TracPwAuth to authenticate against the unix /etc/passwd file and avoid password stores proliferation.

Trac has a built-in http server that makes it straightforward to start serving trac pages after the daemon is running.

$ tracd -d --port 8000 /var/www/trac
$ tracd -s --port 8000 /var/www/trac
$ curl -O http://localhost:8000/trac

Unfortunately trac does not have a built-in https server so we will need to rely on a more complete web server in front of it. I picked nginx. I also decided to setup nginx/trac through a fastcgi interface instead of a proxy forward from nginx to trac, a choice I might well revert in the future.

I disabled the default site in nginx, created a trac site configuration and enabled it.

$ cat /etc/nginx/sites-available/trac
server {
        listen       443;
        server_name  domainname;

        ssl                  on;
        ssl_certificate      /etc/ssl/certs/domainname.pem;
        ssl_certificate_key  /etc/ssl/private/domainname.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        ssl_prefer_server_ciphers   on;

        if ($uri ~ ^/(.*)) {
             set $path_info /$1;
        }

        #if ($request_uri ~ /login) {
        #     break;
        #}

        #if ($remote_user ~ ^$) {
        #     set $path_info /login;
        #     rewrite ^ /login redirect;
        #}

        # You can copy this whole location to ``location [/some/prefix]/login``
        # and remove the auth entries below if you want Trac to enforce
        # authorization where appropriate instead of needing to authenticate
        # for accessing the whole site.
        # (Or ``location /some/prefix``.)
        location / {
            #auth_basic            "trac realm";
            #auth_basic_user_file /home/trac/htpasswd;

            # socket address
            fastcgi_pass   unix:/var/www/trac/run/instance.sock;

            # python - wsgi specific
            fastcgi_param HTTPS on;

            # WSGI application name - trac instance prefix.
            # (Or ``fastcgi_param  SCRIPT_NAME  /some/prefix``.)
            fastcgi_param  SCRIPT_NAME        "";
            fastcgi_param  PATH_INFO          $path_info;

            ## WSGI NEEDED VARIABLES - trac warns about them
            fastcgi_param  REQUEST_METHOD     $request_method;
            fastcgi_param  SERVER_NAME        $server_name;
            fastcgi_param  SERVER_PORT        $server_port;
            fastcgi_param  SERVER_PROTOCOL    $server_protocol;
            fastcgi_param  QUERY_STRING       $query_string;

            # for authentication to work
            fastcgi_param  AUTH_USER          $remote_user;
            fastcgi_param  REMOTE_USER        $remote_user;

            # for ip to work
            fastcgi_param REMOTE_ADDR         $remote_addr;

            # For attachments to work
            fastcgi_param    CONTENT_TYPE     $content_type;
            fastcgi_param    CONTENT_LENGTH   $content_length;
        }
}
$ cd /etc/nginx/sites-enabled
$ rm default
$ ln -s ../sites-available/trac

There is a script in /usr/lib/python2.7/dist-packages/trac/admin/templates/ called deploy_trac.fcgi. It contains a few template variables that need to be instantiated and looks quite different from the ones written up on the official trac wiki. I finally decided to copy/paste the one from the wiki into a local /var/www/trac/trac.fcgi file.

#!/usr/bin/env python
import os
sockaddr = '/var/www/trac/run/instance.sock'
os.environ['TRAC_ENV'] = '/var/www/trac'

try:
     from trac.web.main import dispatch_request
     import trac.web._fcgi

     fcgiserv = trac.web._fcgi.WSGIServer(dispatch_request,
          bindAddress = sockaddr, umask = 7)
     fcgiserv.run()

except SystemExit:
    raise
except Exception, e:
    print 'Content-Type: text/plain\r\n\r\n',
    print 'Oops...'
    print 'Trac detected an internal error:'
    print e
    import traceback
    import StringIO
    tb = StringIO.StringIO()
    traceback.print_exc(file=tb)
    print tb.getvalue()

Both nginx and trac.fcgi need access to the socket file. Nginx is running as the www-data user. It is possible to run trac as a different user by setting the following permissions.

mkdir -p /var/www/trac/run
chgrp www-data run
chmod g+ws run
chgrp www-data trac.fcgi
chmod g+s trac.fcgi

Later though, using two different users will prevent authentication. If /usr/sbin/pwauth returns error code "50" (STATUS_INT_USER) when run as the trac user, it means pwauth was compiled without that user listed in SERVER_UIDS (config.h). At this point, I prefer to use apt-get to install pwauth instead of recompiling it from source, so I run both nginx and trac as the www-data user, updating file permissions to reflect that.

sudo chown -R www-data:www-data /var/www/trac
$ diff -u trac.ini.prev trac.ini
+password_store = PwAuthStore

+trac.web.auth.LoginModule = disabled
+acct_mgr.web_ui.LoginModule = enabled
+acct_mgr.web_ui.RegistrationModule = disabled
+pwauth.* = enabled
+noanonymous.* = enabled
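To see how pwauth behaves for a given calling user without wiring it into trac first, you can drive it directly from the command line or a short script. A Python sketch (the helper name is mine; it assumes the standard pwauth protocol of login and password on separate stdin lines):

```python
import subprocess

def pwauth_status(login, password, pwauth="/usr/sbin/pwauth"):
    """Feed login and password, each on its own line, to pwauth and
    return its exit status: 0 means valid credentials, 50
    (STATUS_INT_USER) means the calling user is not in pwauth's
    compiled-in SERVER_UIDS list."""
    proc = subprocess.run([pwauth],
                          input="%s\n%s\n" % (login, password),
                          capture_output=True, text=True)
    return proc.returncode
```

Running this as the trac user and as www-data makes the SERVER_UIDS mismatch described above easy to reproduce.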

Note: If you prefer the popup approach and handle all authentication through nginx, check out the httpAuth plugin.

Nginx as a proxy to trac http server

The fastcgi approach worked fine but I still decided to use the http proxy setup after all because it requires less configuration steps. I also found an init.d script for tracd which proved valuable to start and stop tracd as a service. Here is the nginx configuration for the site:

upstream proxy_trac {
    server 127.0.0.1:8000;    # tracd instance started on port 8000 above
}

server {
          listen          80;
          server_name     domainname;
          location / {
              rewrite     ^/(.*)$ https://domainname/$1 redirect;
          }
}

server {
        listen               443;
        server_name          domainname;

        ssl                  on;
        ssl_certificate      /etc/ssl/certs/domainname.pem;
        ssl_certificate_key  /etc/ssl/private/domainname.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        ssl_prefer_server_ciphers   on;

        # it makes sense to serve static resources through Nginx
        location /chrome/ {
             alias /var/www/htdocs/;
        }

        location / {
                  proxy_pass       http://proxy_trac;
                  proxy_redirect   default;
                  proxy_set_header Host $host;
        }
}
The site stopped responding to requests as soon as I enabled the noanonymous plug-in. As it turns out, something creates an http redirect, which required me to open port 80 in the firewall and add the appropriate redirects in the nginx config file.

PwAuth means trac contributors have a unix account on the server machine. As a result, if you want to modify a password you will need to run passwd from a unix shell on the machine, which in turn might require giving all contributors ssh login to the machine.

What I really wanted was a single authentication directory for users and the ability for them to change their passwords, without granting shell access to all users. I thought setting up an LDAP password store would be great. There are a lot of LDAP plugins, each of them somewhat forked from each other. None of them seems fully implemented. It just became such a nightmare that I decided to switch to redmine on the production site.

Browsing the git repository

The issue when you use the local package manager to install plug-ins is that you have no idea where files are copied. That makes things a little tricky when you need to enable components based on pathnames. To finally write the correct line enabling the git source browser, I had to rely on inspecting the .deb package as follows.

$ aptitude download trac-git
$ dpkg -c trac-git_0.0.20100513-2ubuntu1_all.deb
$ diff -u trac.ini.prev trac.ini
+scan_files = *.html, *.js, *.py
+scan_folders = /*

+tracext.git.* = enabled
+codetags.* = enabled
+revtree.* = enabled
+tracreposearch.* = enabled

+include =*.html:*.js:*.py 
+exclude = *.pyc:*.png:*.jpg:*.gif:*/README

+#contexts = changeset, browser

# See
-base_url =
+base_url = http://domainname/trac
-mainnav = wiki,timeline,roadmap,browser,tickets,newticket,search
+mainnav = wiki,timeline,roadmap,browser,revtree,tickets,newticket,search
repository_dir = /var/www/reps/testproj/.git

The database needs to be upgraded for the codetags plug-in (trac-admin /var/www/trac upgrade).

tracking metrics

The Agile trac plugin looked interesting but requires a patched trac. The patch to the ubuntu installed version is pretty huge. Browsing through the Agile trac website I did not notice enough compelling arguments to apply such an intrusive patch so I skipped this plug-in for now.

$ find /usr/lib -name '*trac*'
$ svn co trac
$ diff -ru /usr/lib/python2.7/dist-packages/trac trac

The ScrumBurndown plugin requires the TimingAndEstimation plug-in, so I installed and configured that one as well.

$ diff -u trac.ini.prev trac.ini
+tractags.* = enabled
+tracfullblog.* = enabled
+tracstats.* = enabled
+timingandestimationplugin.* = enabled
+burndown.* = enabled
+tasklist.* = enabled

+dtstart = my_custom_dtstart_field
+duration = my_custom_duration_field
+short_date_format = %d/%m/%Y;%Y-%m-%d
+date_time_format = %d/%m/%Y %H:%M;%Y-%m-%d %H:%M

+my_custom_dtstart_field = text
+my_custom_dtstart_field.label = Planned Date
+my_custom_duration_field = text
+my_custom_duration_field.label = Duration
+action_item = text
+action_item.label = Action Item

-mainnav = wiki,timeline,roadmap,browser,revtree,tickets,newticket,search
+mainnav = wiki,blog,timeline,roadmap,browser,revtree,tickets,newticket,search
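The date patterns in short_date_format and date_time_format above appear to be plain strftime/strptime format strings, with a semicolon separating two alternatives; a quick Python check of what each alternative renders and parses:

```python
from datetime import datetime

d = datetime(2011, 10, 2, 14, 30)

# display side: day/month/year versus ISO ordering
assert d.strftime("%d/%m/%Y") == "02/10/2011"
assert d.strftime("%Y-%m-%d %H:%M") == "2011-10-02 14:30"

# parsing side: both alternatives map back to the same date
assert datetime.strptime("02/10/2011", "%d/%m/%Y") == datetime(2011, 10, 2)
assert datetime.strptime("2011-10-02", "%Y-%m-%d") == datetime(2011, 10, 2)
```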

Permissions need to be set appropriately for buttons to show up in the menubar.

$ trac-admin /var/www/trac permission list
$ trac-admin /var/www/trac permission add anonymous STATS_VIEW

Custom design

I tried to install GoogleCodeTheme but the repository is empty. Downloading the archive creates a zip file that seems invalid. Finally I gave up on it and tried the GameDev theme instead.

$ easy_install
Installed /usr/local/lib/python2.7/dist-packages/TracGamedevTheme-2.0-py2.7.egg
$ diff -u trac.ini.prev trac.ini
+themeengine.* = enabled
+gamedevtheme.* = enabled

-theme = default
+theme = Gamedev


Plug-in architectures are great but it seems trac went a little overboard, as most of the expected functionality comes in the form of plug-ins that are more or less complex to configure.

In the end I decided to settle on using redmine because most of what I am looking for comes pre-packaged in-the-box (except changing LDAP password). How I set up redmine is the subject of a further post.
