Using Nginx as a frontend proxy for Play Framework applications

Tue, May 26, 2015
  • nginx
  • devops
  • playframework

Of course you can connect your Play application directly to the internet, but I prefer to use a frontend proxy in our production environments. Nginx is useful for several reasons:

  • Caching and high-performance delivery of static assets (or other GET requests)
  • SSL termination
  • gzip compression
  • Load balancing across several Play applications
  • Easy maintenance, as you can shut down one Play application node for updates (see the upstream sketch after this list)
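
Load balancing and node maintenance both come down to Nginx's upstream block. Here is a minimal sketch, assuming two Play nodes on ports 9000 and 9001 (the configuration later in this article uses a single node); marking a server as down takes it out of rotation while you update it:

upstream backend {
    server 127.0.0.1:9000;
    # marked down for an update; Nginx routes all traffic to the remaining node
    server 127.0.0.1:9001 down;
}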

In this article I will show you a typical setup and describe the steps and configuration options that I chose for our production setup. Any kind of feedback is welcome!

Environment used for this article

Currently I’m using the following environment:

  • Nginx 1.6.3
  • Play Framework 2.3.9

I will try to keep this article updated when new major versions are released.

I use the default way to package a Play application with

sbt dist

To start the application contained in the ZIP file, use

bin/application_name

Without any modifications, Play starts and binds to port 9000.
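
If you want a different port, or want to bind the application to the loopback interface only so that it is reachable just through Nginx, Play accepts the usual system properties at startup. A minimal sketch; application_name stands in for your actual start script:

bin/application_name -Dhttp.port=9000 -Dhttp.address=127.0.0.1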

Configure Nginx

I use the following nginx.conf as my default.

http {
#   proxy_buffering    off;    <- don't forget to comment out or remove this line.
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Scheme $scheme;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   Host $http_host;
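    # On-disk cache under /var/lib/nginx/cache; "one" names a 1000 MB shared
    # memory zone for cache keys and metadata, levels=1:2 spreads cached files
    # over a two-level directory hierarchy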
    proxy_cache_path   /var/lib/nginx/cache levels=1:2 keys_zone=one:1000m;
    proxy_cache_methods GET HEAD;
    proxy_cache_key $host$uri$is_args$args;
    proxy_cache_valid 200 10m;

# Do gzip compression with nginx, not the play application
    gzip on;
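    # Level 9 is the maximum compression level; it trades CPU time for the smallest payload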
    gzip_comp_level 9;
    gzip_proxied any;

# This is important if you use Play's chunked responses, as chunked transfer encoding is only available with HTTP/1.1
    proxy_http_version 1.1;

    upstream backend {
        server 127.0.0.1:9000;
    }

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    keepalive_timeout  65;

    index   index.html index.htm;

    server {
        listen       80 default_server;
        server_name  www.mydomain.com;
        location / {
            proxy_pass http://backend;
        }

        location ~ ^/(assets|webjars)/ {
            proxy_cache one;
            proxy_cache_key "$host$request_uri";
            proxy_cache_valid 200 30d;
            proxy_cache_valid 301 302 10m;
            proxy_cache_valid 404 1m;

            proxy_pass http://backend;
        }
    }

}

If you want to use Nginx as a caching frontend proxy too, then you need to set

proxy_buffering    on;

It took me a few minutes of debugging to find this one, as the default setting in most distributions is “off”. If you need to check whether a request is delivered from the cache or not, you can add

add_header X-Cache $upstream_cache_status;

to the location section and check whether the X-Cache HTTP header contains MISS or HIT.
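
To verify this from the command line, you can inspect the response headers with curl; the asset path here is only an example:

curl -s -D - -o /dev/null http://www.mydomain.com/assets/javascripts/main.js | grep X-Cache

The first request should report a MISS, repeated requests within the cache validity period a HIT.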

Some Benchmarks

I’ve done several benchmark runs to see whether it is worth using Nginx as a caching frontend proxy. All benchmarks have been made with Siege 3.0.9 on a MacBook Pro (Late 2013), 2.3 GHz Intel Core i7, 16 GB 1600 MHz DDR3.
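
Siege was invoked along these lines; the exact flags and the URL are my assumption, chosen to match the 10 concurrent users and the transaction counts reported below:

siege -b -c 10 -r 1000 http://www.mydomain.com/assets/javascripts/main.js

-b runs in benchmark mode without delays between requests, -c 10 simulates 10 concurrent users, and -r 1000 repetitions yield the 10000 transactions of the first run (use -r 100 for the ZIP file run).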

Small JavaScript file

I used a small (3,914 bytes) JavaScript file for the first benchmark.

                          Without caching     With caching
Transactions:             10000 hits          10000 hits
Availability:             100.00 %            100.00 %
Elapsed time:             14.99 secs          13.03 secs
Data transferred:         37.33 MB            37.33 MB
Response time:            0.01 secs           0.01 secs
Transaction rate:         667.11 trans/sec    767.46 trans/sec
Throughput:               2.49 MB/sec         2.86 MB/sec
Concurrency:              9.80                9.78
Successful transactions:  10000               10000
Failed transactions:      0                   0
Longest transaction:      0.11                0.11
Shortest transaction:     0.00                0.00

Big ZIP File

This time I used a 4,234,178 byte ZIP file for the second benchmark.

                          Without caching     With caching
Transactions:             1000 hits           1000 hits
Availability:             100.00 %            100.00 %
Elapsed time:             96.11 secs          88.17 secs
Data transferred:         4038.03 MB          4038.03 MB
Response time:            0.95 secs           0.87 secs
Transaction rate:         10.40 trans/sec     11.34 trans/sec
Throughput:               42.01 MB/sec        45.80 MB/sec
Concurrency:              9.93                9.91
Successful transactions:  1000                1000
Failed transactions:      0                   0
Longest transaction:      1.79                2.00
Shortest transaction:     0.19                0.12

Result

As you can see, Play Framework delivers static assets nearly as fast as Nginx does. That's impressive, and a result of the reactive nature of the Play Framework! Nginx is slightly faster, and with bigger file sizes the results are more or less the same. However, I have observed that the CPU load was a bit lower with caching enabled, so Nginx is also doing a great job of serving static assets.

I hope this article helps you configure Nginx as a caching frontend proxy for Play Framework apps.

About me

  • Gerhard Hipfinger
  • Founder of openForce Information Technology
  • Vienna, Austria