Multiplay Labs

tech hits and tips from Multiplay

Archive for the ‘Nginx’ Category

LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

with 324 comments

Last year we posted our Caching Steam Downloads @ LAN’s article, which has been adopted by many LAN event organisers in the community as the baseline for improving download speeds and helping avoid internet saturation when you have tens to thousands of gamers at an event, all updating and installing new games from Steam.

This rework builds on the original concepts from our Steam caching setup and brings in additional improvements from the community, such as the excellent work by the guys @ …, as well as other enhancements.

Due to the features used, this configuration requires nginx 1.6.0+, which is the latest stable release at the time of writing.

Nginx Configuration
In order to make the configuration more maintainable we’ve split the config up into a number of smaller includes.

In the machines directory you have lancache-single.conf which is the main nginx.conf file that sets up the events and http handler as well as the key features via includes: custom log format, cache definition and active vhosts.

include lancache/log_format;
include lancache/caches;
include vhosts/*.conf;

The custom log format adds three additional details to the standard combined log format: “$upstream_cache_status” “$host” “$http_range”. These are useful for determining the efficiency of each segment of the cache.
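The include itself isn’t shown above, but a sketch of lancache/log_format consistent with that description would be (the exact file contents are an assumption; the format name main matches the access_log directives below):

```nginx
# lancache/log_format (sketch): the standard combined format plus
# cache status, matched host and any Range header.
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$upstream_cache_status" "$host" "$http_range"';
```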

In order to support the expanding number of downloads supported by LANcache we’ve switched the config from static mirroring to nginx’s built-in caching.

In our install we’re caching data to 6 x 240GB SSDs configured in a ZFS RAIDZ, so we have just over 1TB of storage per node.
To ensure we don’t run out of space we’ve limited the main installs cache to 950GB, with custom loader settings so we can initialise the cache more quickly on restart.
The other cache zone is used for non-install data, so it is limited to just 10GB.

We also set proxy_temp_path to a location on the same volume as the cache directories so that temporary files can be moved directly into the cache directory, avoiding a file copy which would put extra stress on the IO subsystem.

proxy_cache_path /data/www/cache/installs levels=2:2 keys_zone=installs:500m inactive=120d max_size=972800m loader_files=1000 loader_sleep=50ms loader_threshold=300ms;
proxy_cache_path /data/www/cache/other levels=2:2 keys_zone=other:100m inactive=72h max_size=10240m;
proxy_temp_path /data/www/cache/tmp;

Here we define individual server entries for each service we’ll be caching; we do this so that each service can configure how its cache works independently.
To allow the configs to be used directly in multiple setups, without having to edit the config files themselves, we use named entries for all listen addresses.

The example below shows the origin server entry, which listens on lancache-origin and requires the spoofed DNS entries listed in its comments.

We use $server_name as part of the cache key to avoid cache collisions, and add _ as the wildcard catch-all to ensure all requests to this server’s IP are processed.

For performance we configure the access log with buffering.

# origin
server {
        listen lancache-origin accept_filter=httpready default;
        server_name origin _;
        # DNS entries:
        # lancache-origin
        access_log /data/www/logs/lancache-origin-access.log main buffer=128k flush=1m;
        error_log /data/www/logs/lancache-origin-error.log;
        include lancache/node-origin;
}

The include is where all the custom work is done, in this case lancache/node-origin. There are currently 5 different flavours of node: blizzard, default, origin, pass and steam.

Origin’s CDN is pretty bad in that it currently prevents the caching of data, so we’re forced to ignore the Cache-Control and Expires headers. The files themselves are very large (10GB+) and the client uses range requests to chunk the downloads, both to improve performance and to provide realistic download restart points.

By default the nginx proxy translates a GET request with a Range header into a full download by stripping the Range and If-Range headers from the upstream request. It does this so subsequent range requests can be satisfied from the single download. Unfortunately Origin’s CDN prevents this, so we have to override the default behaviour by passing through the Range and If-Range headers. This means the upstream will reply with a 206 (Partial Content) instead of a 200 (OK) response, and hence we must add the range to the cache key so that additional requests are served correctly.

The final customisation for Origin is to use $uri in the proxy_cache_key; we do this as the Origin client appends a query string parameter sauth=<key>, which varies between requests and would otherwise defeat the cache.
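Putting those customisations together, lancache/node-origin might look roughly like this (a sketch based on the behaviour described above, not the exact file):

```nginx
# node-origin (sketch): Origin's CDN blocks normal caching, so we
# ignore its cache headers, pass Range/If-Range upstream, and key
# the cache on $uri (dropping the sauth= query string) plus the range.
location / {
	proxy_ignore_headers Cache-Control Expires;
	proxy_set_header Range $http_range;
	proxy_set_header If-Range $http_if_range;
	proxy_cache installs;
	proxy_cache_key "$server_name$uri$http_range";
	proxy_cache_valid 200 206 90d;
	proxy_pass http://$host;
}
```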

Blizzard have large downloads too, so to ensure that requests are served quickly we cache 206 responses in the same way as for Origin.

All Steam downloads are located under /depot/, so we have a custom location for that which ignores the Expires header, as Steam sets a default Expires header.

We also store the content of requests to /serverlists/, as these give us information about the hosts used by Steam to process download requests. The information in these files could help identify future DNS entries which need spoofing.
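A sketch of those Steam-specific locations (the exact directives in lancache/node-steam are an assumption based on the description above):

```nginx
# node-steam (sketch)
location /depot/ {
	proxy_ignore_headers Expires;   # Steam sets a default Expires header
	proxy_cache installs;
	proxy_cache_key "$server_name$request_uri";
	proxy_cache_valid 200 90d;
	proxy_pass http://$host;
}

location /serverlists/ {
	# keep these responses: they can reveal new hosts that need spoofing
	proxy_cache other;
	proxy_cache_key "$server_name$request_uri";
	proxy_pass http://$host;
}
```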

Finally, the catch-all / entry caches all other items according to their headers.

This is the default, which is used for riot, hirez and sony; it uses standard caching rules, caching based on the proxy_cache_key "$server_name$request_uri".
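A default-style node is then little more than standard proxy caching keyed as above (a sketch, not the exact lancache/node-default file):

```nginx
# node-default (sketch): plain caching honouring upstream cache headers
location / {
	proxy_cache installs;
	proxy_cache_key "$server_name$request_uri";
	proxy_pass http://$host;
}
```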

Required DNS entries
All of the required DNS entries for each service are documented in its server block in vhosts/lancache-single.conf, which as of writing is:
lancache-steam * * *
lancache-microsoft * *

You’ll notice that each entry starts with lancache-XXXX; this is the name used in the listen directive, so no editing of the config is required for IP allocation to each service. As we’re creating multiple server entries and each captures hostnames using the _ wildcard, each service must have its own IP, i.e. separate addresses for lancache-steam, lancache-riot, lancache-blizzard, lancache-hirez, lancache-origin and lancache-sony.
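For example, the DNS or hosts entries backing the listen directives could look like this (the 10.10.100.x addresses are purely illustrative; any addresses routed to the cache box will do):

```nginx
# /etc/hosts on the cache box (illustrative addresses)
10.10.100.100  lancache-steam
10.10.100.101  lancache-riot
10.10.100.102  lancache-blizzard
10.10.100.103  lancache-hirez
10.10.100.104  lancache-origin
10.10.100.105  lancache-sony
```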

Hardware Specifications
At Insomnia 51 we used a hybrid of this configuration which made use of two machines working in a cluster with the following spec:

  • Dual Hex Core CPUs
  • 128GB RAM
  • 6 x 240GB SSDs in ZFS RAIDZ
  • 6 x 1Gbps NICs
  • OS – FreeBSD 10.0

These machines were configured as a failover pair using CARP, each behind a 4Gbps lagg using LACP. Each node served roughly half of the active cache set, doubling the available cache space to ~1.8TB, with internode communication done over a dedicated 2Gbps lagg.

LANcache Stats from Insomnia 51
For those who are interested, at its initial outing at Insomnia 51 LANcache:

  • Processed 6.6 million downloads from the internet totalling 2.2TB
  • Served 34.1 million downloads totalling 14.5TB to the LAN
  • Peaked at 4Gbps (the max capacity) to the LAN
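As a quick sanity check on those figures, roughly 85% of the bytes served to the LAN came straight from the cache (assuming the 2.2TB fetched from the internet is the miss traffic within the 14.5TB served):

```shell
# Back-of-the-envelope cache efficiency from the figures above:
# bytes never fetched upstream = 14.5TB - 2.2TB
awk 'BEGIN { printf "hit ratio: %.0f%%\n", (14.5 - 2.2) / 14.5 * 100 }'
```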

Config Downloads

* 2015-09-17 – Updated info about required DNS entries
* 2015-10-09 – Linked initial public version of configs on github

Written by Dilbert

April 30th, 2014 at 3:33 pm

Posted in FreeBSD,Gaming,Nginx

Battle.Net Installer Error Code: 2600 Fix

with one comment

If you’re installing Battle.Net, which is required and installed as the initial part of Blizzard games such as StarCraft II & Hearthstone, and the installer fails with the message:

Whoops! Looks like something broke. Error Code: 2600

This can be caused by a bad download, which can be the result of a proxied web connection.

Proxies, particularly caching proxies, can translate the Blizzard downloader’s HTTP request with a Range header into a full request, which is subsequently returned as-is to the client, i.e. a 200 OK response containing the full file. The downloader was expecting a 206 Partial Content response but appears to only check for a 20X response, hence it doesn’t spot the issue and builds its file incorrectly.

To make matters worse, the downloader stores this file in the Windows temporary directory and doesn’t delete it, either on failure or before trying to download it again, for example if the installer is restarted.

If you’re using nginx prior to 1.5.3 as a caching proxy then this will happen if you’re the first person to download the file; after that, 206 responses are correctly returned for Range requests using the cached file. This behaviour changed in 1.5.3, when an enhancement was added to return 206 for responses being cached on the fly. To be clear, this isn’t technically a bug in nginx: the spec allows a server to return a 200 OK response to a Range request. It’s the Blizzard downloader that’s at fault for not correctly processing 200 OK when it’s expecting 206 Partial Content.

If your Battle.Net installer is bugged like this, simply delete the Blizzard directory from your Windows temporary directory (%TEMP%) and re-run the installer after fixing or disabling your proxy.

Written by Dilbert

April 15th, 2014 at 12:23 am

Posted in Gaming,Nginx

Caching Steam Downloads @ LAN’s

with 32 comments

This article has been superseded by LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

Gone are the days when users install most of their games from physical media such as CDs/DVDs; instead they rely on high bandwidth internet connections and services such as EA’s Origin and Valve’s Steam.

This causes issues for events which bring like-minded gamers together at LAN events such as Multiplay’s Insomnia Gaming Festival, due to the massive amount of bandwidth required to support these services, with users patching games and downloading new ones to play with their friends.

With the release of Steam’s new Steampipe content system, creating a local cache of Steam downloads has become much easier, making game installs at these types of events significantly quicker while requiring dramatically less internet bandwidth.

Steampipe uses standard web requests from Valve Software’s new content servers so standard proxy technology can be used to cache requests.

This article describes how to use FreeBSD + nginx to create a high performance caching proxy for all Steam content served by Steampipe.

Hardware Specifications
The following is our recommended specification for a machine. Obviously the higher the spec the better, in particular more RAM.

  • Quad Core CPU
  • 32GB RAM (the more the better)
  • 6 x 1TB HDs
  • 2 x 120GB SSDs
  • 1Gbps NIC

Machine Install
Our setup starts by performing a FreeBSD ZFS install using mfsBSD. We use a RAIDZ on the HDs, e.g.

zfsinstall -d da0 -d da1 -d da2 -d da3 -d da4 -d da5 \
    -r raidz -t /cdrom/8.3-RELEASE-amd64.tar.xz -s 4G

Once the OS is installed, set up the L2ARC on the SSDs (tank is zfsinstall’s default pool name):

zpool add tank cache da6 da7

Next install the required packages:

pkg_add -r nginx

Enable the FreeBSD http accept filter, which improves the performance of processing http connections as they are handled in kernel mode:

echo 'accf_http_load="YES"' >> /boot/loader.conf
kldload accf_http

Enable nginx to start on boot by adding nginx_enable="YES" to /etc/rc.conf:

echo 'nginx_enable="YES"' >> /etc/rc.conf

Configuring Nginx
First create some directories:

mkdir -p /data/www/steamproxy
mkdir -p /data/www/logs

The trick to mirroring the Steam content so that duplicate requests are served locally is twofold.

  • First, spoof * and content[1-8] in DNS and send all traffic to the proxy box
  • Second, configure nginx to mirror the requests under /depot/ on demand, using non-spoofing resolvers.

Here’s our example /usr/local/etc/nginx.conf. It uses Google’s open DNS servers for the resolver, but any non-spoofed resolvers will do.

user www www;
worker_processes  4;
error_log  /data/www/logs/nginx-error.log;
events {
	worker_connections 8192;
	multi_accept on;
	use kqueue;
}
http {
	include mime.types;
	access_log  /data/www/logs/nginx-access.log;
	keepalive_timeout 65;
	resolver 8.8.8.8 8.8.4.4;
	# steam mirror
	server {
		listen 80 accept_filter=httpready default;
		server_name _;
		access_log /data/www/logs/steam-access.log;
		error_log /data/www/logs/steam-error.log;
		root /data/www/steamproxy;
		index index.html;
		location /depot/ {
			try_files $uri @mirror;
			access_log /data/www/logs/depot-local.log;
		}
		location / {
			proxy_next_upstream error timeout http_404;
			proxy_pass http://$host$request_uri;
			proxy_redirect off;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			add_header X-Mirror-Upstream-Status $upstream_status;
			add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
			add_header X-Mirror-Status $upstream_cache_status;
		}
		location @mirror {
			proxy_store on;
			proxy_store_access user:rw group:rw all:r;
			proxy_next_upstream error timeout http_404;
			proxy_pass http://$host$request_uri;
			proxy_redirect off;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			add_header X-Mirror-Upstream-Status $upstream_status;
			add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
			add_header X-Mirror-Status $upstream_cache_status;
			access_log /data/www/logs/depot-remote.log;
		}
	}
}

We use on-demand mirroring instead of proxy caching as the files under Steam’s /depot/ never change, so storing them permanently eliminates the overhead of cache cleaners and other calculations.

Don’t forget this can also be used with smaller hardware to reduce steam bandwidth requirements in an office environment too 🙂

Update – 2013/08/19 – Added content[1-8] as additional content server hosts which need proxying
Update – 2013/09/10 – Changed from $uri to $request_uri (which includes query string args) required for authentication now.
Update – 2014/04/30 – Superseded by LANcache

Written by Dilbert

April 17th, 2013 at 1:01 pm

Posted in FreeBSD,Gaming,Nginx

nginx $body_bytes_sent gotcha

without comments

Just been trying to figure out why headers set to $body_bytes_sent were always 0, even after a successful download of a file from nginx.

Well, after much head scratching I tried something that I didn’t think would work, but it did. It turns out that the first time $body_bytes_sent is accessed in a request, its value is cached and reused for subsequent references. We were using an include to configure proxy_pass headers for all stages of the download process, which had X-Bytes-Sent being set to $body_bytes_sent.
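The problematic pattern looked roughly like this (a sketch; the shared-include structure follows the description above):

```nginx
# shared include, pulled in at several stages of the download process;
# the first reference to $body_bytes_sent freezes its value for the
# rest of the request (here: 0, since no body has been sent yet)
add_header X-Bytes-Sent $body_bytes_sent;
```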

Unfortunately, because $body_bytes_sent is not flagged as non-cacheable, its value is fixed when it is first used, which in our case was prior to any body being sent, hence the problem.

The fix for us was simple: remove it from the include and only specify it in the one place we wanted it. This may not be possible for all uses, so the correct fix may be to set the NGX_HTTP_VAR_NOCACHEABLE flag on body_bytes_sent in src/http/ngx_http_variables.c in nginx.

Written by Dilbert

March 23rd, 2011 at 11:35 pm

Posted in Nginx

Adding Google Chrome Frame support to nginx hosted sites

without comments

I’m trying to add support for Google Chrome Frame to our nginx config. The simple way would be to include a fragment in each server block which looks something like:

    if ( $http_user_agent ~ chromeframe ) {
        add_header X-UA-Compatible "chrome=1";
    }

The problem is that, for some reason, add_header isn’t allowed in server-if blocks, only in location-if blocks, so this errors with: add_header directive is not allowed here
I can’t see or think of a reason why this shouldn’t be allowed, so I tried adding it and it seems to work without a problem.

The following patch, made against 0.8.40, adds server if block support for both expires and add_header.

--- src/http/modules/ngx_http_headers_filter_module.c.orig  2010-06-14 23:48:58.000000000 +0100
+++ src/http/modules/ngx_http_headers_filter_module.c   2010-06-14 23:49:27.000000000 +0100
@@ -79,7 +79,7 @@
     { ngx_string("expires"),
-                        |NGX_CONF_TAKE12,
+                        |NGX_HTTP_SIF_CONF|NGX_CONF_TAKE12,
@@ -87,7 +87,7 @@
     { ngx_string("add_header"),
-                        |NGX_CONF_TAKE2,
+                        |NGX_HTTP_SIF_CONF|NGX_CONF_TAKE2,

Cross posted to the nginx mailing list here

Written by Dilbert

June 14th, 2010 at 11:06 pm

Posted in Hackery,Nginx