Multiplay Labs

tech hits and tips from Multiplay


LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

with 325 comments

Last year we posted our Caching Steam Downloads @ LAN’s article, which has been adopted by many LAN event organisers in the community as the baseline for improving download speeds and helping avoid internet saturation when you have tens to thousands of gamers at an event all updating and installing new games from Steam.

This rework builds on the concepts from our original Steam caching article and brings in additional improvements from the community, such as the excellent work by the guys @ http://churchnerd.net/, as well as other enhancements.

Requirements
Due to the features used, this configuration requires nginx 1.6.0+, which is the latest stable release at the time of writing.

Nginx Configuration
In order to make the configuration more maintainable we’ve split the config up into a number of smaller includes.

machines/lancache-single.conf
In the machines directory you have lancache-single.conf which is the main nginx.conf file that sets up the events and http handler as well as the key features via includes: custom log format, cache definition and active vhosts.

include lancache/log_format;
include lancache/caches;
include vhosts/*.conf;

lancache/log_format
The custom log format adds three additional fields to the standard combined log format: “$upstream_cache_status” “$host” “$http_range”. These are useful for determining the efficiency of each segment of the cache.
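Based on the fields described, lancache/log_format would look something like the sketch below — the standard combined format followed by the three extra fields. The format name main matches the one referenced by the access_log directives later in the config; the exact layout is our reconstruction, not the published file.

```nginx
# Combined log format plus cache status, host and range (illustrative sketch)
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$upstream_cache_status" "$host" "$http_range"';
```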

lancache/caches
In order to support the expanding number of downloads supported by LANcache we’ve switched the config from static mirror to using nginx’s built in caching.

In our install we’re caching data to 6 x 240GB SSDs configured in ZFS RAIDZ, so we have just over 1TB of storage per node.
To ensure we don’t run out of space we’ve limited the main installs cache to 950GB, with custom loader details so the cache can be initialised more quickly on restart.
The other cache zone is used for non-install data, so it is limited to just 10GB.

We also set proxy_temp_path to a location on the same volume as the cache directories so that temporary files can be moved directly into the cache directory, avoiding a file copy which would put extra stress on the IO subsystem.

proxy_cache_path /data/www/cache/installs levels=2:2 keys_zone=installs:500m inactive=120d max_size=972800m loader_files=1000 loader_sleep=50ms loader_threshold=300ms;
proxy_cache_path /data/www/cache/other levels=2:2 keys_zone=other:100m inactive=72h max_size=10240m;
proxy_temp_path /data/www/cache/tmp;

sites/lancache-single.conf
Here we define individual server entries for each service we’ll be caching; we do this so that each service can configure how its cache works independently.
To allow the configs to be used directly in multiple setups without editing the files themselves, we use named entries for all listen addresses.

The example below shows the origin server entry, which listens on lancache-origin and requires the spoofed DNS entry akamai.cdn.ea.com.

We use server_name as part of the cache key to avoid cache collisions, and add _ as the wildcard catch-all to ensure all requests to this server’s IP are processed.

For performance we configure the access log with buffering.

# origin
server {
        listen lancache-origin accept_filter=httpready default;
        server_name origin _;
        # DNS entries:
        # lancache-origin akamai.cdn.ea.com
        access_log /data/www/logs/lancache-origin-access.log main buffer=128k flush=1m;
        error_log /data/www/logs/lancache-origin-error.log;
        include lancache/node-origin;
}

The include is where all the custom work is done, in this case lancache/node-origin. There are currently 5 different flavours of node: blizzard, default, origin, pass and steam.

lancache/node-origin
Origin’s CDN is pretty bad in that it currently prevents the caching of data, so we’re forced to ignore the Cache-Control and Expires headers. The files themselves are very large (10GB+) and the client uses range requests to chunk the downloads, improving performance and providing realistic download restart points.

By default the nginx proxy translates a GET request with a Range header into a full download by stripping the Range and If-Range request headers from the upstream request. It does this so subsequent range requests can be satisfied from the single download request. Unfortunately Origin’s CDN prevents this, so we have to override the default behaviour by passing through the Range and If-Range headers. This means the upstream will reply with a 206 (Partial Content) instead of a 200 (OK) response, and hence we must add the range to the cache key so that subsequent requests are satisfied correctly.

The final customisation for Origin is to use $uri in the proxy_cache_key; we do this because the Origin client appends a query string parameter sauth=<key>, which varies between clients and would otherwise prevent cache hits.
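Putting the three customisations together, lancache/node-origin would look roughly like the sketch below. The directives are real nginx directives but the values (cache validity, zone names) are illustrative, not the published config.

```nginx
# Illustrative sketch of lancache/node-origin
location / {
    # Origin's CDN marks responses uncacheable, so ignore those headers
    proxy_ignore_headers Cache-Control Expires;

    # Pass Range / If-Range upstream so the CDN returns 206 chunks...
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;

    # ...and include the range in the key so each chunk caches separately.
    # $uri (not $request_uri) drops the per-client sauth= query string.
    proxy_cache_key "$server_name$uri $http_range";
    proxy_cache installs;
    proxy_cache_valid 200 206 90d;

    proxy_set_header Host $host;
    proxy_pass http://$host$request_uri;
}
```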

lancache/node-blizzard
Blizzard have large downloads too, so to ensure that requests are served quickly we cache 206 responses in the same way as for Origin.

lancache/node-steam
All Steam downloads are located under /depot/, so we have a custom location for that which ignores the Expires header, as Steam sets a default Expires header.

We also store the content of requests to /serverlists/, as these requests, served by cs.steampowered.com, give us information about the hosts Steam uses to process download requests. The information in these files can help identify future DNS entries which need spoofing.

Finally, the catch-all / entry caches all other items according to their headers.
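The three locations just described suggest a lancache/node-steam along the lines of the sketch below — again a reconstruction from the prose with illustrative values, not the published file.

```nginx
# Illustrative sketch of lancache/node-steam
location /depot/ {
    # Steam sets a default Expires header on depot chunks, so ignore it
    proxy_ignore_headers Expires;
    proxy_cache installs;
    proxy_cache_valid 200 90d;
    proxy_pass http://$host$request_uri;
}

location /serverlists/ {
    # Server lists from cs.steampowered.com; handy for spotting new
    # hostnames that may need DNS spoofing
    proxy_cache other;
    proxy_pass http://$host$request_uri;
}

location / {
    # Everything else is cached according to its own headers
    proxy_cache other;
    proxy_pass http://$host$request_uri;
}
```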

lancache/node-default
This is the default used for riot, hirez and sony; it applies standard caching rules based on the proxy_cache_key "$server_name$request_uri"
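A node-default along those lines could be as small as the sketch below (zone name and validity are our assumptions):

```nginx
# Illustrative sketch of lancache/node-default
location / {
    proxy_cache other;
    proxy_cache_key "$server_name$request_uri";
    proxy_set_header Host $host;
    proxy_pass http://$host$request_uri;
}
```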

Required DNS entries
The required DNS entries for each service are documented in its server block in vhosts/lancache-single.conf, which as of writing are:
Steam
lancache-steam cs.steampowered.com *.cs.steampowered.com content1.steampowered.com content2.steampowered.com content3.steampowered.com content4.steampowered.com content5.steampowered.com content6.steampowered.com content7.steampowered.com content8.steampowered.com *.hsar.steampowered.com.edgesuite.net *.akamai.steamstatic.com content-origin.steampowered.com client-download.steampowered.com

Riot
lancache-riot l3cdn.riotgames.com

Blizzard
lancache-blizzard dist.blizzard.com.edgesuite.net llnw.blizzard.com dist.blizzard.com blizzard.vo.llnwd.net

Hirez
lancache-hirez hirez.http.internapcdn.net

Origin
lancache-origin akamai.cdn.ea.com lvlt.cdn.ea.com

Sony
lancache-sony pls.patch.station.sony.com

Turbine
lancache-turbine download.ic.akamai.turbine.com launcher.infinitecrisis.com

Microsoft
lancache-microsoft *.download.windowsupdate.com download.windowsupdate.com dlassets.xboxlive.com *.xboxone.loris.llnwd.net xboxone.vo.llnwd.net images-eds.xboxlive.com xbox-mbr.xboxlive.com

You’ll notice that each entry starts with lancache-XXXX; this is the name used in the listen directive, so no editing of the config is required for IP allocation to each service. As we’re creating multiple server entries, each capturing hostnames using the _ wildcard, each service must have its own IP, e.g. lancache-steam = 10.10.100.100, lancache-riot = 10.10.100.101, lancache-blizzard = 10.10.100.102, lancache-hirez = 10.10.100.103, lancache-origin = 10.10.100.104 and lancache-sony = 10.10.100.105
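One way to provide the spoofed entries on your event LAN is dnsmasq, whose address=/domain/ip option covers a hostname and all its subdomains. The fragment below is a hypothetical example using the IP allocation above and a few of the documented hostnames; it is not a complete list.

```nginx
# Hypothetical dnsmasq.conf fragment: spoof each service to its LANcache IP
address=/cs.steampowered.com/10.10.100.100
address=/content1.steampowered.com/10.10.100.100
address=/l3cdn.riotgames.com/10.10.100.101
address=/dist.blizzard.com/10.10.100.102
address=/hirez.http.internapcdn.net/10.10.100.103
address=/akamai.cdn.ea.com/10.10.100.104
address=/pls.patch.station.sony.com/10.10.100.105
```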

Hardware Specifications
At Insomnia 51 we used a hybrid of this configuration which made use of two machines working in a cluster with the following spec:

  • Dual hex-core CPUs
  • 128GB RAM
  • 6 x 240GB SSDs in ZFS RAIDZ
  • 6 x 1Gbps NICs
  • OS – FreeBSD 10.0

These machines were configured as a failover pair using CARP on a 4Gbps lagg using LACP. Each node served roughly half of the active cache set, doubling the available cache space to ~1.8TB, with internode communication done over a dedicated 2Gbps lagg.
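On FreeBSD 10 a setup like that can be expressed in /etc/rc.conf: a LACP lagg over the member NICs, with the shared service IP as a CARP alias. The fragment below is a hypothetical sketch — interface names, VHID, password and addresses are all illustrative, and it is not our production config.

```nginx
# Hypothetical /etc/rc.conf fragment: 4 x 1Gbps LACP lagg plus a CARP VIP
ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
ifconfig_igb3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3 10.10.100.10/24"
# CARP virtual IP shared between the failover pair
ifconfig_lagg0_alias0="inet vhid 1 advskew 100 pass examplepass alias 10.10.100.100/32"
```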

LANcache Stats from Insomnia 51
For those that are interested, at its initial outing at Insomnia 51 LANcache:

  • Processed 6.6 million downloads from the internet totalling 2.2TB
  • Served 34.1 million downloads totalling 14.5TB to the LAN
  • Peaked at 4Gbps (the max capacity) to the LAN

Config Downloads

Updates
* 2015-09-17 – Updated info about required DNS entries
* 2015-10-09 – Linked initial public version of configs on github

Written by Dilbert

April 30th, 2014 at 3:33 pm

Posted in FreeBSD,Gaming,Nginx

Battle.Net Installer Error Code: 2600 Fix

with one comment

If you’re installing Battle.Net, which is required and installed as the initial part of Blizzard games such as Starcraft II & Hearthstone, and the installer fails with the message:

Whoops! Looks like something broke. Error Code: 2600

This can be caused by a bad download, which can be the result of a proxied web connection.

Proxies, particularly caching proxies, can translate the Blizzard downloader’s HTTP request with a Range header into a full request, which is subsequently returned as-is to the client, i.e. a 200 OK response containing the full file. The downloader was expecting a 206 Partial Content response but appears to only check for a 20x response, hence it doesn’t spot the issue and builds its full file incorrectly.

To make matters worse, the downloader stores this file in the Windows temporary directory and doesn’t delete it on failure or before trying to download it again, such as when the installer is restarted.

If you’re using nginx prior to 1.5.3 as a caching proxy then this will happen if you’re the first person to download the file; after that, 206 responses are correctly returned for Range requests using the cached file. This behaviour changed in 1.5.3, when an enhancement was added to return 206 for responses being cached on the fly. To be clear, this isn’t technically a bug in nginx: the spec allows a server to return a 200 OK response to a Range request; it’s the Blizzard downloader that’s at fault for not correctly processing 200 OK when it’s expecting 206 Partial Content.

If your Battle.Net installer is bugged like this, simply delete the Blizzard directory from your Windows temporary directory (%TEMP%) and re-run the installer after fixing or disabling your proxy.

Written by Dilbert

April 15th, 2014 at 12:23 am

Posted in Gaming,Nginx

Caching Steam Downloads @ LAN’s

with 32 comments

This article has been superseded by LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

Gone are the days when users installed most of their games from physical media such as CDs/DVDs; instead they rely on high-bandwidth internet connections and services such as EA’s Origin and Valve’s Steam.

This causes issues for events which bring like-minded gamers together, such as Multiplay’s Insomnia Gaming Festival, due to the massive amount of bandwidth required to support these services as users patch games and download new ones to play with their friends.

With the release of Steam’s new Steampipe content system, creating a local cache of Steam downloads has become much easier, making game installs at these types of events significantly quicker while requiring dramatically less internet bandwidth.

Steampipe uses standard web requests to Valve Software’s new content servers, so standard proxy technology can be used to cache them.

This article describes how to use FreeBSD + nginx to create a high-performance caching proxy for all Steam content served by Steampipe.

Hardware Specifications
The following is our recommended specification for a machine. Obviously the higher the spec the better, in particular more RAM.

  • Quad Core CPU
  • 32GB RAM (the more the better)
  • 6 x 1TB HD’s
  • 2 x 120GB SSD’s
  • 1Gbps NIC

Machine Install
Our setup starts by performing a FreeBSD ZFS install using mfsBSD; we use a RAIDZ across the HDs, e.g.

zfsinstall -d da0 -d da1 -d da2 -d da3 -d da4 -d da5 \
    -r raidz -t /cdrom/8.3-RELEASE-amd64.tar.xz -s 4G

Once the OS is installed, set up the L2ARC on the SSDs (this assumes zfsinstall’s default pool name, tank):

zpool add tank cache da6 da7

Next install the required packages:

pkg_add -r nginx

Enable the FreeBSD http accept filter, which improves performance by handling http connection setup in kernel mode:

echo 'accf_http_load="YES"' >> /boot/loader.conf
kldload accf_http

Enable nginx to start on boot by adding nginx_enable="YES" to /etc/rc.conf:

echo 'nginx_enable="YES"' >> /etc/rc.conf

Configuring Nginx
First create some directories:

mkdir -p /data/www/steamproxy
mkdir -p /data/www/logs

The trick to mirroring the Steam content so that duplicate requests are served locally is twofold.

  • First, spoof *.cs.steampowered.com and content[1-8].steampowered.com and send all that traffic to the proxy box
  • Second, configure nginx to mirror the requests under /depot/ on demand, using non-spoofing resolvers.

Here’s our example /usr/local/etc/nginx.conf. It uses Google’s open DNS servers for the resolver but any non-spoofed resolvers will do.

user www www;
worker_processes  4;
 
error_log  /data/www/logs/nginx-error.log;
 
events {
	worker_connections 8192;
	multi_accept on;
	use kqueue;
}
 
http {
	include mime.types;
 
	access_log  /data/www/logs/nginx-access.log;
	keepalive_timeout 65;
 
	# steam mirror
	server {
		listen 80 accept_filter=httpready default;
		server_name _;
		access_log /data/www/logs/steam-access.log;
		error_log /data/www/logs/steam-error.log;
		root /data/www/steamproxy;
		index index.html;
		resolver 8.8.8.8;
 
		location /depot/ {
			try_files $uri @mirror;
			access_log /data/www/logs/depot-local.log;
		}
 
		location / {
			proxy_next_upstream error timeout http_404;
			proxy_pass http://$host$request_uri;
			proxy_redirect off;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			add_header X-Mirror-Upstream-Status $upstream_status;
			add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
			add_header X-Mirror-Status $upstream_cache_status;
		}
 
		location @mirror {
			proxy_store on;
			proxy_store_access user:rw group:rw all:r;
			proxy_next_upstream error timeout http_404;
			proxy_pass http://$host$request_uri;
			proxy_redirect off;
			proxy_set_header Host $host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			add_header X-Mirror-Upstream-Status $upstream_status;
			add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
			add_header X-Mirror-Status $upstream_cache_status;
			access_log /data/www/logs/depot-remote.log;
		}
	}
}

We use on-demand mirroring instead of proxy caching because the files under Steam’s /depot/ never change, so storing them permanently eliminates the overhead of cache cleaners and other calculations.

Don’t forget this can also be used with smaller hardware to reduce steam bandwidth requirements in an office environment too 🙂

Update – 2013/08/19 – Added content[1-8].steampowered.com as additional content server hosts which need proxying
Update – 2013/09/10 – Changed from $uri to $request_uri (which includes query string args) required for authentication now.
Update – 2014/04/30 – Superseded by LANcache

Written by Dilbert

April 17th, 2013 at 1:01 pm

Posted in FreeBSD,Gaming,Nginx