Multiplay Labs

tech hits and tips from Multiplay

LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

with 326 comments

Last year we posted our Caching Steam Downloads @ LAN’s article, which has been adopted by many of the LAN event organisers in the community as the baseline for improving download speeds and helping avoid internet saturation when you have tens to thousands of gamers at an event, all updating and installing new games from Steam.

This rework builds on the original concepts from our Steam caching and brings in additional improvements from the community, such as the excellent work by the guys @ as well as other enhancements.

The features used in this configuration require nginx 1.6.0+, the latest stable release at the time of writing.

Nginx Configuration
In order to make the configuration more maintainable we’ve split the config up into a number of smaller includes.

In the machines directory you have lancache-single.conf, the main nginx.conf file, which sets up the events and http handlers as well as the key features via includes: custom log format, cache definitions and active vhosts.

include lancache/log_format;
include lancache/caches;
include vhosts/*.conf;

The custom log format adds three additional details to the standard combined log format: “$upstream_cache_status”, “$host” and “$http_range”. These are useful for determining the efficiency of each segment of the cache.
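For reference, the resulting format is roughly the following (a sketch of lancache/log_format; the exact layout of the include may differ):

```nginx
# Standard combined log format plus cache status, host and range
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$upstream_cache_status" "$host" "$http_range"';
```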

In order to support the expanding number of downloads supported by LANcache we’ve switched the config from static mirror to using nginx’s built in caching.

In our install we’re caching data to 6 x 240GB SSDs configured in ZFS RAIDZ, so we have just over 1TB of storage per node.
To ensure we don’t run out of space we’ve limited the main installs cache to 950GB, with custom loader details to ensure we can initialise the cache more quickly on restart.
The other cache zone is used for non-install data, so is limited to just 10GB.

We also set proxy_temp_path to a location on the same volume as the cache directories so that temporary files can be moved directly into the cache directory, avoiding a file copy which would put extra stress on the IO subsystem.

proxy_cache_path /data/www/cache/installs levels=2:2 keys_zone=installs:500m inactive=120d max_size=972800m loader_files=1000 loader_sleep=50ms loader_threshold=300ms;
proxy_cache_path /data/www/cache/other levels=2:2 keys_zone=other:100m inactive=72h max_size=10240m;
proxy_temp_path /data/www/cache/tmp;

Here we define individual server entries for each service we’ll be caching; we do this so that each service can configure how its cache works independently.
In order to allow direct use of the configs in multiple setups without having to edit the config files themselves, we make use of named entries for all listen addresses.

The example below shows the origin server entry, which listens on lancache-origin and requires the spoof entries.

We use server_name as part of the cache key to avoid cache collisions, and so add _ as the wildcard catch-all to ensure all requests to this server’s IP are processed.

For performance we configure the access log with buffering.

# origin
server {
        listen lancache-origin accept_filter=httpready default;
        server_name origin _;
        # DNS entries:
        # lancache-origin
        access_log /data/www/logs/lancache-origin-access.log main buffer=128k flush=1m;
        error_log /data/www/logs/lancache-origin-error.log;
        include lancache/node-origin;
}

The include is where all the custom work is done, in this case lancache/node-origin. There are currently 5 different flavours of node: blizzard, default, origin, pass and steam.

Origin’s CDN is pretty bad in that it currently prevents the caching of data; because of this we’re forced to ignore the Cache-Control and Expires headers. The files themselves are very large (10GB+) and the client uses range requests to chunk the downloads, to improve performance and provide realistic download restart points.

By default the nginx proxy translates a GET request with a Range header into a full download by stripping the Range and If-Range request headers from the upstream request. It does this so subsequent range requests can be satisfied from the single download. Unfortunately Origin’s CDN prevents this, so we have to override the default behaviour by passing through the Range and If-Range headers. This means the upstream will reply with a 206 (Partial Content) instead of a 200 (OK) response, and hence we must add the range to the cache key so that additional requests are matched correctly.

The final customisation for Origin is to use $uri in the proxy_cache_key; we do this as the Origin client uses a query string parameter sauth=&lt;key&gt;, which would otherwise end up in the cache key and prevent cache hits.
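Putting those pieces together, the core of lancache/node-origin looks roughly like this (a sketch rather than the shipped include; the zone name and exact directives are assumptions):

```nginx
location / {
        # Origin's CDN currently prevents caching, so ignore its headers
        proxy_ignore_headers Cache-Control Expires;

        # Pass range requests through instead of letting nginx strip them
        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;

        # Key on $uri (not $request_uri, which would include the per-user
        # sauth= query string) plus the requested range, so each 206
        # chunk is cached and matched independently
        proxy_cache_key "$server_name$uri $http_range";

        proxy_cache installs;
        include lancache/proxy-cache;
}
```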

Blizzard have large downloads too, so to ensure that requests are served quickly we cache 206 responses in the same way as for Origin.

All Steam downloads are located under /depot/, so we have a custom location for that which ignores the Expires header, as Steam sets a default one.

We also store the content of /serverlists/ requests, as these give us information about hosts used by Steam to process download requests. The information in these files can help identify future DNS entries which need spoofing.
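A sketch of what the Steam node described above might contain (the location layout is an assumption based on the description, not the shipped lancache/node-steam):

```nginx
# Depot downloads: cacheable, but Steam sets a default Expires header
location /depot/ {
        proxy_ignore_headers Expires;
        proxy_cache installs;
        include lancache/proxy-cache;
}

# Server lists: stored so the hostnames Steam hands out can be checked
# for new DNS entries that need spoofing
location /serverlists/ {
        proxy_cache other;
        include lancache/proxy-cache;
}
```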

Finally the catch-all / entry caches all other items according to their headers.

This is the default node, which is used for riot, hirez and sony; it uses standard caching rules based on the proxy_cache_key "$server_name$request_uri".
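In other words, the default node reduces to something like this (a sketch; the zone name is an assumption):

```nginx
# lancache/node-default (sketch): standard header-driven caching
location / {
        proxy_cache_key "$server_name$request_uri";
        proxy_cache installs;
        include lancache/proxy-cache;
}
```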

Required DNS entries
All of the required DNS entries for each service are documented in its server block in vhosts/lancache-single.conf, which as of writing covers:
lancache-steam * * *
lancache-microsoft * *

You’ll notice that each entry starts with lancache-XXXX; this is the name used in the listen directive, so no editing of the config is required for IP allocation to each service. As we’re creating multiple server entries, and each captures hostnames using the _ wildcard, each service must have its own IP, e.g. lancache-steam =, lancache-riot =, lancache-blizzard =, lancache-hirez =, lancache-origin = and lancache-sony =
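As an illustration, the spoof entries can be created with a DNS server such as dnsmasq; the hostnames and IPs below are placeholders, not the real CDN names (those are listed in the vhost comments):

```conf
# dnsmasq example: point each CDN hostname at the LANcache IP
# for the matching service (placeholder values throughout)
# cdn.example-steam.test  -> lancache-steam
address=/cdn.example-steam.test/10.10.0.10
# cdn.example-origin.test -> lancache-origin
address=/cdn.example-origin.test/10.10.0.11
```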

Hardware Specifications
At Insomnia 51 we used a hybrid of this configuration which made use of two machines working in a cluster with the following spec:

  • Dual hex-core CPUs
  • 128GB RAM
  • 6 x 240GB SSDs in ZFS RAIDZ
  • 6 x 1Gbps NICs
  • OS – FreeBSD 10.0

These machines were configured as a failover pair using CARP over a 4Gbps lagg using LACP. Each node served roughly half of the active cache set, doubling the available cache space to ~1.8TB, with internode communication over a dedicated 2Gbps lagg.

LANcache Stats from Insomnia 51
For those that are interested, at its initial outing at Insomnia 51 LANcache:

  • Processed 6.6 million downloads from the internet totalling 2.2TB
  • Served 34.1 million downloads totalling 14.5TB to the LAN
  • Peaked at 4Gbps (the max capacity) to the LAN

Config Downloads

* 2015-09-17 – Updated info about required DNS entries
* 2015-10-09 – Linked initial public version of configs on github

Written by Dilbert

April 30th, 2014 at 3:33 pm

Posted in FreeBSD,Gaming,Nginx

326 Responses to 'LANcache – Dynamically Caching Game Installs at LAN’s using Nginx'

  1. You have to click “Older Comments” under Comments navigation to see older comments

    Dilbert: I have a question. I have an event coming up in late February that I use the LANCache configuration for (and I’m hoping the nginx range module is in by then!) and we are interested in using CARP to load balance a few different nodes for LANCache. Can you provide more details on how you set that up? How did you sync the cache data? You’re not using nginx’s clustering features, right?


    16 Dec 15 at 14:54

  2. Yer I added that to the theme earlier.

    You can’t use CARP to do that, you would need LAGG, but even then it’s not a good idea as there’s no guarantee which node a machine would hit; better to split each service if you don’t have space / capacity on one.


    16 Dec 15 at 16:12

  3. Okay, I’ll have to design something up. Thanks! Any idea if the range module patches will be done by Feb?


    16 Dec 15 at 16:35

  4. on this page Config Downloads

    LANcache r386 it does not have the microsoft files

    this is missing in it
    lancache-microsoft * *

    along with the node-micrsoft files

    if i get the new files from gitub how do i get the range_cache module (unreleased watch this space) working
    do you have links to the source code or how to ?


    17 Dec 15 at 22:34

  5. That’s correct I’m afraid


    18 Dec 15 at 12:53

  6. Hi Dilbert, it has been about a year since I installed this; I had problems with Blizzard updates. How is it working now?

    Thanks in advance


    22 Dec 15 at 17:43

  7. Hi Dilbert,

    i’ve a problem with Akamai content. DNSSpoof seems to be not working …

    I added new zone “” redirected to in my case. DNS resolution when i use “dig” work fine, but client continue to use “real” ip of akamai servers.

    Steam content work fine.

    Do you have any solution ?


    23 Dec 15 at 09:41

  8. Job for nginx.service failed. See “systemctl status nginx.service” and “journalctl -xe” for details.
    vagrant@dyncache:/vagrant$ systemctl status nginx.service
    ? nginx.service – A high performance web server and a reverse proxy server
    Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Tue 2015-12-29 20:58:16 UTC; 12s ago
    Process: 3222 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/ (code=exited, status=0/SUCCESS)
    Process: 4576 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
    Main PID: 3165 (code=exited, status=0/SUCCESS)

    Dec 29 20:58:16 dyncache systemd[1]: Starting A high performance web server and a reverse proxy server…
    Dec 29 20:58:16 dyncache nginx[4576]: nginx: [emerg] unknown “range_cache_range” variable
    Dec 29 20:58:16 dyncache nginx[4576]: nginx: configuration file /etc/nginx/nginx.conf test failed
    Dec 29 20:58:16 dyncache systemd[1]: nginx.service: control process exited, code=exited status=1
    Dec 29 20:58:16 dyncache systemd[1]: Failed to start A high performance web server and a reverse proxy server.
    Dec 29 20:58:16 dyncache systemd[1]: Unit nginx.service entered failed state.
    Dec 29 20:58:16 dyncache systemd[1]: nginx.service failed.
    Latest LANcache (requires the unreleased range_cache module)
    How can I get that module, or compile it? It seems illogical to publish it incomplete.


    29 Dec 15 at 21:23

  9. Can’t say I’ve heard of DNSSpoof I’m afraid.


    30 Dec 15 at 14:44

  10. You’ll need to use the old config until the module is released, which we’ve still had no word on I’m afraid.


    30 Dec 15 at 14:45

  11. # Looking for a solution; I’m stuck halfway, please help.
    # Ranger.conf is the file needed for the new lancache work;
    the problem is that I think it is incorrect or missing some improvement.

    # Then we need to have LuaJIT, PCRE2, OpenResty compiled in our operating system.
    # install all base
    sudo apt-get update && sudo apt-get -y install git-core git software-properties-common python-software-properties libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make build-essential htop
    add-apt-repository ppa:nginx/stable -y

    # install LuaJIT
    git clone
    make && sudo make install

    # install pcre2
    curl | tar xvz
    ./configure --prefix=/usr/local
    make && sudo make install

    # install openresty
    curl | tar xvz
    ./configure --prefix=/opt/openresty
    make && sudo make install

    # here script for install lancache.


    31 Dec 15 at 19:47

  12. Cyberjzus: thanks for referring to my Github, but I must say this is my working configuration for Steam; still looking into why Origin and the likes don’t work on my setup. I couldn’t really find an installation guide, hence creating the Github with the instructions; it will contain my setup while testing, so I might create a side branch just for development…


    3 Jan 16 at 19:11

  13. Hi,

    I’m sorry, my english is so bad …

    in fact, my config work fine with Steam and LoL, but not with Origin,, … who use Akamai servers.

    I use tcpview to see game client connections. Example:

    When I use nslookup to resolve this, DNS spoofing works fine and I get my local IP address, but the client still uses the “real” IP… I don’t understand why…

    Thanks a lot


    4 Jan 16 at 14:24

  14. Hey guys, thanks to Frag-o-Matic they’ve figured out we could use the http-slice module included (but needs to be told to be compiled) with nginx 1.9.8 and higher to cache these range requests.

    For example, you can see the Origin config file here:

    I am testing this now with my local setup and it appears to be working. I’ll report back with further testing results…


    4 Jan 16 at 14:54

  15. I’m not sure that will work properly as it looks like slice only deals with requests which aren’t already range requests.


    4 Jan 16 at 14:59

  16. Well, I downloaded a game on Origin using these settings and when I uninstalled it and re-downloaded the game it was coming from the cache at blazingly fast speeds… I’ll do some further testing but so far this seems good…


    4 Jan 16 at 15:47

  17. Hey guys, ansible-lp creator here.

    I stumbled across the 1.9.8 release notes while setting up the current revision, this is a brand new nginx feature. No mainstream distribution packages this today, so we’re kind of forced to roll our own. No worries, this just got released, but it’s been running for years at Taobao. (they’ve contributed the first couple of patches afaik)

    To make things easier, I’ve made my packages publicly available at (ansible-lp uses this distribution). The nginx package can be found there. Sorry, only debian for now, I see you’re heavy *BSD users.

    I also posted my findings on the ranger issue tracker at There’s a full implementation guide at Yes, it’s the real deal, native transparent 206 caching. 🙂

    Also, a big thank you for all the research you’ve published over the years. This blog is what inspired us to do the same! Hit me up on Github or on timo #at# if you want to get in touch.



    28 Jan 16 at 22:21

  18. Unfortunately that only solves the large file download problem; it doesn’t solve:

    • Random range request caching
    • Splitting and aligning 206 requests into small requests


    29 Jan 16 at 03:08

  19. It does all the things you mention and even adds (optional) on-the-fly gzip compression towards the clients on top.

    That’s where the term ‘slice’ comes from, it maps the range request to slices internally, the size of which can be configured, and translates that to an almost-equal range request upstream. (aligned to the slice size)

    There’s a noticeable CPU and memory overhead, so the default slice size of 1m might not apply to our use cases.


    29 Jan 16 at 19:48

  20. I don’t see any mention of range re-alignment in the docs; without that the random ranges will stay random and the cache won’t be very useful unfortunately.

    Are you sure it’s capable of slicing requests which are themselves range requests (not standard full content requests), as there’s no indication of that in any of the docs?

    A test to confirm this would be to do two requests, the first for bytes=25-10240 and then the second for bytes=34-10240, and see if all components come from the cache, as they should if this is indeed doing both slicing of range requests and slice start point re-alignment.


    30 Jan 16 at 03:27

  21. On the pages I linked, “The file is divided into smaller “slices”. Each range request chooses particular slices that would cover the requested range and, if this range is still not cached, put it into the cache.”. Yes, tested this extensively with both manual requests as with Origin/Blizzard clients. Additionally, it’s even possible to manually pre-heat the cache with full content requests and have ‘real’ clients use range requests, as ‘slice’ is a site-wide toggle and the slice identifier is part of the cache key for the site. All requests are translated all the time. (turn this OFF for Steam and Riot!)

    With the default slice size of 1m, a cache miss that touches any block within the 0-1048576 range would make Nginx request 0-1048576 upstream. Requesting an additional byte like 0-1048577 results in a second 1m block being requested upstream (so 1048577-2097152) and only the first byte of the second block being returned to the client.

    So yes, there is an unavoidable upstream read inflation, but it’s bounded by the slice size you configure. Parallel requests that miss the same slice will have to wait for that slice to come in, but at least it’s better than having to wait for a whole object to be downloaded to the cache. This proved to be problematic for the Blizzard downloader, so we used to preload all Blizzard games to avoid failures.

    I still have no idea about the performance we’re going to get with 100+ simultaneous clients, multiple TB in the cache, memory constraints etc. We might play it safe and go for 2m-4m’ish slice size. Not sure about changing this on the fly, probably invalidates existing entries.
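    For reference, the slice setup being described maps onto nginx config along these lines (the zone name, slice size and upstream are illustrative, not a confirmed working config):

```nginx
location / {
        slice 1m;
        proxy_cache installs;
        # $slice_range must be part of the key and passed upstream
        proxy_cache_key "$server_name$uri $slice_range";
        proxy_set_header Range $slice_range;
        # each slice comes back as a 206 partial response
        proxy_cache_valid 200 206 90d;
        proxy_pass http://upstream_cdn;
}
```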


    30 Jan 16 at 12:23

  22. […] LANcache – Dynamically Caching Game Installs at LAN’s using Nginx […]

  23. […] currently using nginx to cache Steam downloads for my local network as detailed by Multiplay. This works great, but Steam has a feature to watch other users games via a stream not unlike […]

  26. Hi! today I cant patch League Of Legends, only me?


    25 Feb 16 at 13:03

  27. Patching LOL I receive this error message in LOLPatcher.log:

    000004.417| OKAY| Riot::RADS::Common::HTTPConnection::GetFile: (“/releases/live/projects/lol_air_client/releases/releaselisting_LA2”, “C:/Riot Games/League of Legends/RADS/temp/TMP0.tmp”, 0x00000000)
    000005.636| ERROR| Riot::RADS::Common::HTTPConnection::GetFile: E:/jenkins/workspace/rads-code-win-release/code/Solutions/RiotRADS/Source/RADS/Common/HTTPConnection.cpp(766): Perform request failed: “Couldn’t connect to server” (curl code 7), for file:’/releases/live/projects/lol_air_client/releases/releaselisting_LA2′.
    000006.841| ERROR| Riot::RADS::Common::HTTPConnection::GetFile: E:/jenkins/workspace/rads-code-win-release/code/Solutions/RiotRADS/Source/RADS/Common/HTTPConnection.cpp(766): Perform request failed: “Couldn’t connect to server” (curl code 7), for
    please help


    25 Feb 16 at 20:38

  28. About the League Of Legends 6.4 Update, checking the LOGs, got the information:
    Before 6.4 Patch I found this lines:
    000008.711| OKAY| Riot::RADS::Common::HTTPConnection::HTTPConnection: (
    000008.711| OKAY| Riot::RADS::Common::HTTPConnection::Connect: (
    The LOG that gives me the error has this lines:
    000004.610| OKAY| Riot::RADS::Common::HTTPConnection::HTTPConnection: (
    000004.610| OKAY| Riot::RADS::Common::HTTPConnection::Connect: (
    Some lines down get the error:
    000007.253| ERROR| Riot::RADS::Common::HTTPConnection::GetFile: Failed retrieving ‘/releases/live/projects/lol_air_client/releases/releaselisting_NA’ after 3 attempts.
    So I think the problem is because of these https:// URLs.
    What should be the appropriate change in the Riot Server configuration?
    Thanks in advance


    26 Feb 16 at 01:04

  29. Any idea if/when the range cache module will be released? Keen to get this up and running at our LAN. Got an Apple XServe G5 with two 1TB Seagate Constellations in ZFS mirror, set up for testing, but it sure would be nice if we could also handle traffic for the other game CDNs.


    26 Feb 16 at 04:58

  30. … which I realise is tiny but we have all of 35 people at our events.


    26 Feb 16 at 05:06

  31. Sorry but we’re in the middle of huge project atm so I’ve not had any time to get this sorted I’m afraid.


    26 Feb 16 at 09:41

  32. @fsalazar You are not the only one, I’m having issues too, thought it was on my part, been tweaking for the whole day yesterday.

    Steam caching works fine, maybe Riot changed something on their end?


    26 Feb 16 at 17:00

  33. @fsalazar: I’ve the same problem with LOL since yesterday… Maybe Riot has changed their servers?


    26 Feb 16 at 19:43

  34. Thanks Dilbert. Anyway, I haven’t figured out how to solve this League Of Legends update problem; help me please when you have the time…


    27 Feb 16 at 12:42

  35. Hi,

    Anyone have found a solution for caching Riot content since patch 6.4 ?


    3 Mar 16 at 08:12

  36. I’ve not looked I’m afraid, but it sounds like they must have changed something; check the nginx logs to see if you’re getting failures.


    3 Mar 16 at 10:01

  37. The LoL patcher uses HTTPS to connect to
    Unfortunately the curl based LoL client only accepts SSL certificates from its own CA. It doesn’t inherit the trusted CA certificates from the OS.
    The trusted CA certificates are compiled into the client.

    As far as I’ve looked, there is no fallback option to HTTP or to alter the curl-ca-bundle in the client.

    I’m afraid it’s not possible with the >6.4 client patcher to download from your cache with a self-signed SSL certificate.


    3 Mar 16 at 13:12

  38. If they have changed to https we can’t cache it, simple as.


    3 Mar 16 at 13:33

  39. I guess we have to patch one machine and then do a differential copy with Robocopy or something similar.


    3 Mar 16 at 17:25

  40. Damn… I’ve a LAN with approx 100 people in three weeks…

    Fortunately, Steam, Origin and Blizzard are still working fine.


    3 Mar 16 at 18:46

  41. I think Blizzard have been reported to have issues as well.

    I may be wrong but I think it’s an Akamai “improvement” which the implementers didn’t consider caching 🙁


    3 Mar 16 at 20:40

  42. sudo openssl genrsa -des3 -out 2048
    sudo openssl req -new -key -out

    sudo cp
    sudo openssl rsa -in -out

    sudo openssl x509 -req -days 365 -in -signkey -out


    6 Mar 16 at 14:21

  43. make a file named

    then put this in

    # riot
    server {
            listen ssl;
            ssl on;
            ssl_certificate /etc/ssl/;
            ssl_certificate_key /etc/ssl/;
            server_name riot _;
            # DNS entries: lancache-riot
            access_log /srv/lancache/logs/lancache-riot-access.log main buffer=128k flush=1m;
            access_log /srv/lancache/logs/lancache-riot-keys.log keys_default buffer=128k flush=1m;
            error_log /srv/lancache/logs/lancache-riot-error.log;
            # Default Node
            include lancache/resolver;
            include lancache/cache-key-default;
            location / {
                    # Some downloads are very large so we cache based on
                    # range to keep single downloads quick and hence ensure
                    # interactivity is good.
                    proxy_set_header Range $http_range;
                    proxy_set_header If-Range $http_if_range;
                    proxy_cache_key "$server_name$request_uri $http_range";
                    #testing cache of 200 value
                    #proxy_cache_valid 200 90d; proxy_cache_valid 206 90d;
                    # Use Blizzard cache
                    proxy_cache riot;
                    proxy_read_timeout 150;
                    include lancache/proxy-cache;

    ### still testing, but now I have a secure SSL connection to the site but still get a new error in the LOL launcher:

    “Peer certificate cannot be authenticated with given CA certificates”


    6 Mar 16 at 14:24

  44. You’re spoofing certs; this is a very bad idea, and the client rightly rejects your invalid cert.


    7 Mar 16 at 00:28

  45. It seems like it reverted back to http, everything is back to “normal” now.


    24 Mar 16 at 18:15

  46. Looks like Blizzard recently added a few more CDNs that should be added to the sticky at the beginning. I saw these going through my firewall. If anyone knows of the URLs for other regions, please yell them out. I’m eastern Americas.


    31 Mar 16 at 15:29

  47. hey dilbert or community, can you say which services actually work for caching?

    i tried out your last config on Debian Jessie without kqueue and httpready, and I think it only works for Steam.

    maybe someone has an up-to-date config and can give me some information.


    2 Apr 16 at 19:25

  48. Hi, I am really struggling with the Lancache setup. Our venue has a room full of PS4’s and downloading new PS4+ games takes days… So I am trying to set up LANCache for SONY downloads. IIS and all servers on the Windows platform are a walk in the park; however, my pace has ground to a halt with Ubuntu 16 and nginx. I have installed Ubuntu and nginx and see the standard welcome to nginx page if I browse to the IP address of the Ubuntu 16 server. Whatever I try causes the welcome screen to stop working and nothing to respond in its place. I have downloaded the lancache-386.tgz file and can’t get past what I am supposed to do as opposed to what I am doing. Is anyone able to throw me a lifeline and tell me where I am going wrong? I extract the files and place them in the /etc/nginx folder and have tried a combination of places, but I seem to break nginx when I do this. I restore the original folders and nginx works again with the standard welcome screen. Any help would be much appreciated. I am not the gamer, but it is my venue and network that the PS4s live on for the people that are gamers, young kids in this case.


    29 Apr 16 at 05:39

  49. @JackStone: There is currently no built in support for PSN caching in either Multiplay’s setup, or Ti-Mo’s alternative Ansible setup. I’m currently looking into a solution for this over the weekend, and I’ll get back to you if I have any progress.


    2 May 16 at 03:21

  50. I’m seeing a new DNS entry for Steam…


    2 Jun 16 at 14:53
