Multiplay Labs

tech hits and tips from Multiplay

Windows 10 updates reset firewall rules!

without comments

My work machine finally became eligible for the Windows 10 anniversary update yesterday so I dutifully installed it.

After losing the best part of an hour waiting for it to install, it took more precious time to re-remove all the cruft that I’d removed before and that the update saw fit to re-install, such as Zune Music, OneDrive, Cortana etc. This time it was harder than before, with more options hidden from the UI and buried in the group policy editor and AppxPackage. For what is overall a good OS, Microsoft’s obsession with installing bloatware is really disappointing.

Anyway, last night I needed to do some work so went to log in remotely, but had no joy. On returning to the office this morning I did some digging and, sure enough, the update had removed the custom firewall options that had been configured!

So if you’re applying any of the major Windows 10 updates, be sure to back up your firewall settings and check it hasn’t changed them before leaving the office!
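On Windows the backup and restore can be done with netsh (the backup path here is an illustrative example):

```shell
rem Export the current firewall policy before applying the update
netsh advfirewall export "C:\backup\firewall-policy.wfw"

rem After the update, restore the rules if they were reset
netsh advfirewall import "C:\backup\firewall-policy.wfw"
```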

Written by Dilbert

November 22nd, 2016 at 10:18 am

Posted in Windows

Resizing a FreeBSD ZFS pool

without comments

When swapping disks or expanding the backing volume of a VM, e.g. on GCE, there are a few easy steps to allow the OS to take advantage of the new space.

  1. Resize the disk in GCE
  2. gpart recover <device> – Ensures the partition information is at the end of the disk (it will be showing as CORRUPT)
  3. gpart resize -i <index> <device> – Expands the partition to take up the newly available space
  4. zpool online -e <pool> <device> – Expands the pool on the given device

For example:

gpart recover da0
gpart resize -i 3 da0
zpool online -e tank da0p3

Note: The new capacity only shows after a reboot. HEAD includes a new reprobe command for camcontrol, which allows this to be done live.

Written by Dilbert

May 27th, 2016 at 9:49 am

Posted in FreeBSD,ZFS

FreeBSD 10.2-RELEASE EFI ZFS root boot

without comments

Based on Eric McCorkle’s work on adding modular boot support to EFI, including ZFS support, which is currently in review, we’ve back-ported the required changes to the 10.2-RELEASE code base; others may find this useful.

This can be found here: FreeBSD 10.2-RELEASE EFI ZFS root boot patch

Here’s how:

We assume your source tree is in /usr/src and your disk device is da0.

  1. Go to your FreeBSD source tree:
    cd /usr/src
  2. Download the EFI ZFS root patch
  3. Extract the patch
    tar -xzf freebsd-10-efi-zfs-boot.tgz
  4. Apply the patches to your source tree in order
    sh -c 'for i in `ls -1 patches/efi*.patch | sort`; do patch -E -N -p0 -F1 -t < $i; done'
  5. Cleanup orig files and remove old empty directories:
    find /usr/src/ -name '*.orig' -delete
    find /usr/src/sys/boot/ -type d -empty -delete
  6. Build and install world e.g.
    make buildworld -j24 && make installworld
  7. Partition your disk e.g.
    gpart create -s GPT da0
    gpart add -t efi -s 800K da0
    gpart add -t freebsd-swap -s 10G da0
    gpart add -t freebsd-zfs da0
    gpart bootcode -p /boot/boot1.efifat -i 1 da0
  8. Perform your OS install to da0p3
  9. Reboot and enjoy

Update 2015-12-11: Corrected patch URL.
Update 2015-12-14: Updated code in line with #11226 of
Update 2016-01-15: Updated code in line with that which is now committed to HEAD.

Written by Dilbert

December 10th, 2015 at 6:20 pm

Posted in FreeBSD,Hackery,ZFS

KNOPPIX Linux executable “No such file or directory”

without comments

We needed to update the firmware on some Intel XL710 NICs, which only have a Windows or Linux firmware utility ATM, so we booted KNOPPIX Linux and tried to patch the firmware, only to be presented with:
./nvmupdate64e: No such file or directory

The binary was there and the OS was 64-bit (the same as the binary), so what was going on? After much head scratching it turned out that while the kernel was 64-bit, KNOPPIX’s userland is 32-bit only, meaning there was no way to run the provided 64-bit binary.
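A quick way to spot this kind of mismatch is to compare the kernel architecture with the binary’s ELF class (a sketch; /bin/ls stands in for the real binary, which was ./nvmupdate64e):

```shell
#!/bin/sh
# BIN is a stand-in; on KNOPPIX it would be ./nvmupdate64e
BIN=/bin/ls
echo "Kernel reports: $(uname -m)"
# Byte 5 of an ELF header is the class: 1 = 32-bit, 2 = 64-bit
class=$(od -An -j4 -N1 -tu1 "$BIN" | tr -d ' ')
if [ "$class" = "2" ]; then echo "$BIN is 64-bit"; else echo "$BIN is 32-bit"; fi
# A 64-bit binary on a 64-bit kernel can still fail with
# "No such file or directory" if the 64-bit loader/userland is absent.
```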

Fix: use another distro…

Written by Dilbert

December 2nd, 2015 at 5:41 pm

Posted in Linux,Networking,OS's

Firmware patching with EFI over IPMI

without comments

Having the ability to boot using EFI and patch firmware is a nice feature UEFI adds in comparison to old-style BIOS, so updating firmware using it should be quick and simple, right?

Well, not so much when you’re doing it over IPMI, where the standard way to get files onto the machine is to mount an ISO image.

From what we found, only the boot image portion of the ISO is visible under EFI, meaning you can’t just mount any old ISO and see your files.

Next we tried a USB pen drive; again this wasn’t straightforward, with the EFI shell refusing to recognise our drive.

It turns out EFI requires a specific partition layout or the drive’s filesystem won’t be recognised.

This is the process we ended up with:

  1. Format the USB drive using Rufus with Partition scheme and target system type: GPT partition scheme for UEFI
  2. Copy the firmware image to USB drive
  3. Boot the machine to the EFI shell
  4. Mount the USB drive from the IPMI storage option
  5. Use map -r to refresh the device list
  6. Change to the mounted filesystem device, typically fs0 using: cd fs0:
  7. Run the EFI firmware update
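Steps 5–7 in the EFI shell look roughly like this (the updater’s filename is illustrative):

```
Shell> map -r                # refresh the device list
Shell> fs0:                  # switch to the mounted USB filesystem
fs0:\> ls                    # confirm the firmware files are visible
fs0:\> FirmwareUpdate.efi    # run the vendor's EFI updater
```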

Written by Dilbert

November 11th, 2015 at 12:09 pm

Posted in OS's

When tail -f doesn’t on FreeBSD

without comments

So when you’re debugging a problem and using tail -f my.log, the last thing you expect to find is that the debugging you’re looking for simply isn’t there.

Well that’s what I had today and it turned out that tail -f wasn’t working!

After much digging I found that kqueue write events would never fire for files > 2GB due to the use of an int for the file offset comparison in the event checking code.

This was introduced with the introduction of EVFILT_VNODE for all filesystems over 10 years ago in r147198.

I fixed it with r287886 but it was definitely one of those WTF moments when I first discovered tail -f was the cause of my debugging info not being displayed.
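The failure mode is easy to set up: a sparse file at the 2GB boundary, the size at which a file offset stored in a 32-bit signed int overflows (paths are illustrative):

```shell
#!/bin/sh
# Create a sparse file of exactly 2^31 bytes - the size at which a
# 32-bit signed int offset overflows.
LOG=/tmp/big.log
truncate -s 2147483648 "$LOG"
# On an unpatched FreeBSD (before r287886) `tail -f "$LOG"` would
# never report appends, as the kqueue write filter compared offsets
# as int. The data really is there though:
echo appended >> "$LOG"
tail -c 9 "$LOG"    # prints: appended
rm -f "$LOG"
```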

Written by Dilbert

September 17th, 2015 at 2:08 am

Posted in FreeBSD

Quick way to upgrade php dependencies using FreeBSD’s pkgng

without comments

Even though the new FreeBSD package manager, first introduced in 10.0-RELEASE, is significantly better than the old one, it still doesn’t deal with all dependency issues when performing an upgrade.

One case where it trips up is when upgrading php that has pecl or pear modules installed, due to the fact that the ports tree doesn’t have the required dependency information.

This can result in a broken php install with modules that fail to load as they haven’t been upgraded.

A simple fix for this is to run the following:

pkg upgrade
pkg install -f `pkg info -x pecl pear | awk -F'-' '{for (i=1;i<NF-1;i++) { printf $i FS } print $i NL }'`
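The awk simply strips the trailing version from each package name so that pkg install -f gets names it can match; for example (the package name here is hypothetical):

```shell
# Drop the version suffix: print fields 1..NF-1 joined on "-"
echo 'php56-pecl-apcu-4.0.7' \
  | awk -F'-' '{for (i=1;i<NF-1;i++) { printf $i FS } print $i }'
# prints: php56-pecl-apcu
```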

If you’re running php-fpm then restart it:

/usr/local/etc/rc.d/php-fpm restart

Update: 12th Dec 2014
Switched from pkg upgrade -f `…` to pkg install -f `…` due to a change in behaviour of pkg after 1.3.7

Written by Dilbert

August 15th, 2014 at 5:42 pm

Posted in FreeBSD

Tagged with , , ,

Using up-to-date ports on FreeBSD before 8.4

without comments

As you’ll likely have found, the ports tree is now incompatible with FreeBSD releases before 8.4, so if you haven’t migrated off earlier versions, e.g. 8.3 (which is now EOL), the latest ports tree will no longer compile due to missing features in make and a missing native unzip.

The following will get it all working again.

First, update make with a copy from 8.4 (this assumes you’re running amd64):

mdconfig -f FreeBSD-8.4-RELEASE-amd64-livefs.iso
mount_cd9660 /dev/md0 /mnt
cp -p /usr/bin/make /usr/bin/make.bak
cp /mnt/usr/bin/make /usr/bin/make
umount /mnt
mdconfig -d -u 0

Next install unzip; it’s actually in the 8.3 source but was never installed due to a missing line in /usr/src/usr.bin/Makefile:

cd /usr/src/usr.bin/unzip
make && make install

Now you’ll be good to update your ports tree and compile 🙂

Written by Dilbert

June 8th, 2014 at 4:13 am

Posted in FreeBSD

LANcache – Dynamically Caching Game Installs at LAN’s using Nginx

with 326 comments

Last year we posted our Caching Steam Downloads @ LAN’s article, which has been adopted by many LAN event organisers in the community as the baseline for improving download speeds and helping avoid internet saturation when you have 10s to 1000s of gamers at events, all updating and installing new games from Steam.

This rework builds on the original concepts from our steam caching and brings in additional improvements from the community, as well as other enhancements.

Due to the features used in this configuration it requires nginx 1.6.0+ which is the latest stable release at the time of writing.

Nginx Configuration
In order to make the configuration more maintainable we’ve split the config up into a number of smaller includes.

In the machines directory you have lancache-single.conf which is the main nginx.conf file that sets up the events and http handler as well as the key features via includes: custom log format, cache definition and active vhosts.

include lancache/log_format;
include lancache/caches;
include vhosts/*.conf;

The custom log format adds three additional details to the standard combined log format: “$upstream_cache_status” “$host” “$http_range”. These are useful for determining the efficiency of each segment of the cache.

In order to support the expanding number of downloads supported by LANcache we’ve switched the config from static mirror to using nginx’s built in caching.

In our install we’re caching data to 6 x 240GB SSD’s configured in ZFS RAIDZ so we have just over 1TB of storage per node.
To ensure we don’t run out of space we’ve limited the main installs cache size to 950GB with custom loader details to ensure we can init the cache quicker on restart.
The other cache zone is used for non-install data, so is limited to just 10GB.

We also set proxy_temp_path to a location on the same volume as the cache directories so that temporary files can be moved directly into the cache directory, avoiding a file copy which would put extra stress on the IO subsystem.

proxy_cache_path /data/www/cache/installs levels=2:2 keys_zone=installs:500m inactive=120d max_size=972800m loader_files=1000 loader_sleep=50ms loader_threshold=300ms;
proxy_cache_path /data/www/cache/other levels=2:2 keys_zone=other:100m inactive=72h max_size=10240m;
proxy_temp_path /data/www/cache/tmp;

Here we define individual server entries for each service we’ll be caching; we do this so that each service can configure how its cache works independently.
In order to allow for direct use of the configs in multiple setups without having to edit the config files themselves we made use of named entries for all listen addresses.

The example below shows the origin server entry, which listens on lancache-origin and requires the spoof entries.

We use server_name as part of the cache key to avoid cache collisions, and add _ as the wildcard catch-all to ensure all requests to this server’s IP are processed.

For performance we configure the access log with buffering.

# origin
server {
        listen lancache-origin accept_filter=httpready default;
        server_name origin _;
        # DNS entries:
        # lancache-origin
        access_log /data/www/logs/lancache-origin-access.log main buffer=128k flush=1m;
        error_log /data/www/logs/lancache-origin-error.log;
        include lancache/node-origin;
}

The include is where all the custom work is done, in this case lancache/node-origin. There are currently 5 different flavours of node: blizzard, default, origin, pass and steam.

Origin’s CDN is pretty bad in that it currently prevents the caching of data; due to this we’re forced to ignore the Cache-Control and Expires headers. The files themselves are very large (10GB+) and the client uses range requests to chunk the downloads, to improve performance and provide realistic download restart points.

By default the nginx proxy translates a GET request with a Range header into a full download by stripping the Range and If-Range request headers from the upstream request. It does this so subsequent range requests can be satisfied from the single download request. Unfortunately Origin’s CDN prevents this, so we have to override this default behaviour by passing through the Range and If-Range headers. This means the upstream will reply with a 206 (Partial Content) instead of a 200 (OK) response, and hence we must add the range to the cache key so that additional requests are correctly satisfied.

The final customisation for Origin is to use $uri in the proxy_cache_key; we do this as the Origin client uses a query string parameter sauth=<key>, and $uri excludes the query string, so every request for the same file hits the same cache entry regardless of the auth key.
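Put together, the Origin-specific overrides described here look roughly like this (a simplified fragment, not the full node-origin include; the 90d validity period is an assumption for illustration):

```nginx
location / {
    # Pass the client's range headers upstream rather than stripping
    # them, since Origin's CDN won't allow a single full download
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    # $uri excludes the sauth=<key> query string; the range must be
    # part of the key because upstream replies are 206 responses
    proxy_cache_key "$server_name$uri$http_range";
    proxy_cache installs;
    # Origin's CDN marks content uncacheable, so ignore its headers
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 206 90d;
}
```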

Blizzard have large downloads too, so to ensure that requests are served quickly we cache 206 responses in the same way as for Origin.

All Steam downloads are located under /depot/ so we have a custom location for that which ignores the Expires header as Steam sets a default Expires header.

We also store the content of requests to /serverlists/ as these requests give us information about hosts used by Steam to process download requests. The information in these files could help identify future DNS entries which need spoofing.

Finally, the catch-all / entry caches all other items according to their headers.

This is the default, which is used for riot, hirez and sony; it uses standard caching rules which cache based on the proxy_cache_key "$server_name$request_uri".

Required DNS entries
All of the required DNS entries for each service are documented in their server block in vhosts/lancache-single.conf, which as of writing includes:
lancache-steam
lancache-microsoft

You’ll notice that each entry starts with lancache-XXXX; this is the entry used in the listen directive, so no editing of the config is required for IP allocation to each service. As we’re creating multiple server entries and each is capturing hostnames using the _ wildcard, each service must have its own IP, e.g. separate addresses for lancache-steam, lancache-riot, lancache-blizzard, lancache-hirez, lancache-origin and lancache-sony.

Hardware Specifications
At Insomnia 51 we used a hybrid of this configuration which made use of two machines working in a cluster with the following spec:

  • Dual Hex Core CPU’s
  • 128GB RAM
  • 6 x 240GB SSD’s ZFS RAIDZ
  • 6 x 1Gbps NICs
  • OS – FreeBSD 10.0

These machines were configured in a failover pair using CARP, each with a 4Gbps lagg using LACP. Each node served ~1/2 of the active cache set to double the available cache space to ~1.8TB, with internode communication done using a dedicated 2Gbps lagg.

LANcache Stats from Insomnia 51
For those that are interested, at its initial outing at Insomnia 51 LANcache:

  • Processed 6.6 million downloads from the internet totalling 2.2TB
  • Served 34.1 million downloads totalling 14.5TB to the LAN
  • Peaked at 4Gbps (the max capacity) to the LAN

Config Downloads

* 2015-09-17 – Updated info about required DNS entries
* 2015-10-09 – Linked initial public version of configs on github

Written by Dilbert

April 30th, 2014 at 3:33 pm

Posted in FreeBSD,Gaming,Nginx

Battle.Net Installer Error Code: 2600 Fix

with one comment

If you’re installing Battle.Net, required and installed as the initial part of Blizzard games such as Starcraft II & Hearthstone, and the installer fails with the message:

Whoops! Looks like something broke. Error Code: 2600

This can be caused by a bad download, which can be the result of a proxied web connection.

Proxies, particularly caching proxies, can translate the Blizzard downloader’s HTTP request with the Range header into a full request, which is subsequently returned as-is to the client, i.e. a 200 OK response containing the full file. The downloader was expecting a 206 Partial Content response but appears to only check for a 20X response, hence it doesn’t spot the issue and builds its full file incorrectly.
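You can see which behaviour a proxy gives you by making a ranged request yourself; a spec-compliant server may legitimately answer either way (the URL is illustrative):

```shell
# Request only the first 100 bytes and dump the response headers:
# 206 Partial Content = range honoured, 200 OK = full file returned
curl -s -D - -o /dev/null -r 0-99 http://cdn.example.com/somefile
```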

To make matters worse the downloader stores this file in the Windows temporary directory and doesn’t delete it either on failure or before trying to download it again such as if the installer is restarted.

If you’re using nginx prior to 1.5.3 as a caching proxy then this will happen if you’re the first person to download the file, after which 206 responses are correctly returned for Range requests using the cached file. This behaviour changed in 1.5.3, when an enhancement to return 206 for on-the-fly cached responses was added. To be clear, this isn’t technically a bug in nginx: the spec allows a server to return a 200 OK response to a Range request; it’s the Blizzard downloader that’s at fault for not correctly processing 200 OK when it’s expecting 206 Partial Content.

If your Battle.Net installer is bugged like this, simply delete the Blizzard directory from your Windows temporary directory (%TEMP%) and re-run the installer after fixing or disabling your proxy.

Written by Dilbert

April 15th, 2014 at 12:23 am

Posted in Gaming,Nginx