Multiplay Labs

tech hits and tips from Multiplay

Archive for the ‘Hackery’ Category

FreeBSD 10.2-RELEASE EFI ZFS root boot

without comments

Based on Eric McCorkle’s work on adding modular boot support to EFI, including ZFS support, which is currently in review, we’ve back-ported the required changes to the 10.2-RELEASE code base, which others may find useful.

This can be found here: FreeBSD 10.2-RELEASE EFI ZFS root boot patch

Here’s how:

We assume your source tree is in /usr/src and your disk device is da0.

  1. Go to your FreeBSD source tree:
    cd /usr/src
  2. Download the EFI ZFS root patch
  3. Extract the patch
    tar -xzf freebsd-10-efi-zfs-boot.tgz
  4. Apply the patches to your source tree in order
    sh -c 'for i in `ls -1 patches/efi*.patch | sort`; do patch -E -N -p0 -F1 -t < $i; done'
  5. Clean up .orig files and remove old empty directories:
    find /usr/src/ -name '*.orig' -delete
    find /usr/src/sys/boot/ -type d -empty -delete
  6. Build and install world e.g.
    make buildworld -j24 && make installworld
  7. Partition your disk e.g.
    gpart create -s GPT da0
    gpart add -t efi -s 800K da0
    gpart add -t freebsd-swap -s 10G da0
    gpart add -t freebsd-zfs da0
    gpart bootcode -p /boot/boot1.efifat -i 1 da0
  8. Perform your OS install to da0p3
  9. Reboot and enjoy

Update 2015-12-11: Corrected patch URL.
Update 2015-12-14: Updated code in line with #11226 of
Update 2016-01-15: Updated code in line with that which is now committed to HEAD.

Written by Dilbert

December 10th, 2015 at 6:20 pm

Posted in FreeBSD,Hackery,ZFS

EDNS Client Subnet support patches

without comments

Two little patches which add EDNS Client Subnet support that others may find useful:
1. EDNS Client Subnet support for the Perl module Net::DNS
2. BIND 9.6-ESV-R5-P1 dig EDNS Client Subnet support (updated 2012-10-20)

#2 was based on Wilmer van der Gaast’s original BIND 9.7.1 patch.

Written by Dilbert

October 16th, 2012 at 3:49 pm

Posted in FreeBSD,Hackery

patch and missing directories gotcha

without comments

Just been bitten by a possibly well-known gotcha with how patch works, but just in case, I thought I’d share.

If you have a diff file such as the one below and use patch < mypatch to apply it, then unless all of the directories in the path already exist, the new file will be created in the current working directory instead of the specified directory.

--- /dev/null   2012-09-20 15:44:00.000000000 +0000
+++ some/patch/myfile        2012-09-20 15:48:16.619282929 +0000
@@ -0,0 +1,1 @@
+My patch

The fix is simple: always use the -p option to specify the pathname strip count, e.g. patch -p0 < mypatch

Note that this also applies to multi-part diff files, which can be really confusing.
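The gotcha and the -p0 fix can be demonstrated end to end in a scratch directory (assuming GNU patch, which creates the missing directories when a patch adds a new file):

```shell
# Run in a throwaway directory; nothing here exists beforehand.
cd "$(mktemp -d)"

# Recreate the example diff: a new file under a path that doesn't exist yet.
cat > mypatch <<'EOF'
--- /dev/null	2012-09-20 15:44:00.000000000 +0000
+++ some/patch/myfile	2012-09-20 15:48:16.619282929 +0000
@@ -0,0 +1,1 @@
+My patch
EOF

# With -p0 the full pathname is kept, so some/patch/ is created as needed:
patch -p0 < mypatch
cat some/patch/myfile
```

Without the -p0, the same patch would have dropped the file into the current directory instead.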

Written by Dilbert

September 20th, 2012 at 5:40 pm

Posted in Hackery

RIPv1 provides Amplification Attack DDoS source

without comments

We’ve seen a number of very large DDoS attacks recently, the latest one this morning, which we were able to capture in progress.

After analysing the captures it looks like RIPv1 provides a source vector for a massive amplification attack, where a small set of spoofed source address packets result in the RIPv1 processes on the remote hosts sending significantly larger responses to the target (spoofed address).

In our data we’ve seen over 16KB responses from a single 20-byte packet, so an amplification factor of 820 or more, but that isn’t even the limit.
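A quick sanity check of that factor, taking a 16 KiB response to a 20-byte request:

```shell
# 16 KiB response divided by a 20-byte request (integer arithmetic);
# anything over 16KB therefore pushes the factor past 820.
echo $((16384 / 20))
```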

What appears to be happening is that the attacker is sending RIPv1 requests for a full routing table (UDP port 520), which, reading RFC 1058, seems to require only a 20-byte packet, as spoofed source address UDP packets to a large number of routers known to be running RIPv1.

With this level of amplification it takes very little bandwidth to saturate even high capacity lines e.g. your average high speed DSL can saturate a 1Gbps line.

ACLs on the border routers blocking UDP packets from source port 520 are one way to limit the effects of this, but the only real fix is for all routers to accept only valid requests, or for the protocol to be updated to include a handshake, which is never going to happen 🙁
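As an illustration only, on a FreeBSD border box running pf such an ACL might look like the following (the interface name is an assumption, and this will also drop legitimate RIP traffic if you actually run RIPv1):

```
# pf.conf sketch: drop inbound UDP arriving from source port 520 (RIPv1)
ext_if = "em0"
block in quick on $ext_if inet proto udp from any port 520 to any
```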

Written by Dilbert

August 8th, 2012 at 12:08 pm

Posted in Hackery

Using the jQuery deferred object pipe method to validate success data

with one comment

Often I’ll make an AJAX call with jQuery to retrieve some data, and while that data might not be erroneous, it might not be exactly what I want. A typical solution is to validate the data in the success callback and call out to an external error handler if it fails. However, jQuery provides the capability of setting a failure callback, and it would be nice to use that without having to resort to some horrible code like:

var errorHandler = function() {
	// handle the error
};
$.ajax({
	url: '/my_ajax_call',
})
.done(function(data) {
	if(data.success !== 'true') {
		errorHandler();
	} else {
		// do the good stuff
	}
});
The problem with the above is that you lose all the benefits of jQuery’s deferred objects[1], and as such, if you wanted to attach additional error handlers you’d have to update the done callback to call them each time, which just isn’t feasible when you might be passing your AJAX promise object around all over the place.

However, there is a better way which allows you to leverage the full power of the deferred/promise object system by using the pipe[2] method.

$.ajax({
	url: '/my_ajax_call',
})
.pipe(function(data, textStatus, jqXHR) {
	if(data.success !== 'true') {
		var deferred = $.Deferred();
		deferred.rejectWith( this, [ jqXHR, textStatus, 'invalid' ] );
		return deferred;
	}
	return jqXHR; // return the original deferred object
})
.done(function() {
	alert('Hurray, a success!');
})
.fail(function() {
	alert('Boo, a failure!');
});

In the above code, pipe accepts as its first argument a function that filters the done callback parameters before the callbacks are actually called. Returning null from pipe allows the unfiltered values to pass through, but returning a deferred object will apply the callbacks against that new object rather than the original. As a result, you can return a deferred object in a failure state to force the failure callbacks to trigger. As you’ll note, using the rejectWith method of the deferred object allows me to fully set what the callbacks’ evaluation context will be and what arguments they will get. For compatibility I follow the exact same setup as the defaults, but you’ll see I have set a custom error string.

Because pipe is chainable, you can add multiple of these further down the line. Pipe also accepts additional parameters for filtering other callbacks too. This way it’s much easier and cleaner to validate data and handle errors with the additional flexibility of the deferred object system.


Written by Andrew Montgomery-Hurrell

March 9th, 2012 at 11:18 am

Posted in Hackery,Javascript

gdb logging useful for large backtraces

without comments

If you’re trying to output a large backtrace, like those generated via kernel panics, the following can be quite useful:

(gdb) set height 0
(gdb) set logging file backtrace.txt
(gdb) set logging redirect on
(gdb) set logging on
Redirecting output to backtrace.txt.
(gdb) thread apply all bt

Written by Dilbert

August 11th, 2011 at 11:06 am

Posted in FreeBSD,Hackery


FreeBSD security support for ATA devices via camcontrol

without comments

Recently we’ve been using a lot of SSDs, and one of the problems with SSDs is that they degrade in performance over time. So much so that in some cases they can barely keep up with basic tasks.

In our experience we’ve seen Sandforce-based drives drop from a write rate of 180MB/s to just over 10MB/s, making them all but unusable.

Given this issue and the current lack of TRIM support under ZFS, our file system of choice, we’ve needed to use secure erase on our SSDs to return them to their as-purchased performance.

Unfortunately this meant booting the machine into Linux and using the hdparm command along with the instructions in the ATA Secure Erase wiki article. This is obviously not ideal, so I’ve spent the past two days adding this ability to FreeBSD’s camcontrol utility for ATA devices.

Our current patch for camcontrol, which adds security functions including the secure erase option, can be downloaded here: FreeBSD 8.2 ATA security methods patch for camcontrol

Once you have patched and compiled camcontrol there will be a new “security” option. This allows you to display and configure security on ATA drives when they are connected to an ATA controller such as ahci, which presents the disks as adaX devices.

To secure erase a disk, the disk first needs to have security enabled, which means setting a ‘user’ password. Using the updated camcontrol this can be done in one single command line.

First, find the device name of your SSD with:

camcontrol devlist

***WARNING*** running the command below will ERASE ALL data on the device ada0, so ensure you have copied off or backed up your data prior to running it.

camcontrol security ada0 --security-user user \
  --security-set-password Erase \
  --security-erase Erase

This will first set the user security password to “Erase”, which enables drive security, followed by prompting you to confirm that you want to erase the selected disk.

If you are 100% sure this is what you want, you can also specify the --security-confirm command line option to avoid this confirmation prompt.

It should be noted that there are currently problems with the long timeouts, used when performing a secure erase, within a large number of FreeBSD 8.2 drivers. For SSDs, which don’t actually require a long time to secure erase but often report needing one, you can use the --security-erase-timeout option to override this value on kernels which don’t have working long timeouts, as described in my last post.

I hope to get this patch committed to the FreeBSD source at some point, but until then I hope this is of help to other FreeBSD users using SSDs.

Much credit to Daniel Roethlisberger for his work on adding security support to atacontrol, detailed in PR bin/127918, which was the basis of this code.

Written by Dilbert

August 6th, 2011 at 12:20 am

Posted in FreeBSD,Hackery

When unlink in perl doesn’t actually remove the file…

without comments

Ok, so this was a weird one: we were using unlink in Perl and it was returning success, yet the file still existed, so what gives?

After much hunting and digging, it turned out to be a nice little gotcha in the way unlink works under Cygwin, and hence is inherited by its Perl implementation.

The basic crux is that, in order to be as unix-like as possible, Cygwin makes use of the delete-on-close function in Windows to attempt to delete files that are share-locked by other applications. The result is that even though the file is reported as deleted, it is only marked as pending deletion and will only actually be deleted when the shared lock counter reaches zero.

Unfortunately it’s easy for this to fail: if the other locking application makes a change to the file, the delete request will be silently discarded.

So there you have it: cygwin + perl + unlink = files delete some times. You have been warned.

For more info on the internals see the following: Re: Inconsistent behaviour when removing files on Cygwin

Written by Dilbert

December 7th, 2010 at 12:33 am

Posted in Code,Hackery,Perl

WordPress v3.0 breaks pagination for sites with category in their permalinks

without comments

Unfortunately WordPress v3.0 breaks post pagination for sites that include category in their permalinks.

The issue is caused by a fix to wp-includes/canonical.php (r13781) that was added to fix bug #14201

The code in question is:

} elseif ( is_single() && strpos($wp_rewrite->permalink_structure, '%category%') !== false ) {
           $category = get_term_by('slug', get_query_var('category_name'), 'category');
           $post_terms = wp_get_object_terms($wp_query->get_queried_object_id(), 'category', array('fields' => 'tt_ids'));
           if ( (!$category || is_wp_error($category)) || ( !is_wp_error($post_terms) && !empty($post_terms) && !in_array($category->term_taxonomy_id, $post_terms) ) )
               $redirect_url = get_permalink($wp_query->get_queried_object_id());

The problem is that get_term_by doesn’t support hierarchies, so when passed a second-level category, e.g. cars/sports, it will fail, and hence the rewrite is performed, losing the page information.

The following fixes this, but I’m not 100% sure that the intention is to only check the last category in the hierarchy, although with our data anything more would appear to fail. This may indicate that additional fixes are required, but anyway:

$category = get_term_by('slug', end( explode( '/', get_query_var('category_name') ) ), 'category') ;
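The effect of the fix, keeping only the last component of a hierarchical category slug, is the equivalent of this shell parameter expansion:

```shell
# Equivalent of PHP's end(explode('/', $slug)): strip everything up to
# the last slash, so "cars/sports" becomes "sports".
slug="cars/sports"
echo "${slug##*/}"
```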

For more information see WordPress Bug #13471

Written by Dilbert

June 30th, 2010 at 6:53 pm

Posted in Hackery,Wordpress

Vbulletin Error Template issues

without comments

The Standard Error template in Vbulletin (3.8.4 Patch Level 2) is broken due to a missing global declaration of $spacer_open and $spacer_close.

This shouldn’t break the template when both are missing, but during the processing of the template $spacer_close gets defined, so the resulting html is broken.

The fix is to add:

global $spacer_open, $spacer_close;

to the head of the standard_error function in includes/functions.php

Written by Dilbert

June 22nd, 2010 at 10:04 am

Posted in Hackery