Archive for the ‘ZFS’ Category
When swapping a disk or expanding the backing volume of a VM, e.g. on GCE, there are a few easy steps to allow the OS to take advantage of the new space.
- Resize the disk in GCE
- gpart recover <device> – Recovers the partition table, moving the backup copy to the new end of the disk (until this is done it will show as CORRUPT)
- gpart resize -i <index> <device> – Expands the partition to take up the newly available space
- zpool online -e <pool> <device> – Expands the pool on the given device
gpart recover da0
gpart resize -i 3 da0
zpool online -e tank da0p3
Note: the new capacity only shows up after a reboot. HEAD includes a new reprobe subcommand for camcontrol which allows this to be done live.
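On a system with the reprobe support, the live version of the procedure might look something like this (device, partition index and pool name are from the example above, not verified on your system):

```sh
# Ask CAM to re-read the device's size without a reboot (HEAD only at time of writing)
camcontrol reprobe da0

# Then recover the backup partition table, grow the partition and expand the pool
gpart recover da0
gpart resize -i 3 da0
zpool online -e tank da0p3
```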
Based on Eric McCorkle's work on adding modular boot support to EFI, including ZFS support, which is currently in review, we've back-ported the required changes to the 10.2-RELEASE code base, which others may find useful.
This can be found here: FreeBSD 10.2-RELEASE EFI ZFS root boot patch
We assume your source tree is in /usr/src and your disk device is da0.
- Go to your FreeBSD source tree:
cd /usr/src
- Download the EFI ZFS root patch
- Extract the patch
tar -xzf freebsd-10-efi-zfs-boot.tgz
- Apply the patches to your source tree in order
sh -c 'for i in `ls -1 patches/efi*.patch | sort`; do patch -E -N -p0 -F1 -t < $i; done'
- Cleanup orig files and remove old empty directories:
find /usr/src/ -name '*.orig' -delete
find /usr/src/sys/boot/ -type d -empty -delete
- Build and install world e.g.
make buildworld -j24 && make installworld
- Partition your disk e.g.
gpart create -s GPT da0
gpart add -t efi -s 800K da0
gpart add -t freebsd-swap -s 10G da0
gpart add -t freebsd-zfs da0
gpart bootcode -p /boot/boot1.efifat -i 1 da0
- Perform your OS install to the new pool
- Reboot and enjoy
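The install step above can be sketched roughly as follows; the pool name, the use of the freshly built world, and the loader.conf entries are assumptions for illustration, not part of the original post:

```sh
# Create the root pool on the freebsd-zfs partition and mark it bootable
zpool create -o altroot=/mnt zroot da0p3
zpool set bootfs=zroot zroot

# Install the world and kernel built earlier into the new pool
make -C /usr/src installworld DESTDIR=/mnt
make -C /usr/src installkernel DESTDIR=/mnt
make -C /usr/src distribution DESTDIR=/mnt

# Point the loader at the ZFS root
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
```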
Update 2015-12-11: Corrected patch URL.
Update 2015-12-14: Updated code in line with #11226 of https://reviews.freebsd.org/D4515.
Update 2016-01-15: Updated code in line with that which is now committed to HEAD.
Recently a corruption bug was fixed in FreeBSD when running with ZFS; the fix corrects the boundaries of the cleared range in page_busy.
The main patch required is:
Today we had a machine reboot with a broken /boot.config file, preventing it from booting.
It took us some time to find the solution for a fully ZFS machine, so it's worth a mention.
From boot prompt simply enter:
While not often required, it's sometimes nice to be able to rename the root ZFS pool.
Armed with an mfsbsd CD-ROM this is a relatively painless process under FreeBSD (8.3-RELEASE in our case).
In this example we're renaming the default root zpool from tank to zroot.
1. Boot from mfsbsd cdrom
2. Import the root pool into an alternate location, specifying a cachefile. This is the important bit, as using -R sets cachefile=none, which won't generate the changes required.
zpool import -o altroot=/mnt -o cachefile=/boot/zfs/zpool.cache tank zroot
3. Copy the new cachefile to the pool, updating the existing one.
cp /boot/zfs/zpool.cache /mnt/boot/zfs/zpool.cache
4. Update /mnt/boot/loader.conf with the new details e.g.
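The loader.conf change would look something like the following; the exact value depends on your layout (on a plain 8.3 install the root is referenced by pool name):

```
vfs.root.mountfrom="zfs:zroot"
```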
If you have more than one pool then it's always best to import them all, even if not renaming them, to ensure the new zpool.cache contains all your pools.
I was trying to use zdb to query details of the pool on one of our test machines, but it was failing for a pool I knew was working (the OS was running off it). The error was:
zdb tank
zdb: can't open 'tank': No such file or directory
Printing the config via zdb -C was working, but the phys_path details were wrong, as the device had been moved in the chassis.
After much digging I found the following command quickly and easily updates the pool label information, including the vdev path and phys_path, which fixes zdb:
zpool reguid <pool>
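Applied to the pool from the example above, the fix would look like this (the claim that zdb then works is from the post; the pool name is the one used above):

```sh
# Rewrite the pool labels with a new GUID, which refreshes the
# stored vdev path and phys_path information as a side effect
zpool reguid tank

# zdb should now be able to open the pool by name
zdb tank
```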