I got my hands on a machine today with 4x 250GB hard drives and wanted to try ZFS after reading and hearing so much good about it. Since I had never used ZFS before, on either FreeBSD or OpenSolaris/OpenIndiana, I decided it would be a good time to write another post on this blog. Remember to change the device names to suit your configuration. This howto guide also assumes you know basic FreeBSD administration tasks, like configuring the network, adding packages etc.

My goal was:

  • Install FreeBSD 8.1-RELEASE
  • Use a ZFS style RAID10
  • Boot from ZFS

Files you need: mfsBSD special edition (note: if you grab the ZFS v28 version, freebsd-update won't work and custom kernels require workarounds).

mfsBSD SE contains a script called ‘zfsinstall’ which removes most of the hassle when you want to boot FreeBSD from ZFS. In fact I would never have been able to get the server running as well as I have now without it, so many thanks to Martin Matuška for creating this wonderful iso image.

Let me start by saying you will want to make sure you have console/KVMoIP access, otherwise this howto guide is useless. Burn the mfsBSD iso file to a disc and boot up your machine. Login as root (the password is mfsroot) and run the following to understand the basics.

zfsinstall --help

If you aren’t sure what the device names of your drives are, do a ‘dmesg’ first and write them down. Now we want to mount the cdrom, so go ahead and run

mount_cd9660 /dev/cd0 /cdrom

After learning (the hard way, and having to reinstall) that ZFS does not support booting off a striped set (only 1 vdev is allowed in the root pool), I had to alter my goal and decided on the following setup:

  • 10GB for root filesystem
  • 4GB swap
  • 436GB for the system itself

This way I created a 10GB boot ZFS system with all 4 disks in a mirror. There is an upside to this alteration: my root system is a lot more fault tolerant than just RAID10, and with the price per GB these days it didn’t matter much anyway. The script will still install a 4GB swap partition on all the disks but only mount one – I guess you could add the swap space from the other disks manually after installation if you wanted, but I have no need for it, so you will have to figure that one out yourself.
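
If you do want the extra swap, here is a rough, untested sketch to get you started. Assuming the swap partitions end up as p2 on every disk (as in the gpart output further down), adding lines like these to /etc/fstab should activate the remaining ones on the next boot (or right away with ‘swapon -a’) – adjust the device names to your own setup:

/dev/ad8p2   none  swap  sw  0  0
/dev/ad10p2  none  swap  sw  0  0
/dev/ad12p2  none  swap  sw  0  0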

The zfsinstall command I used:

zfsinstall -d ad6 -d ad8 -d ad10 -d ad12 -t /cdrom/8.1-RELEASE-amd64.tar.xz -r mirror -p rpool -s 4G -z 10G

The script now does all the hard work for you, creating the partitions and swap and installing a very basic FreeBSD 8.1-RELEASE onto them. When the script is done, remove the CD and reboot.

At any time, if you are in doubt, you can run these commands for information:

zpool status
zpool list
zpool get all pool-name
zfs list

I start by upgrading the ZFS root pool to v14, as that is the default version in FreeBSD 8.1 at the time of writing this article, so login as root (no password) and run:

zpool upgrade rpool
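
If you want to confirm it worked, you can check the pool version before and after the upgrade:

zpool get version rpool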

Time to create the RAID10. First we find out what the remaining space is, and the numbers we need for gpart when creating the new partition:

[root@heimdall /usr]# gpart show ad6
=>       34  490350605  ad6  GPT  (234G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770   20971520    3  freebsd-zfs  (10G)
   29360290  460990349    4  - free -  (220G)

Next we create the new partition on all 4 drives – it’s the same command for each disk, just with a different device name. This assumes each disk is exactly the same; otherwise run the previous command for each device to be sure.

gpart add -b 29360290 -s 460990349 -t freebsd-zfs ad6
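
Assuming ad8, ad10 and ad12 show the exact same layout as ad6, the remaining three are:

gpart add -b 29360290 -s 460990349 -t freebsd-zfs ad8
gpart add -b 29360290 -s 460990349 -t freebsd-zfs ad10
gpart add -b 29360290 -s 460990349 -t freebsd-zfs ad12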

Once all disks have a new partition it’s time to create the RAID10 ZFS drive.

zpool create raid10 mirror ad6p4 ad8p4
zpool add raid10 mirror ad10p4 ad12p4

To make sure everything is okay we run

[root@heimdall /usr]# zpool status
  pool: raid10
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        raid10      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad6p4   ONLINE       0     0     0
            ad8p4   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad10p4  ONLINE       0     0     0
            ad12p4  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad6p3   ONLINE       0     0     0
            ad8p3   ONLINE       0     0     0
            ad10p3  ONLINE       0     0     0
            ad12p3  ONLINE       0     0     0

errors: No known data errors

As you can see, everything is looking fine – time to move /usr, /var and /tmp to the raid10 pool.

We start out with /usr, as the default mfsBSD install does not give it a separate dataset. I prefer to do this in single user mode, but that is entirely up to you. If you do boot into single user mode, run this first:

zfs mount -a

Otherwise just continue with this:

zfs create raid10/usr
rsync -a /usr /raid10
mv /usr /old.usr
mkdir /usr
zfs set mountpoint=/usr raid10/usr

That is it – /usr is now on our raid10 pool and mounted.
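
Once you are satisfied that everything under the new /usr works (verify with ‘mount’ that /usr really is the raid10 dataset), the old copy is only taking up space in the root pool, so you can remove it:

rm -rf /old.usr

Next up is /var and /tmp: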

zfs create raid10/tmp
zfs create raid10/var

rsync -a /var /raid10
rsync -a /tmp /raid10

zfs set mountpoint=none rpool/root/var
zfs set mountpoint=none rpool/root/tmp

zfs set mountpoint=/var raid10/var
zfs set mountpoint=/tmp raid10/tmp
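
Since rsync -a preserves permissions, /tmp should have kept its 1777 mode, but it does not hurt to double-check and fix it if needed:

ls -ld /tmp
chmod 1777 /tmp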

That’s all – your system should now look something like the following

[root@heimdall ~]# mount
rpool/root on / (zfs, local)
devfs on /dev (devfs, local, multilabel)
raid10 on /raid10 (zfs, local)
raid10/tmp on /tmp (zfs, local)
raid10/usr on /usr (zfs, local)
raid10/var on /var (zfs, local)

You don’t have to keep the /raid10 mounted, but I like it there as it’s easy to create additional datasets this way.
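
For example, a hypothetical dataset for extra data mounted at /data would just be:

zfs create raid10/data
zfs set mountpoint=/data raid10/data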

Linux emulation tip:
If you use linprocfs, make sure your /etc/fstab line has the rw,late options so it mounts after ZFS – I encountered some problems without it:

linproc /compat/linux/proc linprocfs rw,late 0 0
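
This assumes the Linux compatibility layer itself is already enabled, i.e. something like this in /etc/rc.conf:

linux_enable="YES"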

Have fun with your new ZFS booting FreeBSD installation!

A big thanks to monsted and Gugge on #bsd-dk for their incredible patience 😉
