From TrueNAS Scale to Proxmox
If there’s one thing that particularly frustrated me about TrueNAS, it was that it wouldn’t let me pass the server’s only graphics card through to a virtual machine! I’d been a little uneasy ever since I made the switch to TrueNAS!
Since I want to use the hardware I have available as efficiently as possible, I decided it was time to change (again!!). I had used Proxmox before, so I decided to go back to it, with the idea of passing the HBA through to a VM and running TrueNAS in that VM purely as a NAS.
ZFS on Proxmox
A little research showed that Proxmox supports ZFS, which made me jump for joy. I didn’t think twice and got to work!
One of the pools, with two 250GB SSDs in a stripe (RAID0), has been throwing some errors, and since I don’t want to risk losing anything there (even though it’s not important), I decided to make some upgrades.
Hardware Upgrade
1 x NVMe Samsung 990 EVO 1TB for my main Windows machine, where my local storage needs are a bit higher due to gaming and video editing.
The NVMe that was in the main machine moves to the server, along with a 1TB SSD I had lying around for a project I never started.
And my current setup is:
Desktop (main machine), dual boot: 1TB NVMe for Windows + 500GB NVMe for Linux
Main server: Proxmox, 22TB RAIDZ2 + 500GB SSD stripe + 1TB SSD (single disk) + 500GB NVMe (single disk).
Transition to Proxmox
After planning the transition, it was time to export the pools in TrueNAS to ensure that I could import everything properly after switching to Proxmox, prepare a USB stick with the Proxmox ISO, and install the new system.
After installing Proxmox, the pools were immediately recognized and imported, but since I wanted to change the pool names, I exported them again, imported them under new names, and created new pools for the new disks.
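The rename dance can be sketched like this (pool names are illustrative; `zpool import` with two arguments imports the pool under a new name):

```sh
# Export the pool Proxmox auto-imported (still carrying its TrueNAS-era name)
zpool export oldpoolname

# Re-import it under the new name
zpool import oldpoolname newpoolname

# Verify the pool is healthy under its new name
zpool status newpoolname
```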
I moved the datasets to their new destinations:
zfs create newpool/datasetname
rsync -av /oldpool/datasetname/ /newpool/datasetname/
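As an aside, `zfs send`/`zfs receive` is an alternative to rsync for moving datasets between pools that also preserves snapshots and dataset properties; a minimal sketch (dataset names are illustrative):

```sh
# Snapshot the source dataset, then replicate it to the new pool
zfs snapshot oldpool/datasetname@migrate
zfs send oldpool/datasetname@migrate | zfs receive newpool/datasetname
```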
Install Docker and Docker Compose, which is what I use on my home server.
Update the Docker Compose files with the new paths for the mounts.
docker-compose up -d
And as expected… some errors: the OpenTelemetry Collector complained that port 8006 was already in use.
My containers are configured to send logs in Fluentd format to the OpenTelemetry Collector container on port 8006, and by default Proxmox serves its web interface on port 8006. Changing that on the Proxmox side doesn’t seem simple, so let’s take the easier route and change the OpenTelemetry Collector configuration:
fluentforward:
endpoint: 0.0.0.0:8007
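In context, that receiver sits in the collector config roughly like this (a sketch of my setup; the pipeline and exporter names are illustrative):

```yaml
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8007   # moved off 8006 to avoid the Proxmox web UI

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [logging]
```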
And change the container configuration.
sed -i 's/8006/8007/g' docker-compose.yml
docker-compose up -d
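The `sed` one-liner simply rewrites every occurrence of the port in the compose file; a self-contained demo on a throwaway file:

```shell
# Create a throwaway compose-style file that still points at 8006
printf 'fluentd-address: localhost:8006\n' > /tmp/compose-demo.yml

# Rewrite every occurrence of 8006 to 8007, in place
sed -i 's/8006/8007/g' /tmp/compose-demo.yml

cat /tmp/compose-demo.yml
# → fluentd-address: localhost:8007
```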
Again and… Boom! Everything working again!
The little snowflake!
Downloading Linux ISOs is handled by a Raspberry Pi running Deluge in Docker, connected to a VPN 100% of the time.
Proxmox doesn’t have any of those NAS capabilities, NFS/Samba, etc. Well… it does, but not in the same way as a NAS operating system.
I thought of an LXC container with Debian: install Deluge, install WireGuard, set up the wg-quick@wg0 and Deluge services, and so on!
But I didn’t like it… too much to do manually, I’m a very lazy guy, remember?
A little more research and I found out that ZFS supports NFS and Samba sharing natively, without major complications… EASY!
zfs set sharenfs="rw=@<your-subnet>/24,no_root_squash,no_subtree_check" tempstorage/Downloads
Update the fstab on the Raspberry Pi with the new share location, mount -a, done!
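For reference, since ZFS exports the share at the dataset’s mountpoint, the fstab entry on the Pi looks something like this (hostname and local mount point are illustrative):

```
proxmox:/tempstorage/Downloads  /mnt/downloads  nfs  defaults,_netdev  0  0
```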
ZFS Trimming & Scrubbing
This one is easy… unlike TrueNAS, which had to be configured, Proxmox comes set up by default!
root@proxmox:~# cat /etc/cron.d/zfsutils-linux
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
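The `1-7` day range combined with the `date +\%w` test is the classic cron idiom for "first Sunday of the month": the job fires on days 1 through 7 but only proceeds when the day of the week is 0 (Sunday). A quick demo with GNU date:

```shell
# 2025-06-01 happens to fall on the first Sunday of June 2025
dow=$(date -d "2025-06-01" +%w)
if [ "$dow" -eq 0 ]; then echo "first Sunday"; fi
```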
Backups
Although TrueNAS makes this task easier, it’s not much more complicated on Proxmox either. Proxmox runs on plain Debian, which allows installing all sorts of tools.
For automatic ZFS snapshots: apt install zfs-auto-snapshot. By default it sets up cronjobs in /etc/cron.* to keep several tiers of snapshots: frequent (every 15 minutes), hourly, daily, weekly, and monthly. It also lets you choose which datasets to snapshot; just use zfs set com.sun:auto-snapshot=false|true dataset.
zfs get com.sun:auto-snapshot
NAME PROPERTY VALUE SOURCE
HomeCloud com.sun:auto-snapshot false local
HomeCloud/Docker com.sun:auto-snapshot true local
HomeCloud/Docker@zfs-auto-snap_frequent-2023-04-30-2315 com.sun:auto-snapshot true inherited from HomeCloud/Docker
HomeCloud/Docker@zfs-auto-snap_hourly-2023-04-30-2317 com.sun:auto-snapshot true inherited from HomeCloud/Docker
HomeCloud/Docker@zfs-auto-snap_frequent-2023-04-30-2330 com.sun:auto-snapshot true inherited from HomeCloud/Docker
HomeCloud/Docker@zfs-auto-snap_frequent-2023-04-30-2345 com.sun:auto-snapshot true inherited from HomeCloud/Docker
HomeCloud/Docker@zfs-auto-snap_frequent-2023-05-01-0000 com.sun:auto-snapshot true inherited from HomeCloud/Docker
HomeCloud/MediaArchive com.sun:auto-snapshot true local
HomeCloud/MediaArchive@zfs-auto-snap_frequent-2023-04-30-2315 com.sun:auto-snapshot true inherited from HomeCloud/MediaArchive
HomeCloud/MediaArchive@zfs-auto-snap_hourly-2023-04-30-2317 com.sun:auto-snapshot true inherited from HomeCloud/MediaArchive
HomeCloud/MediaArchive@zfs-auto-snap_frequent-2023-04-30-2330 com.sun:auto-snapshot true inherited from HomeCloud/MediaArchive
HomeCloud/MediaArchive@zfs-auto-snap_frequent-2023-04-30-2345 com.sun:auto-snapshot true inherited from HomeCloud/MediaArchive
HomeCloud/MediaArchive@zfs-auto-snap_frequent-2023-05-01-0000 com.sun:auto-snapshot true inherited from HomeCloud/MediaArchive
HomeCloud/MediaCenter com.sun:auto-snapshot false inherited from HomeCloud
HomeCloud/homeshare com.sun:auto-snapshot false inherited from HomeCloud
nvmestorage com.sun:auto-snapshot false local
ssdstorage com.sun:auto-snapshot false local
ssdstorage/minecraft com.sun:auto-snapshot true local
ssdstorage/minecraft@zfs-auto-snap_frequent-2023-04-30-2315 com.sun:auto-snapshot true inherited from ssdstorage/minecraft
ssdstorage/minecraft@zfs-auto-snap_hourly-2023-04-30-2317 com.sun:auto-snapshot true inherited from ssdstorage/minecraft
ssdstorage/minecraft@zfs-auto-snap_frequent-2023-04-30-2330 com.sun:auto-snapshot true inherited from ssdstorage/minecraft
ssdstorage/minecraft@zfs-auto-snap_frequent-2023-04-30-2345 com.sun:auto-snapshot true inherited from ssdstorage/minecraft
ssdstorage/minecraft@zfs-auto-snap_frequent-2023-05-01-0000 com.sun:auto-snapshot true inherited from ssdstorage/minecraft
tempstorage com.sun:auto-snapshot false local
tempstorage/Downloads com.sun:auto-snapshot false inherited from tempstorage
tempstorage/Misc com.sun:auto-snapshot false inherited from tempstorage
tempstorage/Resolve com.sun:auto-snapshot false inherited from tempstorage
tempstorage/minecraft com.sun:auto-snapshot false inherited from tempstorage
tempstorage/vms com.sun:auto-snapshot false inherited from tempstorage
Since snapshots are not backups, a cronjob with rsync to the backup storage does the trick!
0 1 * * 1 rsync -r /HomeCloud/MediaArchive proxmox@backupnas:/volume1/NetBackup && curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/TaskUUID
0 1 * * 2 rsync -r /HomeCloud/Docker proxmox@backupnas:/volume1/NetBackup && curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/TaskUUID
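The same pattern can be wrapped in a small script so the cron lines stay short; healthchecks.io also accepts a /fail suffix on the ping URL to report failures explicitly (the script name and arguments are illustrative):

```sh
#!/bin/sh
# backup-and-ping.sh <src> <dest> <check-uuid>
# rsync the source to the backup target, then report the result to healthchecks.io
SRC="$1"; DEST="$2"; UUID="$3"

if rsync -a "$SRC" "$DEST"; then
  curl -fsS -m 10 --retry 5 -o /dev/null "https://hc-ping.com/$UUID"
else
  curl -fsS -m 10 --retry 5 -o /dev/null "https://hc-ping.com/$UUID/fail"
fi
```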
And to be honest, I find this method much more efficient!
I never looked at the TrueNAS dashboard, so I never knew for sure whether everything had worked; I just trusted that a notification email would arrive if something went wrong.
With this setup, if rsync runs successfully it sends a ping to healthchecks.io; if it fails, no ping is sent and I get notified on Discord and by email. On top of that, I can point a Grafana dashboard at the status of each ping!
Final Result
The main pool, 22TB RAIDZ2: everything that’s important!
The secondary pool, 500GB SSD stripe: temporary shares, test VMs, and those Linux ISOs that arrive via torrent!
The secondary pool, 1TB SSD single disk: VMs/containers that need faster disks (game servers, for example).
The 500GB NVMe Pool: Windows VM as a secondary Windows machine.
What am I going to do with all this? Well… I don’t know yet, I’ll find out soon, but it will certainly fill up again when I start making skate and snowboard videos again.
TL;DR
Although Proxmox is not designed to serve as a NAS, it is flexible enough to play that role.
The virtualization capabilities benefit me more than NAS features, considering that all my file sharing goes through Nextcloud running in Docker. And if I ever really need an operating system designed specifically for NAS and file sharing, I can easily virtualize it and pass the hardware through to the VM.
The migration was quite simple: it took me no more than a couple of hours to get everything operational, and half a dozen hours in total from the moment I shut the server down until everything was ready!
Finally, I wanted to include an updated diagram reflecting this change, but I don’t know what I did with the original. So, see here. Everything is the same, except I’m running Proxmox instead of TrueNAS.