Yes, but without the array as you stated. We have 300+ 10 TB disks at our datacenter today, and ZFS is relevant at that disk count, I/O load, and client load.
Running ZFS at small scale is like raising a cow at home for a bucket of raw milk: more a fun curiosity than a production-level operation.
I'd run LVM or md or something similar at home instead of a full-blown ZFS setup, for practical reasons.
I feel ZFS is much better and easier than md or LVM; at least it would be, if it were properly supported (I have never tried ZoL).
CoW and cheap snapshots are game-changers, and checksums as well, though maybe not from a practical, home-user standpoint. This holds just as well on PB-scale storage as on a 512 GB OS drive or a 2 GB thumb drive (not that I would use ZFS on a thumb drive, again because of spotty support across different OSes).
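For anyone who hasn't tried it: because of CoW, a snapshot is a single metadata operation, near-instant and zero-size up front. A minimal sketch, with a made-up dataset name pool/home:

    # Take a snapshot; near-instant, consumes no space until data diverges
    zfs snapshot pool/home@before-upgrade

    # List snapshots and the space each one currently holds
    zfs list -t snapshot

    # Roll the dataset back if something goes wrong
    zfs rollback pool/home@before-upgrade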
Checksums are amazing for when you do have a problem, because a scrub will tell you exactly what you lost. Knowing what has been damaged is practically more important than actually fixing it, and ZFS is great at this.
All the Linux alternatives' answer to this problem is always "is your data okay? Don't know! It'll be a surprise when you get there."
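Concretely, the workflow is two commands ("tank" is a placeholder pool name):

    # Walk every block in the pool and verify it against its checksum
    zpool scrub tank

    # Report the result; with -v, files with permanent (unrecoverable)
    # errors are listed by path
    zpool status -v tank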
I know, and personally that is very important to me.
But in practice, it is likely to be a less-than-once-in-a-decade problem, and you should have backups anyway.
So I can understand someone having different priorities. (Not me, though; data integrity is as important as it gets, and I'd gladly pay performance/money for it.)
There are ways to improve the performance, but the copy-on-write architecture does come with performance taxes in my experience. The trade-off is a much richer experience than a simple ext4 partition, for example.
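A few of the usual knobs, as a rough sketch (dataset names are hypothetical and the right values depend on the workload):

    # Skip access-time writes; a cheap win for almost any workload
    zfs set atime=off tank/data

    # lz4 is effectively free on modern CPUs and often a net speedup
    zfs set compression=lz4 tank/data

    # Match recordsize to the I/O pattern, e.g. 16K for a database
    # doing small random writes
    zfs set recordsize=16K tank/db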
I think ZFS - or at least the set of features ZFS provides - is relevant at any size or disk count all the way down to a single disk in a laptop. I've previously run ZFS on single block devices, though nowadays all my personal machines use at least ZFS mirroring. Without redundancy it can't recover from damage on its own, but checksums and free snapshots are irreplaceable to me.
It doesn't have to be ZFS in particular, I'll gladly switch my Linux systems over once a proper alternative is in the kernel. But right now it's the only working, mature solution. Bcachefs isn't ready yet and BTRFS isn't trustworthy.
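For what it's worth, the mirrored setup I mentioned is a one-liner. A sketch with hypothetical device names (use the /dev/disk/by-id paths in practice so the pool survives device renumbering):

    # Create a two-way mirror; ashift=12 aligns to 4K physical sectors
    zpool create -o ashift=12 rpool mirror \
        /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B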
Yeah, if there were a similar GPL-blessed effort that had most if not all of ZFS's main features and was also robust and trustworthy (which likely takes years of use in production), I would be all for it. Projects like Red Hat's Stratis might fill this gap. I'm not a ZFS zealot; I just love what it provides.
> Running ZFS at small scale is like raising a cow at home for a bucket of raw milk: more a fun curiosity than a production-level operation.
Well on one hand this is true, but on the other hand...
If you're running more than one disk at home, maybe you're some kind of enthusiast (homelabber?) and willing to put some effort into it. Under that scenario, the same amount of time spent learning ZFS yields better results than learning lvm/mdadm.
The risk of bit rot is just as real at home as in the data center. And the other niceties of ZFS, like snapshots, are a boon too. Instead of stacking several separate layers, you have one subsystem that does all of it; you already know all of this from using it in the data center, and I just use it at home too.
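To make the "one subsystem instead of several layers" point concrete, here's a rough sketch of a home pool: redundancy, volume management, checksumming, and snapshots in a handful of commands, where mdadm + LVM + ext4 would each handle one piece. Pool, dataset, and device names are made up:

    # Redundancy (what mdadm would do) plus the filesystem, in one step
    zpool create tank mirror \
        /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

    # Carve out datasets (what LVM logical volumes would do),
    # each with its own properties
    zfs create -o compression=lz4 tank/photos
    zfs create -o recordsize=1M tank/media

    # Snapshots (what LVM snapshots would do, minus the
    # pre-allocated CoW space)
    zfs snapshot -r tank@nightly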