Thanks a lot for your reply! My box will probably end up much like yours — I need it for both applications and storage, so understood. In that case I'll consider creating three zpools directly under Proxmox: one each for the USB drive, the SSD, and the HDDs. Memory isn't really a concern — I'm already at 32 GB — and my total capacity looks like it will stay under 15 TB, so that side should be fine.
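For reference, a minimal sketch of what those three pools could look like on the Proxmox host. The pool names and device paths are placeholders — in practice use the stable /dev/disk/by-id/ paths so pools survive device reordering:

```
# USB stick pool (single vdev, no redundancy)
zpool create -o ashift=12 usbpool /dev/sdb

# SSD pool (single vdev; a mirror could be added later)
zpool create -o ashift=12 ssdpool /dev/sdc

# HDD pool: with 4 bays, RAIDZ1 gives one-disk redundancy
zpool create -o ashift=12 hddpool raidz1 /dev/sdd /dev/sde /dev/sdf /dev/sdg
```

`ashift=12` assumes 4K-sector drives, which is the safe default for modern disks.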
However, I noticed that in your setup you seem to build the HDD zpool by passing individual disks through to a VM. The FreeNAS documentation advises against doing this inside a VM; if you really must run it virtualized, you should pass the entire SATA controller through to the VM instead. (My current setup is Proxmox on the host with disks passed through to an Xpenology/DSM VM, which I've never felt entirely comfortable with, so I plan to rebuild. The chassis only has four 3.5" bays, so RAID 5 is the most I can do, but I'll definitely move to larger drives in the future — and the larger a single disk gets, the less suitable RAID 5 becomes. So I figure I'll solve this properly while I'm upgrading the hardware anyway.)
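If you do go the controller-passthrough route, Proxmox handles it via PCI passthrough. Roughly like this — the VM ID 100 and the PCI address are placeholders, and IOMMU (VT-d/AMD-Vi) must be enabled in the BIOS and kernel first:

```
# Find the SATA controller's PCI address
lspci | grep -i sata

# Pass the whole controller (e.g. 0000:00:17.0) to VM 100
qm set 100 -hostpci0 0000:00:17.0
```

One caveat: if the Proxmox boot disk hangs off the same SATA controller, the whole controller can't be given away — that's worth checking before rebuilding.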
Here is the original text from the FreeNAS documentation:
ZFS combines the roles of RAID controller, Volume Manager, and file system, and since it’s all three in one, it wants direct access to your disks in order to work properly. The closer you can get ZFS to your storage hardware, the happier ZFS is, and the better it can do its job of keeping your data safe. Things like native virtual disks or virtual disks on RAID controllers insulate ZFS from the disks, and therefore should be avoided whenever possible. Using a hypervisor, you typically have a disk on a RAID controller presented to a hypervisor which creates a datastore with a disk on it running FreeNAS. This places two layers between ZFS and the physical disks which warrants taking the following precautions.
If you are not using PCI passthrough (more on that below), then you must disable the scrub tasks in ZFS. The hardware can “lie” to ZFS so a scrub can do more damage than good, possibly even permanently destroying your zpool.
The second precaution is to disable any write caching that is happening on the SAN, NAS, or RAID controller itself. A write cache can easily confuse ZFS about what has or has not been written to disk. This confusion can result in catastrophic pool failures.
Using a single disk leaves you vulnerable to pool metadata corruption which could cause the loss of the pool. To avoid this, you need a minimum of three vdevs, either striped or in a RAIDZ configuration. Since ZFS pool metadata is mirrored between three vdevs if they are available, using a minimum of three vdevs to build your pool is safer than a single vdev. Ideally vdevs that have their own redundancy are preferred.
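As a concrete illustration of that last point, a pool whose metadata is spread across three vdevs that each have their own redundancy might be built like this (pool and device names are placeholders — this layout needs six bays, more than the four-bay chassis above, so it's only a sketch of the principle):

```
# Three mirrored vdevs: the pool metadata copies land on separate
# vdevs, and each vdev survives a single-disk failure on its own.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf
```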