Not too long ago, we walked you through the Proxmox VE 8.x series. Today, we are taking a closer look at Proxmox vs FreeBSD, diving into their features to explore which virtualization host performs better. Before we do that, though, we need to talk about each of them individually.
FreeBSD is an operating system used to power modern servers, desktops, and embedded platforms. A huge community has continually developed it for more than thirty years. Its advanced networking, security, and storage features have made FreeBSD the platform of choice for many of the busiest web sites and most pervasive embedded networking and storage devices.
Proxmox Virtual Environment is an open source server virtualization management solution centered on QEMU/KVM and LXC. Users can manage virtual machines, containers, highly available clusters, storage, and networks through a web interface or CLI. Proxmox VE code is licensed under the GNU Affero General Public License, version 3. The project is developed and maintained by Proxmox Server Solutions GmbH.
Also Read: Proxmox vs VMware ESXi: Which One Should You Choose?
Comparative Analysis
To decide which virtualization host performs better, here is an analysis of the key findings:
Interpretation of CPU and RAM Results
Proxmox provides more consistent CPU performance, while FreeBSD exhibits superior memory performance. The choice between Proxmox and FreeBSD may depend on the particular workload requirements and on whether consistent performance or higher throughput matters more.
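The article does not state which tools produced the CPU and RAM figures; as a purely illustrative sketch, numbers of this kind are commonly gathered with sysbench (hypothetical invocations, assuming sysbench is installed in each guest):

```shell
# CPU test: find primes up to 20000 across 4 threads;
# the events/second figure reflects raw and sustained CPU throughput
sysbench cpu --cpu-max-prime=20000 --threads=4 run

# Memory test: stream 10 GiB through 1 MiB blocks;
# the MiB/second figure reflects memory bandwidth inside the VM
sysbench memory --memory-block-size=1M --memory-total-size=10G run
```

Running identical invocations in the Proxmox guest and the FreeBSD guest keeps the comparison apples-to-apples.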
I/O Performance Tests
The performance data collected from several configurations of Proxmox and FreeBSD provides a broad view of their I/O capabilities and highlights some significant differences.
Host Physical Systems and Filesystems
VM Configurations Comparison
File Creation Speed:
Among VMs, VM on FreeBSD (ZFS, NVMe) leads, followed by VM on FreeBSD (zvol), and then VM on FreeBSD (ZFS, Virtio).
Read and Write Operations per Second:
VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) both outperform VM on Proxmox (ZFS) and VM on Proxmox (LVM) configurations markedly.
VM on Proxmox (ZFS) outperforms VM on Proxmox (LVM) in read and write operations.
fsync Operations per Second:
VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) achieve significantly higher fsync rates compared to VM on Proxmox (ZFS) and VM on Proxmox (LVM).
Throughput:
VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) have the highest throughput, followed by VM on Proxmox (ZFS) and then VM on Proxmox (LVM).
Latency:
VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) display the lowest latencies among the VMs, indicating faster response times.
VM on Proxmox (ZFS) shows lower latencies compared to VM on Proxmox (LVM).
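The metrics above (file creation speed, read/write operations per second, fsync rate, throughput, latency) are not tied to a named tool in the article; a hypothetical fio job that would exercise all of them inside any of the guests might look like this:

```shell
# Mixed 70/30 random read/write workload on a 1 GiB test file,
# issuing fsync after every write so the fsync rate is measured too;
# fio reports IOPS, bandwidth (throughput), and latency percentiles
fio --name=mixed-sync --filename=fio_test.bin --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=psync \
    --fsync=1 --runtime=60 --time_based --group_reporting
```

The `--fsync=1` option is what separates configurations that genuinely persist data from those that merely acknowledge writes from a cache.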
Cache Settings and Performance Influence
Cache settings can considerably influence the performance of virtualization systems. The performance differences may also stem from how the two operating systems manage caching for NVMe devices.
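On Proxmox, cache behavior is configured per virtual disk; the VM ID and storage/volume names below are placeholders:

```shell
# cache=none (the default): guest writes bypass the host page cache
# (O_DIRECT), so benchmark numbers reflect the real device
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none

# cache=writeback: writes are acknowledged once they reach the host
# cache -- faster in benchmarks, but unsafe on host power loss
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
```

When comparing hosts, the cache mode should be held constant across configurations, or the results measure caching policy rather than the storage stack.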
Key insights
Regarding RAM and CPU, the performance of the VMs is comparable. There are minor differences in favor of Proxmox for CPU and FreeBSD for RAM, but in my opinion these differences are so small that they wouldn't sway the decision toward one solution or the other.
The I/O performance data clearly indicates that VM on FreeBSD with NVMe and ZFS outperforms all other configurations by a significant margin. This is apparent in the file creation speed, read/write operations per second, fsync operations per second, throughput, and latency metrics. However, the exceptionally high performance of VM on FreeBSD with NVMe and ZFS suggests that there might be an underlying issue, such as the NVMe driver not honoring fsync properly. This could lead the VM to believe that data has been written when it has not, resulting in artificially inflated performance results.
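A quick way to sanity-check whether fsync is being honored is to time synchronous writes inside the VM: a rate far beyond what the physical device can sustain for sync writes means acknowledgements are coming from a cache, not stable storage. A minimal sketch using GNU dd (as found in a Linux guest; the file name is a placeholder):

```shell
# Write 64 MiB with O_DSYNC, forcing each 1 MiB block to be persisted
# before the next is issued; dd prints the effective transfer rate
dd if=/dev/zero of=sync_test.bin bs=1M count=64 oflag=dsync
rm sync_test.bin
```

Comparing the reported rate against the same command run on the bare host gives a rough indication of whether the virtual NVMe device is passing fsync through.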
When comparing physical hosts, Host FreeBSD (ZFS) demonstrates excellent performance, particularly in comparison to Host Proxmox (ZFS) and Host Proxmox (ext4).
When comparing VMs, the VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) configurations stand out as the top performers, though it is important to keep the potential fsync issue with NVMe storage in mind. VM on Proxmox (ZFS) performs better than VM on Proxmox (LVM), but the FreeBSD configurations outperform both.
The VM using virtio on FreeBSD also shows strong performance, though not as high as the NVMe configuration. It significantly outperforms Proxmox configurations in terms of file creation speed, read/write operations per second, and throughput, while maintaining competitive latencies.
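On FreeBSD the hypervisor is presumably bhyve, where the virtio and NVMe configurations differ only in the emulated device type passed on the command line. The PCI slot number and zvol path below are placeholders, and these are device-slot fragments of a full bhyve invocation, not complete commands:

```shell
# NVMe emulation backed by a ZFS zvol
-s 4,nvme,/dev/zvol/tank/vm0-disk

# The same zvol exposed as a virtio block device instead
-s 4,virtio-blk,/dev/zvol/tank/vm0-disk
```

Because the backing store is identical, any performance gap between the two lines isolates the cost of the device emulation (and its fsync handling) itself.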
Also Read: Containers vs. VMs: Choosing the Right Approach for Your Proxmox VE
Wrap up
In conclusion, while the VM on FreeBSD with NVMe and ZFS shows the best performance, it is important to investigate the potential issue with fsync operations.
By scrutinizing these performance metrics, users can make informed decisions about their virtualization and storage configurations to optimize their systems for particular workloads and performance requirements.
In light of the above discussion, Proxmox is undoubtedly a stable solution: rich in features, battle-tested, and strong in many other respects. FreeBSD, however, specifically with the nvme driver, demonstrates very high performance and very low overhead in installation and operation.