Benchmarking your Linux servers is worth the effort because it:
- Validates that you're getting the performance you're paying for
- Establishes performance baselines for future comparison
- Identifies potential bottlenecks before they impact applications
- Provides objective metrics when comparing different hosting options
The only issue is that there are quite a few Linux benchmarking scripts and tools out there, and picking the one that gives you the data you actually need can be confusing.
That said, have no fear: in this guide we'll cover the 18 best Linux benchmarking tools that'll help you measure and analyze various aspects of your server performance, from CPU and memory to storage and network capabilities. Let's dive in.
Quick assessment tools
1. YABS (Yet Another Benchmark Script)
YABS has become the go-to benchmarking script for many system administrators due to its comprehensive approach and simplicity.
What YABS tests:
- System information (CPU, cores, frequency, architecture)
- Disk performance (sequential and random I/O)
- Network performance (global download/upload speeds)
- CPU performance (single and multi-core benchmarks)
YABS is ideal for quick assessments of new VPS instances or dedicated servers, providing an excellent first look at overall system performance.
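If you want to try it, YABS is typically run straight from the project's script URL with no installation required (review the script first if piping code into your shell makes you uneasy, and check the project's GitHub page for the current flags):

```bash
# Download and run YABS directly
curl -sL https://yabs.sh | bash

# Flags after "--" skip individual tests, e.g. -i to skip the iperf network test
# (flag names can change between versions, so confirm against the README)
curl -sL https://yabs.sh | bash -s -- -i
```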
2. nench.sh
When you need a fast, lightweight benchmark that doesn't require installation, nench.sh provides a good balance of speed and comprehensiveness.
What nench.sh tests:
- Basic system information
- CPU performance (via OpenSSL)
- Memory and storage I/O performance
- Network speed testing
nench.sh is perfect for quick comparisons between different providers or instances, especially when evaluating options for a new deployment.
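A minimal run, using the invocation historically documented in the nench README (the hosting URL may change, so verify it before piping to bash), looks like this:

```bash
# Run nench.sh without installing anything and keep a dated log for later comparison
(curl -s wget.racing/nench.sh | bash) 2>&1 | tee nench-$(date +%F).log
```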
Comprehensive benchmarking suites
3. Phoronix Test Suite
The Phoronix Test Suite is one of the most comprehensive benchmarking platforms available, with over 450 test profiles and 100 test suites.
What Phoronix tests:
- System-wide performance profiling
- Targeted subsystem testing (CPU, GPU, disk, memory)
- Application-specific benchmarks (web servers, databases, etc.)
- Cross-platform comparison with global result database
Phoronix is particularly valuable for its extensive test library and ability to compare your results against a global database of systems.
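As a sketch of a typical workflow (the package name is assumed from the Debian/Ubuntu repositories; any test profile such as pts/compress-7zip works the same way):

```bash
# Install the suite, browse available tests, then install and run one
sudo apt install phoronix-test-suite
phoronix-test-suite list-available-tests
phoronix-test-suite benchmark pts/compress-7zip
```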
4. UnixBench
As one of the oldest and most comprehensive benchmarking suites, UnixBench provides a holistic view of system performance with a single score that can be compared across different systems.
What UnixBench tests:
- System call overhead
- Process creation speed
- Pipe throughput
- Filesystem operations
- Shell script execution performance
UnixBench is excellent for establishing baseline performance and comparing different generations of hardware or OS configurations.
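UnixBench is usually built from source; a common route is the byte-unixbench mirror on GitHub:

```bash
# Build and run the full UnixBench suite
git clone https://github.com/kdlucas/byte-unixbench.git
cd byte-unixbench/UnixBench
make
./Run    # runs the complete suite and prints a composite index score
```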
CPU and memory benchmarks
5. Geekbench
When it comes to CPU and memory performance testing, Geekbench stands out for its comprehensive approach and standardized scoring system.
What Geekbench tests:
- Single-core performance
- Multi-core performance
- Memory performance
- Compute benchmarks for various APIs
Geekbench provides standardized scores that are easily comparable across different systems and platforms.
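Geekbench is closed source and distributed as a tarball from geekbench.com; a command-line run looks roughly like this (archive and binary names depend on the version you download):

```bash
# Extract the Linux tarball downloaded from geekbench.com and run the CLI binary
tar xf Geekbench-6.*-Linux.tar.gz
cd Geekbench-6.*-Linux
./geekbench6    # binary name varies with the major version
```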
6. stress-ng
Understanding how your system performs under load is critical for capacity planning. stress-ng is designed to stress test various system components to their limits.
What stress-ng tests:
- CPU stress testing with various algorithms
- Memory testing with different access patterns
- I/O stress on storage subsystems
- System stability under extreme conditions
stress-ng is invaluable for testing system stability and identifying performance degradation under heavy load conditions.
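For example, the following run loads four CPU workers and two virtual-memory workers for a minute, then prints a short metrics summary:

```bash
# 4 CPU workers, 2 VM workers using 1 GiB each, 60-second run, brief metrics at the end
stress-ng --cpu 4 --vm 2 --vm-bytes 1G --timeout 60s --metrics-brief
```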
7. stressapptest
Originally developed by Google, stressapptest focuses on memory and I/O subsystem validation.
What stressapptest tests:
- Memory subsystem errors
- I/O path stability
- System performance under sustained load
stressapptest is particularly valuable for validating new hardware or after component replacements to ensure system stability.
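A minimal run might look like the following (check stressapptest -h on your version, as flags and defaults can differ):

```bash
# Exercise roughly 1 GiB of memory with 4 copy threads for 60 seconds
stressapptest -s 60 -M 1024 -m 4
```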
Storage benchmarks
8. Fio (Flexible I/O Tester)
For detailed storage performance analysis, Fio is the industry standard tool, offering granular testing with highly customizable parameters.
What Fio tests:
- Sequential read/write speeds
- Random read/write performance (IOPS)
- Mixed workloads with adjustable read/write ratios
- I/O latency and queue depth impact
Fio excels at revealing the true capabilities of modern storage systems, especially when configuring databases or I/O-intensive applications.
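As an illustration, here is a 4 KiB random-read job against a 1 GiB test file (the path and sizes are placeholders; adjust them to your environment):

```bash
# Random 4 KiB reads, queue depth 32, direct I/O, report IOPS and latency after 60 s
fio --name=randread --filename=/tmp/fio-test --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```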
9. Bonnie++
A classic tool for filesystem performance testing, Bonnie++ focuses on file operations typical in server workloads.
What Bonnie++ tests:
- File creation and deletion
- Sequential and random reads and writes
- Metadata operations (file seeks)
Bonnie++ is particularly useful for comparing different filesystem types or mount options.
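A typical invocation points Bonnie++ at the filesystem you care about and uses a working set larger than RAM so caching doesn't flatter the results (the mount point below is a placeholder):

```bash
# Test the filesystem mounted at /mnt/data with a 4 GiB working set
bonnie++ -d /mnt/data -s 4g -u $(whoami)
```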
10. IOzone
IOzone offers comprehensive filesystem performance testing with support for various file operations and access patterns.
What IOzone tests:
- Read/write operations with different record sizes
- Random and sequential access patterns
- File operations under various load conditions
- Performance across different file sizes
IOzone is valuable for optimizing filesystem settings and identifying performance characteristics across different workloads.
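A common starting point is automatic mode, which sweeps a range of file and record sizes and can write an Excel-compatible report:

```bash
# Automatic mode with files up to 1 GiB, results written to a spreadsheet-friendly file
iozone -a -g 1g -b iozone-results.xls
```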
Network benchmarks
11. iperf3
iperf3 is the standard tool for measuring network throughput, jitter, and packet loss between systems.
What iperf3 tests:
- Network bandwidth (TCP/UDP)
- Jitter and packet loss
- Bidirectional performance
- Multiple parallel stream capabilities
iperf3 is essential for evaluating network performance between data centers or within infrastructure components.
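iperf3 needs an instance running on both ends; the hostname below is a placeholder:

```bash
# On the server being tested
iperf3 -s

# On the client: 30-second TCP test with 4 parallel streams, then the reverse direction
iperf3 -c server.example.com -t 30 -P 4
iperf3 -c server.example.com -t 30 -R
```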
12. nuttcp
An alternative to iperf3, nuttcp provides additional network testing capabilities.
What nuttcp tests:
- TCP and UDP throughput
- Network performance with different buffer sizes
- Latency and packet loss statistics
- Data transfer efficiency
nuttcp is particularly useful for evaluating network performance in more specialized network environments.
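Like iperf3, nuttcp runs one end in server mode. A basic sketch follows (the hostname is a placeholder, and flag details vary between versions, so check the man page):

```bash
# On the receiving host
nuttcp -S

# From the sending host: 30-second test with per-second interval reports
nuttcp -T 30 -i 1 receiver.example.com
```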
13. speedtest-cli
For measuring internet bandwidth to external locations, speedtest-cli provides a command-line interface to Speedtest.net.
What speedtest-cli tests:
- Download speeds to global servers
- Upload performance
- Connection latency
- ISP and server information
speedtest-cli is valuable for evaluating external connectivity quality, particularly for services that interact with users over the internet.
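Typical usage, assuming installation via pip (it is also packaged by most distributions):

```bash
pip install speedtest-cli
speedtest-cli --simple      # just ping, download, and upload figures
speedtest-cli --list        # show nearby Speedtest.net servers
```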
Application-specific benchmarks
14. wrk (HTTP Benchmark)
For web servers and API endpoints, wrk provides powerful HTTP benchmarking capabilities.
What wrk tests:
- HTTP request throughput (requests per second)
- Latency distribution
- Error rates under load
- Server performance with concurrent connections
wrk is essential for capacity planning and optimizing web applications and APIs.
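For example, a 30-second run with 4 threads and 100 open connections against a placeholder URL:

```bash
# Print the full latency distribution alongside requests/second
wrk -t4 -c100 -d30s --latency http://your-server.example.com/
```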
Database benchmarks
15. Sysbench
Sysbench excels at testing database performance, particularly for MySQL/MariaDB and PostgreSQL.
What Sysbench tests:
- OLTP workload simulation
- CPU performance
- Memory operations
- File I/O performance
Sysbench is invaluable for optimizing database configurations and validating performance after changes.
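Here is a sketch of a CPU test and a MySQL OLTP test; the database name, user, and password are placeholders, and the test schema must be prepared before the timed run:

```bash
# CPU benchmark with 4 threads
sysbench cpu --threads=4 run

# MySQL/MariaDB OLTP read/write test: create the test tables, then run for 60 seconds
sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
    --tables=10 --table-size=100000 prepare
sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
    --tables=10 --table-size=100000 --threads=8 --time=60 run
```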
16. pgbench
For PostgreSQL-specific benchmarking, pgbench provides detailed performance insights.
What pgbench tests:
- Transaction processing performance
- Connection handling efficiency
- Query execution times
- Scalability with concurrent clients
pgbench is essential for PostgreSQL tuning and configuration optimization.
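A typical session initializes a dedicated test database and then runs a timed test with concurrent clients (the database name is a placeholder):

```bash
# Initialize test tables at scale factor 50, then run 10 clients for 60 seconds
createdb pgbench_test
pgbench -i -s 50 pgbench_test
pgbench -c 10 -j 2 -T 60 pgbench_test
```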
Specialized tools
17. Intel MLC (Memory Latency Checker)
For systems with Intel processors, MLC provides detailed memory subsystem analysis.
What MLC tests:
- Memory bandwidth
- Memory latency
- NUMA effects
- Cache performance
MLC is particularly valuable for optimizing memory-intensive applications and understanding NUMA effects.
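MLC ships as a binary download from Intel and generally needs root; a couple of common modes are shown below (option names as of recent versions, so check ./mlc --help):

```bash
sudo ./mlc                      # default suite: latencies, bandwidths, loaded latency
sudo ./mlc --latency_matrix     # idle latency between NUMA nodes
sudo ./mlc --bandwidth_matrix   # bandwidth between NUMA nodes
```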
18. HPC Challenge Benchmark
For high-performance computing workloads, the HPC Challenge Benchmark provides comprehensive performance evaluation.
What HPC Challenge tests:
- HPL (High-Performance Linpack)
- STREAM memory bandwidth
- PTRANS (parallel matrix transpose)
- RandomAccess
- FFT (Fast Fourier Transform)
The HPC Challenge is helpful for evaluating systems intended for scientific computing or other high-performance workloads.
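HPCC requires an MPI implementation and a BLAS library; as a rough sketch using the Debian/Ubuntu package (package names and paths vary by distribution):

```bash
sudo apt install hpcc openmpi-bin
# Edit hpccinf.txt (problem size, process grid) in the working directory, then:
mpirun -np 4 hpcc
# Results are written to hpccoutf.txt
```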
Linux benchmarking tools comparison
| Tool | Category | Complexity | Installation | Real-world Correlation | Open Source | GUI Available |
|---|---|---|---|---|---|---|
| YABS | All-in-one | Low | None (script) | Medium | Yes | No |
| nench.sh | All-in-one | Low | None (script) | Medium | Yes | No |
| Phoronix Test Suite | All-in-one | High | Package | High | Yes | Yes |
| UnixBench | System | Medium | Build | Medium | Yes | No |
| Geekbench | CPU/Memory | Low | Binary | High | No | Yes |
| stress-ng | Stress Test | Medium | Package | Medium | Yes | No |
| stressapptest | Memory/IO | Medium | Package | Medium | Yes | No |
| Fio | Storage | High | Package | Very High | Yes | No |
| Bonnie++ | Storage | Medium | Package | High | Yes | No |
| IOzone | Storage | High | Package | High | Yes | No |
| iperf3 | Network | Medium | Package | Very High | Yes | No |
| nuttcp | Network | Medium | Package | Very High | Yes | No |
| speedtest-cli | Internet | Low | Package/pip | Very High | Yes | No |
| wrk | HTTP | Medium | Build | Very High | Yes | No |
| Sysbench | Database | High | Package | Very High | Yes | No |
| pgbench | PostgreSQL | Medium | Package | Very High | Yes | No |
| Intel MLC | Memory | Medium | Binary | High | No | No |
| HPC Challenge | HPC | Very High | Build | Very High | Yes | No |
How to choose the right Linux benchmarking tool
Different scenarios call for different benchmarking approaches:
- Initial server evaluation: YABS or nench.sh for quick assessment
- Detailed performance profiling: Phoronix Test Suite for comprehensive analysis
- Storage-centric workloads: Fio, Bonnie++, and IOzone for in-depth I/O testing
- High-performance computing: Intel MLC and HPC Challenge for optimizing thread placement and memory access
- Network-critical applications: iperf3, nuttcp, and speedtest-cli for detailed network analysis
- Web application hosting: wrk for HTTP performance testing
- Database servers: Sysbench and pgbench for optimizing database performance
Ultimately, a combination of many of these tools provides the most complete picture of your system's capabilities.
Best practices for meaningful benchmarking
To ensure your benchmarking results are useful:
- Create a consistent testing environment: Minimize background processes and external factors
- Run multiple test iterations: Single runs can be misleading; average several runs instead (see the example after this list)
- Benchmark with realistic workloads: Test patterns that match your actual application behavior
- Document your methodology: Record specific test parameters for future comparisons
- Consider performance variability: Especially in virtualized or cloud environments
- Test at different times: Performance can vary based on time of day, especially in shared environments
- Validate with cross-tool testing: Use multiple tools that test the same subsystem to confirm findings
- Account for distribution differences: Package names, versions, and default configurations can vary significantly between Linux distributions
- Check for distribution-specific optimizations: Some benchmarking tools may perform differently on various distributions due to compiler flags or kernel parameters
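For instance, a simple loop that repeats the fio random-read job from earlier and stores each run as JSON for later averaging might look like this (file names are arbitrary):

```bash
# Run the same fio job five times and keep machine-readable logs per run
for i in 1 2 3 4 5; do
    fio --name=randread --filename=/tmp/fio-test --size=1G --rw=randread \
        --bs=4k --runtime=60 --time_based --direct=1 \
        --output="fio-run-${i}.json" --output-format=json
done
```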
Conclusion
These 18 Linux benchmarking tools provide comprehensive insights into your infrastructure's performance characteristics, helping you optimize deployments, troubleshoot bottlenecks, and make informed decisions.
For reliable infrastructure that consistently delivers on its performance promises, consider solutions like those offered by xTom (hello! that's us ;-), which provides dedicated servers, colocation, IP transit services, and more, all of which you can verify with the benchmarking tools covered in this article.
For those who need flexible, scalable virtual servers, V.PS (xTom's VPS brand) offers NVMe-powered KVM VPS instances that deliver exceptional performance.
Whatever your hosting needs, these benchmarking tools will help ensure you're getting the performance you require for your applications and services.
Thanks for reading and have fun benchmarking!
Frequently asked questions about Linux benchmarking
How do synthetic benchmarks compare to application benchmarks?
Synthetic benchmarks measure raw system capabilities, while application benchmarks simulate real-world workloads. Both provide valuable insights—synthetic tests offer standardized comparison points, while application tests better predict actual performance for specific use cases.
How frequently should benchmarking be performed?
Benchmark when deploying new systems, after hardware or significant software changes, and periodically (quarterly is common) to detect performance degradation. Also consider benchmarking during capacity planning exercises.
How can I benchmark applications with specific requirements?
For specialized workloads, custom benchmarks using tools like Phoronix Test Suite's test creation capabilities or application-specific tools (e.g., pgbench for PostgreSQL or mysqlslap for MySQL) often provide more relevant insights than general-purpose benchmarks.
How do I interpret benchmark results in virtualized environments?
Virtualized environments often show more performance variability than bare metal. Look for performance patterns rather than absolute numbers, and consider "noisy neighbor" effects when analyzing unexpected results.
Can benchmarking tools impact production systems?
Intensive benchmarks consume significant resources and can impact production workloads. Always schedule resource-intensive benchmarking during maintenance windows or on staging systems that mirror production configurations.
How do I benchmark distributed systems effectively?
For distributed systems, coordinated multi-node benchmarking using tools like distributed Phoronix instances or specialized tools like JMeter in distributed mode provides more meaningful results than single-node tests.
How do benchmarking tools differ across Linux distributions?
While most benchmarking tools function similarly across distributions, there can be notable differences in package availability, default configurations, and even performance due to distribution-specific optimizations. Always check distribution-specific documentation when available, and consider package name differences when installing tools (apt for Debian/Ubuntu, yum for CentOS/RHEL 7, dnf for newer RPM-based systems, pacman for Arch, and zypper for SUSE).
What role does benchmarking play in capacity planning?
Benchmarking helps establish performance baselines and identify scaling limits. By simulating increased load using tools like stress-ng or application-specific load generators, you can predict when infrastructure expansion will be necessary.
Is cloud instance performance consistent?
Cloud instance performance can vary significantly based on the provider, instance type, and even time of day. Running benchmarks at different times and days of the week can help identify performance variability patterns that might affect your applications.