Server performance with Red Hat Enterprise Linux 6 vs. Microsoft Windows Server 2012

Commissioned by Red Hat, Inc.

Throughout all of our subsystem tests, we found that the open-source Red Hat Enterprise Linux 6 operating system generally performed better than Microsoft Windows Server 2012. In general, we also found that tuning the operating system allowed us to get even greater performance out of the system running Red Hat Enterprise Linux 6.
Selecting the right operating system for your datacenter can potentially maximize the performance levels you get and improve end-user experience. When faced with a choice between Microsoft Windows Server 2012 and Red Hat Enterprise Linux 6, it is important to assess the capabilities of these operating systems on your hardware with both out-of-box and tuned configurations to ensure you can get the most from your servers. In the Principled Technologies labs, we investigated the server performance on hardware running Red Hat Enterprise Linux 6 and Microsoft Windows Server 2012 by testing several different subsystems, including CPU, memory, disk I/O, and network. In nearly every test, we found that Red Hat Enterprise Linux 6 outperformed its Microsoft competitor, and found that tuning with the Red Hat solution generally increased performance even more. If an operating system can get more out of each server subsystem, as Red Hat Enterprise Linux 6 did in our tests, it follows that it has the potential to improve application performance across the board. Here, we present a brief summary of our findings. For more detail, check out our full reports, which we link below.
A Principled Technologies summary 2
Testing Java application performance

A Java Virtual Machine (JVM) provides a sufficiently rich abstraction layer to run applications independent of the particular computer hardware implementation. Testing Java performance with the SPECjbb2013 [1] benchmark can give an indication of the Java application performance a solution provides. As our results indicate, Red Hat Enterprise Linux 6 with OpenJDK [2] outperformed Microsoft Windows Server 2012 with Java HotSpot [3] on the industry-standard SPECjbb2013 benchmark on both of the reported metrics, max-jOPS and critical-jOPS, using a small heap size. With a large heap size, the Red Hat/OpenJDK solution delivered 34,129 max-jOPS and 22,126 critical-jOPS, the best reported critical operations score as of June 30, 2013, while the Microsoft/Java HotSpot solution could not produce a qualifying benchmark result. [4] Figure 1 shows the small heap scores for both systems and the large heap score for Red Hat Enterprise Linux 6 with OpenJDK.

Figure 1: SPECjbb2013 scores for Red Hat Enterprise Linux 6 with OpenJDK and Microsoft Windows Server 2012 with Java HotSpot. Higher numbers are better. Red Hat with OpenJDK (large heap): 34,129 max-jOPS, 22,126 critical-jOPS. Red Hat with OpenJDK (small heap): 34,951 max-jOPS, 14,655 critical-jOPS. Windows Server 2012 with Java HotSpot (small heap): 33,718 max-jOPS, 14,414 critical-jOPS.
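Using the small-heap scores reported in Figure 1, the relative advantage of the Red Hat/OpenJDK solution can be computed directly; this short sketch is illustrative arithmetic, not part of the report's tooling:

```python
def percent_higher(a, b):
    """How much higher score a is than score b, as a percentage."""
    return (a - b) / b * 100

# Small-heap scores taken from Figure 1 of the report:
rhel = {"max-jOPS": 34_951, "critical-jOPS": 14_655}
win = {"max-jOPS": 33_718, "critical-jOPS": 14_414}

for metric in rhel:
    lead = percent_higher(rhel[metric], win[metric])
    print(f"{metric}: Red Hat Enterprise Linux 6 leads by {lead:.1f}%")
```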
For details on our SPECjbb2013 testing, read the full report at www.principledtechnologies.com/RedHat/RHEL6_jbb_0613.pdf.

Testing file system I/O performance

When choosing an operating system platform for your servers, you should know what I/O performance to expect from the operating system and file systems you select.
[1] SPEC and SPECjbb are trademarks of the Standard Performance Evaluation Corp. (SPEC). See www.spec.org for more information.
[2] OpenJDK is a trademark of Oracle, Inc.
[3] Java and Java HotSpot are trademarks of Oracle, Inc.
[4] Small heap (20GB) OpenJDK and Java HotSpot JVM results were not submitted to SPEC for publication. Large heap OpenJDK results are official SPECjbb2013 results at spec.org. See www.spec.org/jbb2013/results/res2013q2/jbb2013-20130319-00013.html for more details on large heap results.
Using the IOzone Filesystem Benchmark in our tests, we found that the I/O performance of file systems on Red Hat Enterprise Linux 6 was better than that of the file systems available on Microsoft Windows Server 2012, with both out-of-the-box and optimized configurations. Using the default native file systems, ext4 and NTFS, we found that Red Hat Enterprise Linux 6 outperformed Windows Server 2012 by as much as 65.2 percent out-of-the-box, and by as much as 33.4 percent using optimized configurations. Using the more advanced native file systems, XFS and ReFS, we found that Red Hat Enterprise Linux 6 outperformed Windows Server 2012 by as much as 31.9 percent out-of-the-box, and by as much as 48.4 percent using optimized configurations. Figure 2 shows file system performance using the in-cache test case; our full test also included direct I/O and out-of-cache cases.

Figure 2: Comparison of the I/O performance in KB/s for the four file systems (Red Hat Enterprise Linux 6 ext4 and XFS, Microsoft Windows Server 2012 NTFS and ReFS) using the in-cache method, out-of-box and optimized. These throughputs are the geometric average of the 13 IOzone tests. Higher throughput is better.
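The Figure 2 caption describes each reported throughput as the geometric average of the 13 IOzone tests. A minimal sketch of that aggregation follows; the per-test throughput values below are illustrative placeholders, not measured results:

```python
import math

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n positive values,
    computed via logs to avoid overflow on large throughputs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test throughputs in KB/s (write, read, re-read, ...):
throughputs = [5_200_000, 4_800_000, 6_100_000, 3_900_000]
print(f"Composite score: {geometric_mean(throughputs):,.0f} KB/s")
```

The geometric mean is the conventional choice here because it keeps one very fast test (such as a fully cached re-read) from dominating the composite score.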
To learn more about our complete I/O testing, see the full report at www.principledtechnologies.com/RedHat/RHEL6_IO_0613.pdf.

Testing the CPU with SPEC CPU2006 and LINPACK

In our CPU tests, we found that the open-source Red Hat Enterprise Linux 6 solution performed as well as or better than Microsoft Windows Server 2012. In our SPEC CPU2006 tests, the Red Hat Enterprise Linux 6 solution achieved consistently higher scores than the Windows Server 2012 solution. Figures 3 and 4 show the scores that the systems achieved on both parts of the benchmark: SPEC CINT2006 and SPEC CFP2006.
Figure 3: SPEC CPU2006 results, in SPEC CINT2006 scores, for the two solutions. Higher numbers are better. Red Hat Enterprise Linux 6: 639 (out-of-box) and 640 (optimized) SPECint_rate2006; 623 (out-of-box) and 621 (optimized) SPECint_rate_base2006. Microsoft Windows Server 2012: 618 and 619 SPECint_rate2006; 593 and 591 SPECint_rate_base2006.
Figure 4: SPEC CPU2006 results, in SPEC CFP2006 scores, for the two solutions. Higher numbers are better. Red Hat Enterprise Linux 6: 422 (out-of-box) and 422 (optimized) SPECfp_rate2006; 408 and 404 SPECfp_rate_base2006. Microsoft Windows Server 2012: 406 and 405 SPECfp_rate2006; 399 and 397 SPECfp_rate_base2006.
We also used the LINPACK benchmark to test the floating point performance of the platforms out-of-box and optimized. As Figure 5 shows, Red Hat Enterprise Linux 6 outperformed Windows Server 2012 at high thread counts in our tests. In addition, tuning Red Hat Enterprise Linux 6 increased performance steadily, while optimizing Windows Server 2012 had less of an effect on its performance.
Figure 5: LINPACK floating point performance results, in GFlops, for the two operating system solutions at 1, 2, 4, 8, and 16 threads, in both out-of-box and optimized configurations.
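The GFlops figures that LINPACK reports are derived from the time to solve a dense linear system. A sketch of that accounting, using the standard HPL operation count of 2/3·n³ + 2·n² floating point operations for an n×n system, follows; the problem size and solve time below are illustrative assumptions, not the report's parameters:

```python
def linpack_gflops(n, seconds):
    """GFlops for solving an n x n dense linear system in `seconds`,
    using the standard LINPACK/HPL operation count."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# e.g., a hypothetical n=40,000 solve finishing in 150 seconds:
print(f"{linpack_gflops(40_000, 150):.1f} GFlops")
```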
For complete details about our SPEC CPU2006 and LINPACK testing, read the full report at www.principledtechnologies.com/RedHat/RHEL6_CPU_RAM_0613.pdf.

Testing memory performance

Random access memory (RAM) is one of the most vital subsystems that can affect the performance of business applications. We used the STREAM benchmark to measure the memory bandwidth available with both Red Hat Enterprise Linux 6 and Microsoft Windows Server 2012. In our memory bandwidth tests, the Red Hat Enterprise Linux 6 solution outperformed the Windows Server 2012 solution at mid-range thread counts. Figures 6 and 7 show the performance of Red Hat Enterprise Linux 6 and Microsoft Windows Server 2012, both out-of-box and optimized.
Figure 6: Out-of-box memory bandwidth comparison using the STREAM benchmark. Sustained memory bandwidth, in MB/s, for the Copy, Scale, Add, and Triad tests at 1, 2, 4, 8, and 16 cores.
Figure 7: Optimized memory bandwidth comparison using the STREAM benchmark. Sustained memory bandwidth, in MB/s, for the Copy, Scale, Add, and Triad tests at 1, 2, 4, 8, and 16 cores.
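The four STREAM kernels in Figures 6 and 7 differ only in their inner update; Triad, for example, computes a[i] = b[i] + scalar·c[i] and charges three 8-byte array accesses per element. The pure-Python sketch below shows the kernel and the bandwidth accounting only; real STREAM is compiled C with OpenMP, and the array size here is an illustrative assumption:

```python
import time

def triad_bandwidth(n=2_000_000, scalar=3.0):
    """Run the Triad kernel a[i] = b[i] + scalar * c[i] once and return
    MB/s, counting 3 arrays x 8 bytes per element as STREAM does."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [b[i] + scalar * c[i] for i in range(n)]
    elapsed = time.perf_counter() - start
    assert a[0] == 7.0  # sanity check: 1.0 + 3.0 * 2.0
    bytes_moved = 3 * 8 * n
    return bytes_moved / elapsed / 1e6

print(f"{triad_bandwidth():,.0f} MB/s (interpreter overhead dominates)")
```

Copy (a[i] = b[i]) and Scale (a[i] = scalar * c[i]) count two arrays per element; Add and Triad count three, which is why their MB/s figures in the charts are directly comparable only within a kernel.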
For complete details about our RAM testing using STREAM, read the full report at www.principledtechnologies.com/RedHat/RHEL6_CPU_RAM_0613.pdf.
Testing networking performance

Because applications typically do not manage networking resources directly and instead rely on the operating system to do so, the operating system you select can have a direct impact on the TCP and UDP performance available to your applications and users. Throughout our network tests, we found that the open-source Red Hat Enterprise Linux 6 solution delivered up to three times better TCP throughput than Microsoft Windows Server 2012 in an out-of-box configuration and up to two times better throughput in an optimized configuration. In addition, Red Hat Enterprise Linux 6 delivered better UDP throughput at various message sizes. Figure 8 shows the performance of both solutions in the two configurations.

Figure 8: TCP throughput, in 10^6 b/s, that the two solutions achieved using different message sizes, out-of-box (OOB) and optimized (OPT). Higher numbers are better.
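The shape of a bulk TCP throughput test like the one behind Figure 8 can be sketched with standard sockets: a sender pushes fixed-size messages for a set duration while a sink drains them, and throughput is bits transferred over elapsed time. This loopback sketch is an assumption-laden illustration, not the report's methodology, and the message sizes and duration are arbitrary:

```python
import socket
import threading
import time

def run_sink(listener):
    """Accept one connection and drain bytes until the peer closes."""
    conn, _ = listener.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_tcp_throughput(message_size, duration=0.3):
    """Return throughput in 10^6 bits/s for one sender over loopback."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    sink = threading.Thread(target=run_sink, args=(listener,))
    sink.start()

    payload = b"x" * message_size
    sent = 0
    with socket.create_connection(listener.getsockname()) as s:
        start = time.monotonic()
        deadline = start + duration
        while time.monotonic() < deadline:
            s.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    sink.join()
    listener.close()
    return sent * 8 / elapsed / 1e6

# Larger messages generally reduce per-send overhead:
for size in (1024, 65536):
    print(f"{size:>6} B messages: {measure_tcp_throughput(size):,.0f} x 10^6 b/s")
```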
To learn more about our network testing, read the full report at www.principledtechnologies.com/RedHat/RHEL6_network_0613.pdf.

IN CONCLUSION

Throughout all of our subsystem tests, we found that the open-source Red Hat Enterprise Linux 6 operating system generally performed better than Microsoft Windows Server 2012. In general, we also found that tuning the operating system allowed us to get even greater performance out of the system running Red Hat Enterprise Linux 6.
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc. 1007 Slater Road, Suite 300 Durham, NC, 27703 www.principledtechnologies.com We provide industry-leading technology assessment and fact-based marketing services. We bring to every assignment extensive experience with and expertise in all aspects of technology testing and analysis, from researching new technologies, to developing new methodologies, to testing with existing and new tools.
When the assessment is complete, we know how to present the results to a broad range of target audiences. We provide our clients with the materials they need, from market-focused data to use in their own collateral to custom sales aids, such as test reports, performance assessments, and white papers. Every document reflects the results of our trusted independent analysis.
We provide customized services that focus on our clients' individual requirements. Whether the technology involves hardware, software, Web sites, or services, we offer the experience, expertise, and tools to help our clients assess how it will fare against its competition, its performance, its market readiness, and its quality and reliability.
Our founders, Mark L. Van Name and Bill Catchings, have worked together in technology assessment for over 20 years. As journalists, they published over a thousand articles on a wide array of technology subjects. They created and led the Ziff-Davis Benchmark Operation, which developed such industry-standard benchmarks as Ziff Davis Media's Winstone and WebBench. They founded and led eTesting Labs, and after the acquisition of that company by Lionbridge Technologies were the head and CTO of VeriTest.

Principled Technologies is a registered trademark of Principled Technologies, Inc. All other product names are the trademarks of their respective owners. Disclaimer of Warranties; Limitation of Liability: PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING; HOWEVER, PRINCIPLED TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE. ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC.'S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.'S TESTING. CUSTOMER'S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.