Differences between version 34 and the previous author's revision of PgBench.
Newer page: version 34, last edited on Monday, June 19, 2006 1:39:05 pm by GuyThornley
Older page: version 17, last edited on Thursday, June 15, 2006 8:22:56 pm by AristotlePagaltzis
@@ -40,9 +40,9 @@
* Using [Ext3] with <tt>ordered</tt> data mode
On with the testing!
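Each result block below is raw pgbench output. A short helper like the following (my own sketch, not part of the original page — the function name and regex are mine) pulls the two tps figures out of a block for comparison across configurations:

```python
import re

# Matches the two summary lines pgbench prints at the end of a run,
# e.g. "tps = 132.257337 (including connections establishing)".
TPS_RE = re.compile(
    r"tps = ([\d.]+) \((including|excluding) connections establishing\)"
)

def parse_tps(output: str) -> dict:
    """Return {'including': ..., 'excluding': ...} tps values as floats."""
    return {kind: float(value) for value, kind in TPS_RE.findall(output)}

sample = """\
tps = 132.257337 (including connections establishing)
tps = 141.908320 (excluding connections establishing)
"""
print(parse_tps(sample))  # {'including': 132.257337, 'excluding': 141.90832}
```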
-!! Results
+!! Results: 4-disk configurations
* Data array: RAID5, 4x 72GB 10k RPM%%%
WAL array: On data array%%%
@@ -52,8 +52,15 @@
number of transactions per client: 100
number of transactions actually processed: 10000/10000
tps = 132.257337 (including connections establishing)
tps = 141.908320 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 114.351628 (including connections establishing)
+ tps = 114.418688 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%%
WAL array: On data array%%%
@@ -65,8 +72,15 @@
number of transactions per client: 100
number of transactions actually processed: 10000/10000
tps = 135.567199 (including connections establishing)
tps = 146.354640 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 118.365607 (including connections establishing)
+ tps = 118.442501 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%%
WAL array: On data array%%%
@@ -82,9 +96,9 @@
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%%
WAL array: On data array%%%
-
+ Other notes: Battery-backed write cache and <tt>commit_delay</tt> disabled%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 50
@@ -105,53 +119,102 @@
tps = 220.277597 (excluding connections establishing)
</pre>
* Data array: RAID1+0, 4x 72GB 15k RPM%%%
- WAL array: RAID1, 2x 72GB 10k RPM%%%
+ WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
- number of transactions per client: 2000
- number of transactions actually processed: 200000/200000
- tps = 409.561669 (including connections establishing)
- tps = 414.078634 (excluding connections establishing)
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 325.140579 (including connections establishing)
+ tps = 330.843403 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 284.662951 (including connections establishing)
+ tps = 285.127666 (excluding connections establishing)
</pre>
-* Data array: RAID1+0, 4x 72GB 15k RPM%%%
+* Data array: RAID5, 4x 72GB 15k RPM%%%
WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
- tps = 325.140579 (including connections establishing)
- tps = 330.843403 (excluding connections establishing)
+ tps = 192.430583 (including connections establishing)
+ tps = 194.404205 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 189.203382 (including connections establishing)
+ tps = 189.379783 (excluding connections establishing)
</pre>
-* Data array: RAID5, 4x 72GB 15k RPM%%%
+* Data array: RAID1, 2x 72GB 15k RPM%%%
+ WAL array: RAID1, 2x 72GB 15k RPM%%%
+
+ <pre>
+ scaling factor: 100
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 263.185661 (including connections establishing)
+ tps = 266.928392 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 171.537230 (including connections establishing)
+ tps = 171.680858 (excluding connections establishing)
+ </pre>
+
+!! Results: 6-disk configurations
+
+* Data array: RAID1+0, 4x 72GB 15k RPM%%%
WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
+ number of clients: 100
+ number of transactions per client: 2000
+ number of transactions actually processed: 200000/200000
+ tps = 409.561669 (including connections establishing)
+ tps = 414.078634 (excluding connections establishing)
+
+ scaling factor: 600
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
- tps = 236.721312 (including connections establishing)
- tps = 239.738377 (excluding connections establishing)
+ tps = 340.756686 (including connections establishing)
+ tps = 341.404543 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 15k RPM%%%
- WAL array: On data array%%%
+ WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
- tps = 192.430583 (including connections establishing)
- tps = 194.404205 (excluding connections establishing)
+ tps = 276.581309 (including connections establishing)
+ tps = 280.727719 (excluding connections establishing)
+
+ scaling factor: 600
+ number of clients: 100
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 212.377629 (including connections establishing)
+ tps = 212.615105 (excluding connections establishing)
</pre>
* Data array: %%%
WAL array: %%%
@@ -164,8 +227,8 @@
* The test database started at 1.4GB, and got to at least 14GB during testing. Has this growth affected results?
* The WAL consumes large amounts of [Kernel] page cache. When moving the WAL between devices, unlinking the old files frees half of the page cache. Since the WAL is written only once and never read back, this is a waste!
* The battery-backed write cache makes write performance very erratic.
-* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occurring. Large read latencies (seconds) result. I have not yet found a way to tune this.
+* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occurring. Large read latencies (whole seconds) result. I have not yet found a way to tune this.
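The database-size observation above can be sanity-checked with quick arithmetic: pgbench's accounts table holds 100,000 rows per unit of scaling factor, each row roughly 100 bytes including its char(84) filler column. The constants and helper below are my own rough figures, not measurements from this page:

```python
# Rough on-disk size of the pgbench accounts table. Constants are
# approximations; per-row overhead, indexes and the other tables are ignored.
ROWS_PER_SCALE_UNIT = 100_000  # accounts rows per unit of scaling factor
ROW_BYTES = 100                # ~ fixed columns plus the char(84) filler

def accounts_size_gb(scaling_factor: int) -> float:
    """Estimated heap size of the accounts table, in GB."""
    return scaling_factor * ROWS_PER_SCALE_UNIT * ROW_BYTES / 1024**3

print(round(accounts_size_gb(100), 2))  # 0.93 -- the observed 1.4GB adds indexes and overhead
print(round(accounts_size_gb(600), 2))  # 5.59
```

This is consistent with the 1.4GB starting size reported above for scale 100; the growth to 14GB would then be dead tuples accumulated across runs rather than base data.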
----
Part of CategoryDiskNotes