Differences between version 17 and version 15 (the predecessor to the previous major change) of PgBench.
Newer page: version 17, last edited on Thursday, June 15, 2006 8:22:56 pm by AristotlePagaltzis
Older page: version 15, last edited on Thursday, June 15, 2006 8:05:53 pm by AristotlePagaltzis
@@ -1,7 +1,7 @@
This is a scratch pad for some [PostgreSQL] benchmarks. The contributed utility <tt>pgbench</tt> is used for the testing.
-For most of the testing, important parts of the postgres configuration used are:
+For most of the testing, important parts of the [PostgreSQL] configuration used are:
<verbatim>
shared_buffers = 23987
max_fsm_relations = 5950
max_fsm_pages = 3207435
@@ -24,27 +24,28 @@
The <tt>pgbench</tt> test database was created with the <tt>-s100</tt> scale factor option. This results in a fresh database of about 1.4GB; consecutive runs of <tt>pgbench</tt> grow the database, however. All test runs were executed with the <tt>-c100</tt> option for 100 connections. The number of transactions per connection was adjusted as needed to give a stable test result, without obvious effects of caching; typical settings were <tt>-t100</tt> to <tt>-t1000</tt>.
The <tt>pgbench</tt> client was actually run over a 100Mbit, full-duplex network connection from a client machine for most of the testing. Running <tt>pgbench</tt> remotely did not measurably degrade performance. The client machine is a dual 3.06GHz Xeon running Linux 2.4.27.
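For reference, these runs correspond to invocations along the following lines. At scale factor 100, <tt>pgbench -i</tt> creates 10,000,000 rows in the <tt>accounts</tt> table; the database name and host below are illustrative placeholders, not recorded settings.
<verbatim>
# initialise the test database at scale factor 100
pgbench -i -s 100 bench
# run from the client machine: 100 connections, 1000 transactions each
pgbench -h dbserver -c 100 -t 1000 bench
</verbatim>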
-The base hardware is:
+The base hardware:
-HP DL380 G4%%%
-Dual 3.20GHz Xeon, 1MB L2 cache, 800MHz FSB, Hyperthreading disabled%%%
-1GB DDR2-400 (PC2-3200) registered ECC memory%%%
-Broadcom PCI-X onboard network adapters%%%
-~SmartArray 6i onboard%%%
-Battery-backed write cache enabled.%%%
+* [HP] DL380 G4
+* Dual 3.20GHz Xeon, 1MB L2 [Cache], 800MHz [FSB], HyperThreading disabled
+* 1GB DDR2-400 (PC2-3200) registered [ECC] memory
+* Broadcom PCI-X onboard [NIC]
+* ~SmartArray 6i onboard [RAID] controller
+* Battery-backed write cache enabled
-Linux 2.4.27 (from Debian <tt>kernel-image-2.4.27-2-686-smp</tt>)%%%
-Using ext3 with 'ordered' data mode%%%
+The base software:
+
+* LinuxKernel 2.4.27 from [Debian]’s <tt>kernel-image-2.4.27-2-686-smp</tt> [Package]
+* Using [Ext3] with <tt>ordered</tt> data mode
On with the testing!
-----
+!! Results
-Data array: RAID5, 4x 72GB 10k RPM%%%
-WAL array: On data array%%%
-Other notes: %%%
+* Data array: RAID5, 4x 72GB 10k RPM%%%
+ WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -53,13 +54,11 @@
tps = 132.257337 (including connections establishing)
tps = 141.908320 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID5, 4x 72GB 10k RPM%%%
-WAL array: On data array%%%
-Other notes: <tt>commit_delay</tt> disabled%%%
+* Data array: RAID5, 4x 72GB 10k RPM%%%
+ WAL array: On data array%%%
+ Other notes: <tt>commit_delay</tt> disabled%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -68,13 +67,11 @@
tps = 135.567199 (including connections establishing)
tps = 146.354640 (excluding connections establishing)
</pre>
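(For reference: <tt>commit_delay</tt> is set in <tt>postgresql.conf</tt>, and a value of zero disables the pre-commit pause. The non-zero baseline value used in the other runs is not recorded in the configuration excerpt at the top of this page; the snippet below only shows the disabled setting.)
<verbatim>
# microseconds to pause before flushing the WAL at commit; 0 disables it
commit_delay = 0
</verbatim>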
-----
-
-Data array: RAID5, 4x 72GB 10k RPM%%%
-WAL array: On data array%%%
-Other notes: BBWC disabled%%%
+* Data array: RAID5, 4x 72GB 10k RPM%%%
+ WAL array: On data array%%%
+ Other notes: battery-backed write cache disabled%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -83,13 +80,10 @@
tps = 76.678506 (including connections establishing)
tps = 83.263195 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID5, 4x 72GB 10k RPM%%%
-WAL array: On data array%%%
-Other notes: BBWC disabled, <tt>commit_delay</tt> disabled%%%
+* Data array: RAID5, 4x 72GB 10k RPM%%%
+ WAL array: On data array%%%
+ Other notes: battery-backed write cache disabled, <tt>commit_delay</tt> disabled%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -98,13 +92,10 @@
tps = 50.434271 (including connections establishing)
tps = 53.195151 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID1, 2x 72GB 10k RPM%%%
-WAL array: RAID1, 2x 72GB 10k RPM%%%
-Other notes: %%%
+* Data array: RAID1, 2x 72GB 10k RPM%%%
+ WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -113,13 +104,10 @@
tps = 217.737758 (including connections establishing)
tps = 220.277597 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID1+, 4x 72GB 15k RPM%%%
-WAL array: RAID1, 2x 72GB 10k RPM%%%
-Other notes: %%%
+* Data array: RAID1+, 4x 72GB 15k RPM%%%
+ WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -128,13 +116,10 @@
tps = 409.561669 (including connections establishing)
tps = 414.078634 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID1+, 4x 72GB 15k RPM%%%
-WAL array: On data array%%%
-Other notes: %%%
+* Data array: RAID1+, 4x 72GB 15k RPM%%%
+ WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -143,13 +128,10 @@
tps = 325.140579 (including connections establishing)
tps = 330.843403 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID5, 4x 72GB 15k RPM%%%
-WAL array: RAID1, 2x 72GB 10k RPM%%%
-Other notes: %%%
+* Data array: RAID5, 4x 72GB 15k RPM%%%
+ WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -158,13 +140,10 @@
tps = 236.721312 (including connections establishing)
tps = 239.738377 (excluding connections establishing)
</pre>
-----
-
-Data array: RAID5, 4x 72GB 15k RPM%%%
-WAL array: On data array%%%
-Other notes: %%%
+* Data array: RAID5, 4x 72GB 15k RPM%%%
+ WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
@@ -173,24 +152,20 @@
tps = 192.430583 (including connections establishing)
tps = 194.404205 (excluding connections establishing)
</pre>
-----
-
-Data array: %%%
-WAL array: %%%
-Other notes: %%%
+* Data array: %%%
+ WAL array: %%%
+ Other notes: %%%
<pre>
</pre>
-----
-Other observations
-* The test database started at 1.4GB, and got to at least 14GB during testing, has this growth affected results?
-* The WAL consumes large amounts of kernel page cache. When moving the WAL between devices, when the old files are unlinked,
-1/2 of the page cache is freed. The WAL is never read, and written only once, this is as waste!
-* The BBWC makes write performance very erratic
-* The HP SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occuring. Large read latencies (seconds) results. I have not yet found a way to tune this.
+!!! Other observations
+
+* The test database started at 1.4GB, and got to at least 14GB during testing. Has this growth affected the results?
+* The WAL consumes large amounts of [Kernel] page cache. When the WAL is moved between devices and the old files are unlinked, half of the page cache is freed. Since the WAL is never read and is written only once, this is a waste! (See the sketch below for how the WAL can be moved onto a separate device.)
+* The battery-backed write cache makes write performance very erratic.
+* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while cached writes are occurring, resulting in large read latencies (seconds). I have not yet found a way to tune this.
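A minimal sketch of how the WAL can be relocated onto a separate array between runs. The mount point, cluster path, and init script below are illustrative assumptions, not the actual values used in these tests.
<verbatim>
# stop the cluster before touching pg_xlog
/etc/init.d/postgresql stop
# move the WAL directory onto the dedicated array and symlink it back
mv /var/lib/postgres/data/pg_xlog /mnt/wal/pg_xlog
ln -s /mnt/wal/pg_xlog /var/lib/postgres/data/pg_xlog
/etc/init.d/postgresql start
</verbatim>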
----
-
-CategoryDiskNotes
+Part of CategoryDiskNotes