
Differences between version 15 and version 3 of PgBench.


Newer page: version 15 Last edited on Thursday, June 15, 2006 8:05:53 pm by AristotlePagaltzis
Older page: version 3 Last edited on Thursday, June 15, 2006 4:11:18 pm by PerryLorier
@@ -1,5 +1,5 @@
-This is a scratch pad for some [PostgreSQL | http://www.postgresql.org ] benchmarks. The contributed utility <tt>pgbench</tt> is used for the testing. 
+This is a scratch pad for some [PostgreSQL] benchmarks. The contributed utility <tt>pgbench</tt> is used for the testing. 
  
 For most of the testing, important parts of the postgres configuration used are: 
  <verbatim> 
  shared_buffers = 23987 
@@ -12,8 +12,12 @@
  checkpoint_warning = 300 
  commit_delay = 20000 
  commit_siblings = 3 
  wal_sync_method = fdatasync 
+  
+ enable_seqscan = off  
+ default_with_oids = off  
+ stats_start_collector = false  
  </verbatim> 
  
 Exceptions will be listed as the tests are performed. 
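
The <tt>shared_buffers</tt> setting above is counted in 8kB buffer pages, so it works out to roughly 187MB of shared memory:

```shell
# shared_buffers is in units of 8 kB buffer pages.
# 23987 pages * 8 kB = 191896 kB, i.e. about 187 MB.
echo "$((23987 * 8 / 1024)) MB"
```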
  
@@ -24,13 +28,15 @@
 The base hardware is: 
  
  HP DL380 G4%%% 
  Dual 3.20GHz Xeon, 1MB L2 cache, 800MHz FSB, Hyperthreading disabled%%% 
 - 1GB DDR-333 (PC2700) registered ECC memory%%% 
 + 1GB DDR2-400 (PC2-3200) registered ECC memory%%% 
  Broadcom PCI-X onboard network adapters%%% 
- Battery-backed write cache enabled. 
+ ~SmartArray 6i onboard%%%  
+ Battery-backed write cache enabled.%%%  
  
- Linux 2.4.27 (from Debian <tt>kernel-image-2.4.27-2-686-smp</tt>) 
+ Linux 2.4.27 (from Debian <tt>kernel-image-2.4.27-2-686-smp</tt>)%%%  
+ Using ext3 with 'ordered' data mode%%%  
  
 On with the testing! 
  
 ---- 
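
The result listings below all report a scaling factor of 100 and 100 clients, so each test was presumably driven by <tt>pgbench</tt> invocations along these lines (the database name is an assumption, not taken from the original runs):

```shell
# Create and initialise the pgbench tables at scaling factor 100;
# the database name "pgbench" is assumed, not from the original tests.
createdb pgbench
pgbench -i -s 100 pgbench

# Run the benchmark: 100 concurrent clients, 1000 transactions each
# (some of the runs below used -t 2000 instead).
pgbench -c 100 -t 1000 pgbench
```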
@@ -120,8 +126,53 @@
  number of transactions per client: 2000 
  number of transactions actually processed: 200000/200000 
  tps = 409.561669 (including connections establishing) 
  tps = 414.078634 (excluding connections establishing) 
+ </pre>  
+  
+----  
+  
+Data array: RAID1+0, 4x 72GB 15k RPM%%%  
+WAL array: On data array%%%  
+Other notes: %%%  
+  
+ <pre>  
+ scaling factor: 100  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 325.140579 (including connections establishing)  
+ tps = 330.843403 (excluding connections establishing)  
+ </pre>  
+  
+----  
+  
+Data array: RAID5, 4x 72GB 15k RPM%%%  
+WAL array: RAID1, 2x 72GB 10k RPM%%%  
+Other notes: %%%  
+  
+ <pre>  
+ scaling factor: 100  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 236.721312 (including connections establishing)  
+ tps = 239.738377 (excluding connections establishing)  
+ </pre>  
+  
+----  
+  
+Data array: RAID5, 4x 72GB 15k RPM%%%  
+WAL array: On data array%%%  
+Other notes: %%%  
+  
+ <pre>  
+ scaling factor: 100  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 192.430583 (including connections establishing)  
+ tps = 194.404205 (excluding connections establishing)  
  </pre> 
  
 ---- 
  
@@ -132,5 +183,14 @@
  <pre> 
  </pre> 
  
 ---- 
+Other observations  
+* The test database started at 1.4GB and grew to at least 14GB during testing; has this growth affected the results?  
+* The WAL consumes large amounts of kernel page cache. When the WAL was moved between devices and the old files were unlinked,  
+ half of the page cache was freed. The WAL is never read and is written only once, so caching it is a waste!  
+* The BBWC makes write performance very erratic.  
+* The HP SmartArray hardware (or perhaps its driver) tends to block reads while cached writes are occurring, producing large read latencies (seconds). I have not yet found a way to tune this.  
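
One way to avoid that waste would be to drop the WAL's pages from the page cache after writing, via <tt>posix_fadvise(POSIX_FADV_DONTNEED)</tt>; as a hand-run illustration, newer GNU <tt>dd</tt> exposes this as <tt>oflag=nocache</tt> (the file name here is a stand-in, not a real WAL segment):

```shell
# Write a 16 MB WAL-sized dummy file, force it to disk, and ask the
# kernel (posix_fadvise DONTNEED under the hood) to drop its pages
# from the page cache. Requires a reasonably recent GNU coreutils dd.
dd if=/dev/zero of=/tmp/walseg.test bs=8k count=2048 oflag=nocache conv=fsync status=none
rm -f /tmp/walseg.test
```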
+  
+----  
+  
 CategoryDiskNotes