
Differences between version 28 and version 17, the predecessor of the previous major change, of PgBench.


Newer page: version 28, last edited on Saturday, June 17, 2006 12:18:06 am by GuyThornley
Older page: version 17, last edited on Thursday, June 15, 2006 8:22:56 pm by AristotlePagaltzis
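
The parameter/result blocks quoted in the hunks below are standard pgbench output. For reference, runs of roughly this shape would produce them. This is a sketch only: the database name and exact flag spellings for the 2006-era pgbench are assumptions, not taken from the page itself.

```shell
# Initialise the test database at a given scaling factor. Each scaling
# unit adds 100,000 rows to pgbench_accounts; scale 100 yields on the
# order of 1.5GB of data. Database name "pgbench" is assumed.
createdb pgbench
pgbench -i -s 100 pgbench

# Run the benchmark: 100 concurrent clients, 1000 transactions each,
# matching the parameter blocks quoted in the results below.
pgbench -c 100 -t 1000 pgbench
```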
@@ -40,9 +40,9 @@
 * Using [Ext3] with <tt>ordered</tt> data mode 
  
 On with the testing! 
  
-!! Results 
+!! Results: 4-disk configurations  
  
 * Data array: RAID5, 4x 72GB 10k RPM%%% 
  WAL array: On data array%%% 
  
@@ -82,9 +82,9 @@
  </pre> 
  
 * Data array: RAID5, 4x 72GB 10k RPM%%% 
  WAL array: On data array%%% 
-  
+ Other notes: Battery-backed write cache and <tt>commit_delay</tt> disabled%%%  
  <pre> 
  scaling factor: 100 
  number of clients: 100 
  number of transactions per client: 50 
@@ -105,53 +105,88 @@
  tps = 220.277597 (excluding connections establishing) 
  </pre> 
  
 * Data array: RAID1+0, 4x 72GB 15k RPM%%% 
- WAL array: RAID1, 2x 72GB 10k RPM%%%
+ WAL array: On data array%%%
  
  <pre> 
  scaling factor: 100 
  number of clients: 100 
- number of transactions per client: 2000
- number of transactions actually processed: 200000/200000
- tps = 409.561669 (including connections establishing)
- tps = 414.078634 (excluding connections establishing)
+ number of transactions per client: 1000
+ number of transactions actually processed: 100000/100000
+ tps = 325.140579 (including connections establishing)
+ tps = 330.843403 (excluding connections establishing)
+  
+ scaling factor: 600  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 284.662951 (including connections establishing)  
+ tps = 285.127666 (excluding connections establishing) 
  </pre> 
  
-* Data array: RAID1+0, 4x 72GB 15k RPM%%%
+* Data array: RAID5, 4x 72GB 15k RPM%%%
  WAL array: On data array%%% 
  
  <pre> 
  scaling factor: 100 
  number of clients: 100 
  number of transactions per client: 1000 
  number of transactions actually processed: 100000/100000 
- tps = 325.140579 (including connections establishing)
- tps = 330.843403 (excluding connections establishing)
+ tps = 192.430583 (including connections establishing)
+ tps = 194.404205 (excluding connections establishing)
  </pre> 
  
-* Data array: RAID5, 4x 72GB 15k RPM%%%
+* Data array: RAID1, 2x 72GB 15k RPM%%%  
+ WAL array: RAID1, 2x 72GB 15k RPM%%%  
+  
+ <pre>  
+ scaling factor: 100  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 263.185661 (including connections establishing)  
+ tps = 266.928392 (excluding connections establishing)  
+ </pre>  
+  
+!! Results: 6-disk configurations  
+  
+* Data array: RAID1+0, 4x 72GB 15k RPM%%%
  WAL array: RAID1, 2x 72GB 10k RPM%%% 
  
  <pre> 
  scaling factor: 100 
+ number of clients: 100  
+ number of transactions per client: 2000  
+ number of transactions actually processed: 200000/200000  
+ tps = 409.561669 (including connections establishing)  
+ tps = 414.078634 (excluding connections establishing)  
+  
+ scaling factor: 600  
  number of clients: 100 
  number of transactions per client: 1000 
  number of transactions actually processed: 100000/100000 
- tps = 236.721312 (including connections establishing)
- tps = 239.738377 (excluding connections establishing)
+ tps = 340.756686 (including connections establishing)
+ tps = 341.404543 (excluding connections establishing)
  </pre> 
  
 * Data array: RAID5, 4x 72GB 15k RPM%%% 
- WAL array: On data array%%%
+ WAL array: RAID1, 2x 72GB 10k RPM%%%
  
  <pre> 
  scaling factor: 100 
  number of clients: 100 
  number of transactions per client: 1000 
  number of transactions actually processed: 100000/100000 
- tps = 192.430583 (including connections establishing)
- tps = 194.404205 (excluding connections establishing)
+ tps = 276.581309 (including connections establishing)
+ tps = 280.727719 (excluding connections establishing)
+  
+ scaling factor: 600  
+ number of clients: 100  
+ number of transactions per client: 1000  
+ number of transactions actually processed: 100000/100000  
+ tps = 212.377629 (including connections establishing)  
+ tps = 212.615105 (excluding connections establishing) 
  </pre> 
  
 * Data array: %%% 
  WAL array: %%% 
@@ -164,8 +199,8 @@
  
 * The test database started at 1.4GB, and got to at least 14GB during testing. Has this growth affected results? 
 * The WAL consumes large amounts of [Kernel] page cache. When moving the WAL between devices, half of the page cache is freed once the old files are unlinked. Since the WAL is written only once and never read back, caching it is a waste!
 * The battery-backed write cache makes write performance very erratic. 
-* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occurring. Large read latencies (seconds) result. I have not yet found a way to tune this.
+* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occurring. Large read latencies (whole seconds) result. I have not yet found a way to tune this.
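
The 1.4GB starting size mentioned above is consistent with pgbench's data layout, where each scaling-factor unit adds 100,000 rows to pgbench_accounts. A back-of-envelope check (the ~130 bytes/row on-disk figure is an assumed average for illustration, not measured on this system):

```shell
# Rough pgbench database-size estimate: rows-per-scale-unit is pgbench's
# layout; ~130 bytes/row (tuple plus page overhead) is an assumption.
for scale in 100 600; do
    awk -v s="$scale" 'BEGIN { printf "scale %d: ~%.1f GB\n", s, s * 100000 * 130 / 1024^3 }'
done
```

This lands near the observed 1.4GB for scale 100, and suggests the scale-600 runs start around 7GB before any growth from the test itself.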
  
 ---- 
 Part of CategoryDiskNotes