
Differences between version 18 and its predecessor of the PgBench page.


Newer page: version 18, last edited on Thursday, June 15, 2006 9:17:59 pm by GuyThornley
Older page: version 17, last edited on Thursday, June 15, 2006 8:22:56 pm by AristotlePagaltzis
@@ -150,8 +150,20 @@
  number of transactions per client: 1000 
  number of transactions actually processed: 100000/100000 
  tps = 192.430583 (including connections establishing) 
  tps = 194.404205 (excluding connections establishing) 
+ </pre>  
+  
+* Data array: RAID1, 2x 72GB 15k RPM%%%  
+ WAL array: RAID1, 2x 72GB 15k RPM%%%  
+  
+ <pre>  
+ scaling factor: 100  
+ number of clients: 100  
+ number of transactions per client: 500  
+ number of transactions actually processed: 50000/50000  
+ tps = 179.163723 (including connections establishing)  
+ tps = 182.671524 (excluding connections establishing)  
  </pre> 
  
 * Data array: %%% 
  WAL array: %%% 
@@ -164,8 +176,8 @@
  
 * The test database started at 1.4GB, and got to at least 14GB during testing. Has this growth affected results? 
 * The WAL consumes large amounts of [Kernel] page cache. When the WAL is moved between devices and the old files are unlinked, half of the page cache is freed. Since the WAL is written only once and never read back, caching it is a waste! 
 * The battery-backed write cache makes write performance very erratic. 
-* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occuring. Large read latencies (seconds) results . I have not yet found a way to tune this. 
+* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while there are cached writes occurring. Large read latencies (whole seconds) result. I have not yet found a way to tune this. 
  
 ---- 
 Part of CategoryDiskNotes