PgBench
This is a scratch pad for some [PostgreSQL] benchmarks. The contributed utility <tt>pgbench</tt> is used for the testing. For most of the testing, the important parts of the [PostgreSQL] configuration used are:

<verbatim>
shared_buffers = 23987
max_fsm_relations = 5950
max_fsm_pages = 3207435
wal_buffers = 544
checkpoint_segments = 40
checkpoint_timeout = 900
checkpoint_warning = 300
commit_delay = 20000
commit_siblings = 3
wal_sync_method = fdatasync
enable_seqscan = off
default_with_oids = off
stats_start_collector = false
</verbatim>

Exceptions will be listed as the tests are performed.

The <tt>pgbench</tt> test database was created with the <tt>-s100</tt> scale factor option. This results in a fresh database of about 1.4GB. Consecutive runs of <tt>pgbench</tt> grow the database, however.

All test runs were executed with the <tt>-c100</tt> option for 100 connections. The number of transactions per connection was adjusted as needed to give a stable test result without obvious effects of caching; typical settings were <tt>-t100</tt> to <tt>-t1000</tt>.

The <tt>pgbench</tt> client was actually run over a 100Mbit, full-duplex network connection from a client machine for most of the testing. Running <tt>pgbench</tt> remotely did not measurably degrade performance. The client machine is a dual 3.06GHz Xeon running Linux 2.4.27.

The base hardware:

* [HP] DL380 G4
* Dual 3.20GHz Xeon, 1MB L2 [Cache], 800MHz [FSB], HyperThreading disabled
* 1GB DDR2-400 (PC2-3200) registered [ECC] memory
* Broadcom PCI-X onboard [NIC]
* ~SmartArray 6i onboard [RAID] controller
* Battery-backed write cache enabled

The base software:

* LinuxKernel 2.4.27 from [Debian]’s <tt>kernel-image-2.4.27-2-686-smp</tt> [Package]
* Using [Ext3] with <tt>ordered</tt> data mode

On with the testing!
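As a reading aid, the sketch below shows how a database at this scale factor is created and how a typical run is invoked. The exact command lines are not recorded on this page, so the database name <tt>pgbench</tt> and the host <tt>dbserver</tt> are assumptions; the options match those described above.

<pre>
# Create and populate the test database at scale factor 100 (roughly 1.4GB fresh).
# The database name "pgbench" and the host "dbserver" are assumptions.
createdb -h dbserver pgbench
pgbench -i -s 100 -h dbserver pgbench

# A typical run: 100 client connections, 1000 transactions per connection,
# executed from the remote client machine over the 100Mbit network.
pgbench -c 100 -t 1000 -h dbserver pgbench
</pre>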
!! Results: 4-disk configurations

* Data array: RAID5, 4x 72GB 10k RPM%%% WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 100
number of transactions actually processed: 10000/10000
tps = 132.257337 (including connections establishing)
tps = 141.908320 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%% WAL array: On data array%%% Other notes: <tt>commit_delay</tt> disabled%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 100
number of transactions actually processed: 10000/10000
tps = 135.567199 (including connections establishing)
tps = 146.354640 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%% WAL array: On data array%%% Other notes: battery-backed write cache disabled%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 50
number of transactions actually processed: 5000/5000
tps = 76.678506 (including connections establishing)
tps = 83.263195 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 10k RPM%%% WAL array: On data array%%% Other notes: Battery-backed write cache and <tt>commit_delay</tt> disabled%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 50
number of transactions actually processed: 5000/5000
tps = 50.434271 (including connections establishing)
tps = 53.195151 (excluding connections establishing)
</pre>
* Data array: RAID1, 2x 72GB 10k RPM%%% WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 217.737758 (including connections establishing)
tps = 220.277597 (excluding connections establishing)
</pre>
* Data array: RAID1+0, 4x 72GB 15k RPM%%% WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 325.140579 (including connections establishing)
tps = 330.843403 (excluding connections establishing)

scaling factor: 600
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 284.662951 (including connections establishing)
tps = 285.127666 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 15k RPM%%% WAL array: On data array%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 192.430583 (including connections establishing)
tps = 194.404205 (excluding connections establishing)

scaling factor: 600
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 189.203382 (including connections establishing)
tps = 189.379783 (excluding connections establishing)
</pre>
* Data array: RAID1, 2x 72GB 15k RPM%%% WAL array: RAID1, 2x 72GB 15k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 263.185661 (including connections establishing)
tps = 266.928392 (excluding connections establishing)
</pre>
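Several of the configurations above and below place the WAL on a dedicated RAID1 array. The exact steps used to move the WAL are not recorded on this page; a minimal sketch of the usual approach for [PostgreSQL] of this era (symlinking <tt>pg_xlog</tt> out of the data directory while the server is stopped) follows, with the data directory and mount point as assumptions.

<pre>
# Sketch only: relocate the WAL to a dedicated array by symlinking pg_xlog.
# The data directory /var/lib/postgres/data and mount point /mnt/wal are assumptions.
pg_ctl -D /var/lib/postgres/data stop
mv /var/lib/postgres/data/pg_xlog /mnt/wal/pg_xlog
ln -s /mnt/wal/pg_xlog /var/lib/postgres/data/pg_xlog
pg_ctl -D /var/lib/postgres/data start
</pre>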
!! Results: 6-disk configurations

* Data array: RAID1+0, 4x 72GB 15k RPM%%% WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 2000
number of transactions actually processed: 200000/200000
tps = 409.561669 (including connections establishing)
tps = 414.078634 (excluding connections establishing)

scaling factor: 600
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 340.756686 (including connections establishing)
tps = 341.404543 (excluding connections establishing)
</pre>
* Data array: RAID5, 4x 72GB 15k RPM%%% WAL array: RAID1, 2x 72GB 10k RPM%%%
<pre>
scaling factor: 100
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 276.581309 (including connections establishing)
tps = 280.727719 (excluding connections establishing)

scaling factor: 600
number of clients: 100
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
tps = 212.377629 (including connections establishing)
tps = 212.615105 (excluding connections establishing)
</pre>

!!! Other observations

* The test database started at 1.4GB and grew to at least 14GB during testing. Has this growth affected the results?
* The WAL consumes large amounts of [Kernel] page cache. When the WAL is moved between devices and the old files are unlinked, half of the page cache is freed. Since the WAL is written only once and never read back, this is a waste!
* The battery-backed write cache makes write performance very erratic.
* The [HP] ~SmartArray hardware (or perhaps driver) tends to block reads while cached writes are occurring. Large read latencies (whole seconds) result. I have not yet found a way to tune this.

----
Part of CategoryDiskNotes