Differences between version 4 and previous revision of WAL.
Newer page: version 4, last edited on Thursday, June 22, 2006 10:59:59 pm by GuyThornley
Older page: version 3, last edited on Thursday, June 22, 2006 10:38:46 pm by GuyThornley
@@ -49,10 +49,10 @@
Many people extol the virtue of placing the WAL on its own device, separate from the main data files. After performing some extensive PgBench testing, this author questions that advice: the performance increase appears to be related more to the increased number of spindles than to the specific disk configuration. Compare the 4-disk RAID1+0 and dual RAID1 configurations. Those tests, however, were performed using a battery-backed write cache; it is entirely possible that without a write cache the <tt>fsync()</tt> seek latency would significantly impact the main data file I/O.
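The exact PgBench invocation used for those tests is not recorded here; purely as an illustration of the kind of run involved, a scale-100 initialisation followed by a 10-client throughput test might look like:
<tt>pgbench -i -s 100 pgbench</tt>%%%
<tt>pgbench -c 10 -t 1000 pgbench</tt>%%%
(The trailing <tt>pgbench</tt> is simply an assumed database name; <tt>-i</tt> initialises the test tables, <tt>-s</tt> sets the scale factor, <tt>-c</tt> the client count and <tt>-t</tt> the transactions per client.)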
Some additional configuration notes:
-<tt>commit_delay</tt>%%%
-<tt>commit_siblings</tt>%%%
+<tt>[commit_delay | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-COMMIT-DELAY]</tt>%%%
+<tt>[commit_siblings | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-COMMIT-SIBLINGS]</tt>%%%
These settings trade <tt>COMMIT</tt> latency for throughput. Increasing <tt>commit_delay</tt> has the most notable effect when:
# There is no battery-backed write cache
# There are a lot of small transactions
# The WAL device has a high <tt>fsync()</tt> latency
@@ -62,18 +62,18 @@
<tt>commit_delay = 20000</tt>%%%
<tt>commit_siblings = 3</tt>%%%
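For reference, <tt>commit_delay</tt> is specified in microseconds, so the value above waits up to 20 ms before flushing the WAL, and the delay is only taken when at least <tt>commit_siblings</tt> other transactions are open at commit time. The active values can be checked from <tt>psql</tt>, for example:
<tt>SHOW commit_delay;</tt>%%%
<tt>SHOW commit_siblings;</tt>%%%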
-<tt>wal_buffers</tt>%%%
+<tt>[wal_buffers | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-WAL-BUFFERS]</tt>%%%
This setting controls the upper bound on the size of <tt>write()</tt> requests to WAL segment files. Modern operating systems and disks work very well with writes in the range of 128 kiB to 512 kiB, often reaching peak I/O bandwidth with <tt>write()</tt> requests in that range.
Before generating some new WAL activity, [PostgreSQL] writes out the WAL buffers if more than half are full. Given a default block size of 8 kiB, and the disk performance above, this suggests that <tt>wal_buffers = 128</tt> is a good starting point. The default setting of <tt>8</tt> is surely too small for most systems.
Doing <tt>COMMIT</tt> both writes out all the used WAL buffers and issues an expensive <tt>fsync()</tt>. Thus small transactions come with a heavy I/O penalty, and the WAL buffers are wasted. Using <tt>commit_delay</tt> may help significantly in this situation.
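To make the arithmetic explicit: half of 128 buffers at 8 kiB each is a 512 kiB <tt>write()</tt>, right at the top of the range mentioned above. An illustrative (not prescriptive) <tt>postgresql.conf</tt> line would therefore be:
<tt>wal_buffers = 128</tt>%%%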
-<tt>checkpoint_segments</tt>%%%
-<tt>checkpoint_timeout</tt>%%%
-<tt>checkpoint_warning</tt>%%%
+<tt>[checkpoint_segments | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-CHECKPOINT-SEGMENTS]</tt>%%%
+<tt>[checkpoint_timeout | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-CHECKPOINT-TIMEOUT]</tt>%%%
+<tt>[checkpoint_warning | http://www.postgresql.org/docs/8./static/runtime-config.html#GUC-CHECKPOINT-WARNING]</tt>%%%
Good settings for these parameters are particularly hard to determine. On one hand, you want to allow [PostgreSQL] and the operating system to perform write-caching of the main data files; on the other hand, you want to constrain crash-recovery times to reasonable levels.
Current recommendations are to have checkpoints occur no more frequently than every 5 minutes. Thus the settings