Differences between version 3 and previous revision of WAL.
Newer page: version 3 | Last edited on Thursday, June 22, 2006 10:38:46 pm | by GuyThornley
Older page: version 2 | Last edited on Thursday, June 22, 2006 7:13:24 pm | by GuyThornley
@@ -19,11 +19,14 @@
Using <tt>fsync()</tt> on the modified data files does not provide the required durability. During <tt>fsync()</tt> a partial write may still result, corrupting <i>previously committed data</i>. This is a very real possibility under heavy IO and an OS crash or power outage. Furthermore, if many different data files were touched by a transaction, all the data files need to be flushed using <tt>fsync()</tt>, resulting in very poor performance.
The WAL solves these problems:
# All updates are logged sequentially, into the same file. Sequential write performance of modern disks is very good, and <tt>fsync()</tt> needs to be called only once
+# The WAL records are guaranteed to be on disk before the data files are written
# The original database page is logged occasionally, and modifications to that base page are logged; thus any partial write to the main data files can be recovered correctly
-# The actual data files can be written out on as as-needed basis, and the OS is left great liberty how it writes them out
+# The data files can be written out on an as-needed basis, and the OS is given great liberty in how it writes them out
# The WAL content itself is protected by an application level [CRC] performed by [PostgreSQL]
+
+It is important to understand that before any data page can be written back to the filesystem, the appropriate WAL records for the changes must be on disk. When removing a page from the shared buffers, [PostgreSQL] ensures the WAL record for the last update is actually on the disk.
!! Segments
The WAL, conceptually, is an infinite sequence of blocks which are numbered starting at zero. In [PostgreSQL] the default block size is 8kiB, and is also used for all heap and index data file pages. The first block is created when the database system is initialised at install time. To ease WAL file management, this infinite sequence of blocks is divided into segments which are 2048 blocks long. In the default configuration, this creates 16MiB files. New segments can be created quickly, and old segments can easily be unlinked or moved elsewhere for archival, as required.
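For illustration, with the default 8kiB block size each segment holds <tt>2048 * 8kiB = 16MiB</tt> of WAL. The segments appear as ordinary files in the <tt>pg_xlog</tt> directory under the data directory, named with 24 hexadecimal digits encoding the timeline and position in the sequence, for example <tt>000000010000000000000042</tt>; the exact names on any given system will differ.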
@@ -42,23 +45,36 @@
!! Tuning the WAL
Several configuration parameters control WAL behaviour. They are all documented in the online [PostgreSQL documentation | http://www.postgresql.org/docs/]. A good overview of the specific configuration parameters is available [here | http://www.postgresql.org/docs/8.0/static/wal-configuration.html].
-Some additional notes:
+Many people extol the virtue of placing the WAL on its own device, separate from the main data files. After performing some extensive PgBench testing, this author questions that advice; the performance increase appears to be related more to the increase in the number of spindles than to the specific disk configuration. Compare the 4-disk RAID1+0 and dual RAID1 configurations. Those tests, however, were performed using a battery-backed write cache; it is entirely possible that without a write cache the <tt>fsync()</tt> seek latency would significantly impact the main data file I/O.
+
+Some additional configuration notes:
<tt>commit_delay</tt>%%%
<tt>commit_siblings</tt>%%%
- Do not forget using these will increase the per-transaction <tt>COMMIT</tt> latency.
+ These settings trade throughput for <tt>COMMIT</tt> latency. The circumstances under which increasing <tt>commit_delay</tt> has the most notable effect are when:
+ # There is no battery-backed write cache
+ # There are a lot of small transactions
+ # The WAL device has a high <tt>fsync()</tt> latency
+ # There are a large number of concurrent connections
+
+ The author's favourite settings for these parameters are:
+
+ <tt>commit_delay = 20000</tt>%%%
+ <tt>commit_siblings = 3</tt>%%%
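 Note that <tt>commit_delay</tt> is specified in microseconds, so <tt>commit_delay = 20000</tt> asks the server to pause up to 20ms before the <tt>fsync()</tt> when at least <tt>commit_siblings</tt> other transactions are active, in the hope that their commits can be flushed by the same <tt>fsync()</tt>.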
<tt>wal_buffers</tt>%%%
- The larger the <tt>write()</tt> passed to the kernel, the faster the WAL will run. Using <tt>commit_delay</tt> and several concurrent writers allows a large quantity of WAL activity to be written and flushed at once.
+ This setting controls the upper bound on the size of <tt>write()</tt> requests to WAL segment files. Modern operating systems and disks work very well with writes in the range of 128kiB to 512kiB, often reaching peak I/O bandwidth with <tt>write()</tt> requests in that range.
- As of [PostgreSQL] 8., the WAL buffers are written out when half of them are filled. Given that modern operating systems and disks work very well with writes which are 128kiB - 512kiB in size, setting <tt>wal_buffers=128</tt> may be optimal. The default setting of <tt>8</tt> is surely too small for most systems.
+ Before generating some new WAL activity, [PostgreSQL] writes out the WAL buffers if more than half are full. Given a default block size of 8kiB, and the disk performance above, this suggests that <tt>wal_buffers = 128</tt> is a good starting point. The default setting of <tt>8</tt> is surely too small for most systems.
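 As a rough worked example: <tt>wal_buffers = 128</tt> gives <tt>128 * 8kiB = 1MiB</tt> of WAL buffers, and writing them out once more than half are full yields <tt>write()</tt> requests of roughly 512kiB, at the top of the range described above.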
+
+ Doing <tt>COMMIT</tt> both writes out all the used WAL buffers and issues an expensive <tt>fsync()</tt>. Thus small transactions come with a heavy I/O penalty, and the WAL buffers are wasted. Using <tt>commit_delay</tt> may help significantly in this situation.
<tt>checkpoint_segments</tt>%%%
<tt>checkpoint_timeout</tt>%%%
<tt>checkpoint_warning</tt>%%%
- Good settings for these parameters are particular hard to determine. On one hand, you want to allow [PostgreSQL] and the operating system perform write-caching of the main data files; on the other hand, you want to constrain crash-recovery times to reasonable levels.
+ Good settings for these parameters are particularly hard to determine. On one hand, you want to allow [PostgreSQL] and the operating system to perform write-caching of the main data files; on the other hand, you want to constrain crash-recovery times to reasonable levels.
Current recommendations are to have checkpoints occur no more frequently than every 5 minutes. Thus the settings:
<tt>checkpoint_timeout = 600</tt>%%%
@@ -68,9 +84,11 @@
It is impossible to tune <tt>checkpoint_segments</tt> for worst-case WAL activity, such as a bulk load using the <tt>COPY FROM STDIN</tt> [SQL] command. This might generate 30MiB/s or more of WAL activity, and 30MiB/s for 5 minutes is 8.7GiB of WAL data! The best suggestion appears to be to let the server run for a while, then study the timestamps on the WAL segment files in <tt>pg_xlog</tt> to calculate the WAL write rate.
Once the WAL write rate is deduced, the number of log segments required for 10 minutes of WAL activity can be calculated as <tt>checkpoint_segments = (wal_write_rate * 600) / 16</tt>
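 For illustration only (the real rate must be measured as described above), a server writing WAL at 2MiB/s would need <tt>checkpoint_segments = (2 * 600) / 16 = 75</tt>.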
+
+ It is the author's experience that crash recovery from the WAL is a fairly slow operation; I/O rates in the region of 2MiB/s to 4MiB/s should be expected. The <tt>checkpoint_segments</tt> setting can therefore be capped according to the longest permissible recovery time. For example, to complete recovery within 5 minutes at 3.0MiB/s, the maximum setting of <tt>checkpoint_segments</tt> is <tt>(300 * 3) / 16 = 56</tt>
!! Incremental backup using WAL segments
<i>Note: The author has __not__ attempted incremental backups with this mechanism!</i>
Given the principles of WAL operation, it should be possible to achieve incremental on-line backups using WAL segments. As of [PostgreSQL] 8.0, the required configuration directives and server-side support have been implemented. The full procedure is documented [here | http://www.postgresql.org/docs/8.0/static/backup-online.html].
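As a minimal sketch of the archiving side only (the destination path here is purely illustrative; the base backup and restore steps are covered in the documentation linked above), the server is told to copy each completed segment out of <tt>pg_xlog</tt> with the <tt>archive_command</tt> directive in <tt>postgresql.conf</tt>:

<tt>archive_command = 'cp %p /mnt/server/wal_archive/%f'</tt>%%%

where [PostgreSQL] substitutes <tt>%p</tt> with the path to the completed segment file and <tt>%f</tt> with its file name.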