commit     634e148dcaa1ec77aba4f5eac883285e8f225268 (patch)
author     Tom Lane <tgl@sss.pgh.pa.us>  2012-11-12 22:05:21 -0500
committer  Tom Lane <tgl@sss.pgh.pa.us>  2012-11-12 22:05:21 -0500
tree       f1c78621ca7b007ae5a5eb64aeddc7e6a570d5ca  /src/backend/access/transam/xlog.c
parent     f8ffe6234a09faeaabe034643b619462df897ca9 (diff)
Fix multiple problems in WAL replay.
Most of the replay functions for WAL record types that modify more than
one page failed to lock those pages in a way that would prevent concurrent
queries from seeing inconsistent page states. This is
a hangover from coding decisions made long before Hot Standby was added,
when it was hardly necessary to acquire buffer locks during WAL replay
at all, let alone hold them for carefully-chosen periods.
The key problem was that RestoreBkpBlocks was written to hold lock on each
page restored from a full-page image for only as long as it took to update
that page. This was guaranteed to break any WAL replay function in which
there was any update-ordering constraint between pages, because even if the
nominal order of the pages is the right one, any mixture of full-page and
non-full-page updates in the same record would result in out-of-order
updates. Moreover, it wouldn't work for situations where there's a
requirement to maintain lock on one page while updating another. Failure
to honor an update ordering constraint in this way is thought to be the
cause of bug #7648 from Daniel Farina: what seems to have happened there
is that a btree page being split was rewritten from a full-page image
before the new right sibling page was written, and because lock on the
original page was not maintained it was possible for hot standby queries to
try to traverse the page's right-link to the not-yet-existing sibling page.
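
To make the ordering hazard concrete, here is a hedged sketch, not actual
PostgreSQL source, of the pre-patch shape of a replay function for a record
that creates a new page B and updates page A to point at it (the btree-split
case behind bug #7648); only RestoreBkpBlocks() is a real name:

    /*
     * Illustrative only: correctness requires B to exist before A's link
     * to it becomes visible to hot standby queries.
     */
    static void
    example_redo_prepatch(XLogRecPtr lsn, XLogRecord *record)
    {
        /*
         * Step 1: all attached full-page images were restored up front,
         * each page locked only while its image was copied in.  If A
         * carries an image, A -- including its link to B -- becomes
         * visible here, and its lock is dropped immediately.
         */
        RestoreBkpBlocks(lsn, record, false);

        /*
         * Step 2: non-full-page updates run afterwards, so B is only
         * created here.  In the window between the steps, a hot standby
         * query can follow A's right-link to the not-yet-existing B.
         */
    }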
To fix, get rid of RestoreBkpBlocks as such, and instead create a new
function RestoreBackupBlock that restores just one full-page image at a
time. This function can be invoked by WAL replay functions at the points
where they would otherwise perform non-full-page updates; in this way, the
physical order of page updates remains the same no matter which pages are
replaced by full-page images. We can then further adjust the logic in
individual replay functions if it is necessary to hold buffer locks
for overlapping periods. A side benefit is that we can simplify the
handling of concurrency conflict resolution by moving that code into the
record-type-specific functions; there's no more need to contort the code
layout to keep conflict resolution in front of the RestoreBkpBlocks call.
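
Inside a replay function, the post-patch idiom then looks roughly like the
following hedged sketch for a single page guarded by backup block 0.
RestoreBackupBlock, XLR_BKP_BLOCK, and the buffer-manager calls are real;
rnode and blkno stand in for whatever the record identifies, and the plain
LSN comparison assumes the HEAD-era integer XLogRecPtr:

    if (record->xl_info & XLR_BKP_BLOCK(0))
    {
        /* Full-page image: restoring it IS the page update, performed at
         * exactly this point in the replay sequence. */
        (void) RestoreBackupBlock(lsn, record, 0, false, false);
    }
    else
    {
        /* No image: apply the logged change, with the usual LSN interlock
         * against applying it twice. */
        Buffer      buffer = XLogReadBuffer(rnode, blkno, false);

        if (BufferIsValid(buffer))
        {
            Page        page = (Page) BufferGetPage(buffer);

            if (lsn > PageGetLSN(page))     /* not already applied? */
            {
                /* ... perform the record-specific page change ... */
                PageSetLSN(page, lsn);
                MarkBufferDirty(buffer);
            }
            UnlockReleaseBuffer(buffer);
        }
    }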
In connection with that, standardize on zero-based numbering rather than
one-based numbering for referencing the full-page images. In HEAD, I
removed the macros XLR_BKP_BLOCK_1 through XLR_BKP_BLOCK_4. They are
still there in the header files in previous branches, but are no longer
used by the code.
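
For reference, the zero-based macro in access/xlog.h amounts to the
following sketch (quoted from memory; check the actual header), with the
retired one-based names having been fixed aliases for the same four
xl_info bits:

    #define XLR_MAX_BKP_BLOCKS  4
    #define XLR_BKP_BLOCK_MASK  0x0F                /* all four bits */
    #define XLR_BKP_BLOCK(iblk) (0x08 >> (iblk))    /* iblk is 0 .. 3 */
    /* old names: XLR_BKP_BLOCK_1 == 0x08 ... XLR_BKP_BLOCK_4 == 0x01 */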
In addition, fix some other bugs identified in the course of making these
changes:
spgRedoAddNode could fail to update the parent downlink at all, if the
parent tuple is in the same page as either the old or new split tuple and
we're not doing a full-page image: it would get fooled by the LSN having
been advanced already. This would result in permanent index corruption,
not just transient failure of concurrent queries.
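
The failure mode is the standard "already applied?" interlock misfiring;
a hedged sketch of the pattern (illustrative, not the actual spgist
source):

    /*
     * If an earlier step of this same record updated another tuple on
     * the parent's page, the page LSN has already been advanced to this
     * record's LSN ...
     */
    if (lsn <= PageGetLSN(page))
        return;     /* ... so replay wrongly concludes the parent-downlink
                     * update is already done and skips it forever */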
Also, ginHeapTupleFastInsert's "merge lists" case failed to mark the old
tail page as a candidate for a full-page image; in the worst case this
could result in torn-page corruption.
heap_xlog_freeze() was inconsistent about using a cleanup lock or plain
exclusive lock: it did the former in the normal path but the latter for a
full-page image. A plain exclusive lock seems sufficient, so change to
that.
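
For reference, the two lock strengths involved, shown as the selection now
made inside RestoreBackupBlock (a hedged sketch of the choice, not the heap
redo code itself):

    if (get_cleanup_lock)
        LockBufferForCleanup(buffer);   /* exclusive, and also waits until
                                         * no other backend holds a pin */
    else
        LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); /* plain exclusive is
                                                    * enough for in-place
                                                    * tuple freezing */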
Also, remove gistRedoPageDeleteRecord(), which has been dead code since
VACUUM FULL was rewritten.
Back-patch to 9.0, where hot standby was introduced. Note however that 9.0
had a significantly different WAL-logging scheme for GIST index updates,
and it doesn't appear possible to make that scheme safe for concurrent hot
standby queries, because it can leave inconsistent states in the index even
between WAL records. Given the lack of complaints from the field, we won't
work too hard on fixing that branch.
Diffstat (limited to 'src/backend/access/transam/xlog.c')
-rw-r--r--  src/backend/access/transam/xlog.c  |  112
1 file changed, 67 insertions(+), 45 deletions(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 213d5b791cb..f4e5c471ce5 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -926,8 +926,8 @@ begin:;
 	 * loop, write_len includes the backup block data.
 	 *
 	 * Also set the appropriate info bits to show which buffers were backed
-	 * up.  The i'th XLR_SET_BKP_BLOCK bit corresponds to the i'th distinct
-	 * buffer value (ignoring InvalidBuffer) appearing in the rdata chain.
+	 * up.  The XLR_BKP_BLOCK(N) bit corresponds to the N'th distinct buffer
+	 * value (ignoring InvalidBuffer) appearing in the rdata chain.
 	 */
 	write_len = len;
 	for (i = 0; i < XLR_MAX_BKP_BLOCKS; i++)
@@ -938,7 +938,7 @@ begin:;
 		if (!dtbuf_bkp[i])
 			continue;
 
-		info |= XLR_SET_BKP_BLOCK(i);
+		info |= XLR_BKP_BLOCK(i);
 
 		bkpb = &(dtbuf_xlg[i]);
 		page = (char *) BufferGetBlock(dtbuf[i]);
@@ -3560,9 +3560,16 @@ CleanupBackupHistory(void)
 }
 
 /*
- * Restore the backup blocks present in an XLOG record, if any.
+ * Restore a full-page image from a backup block attached to an XLOG record.
  *
- * We assume all of the record has been read into memory at *record.
+ * lsn: LSN of the XLOG record being replayed
+ * record: the complete XLOG record
+ * block_index: which backup block to restore (0 .. XLR_MAX_BKP_BLOCKS - 1)
+ * get_cleanup_lock: TRUE to get a cleanup rather than plain exclusive lock
+ * keep_buffer: TRUE to return the buffer still locked and pinned
+ *
+ * Returns the buffer number containing the page.  Note this is not terribly
+ * useful unless keep_buffer is specified as TRUE.
  *
  * Note: when a backup block is available in XLOG, we restore it
  * unconditionally, even if the page in the database appears newer.
@@ -3573,15 +3580,20 @@ CleanupBackupHistory(void)
  * modifications of the page that appear in XLOG, rather than possibly
  * ignoring them as already applied, but that's not a huge drawback.
  *
- * If 'cleanup' is true, a cleanup lock is used when restoring blocks.
- * Otherwise, a normal exclusive lock is used.  During crash recovery, that's
- * just pro forma because there can't be any regular backends in the system,
- * but in hot standby mode the distinction is important.  The 'cleanup'
- * argument applies to all backup blocks in the WAL record, that suffices for
- * now.
+ * If 'get_cleanup_lock' is true, a cleanup lock is obtained on the buffer,
+ * else a normal exclusive lock is used.  During crash recovery, that's just
+ * pro forma because there can't be any regular backends in the system, but
+ * in hot standby mode the distinction is important.
+ *
+ * If 'keep_buffer' is true, return without releasing the buffer lock and pin;
+ * then caller is responsible for doing UnlockReleaseBuffer() later.  This
+ * is needed in some cases when replaying XLOG records that touch multiple
+ * pages, to prevent inconsistent states from being visible to other backends.
+ * (Again, that's only important in hot standby mode.)
  */
-void
-RestoreBkpBlocks(XLogRecPtr lsn, XLogRecord *record, bool cleanup)
+Buffer
+RestoreBackupBlock(XLogRecPtr lsn, XLogRecord *record, int block_index,
+				   bool get_cleanup_lock, bool keep_buffer)
 {
 	Buffer		buffer;
 	Page		page;
@@ -3589,49 +3601,59 @@ RestoreBkpBlocks(XLogRecPtr lsn, XLogRecord *record, bool cleanup)
 	char	   *blk;
 	int			i;
 
-	if (!(record->xl_info & XLR_BKP_BLOCK_MASK))
-		return;
-
+	/* Locate requested BkpBlock in the record */
 	blk = (char *) XLogRecGetData(record) + record->xl_len;
 	for (i = 0; i < XLR_MAX_BKP_BLOCKS; i++)
 	{
-		if (!(record->xl_info & XLR_SET_BKP_BLOCK(i)))
+		if (!(record->xl_info & XLR_BKP_BLOCK(i)))
 			continue;
 
 		memcpy(&bkpb, blk, sizeof(BkpBlock));
 		blk += sizeof(BkpBlock);
 
-		buffer = XLogReadBufferExtended(bkpb.node, bkpb.fork, bkpb.block,
-										RBM_ZERO);
-		Assert(BufferIsValid(buffer));
-		if (cleanup)
-			LockBufferForCleanup(buffer);
-		else
-			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+		if (i == block_index)
+		{
+			/* Found it, apply the update */
+			buffer = XLogReadBufferExtended(bkpb.node, bkpb.fork, bkpb.block,
+											RBM_ZERO);
+			Assert(BufferIsValid(buffer));
+			if (get_cleanup_lock)
+				LockBufferForCleanup(buffer);
+			else
+				LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
-		page = (Page) BufferGetPage(buffer);
+			page = (Page) BufferGetPage(buffer);
 
-		if (bkpb.hole_length == 0)
-		{
-			memcpy((char *) page, blk, BLCKSZ);
-		}
-		else
-		{
-			memcpy((char *) page, blk, bkpb.hole_offset);
-			/* must zero-fill the hole */
-			MemSet((char *) page + bkpb.hole_offset, 0, bkpb.hole_length);
-			memcpy((char *) page + (bkpb.hole_offset + bkpb.hole_length),
-				   blk + bkpb.hole_offset,
-				   BLCKSZ - (bkpb.hole_offset + bkpb.hole_length));
-		}
+			if (bkpb.hole_length == 0)
+			{
+				memcpy((char *) page, blk, BLCKSZ);
+			}
+			else
+			{
+				memcpy((char *) page, blk, bkpb.hole_offset);
+				/* must zero-fill the hole */
+				MemSet((char *) page + bkpb.hole_offset, 0, bkpb.hole_length);
+				memcpy((char *) page + (bkpb.hole_offset + bkpb.hole_length),
+					   blk + bkpb.hole_offset,
+					   BLCKSZ - (bkpb.hole_offset + bkpb.hole_length));
+			}
+
+			PageSetLSN(page, lsn);
+			PageSetTLI(page, ThisTimeLineID);
+			MarkBufferDirty(buffer);
 
-		PageSetLSN(page, lsn);
-		PageSetTLI(page, ThisTimeLineID);
-		MarkBufferDirty(buffer);
-		UnlockReleaseBuffer(buffer);
+			if (!keep_buffer)
+				UnlockReleaseBuffer(buffer);
+
+			return buffer;
+		}
 
 		blk += BLCKSZ - bkpb.hole_length;
 	}
+
+	/* Caller specified a bogus block_index */
+	elog(ERROR, "failed to restore block_index %d", block_index);
+	return InvalidBuffer;		/* keep compiler quiet */
 }
 
 /*
@@ -3660,7 +3682,7 @@ RecordIsValid(XLogRecord *record, XLogRecPtr recptr, int emode)
 		{
 			uint32		blen;
 
-			if (!(record->xl_info & XLR_SET_BKP_BLOCK(i)))
+			if (!(record->xl_info & XLR_BKP_BLOCK(i)))
 				continue;
 
 			memcpy(&bkpb, blk, sizeof(BkpBlock));
@@ -8714,8 +8736,8 @@ xlog_outrec(StringInfo buf, XLogRecord *record)
 
 	for (i = 0; i < XLR_MAX_BKP_BLOCKS; i++)
 	{
-		if (record->xl_info & XLR_SET_BKP_BLOCK(i))
-			appendStringInfo(buf, "; bkpb%d", i + 1);
+		if (record->xl_info & XLR_BKP_BLOCK(i))
+			appendStringInfo(buf, "; bkpb%d", i);
 	}
 
 	appendStringInfo(buf, ": %s", RmgrTable[record->xl_rmid].rm_name);
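
Finally, a hedged usage sketch of the keep_buffer option this diff
introduces: a replay function handling a record that touches two ordered
pages can hold the first page's lock across the second page's update (only
RestoreBackupBlock, XLR_BKP_BLOCK, and the buffer-manager calls are real
names here):

    Buffer      lbuf = InvalidBuffer;

    /* Page 0 must stay locked while its dependent page is updated. */
    if (record->xl_info & XLR_BKP_BLOCK(0))
        lbuf = RestoreBackupBlock(lsn, record, 0, false, true); /* keep lock */
    /* else: read, lock, and update page 0 the ordinary way, keeping the lock */

    /* ... update the dependent second page here, with lbuf still held ... */

    if (BufferIsValid(lbuf))
        UnlockReleaseBuffer(lbuf);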