path: root/src/backend/storage
Commit message (Author, Age)
* Fix WAL-logging of FSM and VM truncation. (Heikki Linnakangas, 2016-10-19)

  When a relation is truncated, it is important that the FSM is truncated as well. Otherwise, after recovery, the FSM can return a page that has been truncated away, leading to errors like:

      ERROR: could not read block 28991 in file "base/16390/572026": read only 0 of 8192 bytes

  We were using MarkBufferDirtyHint() to dirty the buffer holding the last remaining page of the FSM, but during recovery, that might in fact not dirty the page, and the FSM update might be lost.

  To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty() requires us to do WAL-logging ourselves, to protect from a torn page, if checksumming is enabled.

  Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log when checksumming is enabled.

  Analysis by Pavan Deolasee.

  Discussion: <CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn=_9A@mail.gmail.com>

  Backpatch to 9.3, where we got data checksums.
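  A minimal sketch of the pattern this fix describes, assuming "buf" holds the last remaining FSM page with an exclusive lock; whether log_newpage_buffer() is the exact logging call used here is an assumption, not taken from the commit:

```c
/*
 * Sketch: dirty the surviving page with the stronger MarkBufferDirty()
 * (MarkBufferDirtyHint() can be a no-op during recovery), and when checksums
 * are enabled, WAL-log a full-page image ourselves to guard against a torn
 * page.
 */
START_CRIT_SECTION();
MarkBufferDirty(buf);
if (DataChecksumsEnabled())
    log_newpage_buffer(buf, false);     /* false: assume non-standard layout */
END_CRIT_SECTION();
```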
* Don't trust CreateFileMapping() to clear the error code on success. (Tom Lane, 2016-09-23)

  We must test GetLastError() even when CreateFileMapping() returns a non-null handle. If that value were left over from some previous system call, we might be fooled into thinking the segment already existed. Experimentation on Windows 7 suggests that CreateFileMapping() clears the error code on success, but it is not documented to do so, so let's not rely on that happening in all Windows releases.

  Amit Kapila

  Discussion: <20811.1474390987@sss.pgh.pa.us>
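  A sketch of the defensive calling pattern, with hypothetical size and name variables:

```c
/*
 * Sketch: clear the error code ourselves instead of trusting
 * CreateFileMapping() to do it, then test GetLastError() even on success.
 */
SetLastError(0);
hMap = CreateFileMapping(INVALID_HANDLE_VALUE,      /* pagefile-backed */
                         NULL,                      /* default security */
                         PAGE_READWRITE,
                         0, (DWORD) segsize,        /* assumed size variable */
                         segment_name);             /* assumed name variable */
if (hMap != NULL && GetLastError() == ERROR_ALREADY_EXISTS)
{
    /* A segment with this name pre-existed: collision, not success. */
    CloseHandle(hMap);
    hMap = NULL;
}
```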
* Avoid using PostmasterRandom() for DSM control segment ID. (Tom Lane, 2016-09-23)

  Commits 470d886c3 et al intended to fix the problem that the postmaster selected the same "random" DSM control segment ID on every start. But using PostmasterRandom() for that destroys the intended property that the delay between random_start_time and random_stop_time will be unpredictable. (Said delay is probably already more predictable than we could wish, but that doesn't mean that reducing it by a couple orders of magnitude is OK.) Revert the previous patch and add a comment warning against misuse of PostmasterRandom. Fix the original problem by calling srandom() early in PostmasterMain, using a low-security seed that will later be overwritten by PostmasterRandom.

  Discussion: <20789.1474390434@sss.pgh.pa.us>
* Use PostmasterRandom(), not random(), for DSM control segment ID. (Robert Haas, 2016-09-20)

  Otherwise, every startup gets the same "random" value, which is definitely not what was intended.
* Retry DSM control segment creation if Windows indicates access denied. (Robert Haas, 2016-09-20)

  Otherwise, attempts to run multiple postmasters on the same machine may fail, because Windows sometimes returns ERROR_ACCESS_DENIED rather than ERROR_ALREADY_EXISTS when there is an existing segment. Hitting this bug is much more likely because of another defect not fixed by this patch, namely that dsm_postmaster_startup() uses random() which returns the same value every time. But that's not a reason not to fix this.

  Kyotaro Horiguchi and Amit Kapila, reviewed by Michael Paquier

  Discussion: <CAA4eK1JyNdMeF-dgrpHozDecpDfsRZUtpCi+1AbtuEkfG3YooQ@mail.gmail.com>
* Fix copy/pasto in file identification. (Simon Riggs, 2016-09-12)

  Daniel Gustafsson
* Fix mdtruncate() to close fd.c handle of deleted segments. (Andres Freund, 2016-09-08)

  mdtruncate() forgot to FileClose() a segment's mdfd_vfd when deleting it. That led to an fd.c handle to a truncated file being kept open until backend exit.

  The issue appears to have been introduced way back in 1a5c450f3024ac5; before that, the handle was closed inside FileUnlink().

  The impact of this bug is limited: only VACUUM and ON COMMIT TRUNCATE for temporary tables truncate files in place (i.e. TRUNCATE itself is not affected), and the relation has to be bigger than 1GB. The consequences of a leaked fd.c handle aren't severe either.

  Discussion: <20160908220748.oqh37ukwqqncbl3n@alap3.anarazel.de>
  Backpatch: all supported releases
* Don't CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch. (Tom Lane, 2016-08-01)

  This coding pattern creates a race condition, because if an interesting interrupt happens after we've checked InterruptPending but before we reset our latch, the latch-setting done by the signal handler would get lost, and then we might block at WaitLatch in the next iteration without ever noticing the interrupt condition. You can put the CHECK_FOR_INTERRUPTS before WaitLatch or after ResetLatch, but not between them.

  Aside from fixing the bugs, add some explanatory comments to latch.h to perhaps forestall the next person from making the same mistake.

  In HEAD, also replace gather_readnext's direct call of HandleParallelMessages with CHECK_FOR_INTERRUPTS. It does not seem clean or useful for this one caller to bypass ProcessInterrupts and go straight to HandleParallelMessages; not least because that fails to consider the InterruptPending flag, resulting in useless work both here (if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call (if it is).

  This thinko seems to have been introduced in the initial coding of storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all the subsequent parallel-query support logic. Back-patch relevant hunks to 9.4 to extirpate the error everywhere.

  Discussion: <1661.1469996911@sss.pgh.pa.us>
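  A sketch of the safe placements, using the three-argument WaitLatch() of that era; the work function is hypothetical:

```c
for (;;)
{
    CHECK_FOR_INTERRUPTS();     /* safe: before WaitLatch() */

    WaitLatch(MyLatch, WL_LATCH_SET, 0);    /* timeout unused w/o WL_TIMEOUT */
    ResetLatch(MyLatch);

    CHECK_FOR_INTERRUPTS();     /* also safe: after ResetLatch() */

    /*
     * A check between WaitLatch() and ResetLatch() is the bug: a signal
     * arriving in that window sets the latch, ResetLatch() swallows the
     * set, and the interrupt goes unnoticed until the next wakeup.
     */
    do_pending_work();          /* hypothetical worker body */
}
```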
* Finish up XLOG_HINT renaming. (Alvaro Herrera, 2016-06-17)

  Commit b8fd1a09f3 renamed XLOG_HINT to XLOG_FPI, but neglected two places.

  Backpatch to 9.3, like that commit.
* Fix PageAddItem BRIN bug. (Alvaro Herrera, 2016-05-30)

  BRIN was relying on the ability to remove a tuple from an index page, then putting another tuple in the same line pointer. But PageAddItem refuses to add a tuple beyond the first free item past the last used item, and in particular, it rejects an attempt to add an item to an empty page anywhere other than the first line pointer. PageAddItem issues a WARNING and indicates to the caller that it failed, which in turn causes the BRIN calling code to issue a PANIC, so the whole sequence looks like this:

      WARNING: specified item offset is too large
      PANIC: failed to add BRIN tuple

  To fix, create a new function PageAddItemExtended which is like PageAddItem except that the two boolean arguments become a flags bitmap; the "overwrite" and "is_heap" boolean flags in PageAddItem become PAI_OVERWRITE and PAI_IS_HEAP flags in the new function, and a new flag PAI_ALLOW_FAR_OFFSET enables the behavior required by BRIN. PageAddItem() retains its original signature, for compatibility with third-party modules (other callers in core code are not modified, either).

  Also, in the belt-and-suspenders spirit, I added a new sanity check in brinGetTupleForHeapBlock to raise an error if a TID found in the revmap is not marked as live by the page header. This causes it to react with "ERROR: corrupted BRIN index" to the bug at hand, rather than a hard crash.

  Backpatch to 9.5.

  Bug reported by Andreas Seltenreich as detected by his handy sqlsmith fuzzer.
  Discussion: https://www.postgresql.org/message-id/87mvni77jh.fsf@elite.ansel.ydns.eu
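  A sketch of how a BRIN-style caller might use the new function; the flag bit values are illustrative (only the names come from the message), and the tuple variables are assumed:

```c
/* Illustrative flag values for the names the commit message introduces. */
#define PAI_OVERWRITE         (1 << 0)  /* replaces the "overwrite" boolean */
#define PAI_IS_HEAP           (1 << 1)  /* replaces the "is_heap" boolean */
#define PAI_ALLOW_FAR_OFFSET  (1 << 2)  /* permit an offset past the last item */

/* Reuse a specific revmap-designated line pointer, even on an empty page. */
off = PageAddItemExtended(page, (Item) brintup, itemsz, targetoff,
                          PAI_OVERWRITE | PAI_ALLOW_FAR_OFFSET);
if (off == InvalidOffsetNumber)
    elog(ERROR, "failed to add BRIN tuple");
```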
* Introduce durable_rename() and durable_link_or_rename(). (Andres Freund, 2016-03-09)

  Renaming a file using rename(2) is not guaranteed to be durable in the face of crashes, especially on filesystems like xfs and ext4 when mounted with data=writeback. To be certain that a rename() atomically replaces the previous file contents in the face of crashes and different filesystems, one has to fsync the old filename, rename the file, fsync the new filename, and fsync the containing directory. This sequence is not generally adhered to currently, which exposes us to data loss risks. To avoid having to repeat this arduous sequence, introduce durable_rename(), which wraps all that.

  Also add durable_link_or_rename(). Several places use link() (with a fallback to rename()) to rename a file, trying to avoid replacing the target file out of paranoia. Some of those rename sequences need to be durable as well. There seems little reason to extend several copies of the same logic, so centralize the link() callers.

  This commit does not yet make use of the new functions; they're used in a followup commit.

  Author: Michael Paquier, Andres Freund
  Discussion: 56583BDD.9060302@2ndquadrant.com
  Backpatch: All supported branches
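  A sketch of the sequence durable_rename() wraps; fsync_fname() is the fd.c helper, fsync_parent_path() is assumed here for brevity, and most error handling is elided:

```c
/*
 * Sketch of a durable rename: fsync the old file, rename it, fsync the new
 * name, then fsync the containing directory so the directory entry itself
 * survives a crash.
 */
static void
durable_rename_sketch(const char *oldfile, const char *newfile)
{
    fsync_fname(oldfile, false);        /* flush old contents first */

    if (rename(oldfile, newfile) < 0)
        elog(ERROR, "could not rename \"%s\" to \"%s\": %m",
             oldfile, newfile);

    fsync_fname(newfile, false);        /* flush data under the new name */
    fsync_parent_path(newfile);         /* assumed helper: fsync the directory */
}
```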
* Fix wrong keysize in PrivateRefCountHash creation. (Andres Freund, 2016-02-21)

  In 4b4b680c3 I accidentally used sizeof(PrivateRefCountArray) instead of sizeof(PrivateRefCountEntry) when creating the refcount overflow hashtable. As the former is bigger than the latter, this luckily only resulted in slightly increased memory usage when many buffers are pinned in a backend.

  Reported-By: Takashi Horikawa
  Discussion: 73FA3881462C614096F815F75628AFCD035A48C3@BPXM01GP.gisp.nec.co.jp
  Backpatch: 9.5, where the new ref count infrastructure was introduced
* Correct statement to actually be the intended assert statement. (Andres Freund, 2015-12-14)

  e3f4cfc7 introduced a LWLockHeldByMe() call, without the corresponding Assert() surrounding it.

  Spotted by Coverity.

  Backpatch: 9.1+, like the previous commit
* Fix bug leading to restoring unlogged relations from empty files. (Andres Freund, 2015-12-10)

  At the end of crash recovery, unlogged relations are reset to the empty state, using their init fork as the template. The init fork is copied to the main fork without going through shared buffers. Unfortunately WAL replay so far has not necessarily flushed writes from shared buffers to disk at that point. In normal crash recovery, and before the introduction of 'fast promotions' in fd4ced523 / 9.3, the END_OF_RECOVERY checkpoint flushes the buffers out in time. But with fast promotions that's not the case anymore.

  To fix, force WAL writes targeting the init fork to be flushed immediately (using the new FlushOneBuffer() function). In 9.5+ that flush can centrally be triggered from the code dealing with restoring full page writes (XLogReadBufferForRedoExtended); in earlier releases that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay function.

  Backpatch to 9.1, even if this currently is only known to trigger in 9.3+. Flushing earlier is more robust, and it is advantageous to keep the branches similar.

  Typical symptoms of this bug are errors like 'ERROR: index "..." contains unexpected zero page at block 0' shortly after promoting a node.

  Reported-By: Thom Brown
  Author: Andres Freund and Michael Paquier
  Discussion: 20150326175024.GJ451@alap3.anarazel.de
  Backpatch: 9.1-
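  The redo-side hook point, roughly as the message describes it for 9.5+; forknum and buffer are assumed to be in scope inside XLogReadBufferForRedoExtended():

```c
/*
 * Sketch: after restoring a page during redo, force init-fork pages straight
 * to disk, since the end-of-recovery copy of the init fork to the main fork
 * bypasses shared buffers.
 */
if (forknum == INIT_FORKNUM)
    FlushOneBuffer(buffer);
```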
* Message improvements. (Peter Eisentraut, 2015-11-16)
* shm_mq: Third attempt at fixing nowait behavior in shm_mq_receive. (Robert Haas, 2015-11-03)

  Commit a1480ec1d3bacb9acb08ec09f22bc25bc033115b purported to fix the problems with commit b2ccb5f4e6c81305386edb34daf7d1d1e1ee112a, but it didn't completely fix them. The problem is that the checks were performed in the wrong order, leading to a race condition. If the sender attached, sent a message, and detached after the receiver called shm_mq_get_sender and before the receiver called shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED before all messages were read. Repair by reversing the order of operations, and add a long comment explaining why this new logic is (hopefully) correct.
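  A sketch of the corrected ordering; queue_has_unread_bytes() is a hypothetical stand-in for the real buffer-state test:

```c
/*
 * Sketch: in nowait mode, look for buffered data *before* concluding the
 * sender is gone; the reverse order races with a sender that attaches,
 * writes, and detaches between the two checks.
 */
if (!queue_has_unread_bytes(mq))                    /* hypothetical test */
{
    if (shm_mq_counterparty_gone(mq, mqh->mqh_handle))
        return SHM_MQ_DETACHED;     /* gone, and left nothing to read */
    return SHM_MQ_WOULD_BLOCK;      /* may still write; caller retries */
}
/* otherwise fall through and consume the available data first */
```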
* Fix serialization anomalies due to race conditions on INSERT. (Kevin Grittner, 2015-10-31)

  On insert the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to be missed. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table.

  This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others.

  The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert() a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing all of the inserts and the related WAL logging.

  While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved.

  Back-patch to all supported branches.

  Kevin Grittner and Thomas Munro
* shm_mq: Repair breakage from previous commit. (Robert Haas, 2015-10-22)

  If the counterparty writes some data into the queue and then detaches, it's wrong to return SHM_MQ_DETACHED right away. If we do that, we fail to read whatever was written.
* shm_mq: Fix failure to notice a dead counterparty when nowait is used. (Robert Haas, 2015-10-22)

  The shm_mq mechanism was intended to optionally notice when the process on the other end of the queue fails to attach to the queue. It does this by allowing the user to pass a BackgroundWorkerHandle; if the background worker in question is launched and dies without attaching to the queue, then we know it never will. This logic works OK in blocking mode, but when called with nowait = true we fail to notice that this has happened due to an asymmetry in the logic. Repair.

  Reported off-list by Rushabh Lathia. Patch by me.
* Don't send protocol messages to a shm_mq that no longer exists. (Robert Haas, 2015-10-16)

  Commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d introduced a mechanism for relaying protocol messages from a background worker to another backend via a shm_mq. However, there was no provision for shutting down the communication channel. Therefore, a protocol message sent late in the shutdown sequence, such as a DEBUG message resulting from cranking up log_min_messages, could crash the server.

  To fix, install an on_dsm_detach callback that disables sending messages to the shm_mq when the associated DSM is detached.
* Correct value of LW_SHARED_MASK. (Andres Freund, 2015-09-22)

  The previous wrong value led to wrong LOCK_DEBUG output, never showing any shared lock holders.

  Reported-By: Alexander Korotkov
  Discussion: CAPpHfdsPmWqz9FB0AnxJrwp1=KLF0n=-iB+QvR0Q8GSmpFVdUQ@mail.gmail.com
  Backpatch: 9.5, where the bug was introduced.
* Be more wary about partially-valid LOCALLOCK data in RemoveLocalLock(). (Tom Lane, 2015-09-20)

  RemoveLocalLock() must consider the possibility that LockAcquireExtended() failed to palloc the initial space for a locallock's lockOwners array. I had evidently meant to cope with this hazard when the code was originally written (commit 1785acebf2ed14fd66955e2d9a55d77a025f418d), but missed that the pfree needed to be protected with an if-test. Just to make sure things are left in a clean state, reset numLockOwners as well.

  Per low-memory testing by Andreas Seltenreich. Back-patch to all supported branches.
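  The shape of the fix, following lock.c naming; a hedged sketch rather than the exact patch:

```c
/*
 * Sketch: lockOwners may never have been allocated if LockAcquireExtended()
 * failed partway through, so guard the pfree and leave a clean state behind.
 */
locallock->numLockOwners = 0;
if (locallock->lockOwners != NULL)
    pfree(locallock->lockOwners);
locallock->lockOwners = NULL;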
* Fix two off-by-one errors in bufmgr.c. (Andres Freund, 2015-08-12)

  In 4b4b680c I passed a buffer index number (starting from 0) instead of a proper Buffer id (which starts from 1 for shared buffers) in two places. This wasn't noticed so far as one of those locations isn't compiled at all (PrintPinnedBufs) and the other one (InvalidBuffer) requires an unlikely, but possible, set of circumstances to trigger a symptom.

  To reduce the likelihood of such incidents a bit, also convert existing open-coded mappings from buffer descriptors to buffer ids with BufferDescriptorGetBuffer().

  Author: Qingqing Zhou
  Reported-By: Qingqing Zhou
  Discussion: CAJjS0u2ai9ooUisKtkV8cuVUtEkMTsbK8c7juNAjv8K11zeCQg@mail.gmail.com
  Backpatch: 9.5 where the private ref count infrastructure was introduced
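  A sketch of the mapping in question; descriptor array indexes are 0-based while shared-buffer ids are 1-based:

```c
/* Sketch: convert 0-based descriptor indexes to 1-based Buffer ids. */
int i;

for (i = 0; i < NBuffers; i++)
{
    BufferDesc *bufHdr = GetBufferDescriptor(i);
    Buffer      buf = BufferDescriptorGetBuffer(bufHdr);    /* == i + 1 */

    Assert(!BufferIsLocal(buf));    /* shared buffers only in this range */
}
```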
* Fix attach-related race condition in shm_mq_send_bytes. (Robert Haas, 2015-08-07)

  Spotted by Antonin Houska.
* Fix incorrect calculation in shm_mq_receive. (Robert Haas, 2015-08-06)

  If some, but not all, of the length word has already been read, and the next attempt to read sees exactly the number of bytes needed to complete the length word, or fewer, then we'll incorrectly read less than all of the available data.

  Antonin Houska
* Micro optimize LWLockAttemptLock() a bit. (Andres Freund, 2015-08-02)

  LWLockAttemptLock pointlessly read the lock's state in every loop iteration, even though pg_atomic_compare_exchange_u32() returns the old value. Instead do that only once, before the loop. Additionally there's no need for the expected_state variable; old_state mostly had the same value anyway.

  Noticed-By: Heikki Linnakangas
  Backpatch: 9.5, no reason to let the branches diverge at this point
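  A sketch of the tightened loop; pg_atomic_compare_exchange_u32() writes the currently-stored value back into its "expected" argument on failure, so one read before the loop suffices. The helper predicates are hypothetical:

```c
uint32 old_state = pg_atomic_read_u32(&lock->state);    /* read once */

while (true)
{
    uint32 desired_state;

    if (!state_permits_lock(old_state, mode))       /* hypothetical */
        return false;                               /* lock not available */

    desired_state = add_lock_to_state(old_state, mode);  /* hypothetical */

    if (pg_atomic_compare_exchange_u32(&lock->state,
                                       &old_state, desired_state))
        return true;    /* acquired */
    /* CAS failed: old_state was refreshed for free; just loop */
}
```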
* Fix issues around the "variable" support in the lwlock infrastructure. (Andres Freund, 2015-08-02)

  The lwlock scalability work introduced two race conditions into the lwlock variable support provided for xlog.c. First, and harmlessly on most platforms, it set/read the variable without the spinlock in some places. Secondly, due to the removal of the spinlock, it was possible that a backend missed changes to the variable's state if it changed in the wrong moment, because checking the lock's state, the variable's state and the queuing are not protected by a single spinlock acquisition anymore.

  To fix, first move resetting the variable's value from LWLockAcquireWithVar to WALInsertLockRelease, via a new function LWLockReleaseClearVar. That prevents issues around waiting for a variable's value to change when a new locker has acquired the lock, but not yet set the value. Secondly, re-check that the variable hasn't changed after enqueueing; that prevents the issue that the lock has been released and already re-acquired by the time the woken-up backend checks the lock's state.

  Reported-By: Jeff Janes
  Analyzed-By: Heikki Linnakangas
  Reviewed-By: Heikki Linnakangas
  Discussion: 5592DB35.2060401@iki.fi
  Backpatch: 9.5, where the lwlock scalability went in
* Remove outdated comment in LWLockDequeueSelf's header. (Andres Freund, 2015-07-29)

  Noticed-By: Robert Haas
  Backpatch: 9.5, where the function was added
* Reduce chatter from signaling of autovacuum workers. (Tom Lane, 2015-07-28)

  Don't print a WARNING if we get ESRCH from a kill() that's attempting to cancel an autovacuum worker. It's possible (and has been seen in the buildfarm) that the worker is already gone by the time we are able to execute the kill, in which case the failure is harmless. About the only plausible reason for reporting such cases would be to help debug corrupted lock table contents, but this is hardly likely to be the most important symptom if that happens. Moreover issuing a WARNING might scare users more than is warranted.

  Also, since sending a signal to an autovacuum worker is now entirely a routine thing, and the worker will log the query cancel on its end anyway, reduce the message saying we're doing that from LOG to DEBUG1 level.

  Very minor cosmetic cleanup as well.

  Since the main practical reason for doing this is to avoid unnecessary buildfarm failures, back-patch to all active branches.
* Fix a couple of bugs with wal_log_hints. (Heikki Linnakangas, 2015-06-26)

  1. Replay of the WAL record for setting a bit in the visibility map contained an assertion that a full-page image of that record type can only occur with checksums enabled. But it can also happen with wal_log_hints, so remove the assertion. Unlike checksums, wal_log_hints can be changed on the fly, so it would be complicated to figure out if it was enabled at the time that the WAL record was generated.

  2. wal_log_hints has the same effect on the locking needed to read the LSN of a page as data checksums. BufferGetLSNAtomic() didn't get the memo.

  Backpatch to 9.4, where wal_log_hints was added.
* Remove special cases for ETXTBSY from new fsync'ing logic. (Tom Lane, 2015-05-29)

  The argument that this is a sufficiently-expected case to be silently ignored seems pretty thin. Andres had brought it up back when we were still considering that most fsync failures should be hard errors, and it probably would be legit not to fail hard for ETXTBSY --- but the same is true for EROFS and other cases, which is why we gave up on hard failures. ETXTBSY is surely not a normal case, so logging the failure seems fine from here.
* Fix fsync-at-startup code to not treat errors as fatal. (Tom Lane, 2015-05-28)

  Commit 2ce439f3379aed857517c8ce207485655000fc8e introduced a rather serious regression, namely that if its scan of the data directory came across any un-fsync-able files, it would fail and thereby prevent database startup. Worse yet, symlinks to such files also caused the problem, which meant that crash restart was guaranteed to fail on certain common installations such as older Debian.

  After discussion, we agreed that (1) failure to start is worse than any consequence of not fsync'ing is likely to be, therefore treat all errors in this code as nonfatal; (2) we should not chase symlinks other than those that are expected to exist, namely pg_xlog/ and tablespace links under pg_tblspc/. The latter restriction avoids possibly fsync'ing a much larger part of the filesystem than intended, if the user has left random symlinks hanging about in the data directory.

  This commit takes care of that and also does some code beautification, mainly moving the relevant code into fd.c, which seems a much better place for it than xlog.c, and making sure that the conditional compilation for the pre_sync_fname pass has something to do with whether pg_flush_data works.

  I also relocated the call site in xlog.c down a few lines; it seems a bit silly to be doing this before ValidateXLOGDirectoryStructure().

  The similar logic in initdb.c ought to be made to match this, but that change is noncritical and will be dealt with separately.

  Back-patch to all active branches, like the prior commit.

  Abhijit Menon-Sen and Tom Lane
* Fix assorted inconsistencies in our calls of readlink(). (Tom Lane, 2015-05-28)

  Ensure that we null-terminate the result string (one place in pg_rewind). Be paranoid about out-of-range results from readlink() (should not happen, but there is no good reason for some call sites to be careful about it and others not). Consistently use the whole buffer, not sometimes one byte less. Ensure we emit an appropriate errcode() in all cases. Spell the error messages the same way.

  The only serious bug here is the missing null-termination in pg_rewind, which is new code, so no need for a back-patch.

  Abhijit Menon-Sen and Tom Lane
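  A sketch of the careful calling pattern the commit converges on, sized with MAXPGPATH; the errcode choice here is an assumption:

```c
char    linkpath[MAXPGPATH];
ssize_t rllen = readlink(path, linkpath, sizeof(linkpath));

if (rllen < 0)
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not read symbolic link \"%s\": %m", path)));
if (rllen >= (ssize_t) sizeof(linkpath))
    ereport(ERROR,
            (errcode(ERRCODE_NAME_TOO_LONG),    /* assumed errcode */
             errmsg("symbolic link \"%s\" target is too long", path)));
linkpath[rllen] = '\0';     /* readlink() does not null-terminate */
```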
* pgindent run for 9.5. (Bruce Momjian, 2015-05-23)
* Improve packing/alignment annotation for ItemPointerData. (Tom Lane, 2015-05-21)

  We want this struct to be exactly a series of 3 int16 words, no more and no less. Historically, at least, some ARM compilers preferred to pad it to 8 bytes unless coerced. Our old way of doing that was just to use __attribute__((packed)), but as pointed out by Piotr Stefaniak, that does too much: it also licenses the compiler to give the struct only byte-alignment. We don't want that because it adds access overhead, possibly quite significant overhead.

  According to the GCC manual, what we want requires also specifying __attribute__((aligned(2))). It's not entirely clear if all the relevant compilers accept this pragma as well, but we can hope the buildfarm will tell us if not. We can also add a static assertion that should fire if the compiler padded the struct.

  Since the combination of these pragmas should define exactly what we want on any compiler that accepts them, let's try using them wherever we think they exist, not only for __arm__. (This is likely to expose that the conditional definitions in c.h are inadequate, but finding that out would be a good thing.)

  The immediate motivation for this is that the current definition of ExecRowMark allows its curCtid field to be misaligned. It is not clear whether there are any other uses of ItemPointerData with a similar hazard. We could change the definition of ExecRowMark if this doesn't work, but it would be far better to have a future-proof fix.

  Piotr Stefaniak, some further hacking by me
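  A self-contained sketch of the combined annotation plus the static assertion described above; the struct and field names are simplified stand-ins, not the real itemptr.h definitions:

```c
#include <stdint.h>

/*
 * Sketch: three 16-bit words, exactly 6 bytes.  "packed" alone would also
 * permit byte alignment; adding "aligned(2)" restores halfword alignment so
 * member access stays cheap.  (GCC spelling; other compilers differ.)
 */
typedef struct ItemPointerSketch
{
    uint16_t ip_blkid_hi;   /* high half of block number */
    uint16_t ip_blkid_lo;   /* low half of block number */
    uint16_t ip_posid;      /* line-pointer offset */
} __attribute__((packed, aligned(2))) ItemPointerSketch;

/* Fires at compile time if the compiler padded the struct anyway. */
_Static_assert(sizeof(ItemPointerSketch) == 6,
               "ItemPointerSketch must be exactly 6 bytes");
```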
* Fix more typos in comments. (Heikki Linnakangas, 2015-05-20)

  Patch by CharSyam, plus a few more I spotted with grep.
* Collection of typo fixes. (Heikki Linnakangas, 2015-05-20)

  Use "a" and "an" correctly, mostly in comments. Two error messages were also fixed (they were just elogs, so no translation work required). Two function comments in pg_proc.h were also fixed. Etsuro Fujita reported one of these, but I found a lot more with grep. Also fix a few other typos spotted while grepping for the a/an typos. For example, "consists out of ..." -> "consists of ...". Plus a "though"/"through" mixup reported by Euler Taveira.

  Many of these typos were in old code, which would be nice to backpatch to make future backpatching easier. But much of the code was new, and I didn't feel like crafting separate patches for each branch. So no backpatching.
* Fix off-by-one error in Assertion. (Heikki Linnakangas, 2015-05-19)

  The point of the assertion is to ensure that the arrays allocated in stack are large enough, but the check was one item short. This won't matter in practice because MaxIndexTuplesPerPage is an overestimate, so you can't have that many items on a page in reality. But let's be tidy.

  Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like the patch that added the assertion.
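  The shape of the fix, with assumed variable names:

```c
/*
 * Sketch: the stack array holds MaxIndexTuplesPerPage entries, so the
 * boundary value must pass too; "<=" where the old assertion had "<".
 */
OffsetNumber workspace[MaxIndexTuplesPerPage];

Assert(nitems <= MaxIndexTuplesPerPage);    /* guards the array above */
```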
* Fix error message in pre_sync_fname. (Robert Haas, 2015-05-18)

  The old one didn't include %m anywhere, and required extra translation.

  Report by Peter Eisentraut. Fix by me. Review by Tom Lane.
* Remove reference to src/tools/backend/index.html. (Stephen Frost, 2015-05-08)

  src/tools/backend was removed back in 63f1ccd, but backend/storage/lmgr/README didn't get the memo.

  Author: Amit Langote
* Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. (Andres Freund, 2015-05-08)

  The newly added ON CONFLICT clause allows specifying an alternative to raising a unique or exclusion constraint violation error when inserting. ON CONFLICT refers to constraints that can either be specified using an inference clause (by specifying the columns of a unique constraint) or by naming a unique or exclusion constraint. DO NOTHING avoids the constraint violation, without touching the pre-existing row. DO UPDATE SET ... [WHERE ...] updates the pre-existing tuple, and has access to both the tuple proposed for insertion and the existing tuple; the optional WHERE clause can be used to prevent an update from being executed. The UPDATE SET and WHERE clauses have access to the tuple proposed for insertion using the "magic" EXCLUDED alias, and to the pre-existing tuple using the table name or its alias.

  This feature is often referred to as upsert.

  This is implemented using a new infrastructure called "speculative insertion". It is an optimistic variant of regular insertion that first does a pre-check for existing tuples and then attempts an insert. If a violating tuple was inserted concurrently, the speculatively inserted tuple is deleted and a new attempt is made. If the pre-check finds a matching tuple the alternative DO NOTHING or DO UPDATE action is taken. If the insertion succeeds without detecting a conflict, the tuple is deemed inserted.

  To handle the possible ambiguity between the excluded alias and a table named excluded, and for convenience with long relation names, INSERT INTO now can alias its target table.

  Bumps catversion as stored rules change.

  Author: Peter Geoghegan, with significant contributions from Heikki Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
  Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs, Dean Rasheed, Stephen Frost and many others.
* Fix some problems with patch to fsync the data directory. (Robert Haas, 2015-05-05)

  pg_win32_is_junction() was a typo for pgwin32_is_junction(). open() was used not only in a two-argument form, which breaks on Windows, but also where BasicOpenFile() should have been used.

  Per reports from Andrew Dunstan and David Rowley.
* Recursively fsync() the data directory after a crash. (Robert Haas, 2015-05-04)

  Otherwise, if there's another crash, some writes from after the first crash might make it to disk while writes from before the crash fail to make it to disk. This could lead to data corruption.

  Back-patch to all supported versions.

  Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me.
* Create an infrastructure for parallel computation in PostgreSQL. (Robert Haas, 2015-04-30)

  This does four basic things. First, it provides convenience routines to coordinate the startup and shutdown of parallel workers. Second, it synchronizes various pieces of state (e.g. GUCs, combo CID mappings, transaction snapshot) from the parallel group leader to the worker processes. Third, it prohibits various operations that would result in unsafe changes to that state while parallelism is active. Finally, it propagates events that would result in an ErrorResponse, NoticeResponse, or NotifyResponse message being sent to the client from the parallel workers back to the master, from which they can then be sent on to the client.

  Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke. Suggestions and review from Andres Freund, Heikki Linnakangas, Noah Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
* Introduce replication progress tracking infrastructure. (Andres Freund, 2015-04-29)

  When implementing a replication solution on top of logical decoding, two related problems exist:

  * How to safely keep track of replication progress
  * How to change replication behavior, based on the origin of a row; e.g. to avoid loops in bi-directional replication setups

  The solution to these problems, as implemented here, consists of three parts:

  1) 'replication origins', which identify nodes in a replication setup.
  2) 'replication progress tracking', which remembers, for each replication origin, how far replay has progressed in an efficient and crash-safe manner.
  3) The ability to filter out changes performed at the behest of a replication origin during logical decoding; this allows complex replication topologies, e.g. by filtering all replayed changes out.

  Most of this could also be implemented in "userspace", e.g. by inserting additional rows containing origin information, but that ends up being much less efficient and more complicated. We don't want to require various replication solutions to reimplement logic for this independently. The infrastructure is intended to be generic enough to be reusable.

  This infrastructure also replaces the 'nodeid' infrastructure of commit timestamps. It is intended to provide all the former capabilities, except that there's only 2^16 different origins; but now they integrate with logical decoding. Additionally more functionality is accessible via SQL. Since the commit timestamp infrastructure has also been introduced in 9.5 (commit 73c986add) changing the API is not a problem.

  For now the number of origins for which the replication progress can be tracked simultaneously is determined by the max_replication_slots GUC. That GUC is not a perfect match to configure this, but there doesn't seem to be sufficient reason to introduce a separate new one.

  Bumps both catversion and wal page magic.

  Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
  Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
  Discussion: 20150216002155.GI15326@awork2.anarazel.de, 20140923182422.GA15776@alap3.anarazel.de, 20131114172632.GE7522@alap2.anarazel.de
* Fix various typos and grammar errors in comments. (Andres Freund, 2015-04-26)

  Author: Dmitriy Olshevskiy
  Discussion: 553D00A6.4090205@bk.ru
* Fix another bug in DSM_CREATE_NULL_IF_MAXSEGMENTS handling. (Robert Haas, 2015-04-02)

  Amit Kapila
* Fix rare core dump in BackendIdGetTransactionIds(). (Tom Lane, 2015-03-30)

  BackendIdGetTransactionIds() neglected the possibility that the PROC pointer in a ProcState array entry is null. In current usage, this could only crash if the other backend had exited since pgstat_read_current_status saw it as active, which is a pretty narrow window. But it's reachable in the field, per bug #12918 from Vladimir Borodin.

  Back-patch to 9.4 where the faulty code was introduced.
* Suppress some unused-variable complaints in new LOCK_DEBUG code. (Tom Lane, 2015-03-26)

  Jeff Janes
* Reduce pinning and buffer content locking for btree scans. (Kevin Grittner, 2015-03-25)

  Even though the main benefit of the Lehman and Yao algorithm for btrees is that no locks need be held between page reads in an index search, we were holding a buffer pin on each leaf page after it was read until we were ready to read the next one. The reason was so that we could treat this as a weak lock to create an "interlock" with vacuum's deletion of heap line pointers, even though our README file pointed out that this was not necessary for a scan using an MVCC snapshot.

  The main goal of this patch is to reduce the blocking of vacuum processes by in-progress btree index scans (including a cursor which is idle), but the code rearrangement also allows for one less buffer content lock to be taken when a forward scan steps from one page to the next, which results in a small but consistent performance improvement in many workloads.

  This patch leaves behavior unchanged for some cases, which can be addressed separately so that each case can be evaluated on its own merits. These unchanged cases are when a scan uses a non-MVCC snapshot, an index-only scan, and a scan of a btree index for which modifications are not WAL-logged. If later patches allow all of these cases to drop the buffer pin after reading a leaf page, then the btree vacuum process can be simplified; it will no longer need the "super-exclusive" lock to delete tuples from a page.

  Reviewed by Heikki Linnakangas and Kyotaro Horiguchi