* Fix problems with ParamListInfo serialization mechanism.  (Robert Haas, 2015-11-02)
    Commit d1b7c1ffe72e86932b5395f29e006c3f503bc53d introduced a mechanism for serializing a ParamListInfo structure to be passed to a parallel worker. However, this mechanism failed to handle external expanded values, as pointed out by Noah Misch. Repair. Moreover, plpgsql_param_fetch requires adjustment because the serialization mechanism needs it to skip evaluating unused parameters just as we would do when it is called from copyParamList, but params == estate->paramLI in that case. To fix, make the bms_is_member test in that function unconditional. Finally, have setup_param_list set a new ParamListInfo field, paramMask, to the parameters actually used in the expression, so that we don't try to fetch those that are not needed when serializing a parameter list. This isn't necessary for correctness, but it makes the performance of the parallel executor code comparable to what we do for cases involving cursors. Design suggestions and extensive review by Noah Misch. Patch by me.
* Add RMV to list of commands taking AE lock.  (Kevin Grittner, 2015-11-02)
    Backpatch to 9.3, where it was initially omitted. Craig Ringer, with minor adjustment by Kevin Grittner.
* Fix serialization anomalies due to race conditions on INSERT.  (Kevin Grittner, 2015-10-31)
    On insert the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to go undetected. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table. This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others. The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert() a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing all of the inserts and the related WAL logging. While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved. Back-patch to all supported branches. Kevin Grittner and Thomas Munro.
* Implement lookbehind constraints in our regular-expression engine.  (Tom Lane, 2015-10-30)
    A lookbehind constraint is like a lookahead constraint in that it consumes no text; but it checks for existence (or nonexistence) of a match *ending* at the current point in the string, rather than one *starting* at the current point. This is a long-requested feature since it exists in many other regex libraries, but Henry Spencer had never got around to implementing it in the code we use. Just making it work is actually pretty trivial; but naive copying of the logic for lookahead constraints leads to code that often spends O(N^2) time to scan an N-character string, because we have to run the match engine from string start to the current probe point each time the constraint is checked. In typical use-cases a lookbehind constraint will be written at the start of the regex and hence will need to be checked at every character --- so O(N^2) work overall. To fix that, I introduced a third copy of the core DFA matching loop, paralleling the existing longest() and shortest() loops. This version, matchuntil(), can suspend and resume matching given a couple of pointers' worth of storage space. So we need only run it across the string once, stopping at each interesting probe point and then resuming to advance to the next one. I also put in an optimization that simplifies one-character lookahead and lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND constraints, which already existed in the engine. This avoids the overhead of the LACON machinery entirely for these rather common cases. The net result is that lookbehind constraints run a factor of three or so slower than Perl's for multi-character constraints, but faster than Perl's for one-character constraints ... and they work fine for variable-length constraints, which Perl gives up on entirely. So that's not bad from a competitive perspective, and there's room for further optimization if anyone cares. (In reality, raw scan rate across a large input string is probably not that big a deal for Postgres usage anyway; so I'm happy if it's linear.)
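    A quick SQL-level illustration of the new syntax (a sketch, not taken from the commit; the example strings are made up):

        SELECT regexp_matches('release v9.6', '(?<=v)\d+\.\d+');   -- positive lookbehind: {9.6}
        SELECT 'foobar barbar' ~ '(?<!foo)bar';                    -- negative lookbehind: true (matches the later "bar")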
* doc: security_barrier option is a Boolean, not a string.  (Robert Haas, 2015-10-30)
    Mistake introduced by commit 5bd91e3a835b5d5499fee5f49fc7c0c776fe63dd. Hari Babu.
* Update parallel executor support to reuse the same DSM.  (Robert Haas, 2015-10-30)
    Commit b0b0d84b3d663a148022e900ebfc164284a95f55 purported to make it possible to relaunch workers using the same parallel context, but it had an unpleasant race condition: we might reinitialize after the workers have sent their last control message but before they have detached the DSM, leading to crashes. Repair by introducing a new ParallelContext operation, ReinitializeParallelDSM. Adjust execParallel.c to use this new support, so that we can rescan a Gather node by relaunching workers but without needing to recreate the DSM. Amit Kapila, with some adjustments by me. Extracted from latest parallel sequential scan patch.
* Fix typo in bgworker.c  (Robert Haas, 2015-10-30)
* Docs: add example clarifying use of nested JSON containment.  (Tom Lane, 2015-10-29)
    Show how this can be used in practice to make queries simpler and more flexible. Also, draw an explicit contrast to the existence operator, which doesn't work that way. Peter Geoghegan and Tom Lane.
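    The distinction in question, roughly (a sketch, not the documentation's own example):

        SELECT '{"product": {"tags": ["qui"]}}'::jsonb @> '{"product": {"tags": ["qui"]}}';  -- true: containment follows nesting
        SELECT '{"product": {"tags": ["qui"]}}'::jsonb @> '{"tags": ["qui"]}';               -- false: containment is checked from the top level down
        SELECT '{"product": {"tags": ["qui"]}}'::jsonb ? 'tags';                             -- false: existence only sees top-level keys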
* Remove some remains from Alpha support removal  (Peter Eisentraut, 2015-10-29)
* Message style improvements  (Peter Eisentraut, 2015-10-28)
    Message style, plurals, quoting, spelling, consistency with similar messages.
* Add missing serial comma, for consistency.  (Robert Haas, 2015-10-28)
    Amit Langote, per Etsuro Fujita.
* Fix incorrect message in ATWrongRelkindError.  (Robert Haas, 2015-10-28)
    Mistake introduced by commit 3bf3ab8c563699138be02f9dc305b7b77a724307. Etsuro Fujita.
* Fix secondary expected output for commit_ts test  (Alvaro Herrera, 2015-10-27)
    Per red wall in buildfarm.
* Make Gather node projection-capable.  (Robert Haas, 2015-10-28)
    The original Gather code failed to mark a Gather node as not able to do projection, but in fact it couldn't project, even though it did initialize its projection info via ExecAssignProjectionInfo. There doesn't seem to be any good reason for this node not to have projection capability, so clean things up so that it does. Without this, plans using Gather nodes might need to carry extra Result nodes to do projection.
* Document BRIN's inclusion opclass framework  (Alvaro Herrera, 2015-10-27)
    Backpatch to 9.5 -- this should have been part of b0b7be61337, but we didn't have 38b03caebc5de either at the time.
    Author: Emre Hasegeli
    Revised by: Ian Barwick
    Discussion: http://www.postgresql.org/message-id/CAE2gYzyB39Q9up_-TO6FKhH44pcAM1x6n_Cuj15qKoLoFihUVg@mail.gmail.com
    http://www.postgresql.org/message-id/562DA711.3020305@2ndquadrant.com
* Fix BRIN free space computations  (Alvaro Herrera, 2015-10-27)
    A bug in the original free space computation made it possible to return a page which wasn't actually able to fit the item. Since the insertion code isn't prepared to deal with PageAddItem failing, a PANIC resulted ("failed to add BRIN tuple [to new page]"). Add a macro to encapsulate the correct computation, and use it in brin_getinsertbuffer's callers before calling that routine, to raise an early error.
    I became aware of the possibility of a problem in this area while working on ccc4c074994d734. There's no archived discussion about it, but it's easy to reproduce a problem in the unpatched code with something like

        CREATE TABLE t (a text);
        CREATE INDEX ti ON t USING brin (a) WITH (pages_per_range=1);
        for length in `seq 8000 8196`
        do
          psql -f - <<EOF
        TRUNCATE TABLE t;
        INSERT INTO t VALUES ('z'), (repeat('a', $length));
        EOF
        done

    Backpatch to 9.5, where BRIN was introduced.
* Cleanup commit timestamp module activation, again  (Alvaro Herrera, 2015-10-27)
    Further tweak commit_ts.c so that on a standby the state is completely consistent with that in the master, rather than behaving differently in the cases that the settings differ. Now in standby and master the module should always be active or inactive in lockstep.
    Author: Petr Jelínek, with some further tweaks by Álvaro Herrera. Backpatch to 9.5, where commit timestamps were introduced.
    Discussion: http://www.postgresql.org/message-id/5622BF9D.2010409@2ndquadrant.com
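    When the module is active, the commit timestamps are queryable the same way on master and standby; for example (a sketch; tab is a hypothetical table and track_commit_timestamp must be on):

        SELECT pg_xact_commit_timestamp(xmin) AS committed_at, * FROM tab;
        SELECT * FROM pg_last_committed_xact();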
* Measure string lengths only once  (Alvaro Herrera, 2015-10-27)
    Bernd Helmle complained that CreateReplicationSlot() was assigning the same value to the same variable twice, so we could remove one of them. Code inspection reveals that we can actually remove both assignments: according to the author the assignment was there for beauty of the strlen line only, and another possible fix to that is to put the strlen in its own line, so do that. To be consistent within the file, refactor all duplicated strlen() calls, which is what we do elsewhere in the backend anyway. In basebackup.c, snprintf already returns the right length; no need for strlen afterwards. Backpatch to 9.4, where replication slots were introduced, to keep code identical. Some of this is older, but the patch doesn't apply cleanly and it's only of cosmetic value anyway.
    Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
* shm_mq: Repair breakage from previous commit.  (Robert Haas, 2015-10-22)
    If the counterparty writes some data into the queue and then detaches, it's wrong to return SHM_MQ_DETACHED right away. If we do that, we fail to read whatever was written.
* Add two missing cases to ATWrongRelkindError.  (Robert Haas, 2015-10-22)
    This way, we produce a better error message if someone tries to do something like ALTER INDEX .. ALTER COLUMN .. SET STORAGE. Amit Langote.
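    For instance (hypothetical object names; a sketch of the kind of command now reported more helpfully):

        -- Instead of a generic "wrong type" complaint, the error now lists the
        -- relation kinds that the command actually accepts.
        ALTER INDEX some_index ALTER COLUMN some_col SET STORAGE PLAIN;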
* shm_mq: Fix failure to notice a dead counterparty when nowait is used.  (Robert Haas, 2015-10-22)
    The shm_mq mechanism was intended to optionally notice when the process on the other end of the queue fails to attach to the queue. It does this by allowing the user to pass a BackgroundWorkerHandle; if the background worker in question is launched and dies without attaching to the queue, then we know it never will. This logic works OK in blocking mode, but when called with nowait = true we fail to notice that this has happened due to an asymmetry in the logic. Repair. Reported off-list by Rushabh Lathia. Patch by me.
* Fix typos in comments.  (Robert Haas, 2015-10-22)
    CharSyam.
* doc: Add advice on updating checkpoint_segments to max_wal_size  (Peter Eisentraut, 2015-10-22)
    With suggestion from Michael Paquier.
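    A sketch of the conversion, assuming the documented rule of thumb max_wal_size ≈ 3 * checkpoint_segments * 16MB and an old setting of checkpoint_segments = 10:

        ALTER SYSTEM SET max_wal_size = '480MB';
        SELECT pg_reload_conf();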
* Remove redundant CREATEUSER/NOCREATEUSER options in CREATE ROLE et al.  (Tom Lane, 2015-10-22)
    Once upon a time we did not have a separate CREATEROLE privilege, and CREATEUSER effectively meant SUPERUSER. When we invented CREATEROLE (in 8.1) we also added SUPERUSER so as to have a less confusing keyword for this role property. However, we left CREATEUSER in place as a deprecated synonym for SUPERUSER, because of backwards-compatibility concerns. It's still there and is still confusing people, as for example in bug #13694 from Justin Catterson. 9.6 will be ten years or so later, which surely ought to be long enough to end the deprecation and just remove these old keywords. Hence, do so.
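    In practice that means (a sketch, with made-up role names):

        CREATE ROLE admin CREATEUSER;      -- now rejected: the option is gone, and it never meant "can create users"
        CREATE ROLE boss SUPERUSER;        -- what CREATEUSER actually meant
        CREATE ROLE manager CREATEROLE;    -- what people usually wanted instead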
* Fix a couple of bugs in recent parallelism-related commits.  (Robert Haas, 2015-10-22)
    Commit 816e336f12ecabdc834d4cc31bcf966b2dd323dc added the wrong error check to async.c; sending notifications is restricted to the leader, not altogether unsafe. Commit 3bd909b220930f21d6e15833a17947be749e7fde added ExecShutdownNode to traverse the planstate tree and call shutdown functions, but made a Gather node, the only node that actually has such a function, abort the tree traversal, which is wrong.
* Add header comments to execParallel.c and nodeGather.c.  (Robert Haas, 2015-10-22)
    Patch by me, per a note from Simon Riggs. Reviewed by Amit Kapila and Amit Langote.
* doc: Improve markup and fine-tune replication protocol documentation  (Peter Eisentraut, 2015-10-21)
* Fix incorrect translation of minus-infinity datetimes for json/jsonb.  (Tom Lane, 2015-10-20)
    Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and minus infinity to be rendered as "infinity", which is not only wrong but inconsistent with the pre-9.4 behavior of to_json(). Fix that by duplicating the coding in date_out/timestamp_out/timestamptz_out more closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like the previous commit.
    In passing, also re-pgindent json.c, since it had gotten a bit messed up by recent patches (and I was already annoyed by indentation-related problems in back-patching this fix ...)
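    That is (a sketch):

        SELECT to_json('-infinity'::timestamptz), to_json('infinity'::timestamptz);
        -- with the fix: "-infinity" and "infinity"; previously both came out as "infinity"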
* doc: Move documentation of max_wal_size to better position  (Peter Eisentraut, 2015-10-20)
* Fix incorrect comment in plannodes.h  (Robert Haas, 2015-10-20)
    Etsuro Fujita.
* Remove duplicate word.  (Robert Haas, 2015-10-20)
    Amit Langote.
* Tab complete CREATE EXTENSION .. VERSION.  (Robert Haas, 2015-10-20)
    Jeff Janes.
* Put back ssl_renegotiation_limit parameter, but only allow 0.  (Robert Haas, 2015-10-20)
    Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0 in the startup packet because it does not support renegotiation; other clients which have not attempted to support renegotiation might well behave similarly. The recent removal of this parameter forces them to break compatibility with either current PostgreSQL versions, or previous ones. Per discussion, the best solution is to accept the parameter but only allow a value of 0. Shay Rojansky, edited a little by me.
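    From a client's point of view (a sketch):

        SET ssl_renegotiation_limit = 0;        -- accepted, and a no-op: renegotiation stays disabled
        SET ssl_renegotiation_limit = '512kB';  -- rejected: only zero is allowed now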
* Be a bit more rigorous about how we cache strcoll and strxfrm results.  (Robert Haas, 2015-10-20)
    Commit 0e57b4d8bd9674adaf5747421b3255b85e385534 contained some clever logic that attempted to make sure that we couldn't get confused about whether the last thing we cached was a strcoll() result or a strxfrm() result, but it wasn't quite clever enough, because we can perform further abbreviations after having already performed some comparisons. Introduce an explicit flag in the hopes of making this watertight. Peter Geoghegan, reviewed by me.
* Remove obsolete comment.  (Robert Haas, 2015-10-20)
    Peter Geoghegan.
* Eschew "RESET statement_timeout" in tests.Noah Misch2015-10-20
| | | | | | | | | Instead, use transaction abort. Given an unlucky bout of latency, the timeout would cancel the RESET itself. Buildfarm members gharial, lapwing, mereswine, shearwater, and sungazer witness that. Back-patch to 9.1 (all supported versions). The query_canceled test still could timeout before entering its subtransaction; for whatever reason, that has yet to happen on the buildfarm.
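    The pattern looks roughly like this (a sketch, not the exact test contents):

        BEGIN;
        SET LOCAL statement_timeout = '10ms';
        SELECT pg_sleep(60);   -- cancelled by the timeout, which aborts the transaction
        ROLLBACK;              -- the setting change rolls back with it, so no RESET is needed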
* Fix incorrect handling of lookahead constraints in pg_regprefix().  (Tom Lane, 2015-10-19)
    pg_regprefix was doing nothing with lookahead constraints, which would be fine if it were the right kind of nothing, but it isn't: we have to terminate our search for a fixed prefix, not just pretend the LACON arc isn't there. Otherwise, if the current state has both a LACON outarc and a single plain-color outarc, we'd falsely conclude that the color represents an addition to the fixed prefix, and generate an extracted index condition that restricts the indexscan too much. (See added regression test case.) Terminating the search is conservative: we could traverse the LACON arc (thus assuming that the constraint can be satisfied at runtime) and then examine the outarcs of the linked-to state. But that would be a lot more work than it seems worth, because writing a LACON followed by a single plain character is a pretty silly thing to do. This makes a difference only in rather contrived cases, but it's a bug, so back-patch to all supported branches.
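    The kind of query affected, schematically (hypothetical table and index; a sketch):

        CREATE TABLE words (w text);
        CREATE INDEX ON words (w text_pattern_ops);
        -- The anchored regex lets the planner ask pg_regprefix() for a fixed prefix
        -- ("post") to turn into an index range condition; the fix makes that search
        -- stop at the lookahead instead of misjudging what follows it.
        SELECT count(*) FROM words WHERE w ~ '^post(?=gres|doc)';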
* Add a C API for parallel heap scans.  (Robert Haas, 2015-10-16)
    Using this API, one backend can set up a ParallelHeapScanDesc to which multiple backends can then attach. Each tuple in the relation will be returned to exactly one of the scanning backends. Only forward scans are supported, and rescans must be carefully coordinated. This is not exposed to the planner or executor yet. The original version of this code was written by me. Amit Kapila reviewed it, tested it, and improved it, including adding support for synchronized scans, per review comments from Jeff Davis. Extensive testing of this and related patches was performed by Haribabu Kommi. Final cleanup of this patch by me.
* Allow a parallel context to relaunch workers.  (Robert Haas, 2015-10-16)
    This may allow some callers to avoid the overhead involved in tearing down a parallel context and then setting up a new one, which means releasing the DSM and then allocating and populating a new one. I suspect we'll want to revise the Gather node to make use of this new capability, but even if not it may be useful elsewhere and requires very little additional code.
* Miscellaneous cleanup of regular-expression compiler.  (Tom Lane, 2015-10-16)
    Revert our previous addition of "all" flags to copyins() and copyouts(); they're no longer needed, and were never anything but an unsightly hack. Improve a couple of infelicities in the REG_DEBUG code for dumping the NFA data structure, including adding code to count the total number of states and arcs. Add a couple of missed error checks. Add some more documentation in the README file, and some regression tests illustrating cases that exceeded the state-count limit and/or took unreasonable amounts of time before this set of patches. Back-patch to all supported branches.
* Improve memory-usage accounting in regular-expression compiler.  (Tom Lane, 2015-10-16)
    This code previously counted the number of NFA states it created, and complained if a limit was exceeded, so as to prevent bizarre regex patterns from consuming unreasonable time or memory. That's fine as far as it went, but the code paid no attention to how many arcs linked those states. Since regexes can be contrived that have O(N) states but will need O(N^2) arcs after fixempties() processing, it was still possible to blow out memory, and take a long time doing it too. To fix, modify the bookkeeping to count space used by both states and arcs. I did not bother with including the "color map" in the accounting; it can only grow to a few megabytes, which is not a lot in comparison to what we're allowing for states+arcs (about 150MB on 64-bit machines or half that on 32-bit machines). Looking at some of the larger real-world regexes captured in the Tcl regression test suite suggests that the most that is likely to be needed for regexes found in the wild is under 10MB, so I believe that the current limit has enough headroom to make it okay to keep it as a hard-wired limit. In connection with this, redefine REG_ETOOBIG as meaning "regular expression is too complex"; the previous wording of "nfa has too many states" was already somewhat inapropos because of the error code's use for stack depth overrun, and it was not very user-friendly either. Back-patch to all supported branches.
* Improve performance of pullback/pushfwd in regular-expression compiler.  (Tom Lane, 2015-10-16)
    The previous coding would create a new intermediate state every time it wanted to interchange the ordering of two constraint arcs. Certain regex features such as \Y can generate large numbers of parallel constraint arcs, and if we needed to reorder the results of that, we created unreasonable numbers of intermediate states. To improve matters, keep a list of already-created intermediate states associated with the state currently being considered by the outer loop; we can re-use such states to place all the new arcs leading to the same destination or source. I also took the trouble to redefine push() and pull() to have a less risky API: they no longer delete any state or arc that the caller might possibly have a pointer to, except for the specifically-passed constraint arc. This reduces the risk of re-introducing the same type of error seen in the failed patch for CVE-2007-4772. Back-patch to all supported branches.
* Improve performance of fixempties() pass in regular-expression compiler.  (Tom Lane, 2015-10-16)
    The previous coding took something like O(N^4) time to fully process a chain of N EMPTY arcs. We can't really do much better than O(N^2) because we have to insert about that many arcs, but we can do lots better than what's there now. The win comes partly from using mergeins() to amortize de-duplication of arcs across multiple source states, and partly from exploiting knowledge of the ordering of arcs for each state to avoid looking at arcs we don't need to consider during the scan. We do have to be a bit careful of the possible reordering of arcs introduced by the sort-merge coding of the previous commit, but that's not hard to deal with. Back-patch to all supported branches.
* Fix O(N^2) performance problems in regular-expression compiler.  (Tom Lane, 2015-10-16)
    Change the singly-linked in-arc and out-arc lists to be doubly-linked, so that arc deletion is constant time rather than having worst-case time proportional to the number of other arcs on the connected states. Modify the bulk arc transfer operations copyins(), copyouts(), moveins(), moveouts() so that they use a sort-and-merge algorithm whenever there's more than a small number of arcs to be copied or moved. The previous method is O(N^2) in the number of arcs involved, because it performs duplicate checking independently for each copied arc. The new method may change the ordering of existing arcs for the destination state, but nothing really cares about that. Provide another bulk arc copying method mergeins(), which is unused as of this commit but is needed for the next one. It basically is like copyins(), but the source arcs might not all come from the same state. Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort() call. These changes greatly improve the performance of regex compilation for large or complex regexes, at the cost of extra space for arc storage during compilation. The original tradeoff was probably fine when it was made, but now we care more about speed and less about memory consumption. Back-patch to all supported branches.
* Fix regular-expression compiler to handle loops of constraint arcs.  (Tom Lane, 2015-10-16)
    It's possible to construct regular expressions that contain loops of constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use in fully traversing such a loop at execution, since you'd just end up in the same NFA state without having consumed any input. Worse, such a loop leads to infinite looping in the pullback/pushfwd stage of compilation, because we keep pushing or pulling the same constraints around the loop in a vain attempt to move them to the pre or post state. Such looping was previously recognized in CVE-2007-4772; but the fix only handled the case of trivial single-state loops (that is, a constraint arc leading back to its source state) ... and not only that, it was incorrect even for that case, because it broke the admittedly-not-very-clearly-stated API contract of the pull() and push() subroutines. The first two regression test cases added by this commit exhibit patterns that result in assertion failures because of that (though there seem to be no ill effects in non-assert builds). The other new test cases exhibit multi-state constraint loops; in an unpatched build they will run until the NFA state-count limit is exceeded. To fix, remove the code added for CVE-2007-4772, and instead create a general-purpose constraint-loop-breaking phase of regex compilation that executes before we do pullback/pushfwd. Since we never need to traverse a constraint loop fully, we can just break the loop at any chosen spot, if we add clone states that can replicate any sequence of arc transitions that would've traversed just part of the loop. Also add some commentary clarifying why we have to have all these machinations in the first place. This class of problems has been known for some time --- we had a report from Marc Mamin about two years ago, for example, and there are related complaints in the Tcl bug tracker. I had discussed a fix of this kind off-list with Henry Spencer, but didn't get around to doing something about it until the issue was rediscovered by Greg Stark recently. Back-patch to all supported branches.
* Remove volatile qualifiers from proc.c and procarray.c  (Robert Haas, 2015-10-16)
    Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to variables within a spinlock-protected critical section had to be done through a volatile pointer, but that should no longer be necessary. Michael Paquier.
* Remove volatile qualifiers from dynahash.c, shmem.c, and sinvaladt.c  (Robert Haas, 2015-10-16)
    Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to variables within a spinlock-protected critical section had to be done through a volatile pointer, but that should no longer be necessary. Thomas Munro.
* Remove cautions about using volatile from spin.h.  (Robert Haas, 2015-10-16)
    Commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0 obsoleted this comment but neglected to update it. Thomas Munro.
* Prohibit parallel query when the isolation level is serializable.  (Robert Haas, 2015-10-16)
    In order for this to be safe, the code which handles true serializability will need to be taught that the SIRead locks taken by a parallel worker pertain to the same transaction as those taken by the parallel leader. Some further changes may be needed as well. Until the necessary adaptations are made, don't generate parallel plans in serializable mode, and if a previously-generated parallel plan is used after serializable mode has been activated, run it serially. This fixes a bug in commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b.
* Rewrite interaction of parallel mode with parallel executor support.  (Robert Haas, 2015-10-16)
    In the previous coding, before returning from ExecutorRun, we'd shut down all parallel workers. This was dead wrong if ExecutorRun was called with a non-zero tuple count; it had the effect of truncating the query output. To fix, give ExecutePlan control over whether to enter parallel mode, and have it refuse to do so if the tuple count is non-zero. Rewrite the Gather logic so that it can cope with being called outside parallel mode. Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b is largely to blame for this problem, though this patch modifies some subsequently-committed code which relied on the guarantees it purported to make.