path: root/src
Commit message (Author, Date)
...
* Remove unused structure member. (Robert Haas, 2016-07-21)
  Michael Paquier
* Remove very-obsolete estimates of shmem usage from postgresql.conf.sample. (Tom Lane, 2016-07-19)
  runtime.sgml used to contain a table of estimated shared memory consumption rates for max_connections and some other GUCs. Commit 390bfc643 removed that on the well-founded grounds that (a) we weren't maintaining the entries well and (b) it no longer mattered so much once we got out from under SysV shmem limits. But it missed that there were even-more-obsolete versions of some of those numbers in comments in postgresql.conf.sample. Remove those too.

  Back-patch to 9.3 where the aforesaid commit went in.
* Add comment & docs about no vacuum truncation with sto. (Kevin Grittner, 2016-07-19)
  Omission noted by Andres Freund.
* Stamp 9.6beta3. (tag: REL9_6_BETA3) (Tom Lane, 2016-07-18)
* Fix typos in comments and debug message (Magnus Hagander, 2016-07-18)
  Antonin Houska
* Translation updates (Peter Eisentraut, 2016-07-18)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 3d71988dffd3c0798a8864c55ca4b7833b48abb1
* Clear all-frozen visibilitymap status when locking tuples. (Andres Freund, 2016-07-18)
  Since a892234 & fd31cd265 the visibilitymap's freeze bit is used to avoid vacuuming the whole relation in anti-wraparound vacuums. Doing so correctly relies on not adding xids to the heap without also unsetting the visibilitymap flag. Tuple-locking-related code has not done so.

  To avoid pessimizing heap_lock_tuple, allow the all-frozen bit to be reset selectively with visibilitymap_clear(). To avoid having to use visibilitymap_get_status (e.g. via VM_ALL_FROZEN) inside a critical section, have visibilitymap_clear() return whether any bits have been reset.

  There's a remaining issue (denoted by XXX): after the PageIsAllVisible() check in heap_lock_tuple() and heap_lock_updated_tuple_rec() the page status could theoretically change. Practically that currently seems impossible, because updaters will already hold a page-level pin. Due to the next beta coming up, it seems better to get the required WAL magic bump done before resolving this issue.

  The flags fields added to xl_heap_lock and xl_heap_lock_updated require bumping the WAL magic. Since there's already been a catversion bump since the last beta, that's not an issue.

  Reviewed-By: Robert Haas, Amit Kapila and Andres Freund
  Author: Masahiko Sawada, heavily revised by Andres Freund
  Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
  Backpatch: -
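A minimal standalone sketch of that return-value idea, with hypothetical names rather than the actual visibilitymap code: clear the requested flag bits and report whether anything actually changed, so the caller can skip follow-up work when the bits were already clear.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical illustration (not the PostgreSQL source): clear the
     * requested flag bits in a status byte and report whether anything
     * actually changed, the same idea as having visibilitymap_clear()
     * return whether any bits were reset.
     */
    static bool
    clear_flags(uint8_t *status, uint8_t flags_to_clear)
    {
        uint8_t old = *status;

        *status = old & ~flags_to_clear;
        return (old & flags_to_clear) != 0;     /* true if something was reset */
    }

    int
    main(void)
    {
        uint8_t page_status = 0x03;             /* e.g. ALL_VISIBLE | ALL_FROZEN */

        if (clear_flags(&page_status, 0x02))    /* clear only the "frozen" bit */
            printf("bit was set; now 0x%02x\n", page_status);
        else
            printf("nothing to do\n");
        return 0;
    }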
* Remove obsolete comment. (Tom Lane, 2016-07-17)
  Peter Geoghegan
* Establish conventions about global object names used in regression tests. (Tom Lane, 2016-07-17)
  To ensure that "make installcheck" can be used safely against an existing installation, we need to be careful about what global object names (database, role, and tablespace names) we use; otherwise we might accidentally clobber important objects. There's been a weak consensus that test databases should have names including "regression", and that test role names should start with "regress_", but we didn't have any particular rule about tablespace names; and neither of the other rules was followed with any consistency either.

  This commit moves us a long way towards having a hard-and-fast rule that regression test databases must have names including "regression", and that test role and tablespace names must start with "regress_". It's not completely there because I did not touch some test cases in rolenames.sql that test creation of special role names like "session_user". That will require some rethinking of exactly what we want to test, whereas the intent of this patch is just to hit all the cases in which the needed renamings are cosmetic.

  There is no enforcement mechanism in this patch either, but if we don't add one we can expect that the tests will soon be violating the convention again. Again, that's not such a cosmetic change and it will require discussion. (But I did use a quick-hack enforcement patch to find these cases.)

  Discussion: <16638.1468620817@sss.pgh.pa.us>
* Correctly dump database and tablespace ACLs (Stephen Frost, 2016-07-17)
  Dump out the appropriate GRANT/REVOKE commands for databases and tablespaces from pg_dumpall to replicate what the current state is. This was broken during the changes to buildACLCommands for 9.6+ servers for pg_init_privs.
* Improve test case exercising the sorting path for hash index build. (Tom Lane, 2016-07-16)
  On second thought, we should probably do at least a minimal check that the constructed index is valid, since the big problem with the most recent breakage was not whether the sorting was correct but that the index had incorrect hash codes placed in it.
* Add regression test case exercising the sorting path for hash index build. (Tom Lane, 2016-07-16)
  We've broken this code path at least twice in the past, so it's prudent to have a test case that covers it. To allow exercising the code path without creating a very large (and slow to run) test case, redefine the sort threshold to be bounded by maintenance_work_mem as well as the number of available buffers. While at it, fix an ancient oversight that when building a temp index, the number of available buffers is not NBuffers but NLocBuffer. Also, if assertions are enabled, apply a direct test that the sort actually does return the tuples in the expected order.

  Peter Geoghegan
  Patch: <CAM3SWZTBAo4hjbBd780+MrOKiKp_TMo1N3A0Rw9_im8gbD7fQA@mail.gmail.com>
* Fix crash in close_ps() for NaN input coordinates. (Tom Lane, 2016-07-16)
  The Assert() here seems unreasonably optimistic. Andreas Seltenreich found that it could fail with NaNs in the input geometries, and it seems likely to me that it might fail in corner cases due to roundoff error, even for ordinary input values. As a band-aid, make the function return SQL NULL instead of crashing.

  Report: <87d1md1xji.fsf@credativ.de>
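An illustrative sketch of that band-aid approach, not the actual close_ps() code (the computation here is a made-up stand-in): check for NaN inputs up front and report "no result" so the caller can hand back SQL NULL instead of tripping an assertion.

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Illustrative sketch only: instead of asserting that intermediate
     * results are sane, detect NaN inputs early and signal "no result"
     * so the caller can return SQL NULL.
     */
    static bool
    closest_point(double px, double py, double qx, double qy,
                  double *rx, double *ry)
    {
        if (isnan(px) || isnan(py) || isnan(qx) || isnan(qy))
            return false;               /* caller should return SQL NULL */

        /* trivial stand-in computation: midpoint of the two points */
        *rx = (px + qx) / 2.0;
        *ry = (py + qy) / 2.0;
        return true;
    }

    int
    main(void)
    {
        double x, y;

        if (closest_point(1.0, NAN, 3.0, 4.0, &x, &y))
            printf("(%g,%g)\n", x, y);
        else
            printf("NULL\n");           /* band-aid behavior from the commit */
        return 0;
    }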
* Advance PG_CONTROL_VERSION. (Tom Lane, 2016-07-16)
  This should have been done in commit 73c986adde5d73a5 which added several new fields to pg_control, and again in commit 5028f22f6eb05798 which changed the CRC algorithm, but it wasn't. It's far too late to fix it in the 9.5 branch, but let's do so in 9.6, so that if a 9.6 postmaster is started against a 9.4-era pg_control it will complain about a versioning problem rather than a CRC failure. We already forced initdb/pg_upgrade for beta3, so there's no downside to doing this now.

  Discussion: <7615.1468598094@sss.pgh.pa.us>
* Fix torn-page, unlogged xid and further risks from heap_update(). (Andres Freund, 2016-07-15)
  When heap_update needs to look for a page for the new tuple version, because the current one doesn't have sufficient free space, or when columns have to be processed by the tuple toaster, it has to release the lock on the old page during that. Otherwise there'd be lock ordering and lock nesting issues.

  To prevent concurrent sessions from trying to update / delete / lock the tuple while the page's content lock is released, the tuple's xmax is set to the current session's xid. That unfortunately was done without any WAL logging, thereby violating the rule that no XIDs may appear on disk without a corresponding WAL record. If the database were to crash / fail over while the page-level lock is released, and some activity led to the page being written out to disk, the xid could end up being reused, potentially leading to the row becoming invisible.

  There might be additional risks from not having t_ctid point at the tuple itself, without having set the appropriate lock infomask fields.

  To fix, compute the appropriate xmax/infomask combination for locking the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK record. That allows the fix to be backpatched.

  This issue has existed for a long time. There appear to have been partial attempts at preventing the danger, but these were never fully implemented, and were removed a long time ago, in 11919160 (cf. HEAP_XMAX_UNLOGGED).

  In master / 9.6, there's an additional issue, namely that the visibilitymap's freeze bit isn't reset at that point yet. Since that's a new issue, introduced only in a892234f830, that'll be fixed in a separate commit.

  Author: Masahiko Sawada and Andres Freund
  Reported-By: Different aspects by Thomas Munro, Noah Misch, and others
  Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
  Backpatch: 9.1/all supported versions
* Make HEAP_LOCK/HEAP2_LOCK_UPDATED replay reset HEAP_XMAX_INVALID. (Andres Freund, 2016-07-15)
  0ac5ad5 started to compress infomask bits in WAL records. Unfortunately the replay routines for XLOG_HEAP_LOCK/XLOG_HEAP2_LOCK_UPDATED forgot to reset the HEAP_XMAX_INVALID (and some other) hint bits.

  Luckily that's not problematic in the majority of cases, because after a crash/on a standby row locks aren't meaningful. Unfortunately that does not hold true in the presence of prepared transactions. This means that after a crash, or after promotion, row-level locks held by a prepared, but not yet committed, transaction might not be enforced.

  Discussion: 20160715192319.ubfuzim4zv3rqnxv@alap3.anarazel.de
  Backpatch: 9.3, the oldest branch on which 0ac5ad5 is present.
* Avoid invalidating all foreign-join cached plans when user mappings change. (Tom Lane, 2016-07-15)
  We must not push down a foreign join when the foreign tables involved should be accessed under different user mappings. Previously we tried to enforce that rule literally during planning, but that meant that the resulting plans were dependent on the current contents of the pg_user_mapping catalog, and we had to blow away all cached plans containing any remote join when anything at all changed in pg_user_mapping. This could have been improved somewhat, but the fact that a syscache inval callback has very limited info about what changed made it hard to do better within that design.

  Instead, let's change the planner to not consider user mappings per se, but to allow a foreign join if both RTEs have the same checkAsUser value. If they do, then they necessarily will use the same user mapping at runtime, and we don't need to know specifically which one that is. Post-plan-time changes in pg_user_mapping no longer require any plan invalidation.

  This rule does give up some optimization ability, to wit where two foreign table references come from views with different owners or one's from a view and one's directly in the query, but nonetheless the same user mapping would have applied. We'll sacrifice the first case, but to not regress more than we have to in the second case, allow a foreign join involving both zero and nonzero checkAsUser values if the nonzero one is the same as the prevailing effective userID. In that case, mark the plan as only runnable by that userID.

  The plancache code already had a notion of plans being userID-specific, in order to support RLS. It was a little confused though, in particular lacking clarity of thought as to whether it was the rewritten query or just the finished plan that's dependent on the userID. Rearrange that code so that it's clearer what depends on which, and so that the same logic applies to both RLS-injected role dependency and foreign-join-injected role dependency.

  Note that this patch doesn't remove the other issue mentioned in the original complaint, which is that while we'll reliably stop using a foreign join if it's disallowed in a new context, we might fail to start using a foreign join if it's now allowed, but we previously created a generic cached plan that didn't use one. It was agreed that the chance of winning that way was not high enough to justify the much larger number of plan invalidations that would have to occur if we tried to cause it to happen.

  In passing, clean up randomly-varying spelling of EXPLAIN commands in postgres_fdw.sql, and fix a COSTS ON example that had been allowed to leak into the committed tests.

  This reverts most of commits fbe5a3fb7 and 5d4171d1c, which were the previous attempt at ensuring we wouldn't push down foreign joins that span permissions contexts.

  Etsuro Fujita and Tom Lane
  Discussion: <d49c1e5b-f059-20f4-c132-e9752ee0113e@lab.ntt.co.jp>
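A hypothetical standalone sketch of that checkAsUser rule, with invented names and a bare Oid typedef rather than the planner code: allow the join when both RTEs check permissions as the same user, or when the zero/nonzero combination matches the current effective user, in which case the plan must be marked as user-specific.

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned int Oid;           /* stand-in; not the real definition */

    /*
     * Sketch of the rule: a foreign join between two RTEs is allowed when
     * both would check permissions as the same user, or when one checks as
     * the "current user" (checkAsUser == 0) and the other checks as exactly
     * the prevailing effective user; then the plan only works for that user.
     */
    static bool
    foreign_join_ok(Oid check_as_user_a, Oid check_as_user_b,
                    Oid current_user, bool *plan_is_user_specific)
    {
        *plan_is_user_specific = false;

        if (check_as_user_a == check_as_user_b)
            return true;                        /* same mapping at runtime */

        if ((check_as_user_a == 0 && check_as_user_b == current_user) ||
            (check_as_user_b == 0 && check_as_user_a == current_user))
        {
            *plan_is_user_specific = true;      /* only runnable by current_user */
            return true;
        }
        return false;
    }

    int
    main(void)
    {
        bool user_specific;

        printf("%d\n", foreign_join_ok(0, 10, 10, &user_specific));  /* 1 */
        printf("%d\n", user_specific);                               /* 1 */
        printf("%d\n", foreign_join_ok(10, 20, 10, &user_specific)); /* 0 */
        return 0;
    }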
* Avoid serializability errors when locking a tuple with a committed update (Alvaro Herrera, 2016-07-15)
  When key-share locking a tuple that has been not-key-updated, and the update is a committed transaction, in some cases we raised serializability errors:

      ERROR:  could not serialize access due to concurrent update

  Because the key-share doesn't conflict with the update, the error is unnecessary and inconsistent with the case that the update hasn't committed yet. This causes problems for some usage patterns, even if it can be claimed that it's sufficient to retry the aborted transaction: given a steady stream of updating transactions and a long locking transaction, the long transaction can be starved indefinitely despite multiple retries.

  To fix, we recognize that HeapTupleSatisfiesUpdate can return HeapTupleUpdated when an updating transaction has committed, and that we need to deal with that case exactly as if it were a non-committed update: verify whether the two operations conflict, and if not, carry on normally. If they do conflict, however, there is a difference: in the HeapTupleBeingUpdated case we can just sleep until the concurrent transaction is gone, while in the HeapTupleUpdated case this is not possible and we must raise an error instead.

  Per trouble report from Olivier Dony.

  In addition to a couple of test cases that verify the changed behavior, I added a test case to verify the behavior that remains unchanged, namely that errors are raised when an update that modifies the key is used. That must still generate serializability errors. One pre-existing test case changes behavior; per discussion, the new behavior is actually the desired one.

  Discussion: https://www.postgresql.org/message-id/560AA479.4080807@odoo.com
  https://www.postgresql.org/message-id/20151014164844.3019.25750@wrigleys.postgresql.org

  Backpatch to 9.3, where the problem appeared.
* Fix parsing NOT sequence in tsquery (Teodor Sigaev, 2016-07-15)
  Digging around bug #14245 I found that commit 6734a1cacd44f5b731933cbc93182b135b167d0c missed that the NOT operation is right-associative, unlike all the other operators. This omission is responsible for the tsquery parser failing on a sequence of NOT operations.
* Fix nested NOT operation cleanup in tsquery. (Teodor Sigaev, 2016-07-15)
  During normalization of the tsquery tree, we try to simplify nested NOT operations, but it was obviously missed there that the subsequent node could be a leaf node (value node).

  Bug #14245: Segfault on weird to_tsquery
  Reported by David Kellum.
* Adjust spellings of forms of "cancel" (Peter Eisentraut, 2016-07-14)
* Fix GiST index build for NaN values in geometric types. (Tom Lane, 2016-07-14)
  GiST index build could go into an infinite loop when presented with boxes (or points, circles or polygons) containing NaN component values. This happened essentially because the code assumed that x == x is true for any "double" value x; but it's not true for NaNs. The looping behavior was not the only problem though: we also attempted to sort the items using simple double comparisons. Since NaNs violate the trichotomy law, qsort could (in principle at least) get arbitrarily confused and mess up the sorting of ordinary values as well as NaNs. And we based splitting choices on box size calculations that could produce NaNs, again resulting in undesirable behavior.

  To fix, replace all comparisons of doubles in this logic with float8_cmp_internal, which is NaN-aware and is careful to sort NaNs consistently, higher than any non-NaN. Also rearrange the box size calculation to not produce NaNs; instead it should produce an infinity for a box with NaN on one side and not-NaN on the other.

  I don't by any means claim that this solves all problems with NaNs in geometric values, but it should at least make GiST index insertion work reliably with such data. It's likely that the index search side of things still needs some work, and probably regular geometric operations too. But with this patch we're laying down a convention for how such cases ought to behave.

  Per bug #14238 from Guang-Dih Lei. Back-patch to 9.2; the code used before commit 7f3bd86843e5aad8 is quite different and doesn't lock up on my simple test case, nor on the submitter's dataset.

  Report: <20160708151747.1426.60150@wrigleys.postgresql.org>
  Discussion: <28685.1468246504@sss.pgh.pa.us>
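A standalone sketch of a NaN-aware comparator in the spirit of float8_cmp_internal as described above (NaNs sort consistently, higher than any non-NaN), usable with qsort without violating the trichotomy law. Names and the example data are illustrative only.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * NaN-aware double comparator: NaNs compare equal to each other and
     * greater than every ordinary value, giving qsort a consistent order
     * even when NaNs are present.
     */
    static int
    nan_aware_cmp(const void *pa, const void *pb)
    {
        double a = *(const double *) pa;
        double b = *(const double *) pb;

        if (isnan(a))
            return isnan(b) ? 0 : 1;    /* NaN sorts above non-NaN */
        if (isnan(b))
            return -1;
        if (a < b)
            return -1;
        if (a > b)
            return 1;
        return 0;
    }

    int
    main(void)
    {
        double v[] = {2.0, NAN, -1.5, NAN, 0.0};
        int n = sizeof(v) / sizeof(v[0]);

        qsort(v, n, sizeof(double), nan_aware_cmp);
        for (int i = 0; i < n; i++)
            printf("%g ", v[i]);        /* -1.5 0 2 nan nan */
        printf("\n");
        return 0;
    }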
* Remove reference to range mode in pg_xlogdump error (Magnus Hagander, 2016-07-14)
  pg_xlogdump doesn't have any other mode, so it's just confusing to include this in the error message as it indicates there might be another mode.
* Minor test adjustment. (Tom Lane, 2016-07-13)
  Dept of second thoughts: given the RESET SESSION AUTHORIZATION that was just added by commit cec550139, we don't need the reconnection that used to be here. Might as well buy back a few microseconds.
* Add a regression test case to improve code coverage for tuplesort. (Tom Lane, 2016-07-13)
  Test the external-sort code path in CLUSTER for two different scenarios: multiple-pass external sorting, and the best case for replacement selection, where only one run is produced, so that no merge is required. This test would have caught the bug fixed in commit 1b0fc8507, at least when run with valgrind enabled.

  In passing, add a short-circuit test in plan_cluster_use_sort() to make dead certain that it selects sorting when enable_indexscan is off. As things stand, that would happen anyway, but it seems like good future proofing for this test.

  Peter Geoghegan
  Discussion: <CAM3SWZSgxehDkDMq1FdiW2A0Dxc79wH0hz1x-TnGy=1BXEL+nw@mail.gmail.com>
* Add serial comma and quoting to message (Peter Eisentraut, 2016-07-12)
* Put some things in a better order in psql help (Peter Eisentraut, 2016-07-12)
* Allow IMPORT FOREIGN SCHEMA within pl/pgsql. (Tom Lane, 2016-07-12)
  Since IMPORT FOREIGN SCHEMA has an INTO clause, pl/pgsql needs to be aware of that and avoid capturing the INTO as an INTO-variables clause. This isn't hard, though it's annoying to have to make IMPORT a plpgsql keyword just for this. (Fortunately, we have the infrastructure now to make it an unreserved keyword, so at least this shouldn't break any existing pl/pgsql code.)

  Per report from Merlin Moncure. Back-patch to 9.5 where IMPORT FOREIGN SCHEMA was introduced.

  Report: <CAHyXU0wpHf2bbtKGL1gtUEFATCY86r=VKxfcACVcTMQ70mCyig@mail.gmail.com>
* Print a given subplan only once in EXPLAIN. (Tom Lane, 2016-07-11)
  We have, for a very long time, allowed the same subplan (same member of the PlannedStmt.subplans list) to be referenced by more than one SubPlan node; this avoids problems for cases such as subplans within an IndexScan's indxqual and indxqualorig fields. However, EXPLAIN had not gotten the memo and would print each reference as though it were an independent identical subplan. To fix, track plan_ids of subplans we've printed and don't print the same plan_id twice.

  Per report from Pavel Stehule.

  BTW: the particular case of IndexScan didn't cause visible duplication in a plain EXPLAIN, only EXPLAIN ANALYZE, because in the former case we short-circuit executor startup before the indxqual field is processed by ExecInitExpr. That seems like it could easily lead to other EXPLAIN problems in future, but it's not clear how to avoid it without breaking the "EXPLAIN a plan using hypothetical indexes" use-case. For now I've left that issue alone.

  Although this is a longstanding bug, it's purely cosmetic (no great harm is done by the repeat printout) and we haven't had field complaints before. So I'm hesitant to back-patch it, especially since there is some small risk of ABI problems due to the need to add a new field to ExplainState.

  In passing, rearrange order of fields in ExplainState to be less random, and update some obsolete comments about when/where to initialize them.

  Report: <CAFj8pRAimq+NK-menjt+3J4-LFoodDD8Or6=Lc_stcFD+eD4DA@mail.gmail.com>
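A toy standalone illustration of the fix's bookkeeping, not the EXPLAIN code itself (all names here are made up): remember which plan_ids have already been printed and skip repeats, so a subplan referenced from several places is shown only once.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_PLANS 64

    static bool printed_plan_id[MAX_PLANS];

    /* Print a subplan's header only the first time its plan_id is seen. */
    static void
    print_subplan(int plan_id)
    {
        if (plan_id < 0 || plan_id >= MAX_PLANS || printed_plan_id[plan_id])
            return;                         /* already shown (or out of range) */
        printed_plan_id[plan_id] = true;
        printf("SubPlan %d\n", plan_id);
    }

    int
    main(void)
    {
        int refs[] = {1, 2, 1, 3, 2};       /* same subplan referenced twice */

        for (int i = 0; i < 5; i++)
            print_subplan(refs[i]);         /* prints SubPlan 1, 2, 3 once each */
        return 0;
    }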
* Improve output of psql's \df+ command. (Tom Lane, 2016-07-11)
  Add display of proparallel (parallel-safety) when the server is >= 9.6, and display of proacl (access privileges) for all server versions. Minor tweak of column ordering to keep related columns together.

  Michael Paquier
  Discussion: <CAB7nPqTR3Vu3xKOZOYqSm-+bSZV0kqgeGAXD6w5GLbkbfd5Q6w@mail.gmail.com>
* Add missing newline in error message (Magnus Hagander, 2016-07-11)
* Fix start WAL filename for concurrent backups from standby (Magnus Hagander, 2016-07-11)
  On a standby, ThisTimelineID is always 0, so we would generate a filename in timeline 0 even for other timelines. Instead, use starttli, which we have retrieved from the controlfile.

  Report by: Francesco Canovai in bug #14230
  Author: Marco Nenciarini
  Reviewed by: Michael Paquier and Amit Kapila
* Revert "Add some temporary code to record stack usage at server process exit." (Tom Lane, 2016-07-10)
  This reverts commit 88cf37d2a86d5b66380003d7c3384530e3f91e40 as well as follow-on commits ea9c4a16d5ad88a1d28d43ef458e3209b53eb106 and c57562725d219c4249b82f4a4fb5aaeee3ae0d53. We've learned about as much as we can from the buildfarm.
* Fix TAP tests and MSVC scripts for pathnames with spaces. (Tom Lane, 2016-07-09)
  Change assorted places in our Perl code that did things like

      system("prog $path/file");

  to do it more like

      system('prog', "$path/file");

  which is safe against spaces and other special characters in the path variable. The latter was already the prevailing style, but a few bits of code hadn't gotten this memo. Back-patch to 9.4 as relevant.

  Michael Paquier, Kyotaro Horiguchi
  Discussion: <20160704.160213.111134711.horiguchi.kyotaro@lab.ntt.co.jp>
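The same principle shown as a hedged standalone C sketch rather than the Perl from the commit (the path and command here are made up for illustration): passing one shell command string lets the shell split the path on spaces, while passing each argument separately hands the path through intact.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *path = "/tmp/dir with spaces/file";

        /* Fragile: a single shell string gets word-split on the spaces. */
        /* system("prog /tmp/dir with spaces/file"); */

        /* Safe: each argv element reaches the program unchanged. */
        pid_t pid = fork();

        if (pid == 0)
        {
            /* echo receives the path as one argv element; no shell splitting */
            execlp("echo", "echo", "one argument:", path, (char *) NULL);
            perror("execlp");
            _exit(1);
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);
        return 0;
    }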
* Improve recording of IA64 stack data. (Tom Lane, 2016-07-09)
  Examination of the results from anole and gharial suggests that we're only managing to track the size of one of the two stacks of IA64 machines. Some googling gave the answer: on HPUX11, the register stack is reported as a page type I don't see in pstat.h on my HPUX10 box. Let's try testing for that.
* Add more temporary code to record stack usage at server process exit. (Tom Lane, 2016-07-08)
  After a look at preliminary results from commit 88cf37d2a86d5b66, I realized it'd be a good idea to spew out the maximum depth measurement seen by check_stack_depth. So add some quick-n-dirty code to do that. Like the previous commit, this will be reverted once we've gathered a set of buildfarm runs with it.
* Add some temporary code to record stack usage at server process exit. (Tom Lane, 2016-07-08)
  This patch is meant to gather information from the buildfarm members, and will be reverted in a day or so.

  The idea is to try to find out the high-water stack consumption while running the regression tests, particularly on IA64 which is suspected to use much more stack than other architectures. On machines with pmap, we can use that; but the IA64 farm members are running HPUX, so also include some bespoke code for HPUX. (I've tested the latter on HPUX 10/HPPA; not entirely sure it will work on HPUX 11/IA64, but we'll soon find out.)

  Discussion: <CAM-w4HMwwcwaVvYcAH0_FGtG5GeXdYVRfvG81pXnSJWHnCfosQ@mail.gmail.com>
* Fix typo in comment. (Robert Haas, 2016-07-07)
  Amit Langote
* Properly adjust pointers when tuples are moved during CLUSTER. (Robert Haas, 2016-07-07)
  Otherwise, when we abandon incremental memory accounting and use batch allocation for the final merge pass, we might crash. This has been broken since 0011c0091e886b874e485a46ff2c94222ffbf550.

  Peter Geoghegan, tested by Noah Misch
* Fix a prototype which is inconsistent with the function definition. (Robert Haas, 2016-07-07)
  Peter Geoghegan
* Clarify resource utilization of parallel query. (Robert Haas, 2016-07-07)
  temp_file_limit is a per-process limit, not a per-session limit across all cooperating parallel processes; change wording accordingly, per a suggestion from Tom Lane.

  Also, document under max_parallel_workers_per_gather the fact that each process involved in a parallel query may use as many resources as a separate session. Caveat emptor.

  Per a complaint from Peter Geoghegan.
* Reduce stack space consumption in tzload(). (Tom Lane, 2016-07-07)
  While syncing our timezone code with IANA's updates in commit 1c1a7cbd6, I'd chosen not to adopt the code they conditionally compile under #ifdef ALL_STATE. The main thing that that drives is that the space for gmtime and localtime timezone definitions isn't statically allocated, but is malloc'd on first use. I reasoned we didn't need that logic: we don't have localtime() at all, and we always initialize TimeZone to GMT so we always need that one.

  But there is one other thing ALL_STATE does, which is to make tzload() malloc its transient workspace instead of just declaring it as a local variable. It turns out that that local variable occupies 78K. Even worse is that, at least for common US timezone settings, there's a recursive call to parse the "posixrules" zone name, making peak stack consumption to select a time zone upwards of 150K. That's an uncomfortably large fraction of our STACK_DEPTH_SLOP safety margin, and could result in outright crashes if we try to reduce STACK_DEPTH_SLOP as has been discussed recently. Furthermore, this means that the postmaster's peak stack consumption is several times that of a backend running typical queries (since, except on Windows, backends inherit the timezone GUC values and don't ever run this code themselves unless you do SET TIMEZONE). That's completely backwards from a safety perspective.

  Hence, adopt the ALL_STATE rather than non-ALL_STATE variant of tzload(), while not changing the other code aspects that symbol controls. The risk of an ENOMEM error from malloc() seems less than that of a SIGSEGV from stack overrun.

  This should probably get back-patched along with 1c1a7cbd6 and follow-on fixes, whenever we decide we have enough confidence in the updates to do that.
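A minimal sketch of the trade-off described above, with invented names and the workspace size taken from the message: moving a large transient workspace from the stack to malloc turns a potential stack overrun into a reportable out-of-memory error, which matters when the function can recurse.

    #include <stdio.h>
    #include <stdlib.h>

    #define WORKSPACE_BYTES (78 * 1024)     /* roughly the size cited above */

    /*
     * Illustration only: a malloc'd workspace costs an allocation but keeps
     * peak stack usage small, whereas a local array of the same size would
     * occupy the stack for the whole (possibly recursive) call.
     */
    static int
    load_with_heap_workspace(void)
    {
        char *workspace = malloc(WORKSPACE_BYTES);

        if (workspace == NULL)
            return -1;                      /* ENOMEM instead of a stack overrun */

        /* ... parse data into the workspace here ... */
        workspace[0] = 0;

        free(workspace);
        return 0;
    }

    int
    main(void)
    {
        return load_with_heap_workspace() == 0 ? 0 : 1;
    }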
* Rename pg_stat_wal_receiver.conn_info to conninfo. (Fujii Masao, 2016-07-07)
  Per discussion on pgsql-hackers, conninfo is better as the column name because it's more commonly used in PostgreSQL.

  Catalog version bumped due to the change of pg_proc.

  Author: Michael Paquier
* Fix typos (Peter Eisentraut, 2016-07-06)
* Fix typo in comment. (Fujii Masao, 2016-07-06)
  Author: Masahiko Sawada
* Fix failure to handle conflicts in non-arbiter exclusion constraints. (Tom Lane, 2016-07-04)
  ExecInsertIndexTuples treated an exclusion constraint as subject to noDupErr processing even when it was not listed in arbiterIndexes, and would therefore not error out for a conflict in such a constraint, instead returning it as an arbiter-index failure. That led to an infinite loop in ExecInsert, since ExecCheckIndexConstraints ignored the index as-intended and therefore didn't throw the expected error.

  To fix, make the exclusion constraint code path use the same condition as the index_insert call does to decide whether no-error-for-duplicates behavior is appropriate. While at it, refactor a little bit to avoid unnecessary list_member_oid calls. (That surely wouldn't save anything worth noticing, but I find the code a bit clearer this way.)

  Per bug report from Heikki Rauhala. Back-patch to 9.5 where ON CONFLICT was introduced.

  Report: <4C976D6B-76B4-434C-8052-D009F7B7AEDA@reaktor.fi>
* Typo fix. (Tom Lane, 2016-07-03)
* Allow RTE_SUBQUERY rels to be considered parallel-safe. (Tom Lane, 2016-07-03)
  There isn't really any reason not to; the original comments here were partly confused about subplans versus subquery-in-FROM, and partly dependent on restrictions that no longer apply now that subqueries return Paths not Plans. Depending on what's inside the subquery, it might fail to produce any parallel_safe Paths, but that's fine.

  Tom Lane and Robert Haas
* Fix up parallel-safety marking for appendrels. (Tom Lane, 2016-07-03)
  The previous coding assumed that the value derived by set_rel_consider_parallel() for an appendrel parent would be accurate for all the appendrel's children; but this is not so, for example because one child might scan a temp table. Instead, apply set_rel_consider_parallel() to each child rel as well as the parent, and then take the AND of the results as controlling parallel safety for the appendrel as a whole. (We might someday be able to deal more intelligently than this with cases in which some of the childrels are parallel-safe and others not, but that's for later.)

  Robert Haas and Tom Lane
* Allow treating TABLESAMPLE scans as parallel-safe. (Tom Lane, 2016-07-03)
  This was the intention all along, but an extraneous "return;" in set_rel_consider_parallel() caused sampled rels to never be marked consider_parallel. Since we don't have any partial tablesample path/plan type yet, there's no possibility of parallelizing the sample scan itself; but this fix allows such a scan to appear below a parallel join, for example.