* createdb: Fix quoting of --encoding, --lc-ctype and --lc-collate (Michael Paquier, 2020-02-27)

  The original coding failed to properly quote those arguments, leading to failures when using quotes in the values used. As the quoting can be encoding-sensitive, the connection to the backend needs to be taken before applying the correct quoting.

  Author: Michael Paquier
  Reviewed-by: Daniel Gustafsson
  Discussion: https://postgr.es/m/20200214041004.GB1998@paquier.xyz
  Backpatch-through: 9.5
* Fix docs regarding AFTER triggers on partitioned tables (Alvaro Herrera, 2020-02-26)

  In commit 86f575948c77 I forgot to update the trigger.sgml paragraph that needs to explain that AFTER triggers are allowed in partitioned tables. Do so now.

  Discussion: https://postgr.es/m/20200224185850.GA30899@alvherre.pgsql
* Add prefix checks in exclude lists for pg_rewind, pg_checksums and base backups (Michael Paquier, 2020-02-24)

  An instance of PostgreSQL crashing with a bad timing could leave behind temporary pg_internal.init files, potentially causing failures when verifying checksums. As the same exclusion lists are used between pg_rewind, pg_checksums and basebackup.c, all those tools are extended with prefix checks to keep everything in sync, with dedicated checks added for pg_internal.init.

  Backpatch down to 11, where pg_checksums (pg_verify_checksums in 11) and checksum verification for base backups have been introduced.

  Reported-by: Michael Banck
  Author: Michael Paquier
  Reviewed-by: Kyotaro Horiguchi, David Steele
  Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
  Backpatch-through: 11
* Remove extra word from comment. (Etsuro Fujita, 2020-02-20)
* Doc: discourage use of partial indexes for poor-man's-partitioning. (Tom Lane, 2020-02-19)

  Creating a bunch of non-overlapping partial indexes is generally a bad idea, so add an example saying not to do that.

  Back-patch to v10. Before that, the alternative of using (real) partitioning wasn't available, so that the tradeoff isn't quite so clear cut.

  Discussion: https://postgr.es/m/CAKVFrvFY-f7kgwMRMiPLbPYMmgjc8Y2jjUGK_Y0HVcYAmU6ymg@mail.gmail.com
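  For illustration, a minimal sketch of the discouraged pattern described above (hypothetical table and index names):

      CREATE TABLE measurements (logdate date, peaktemp int);
      -- Poor-man's partitioning: a set of non-overlapping partial indexes.
      CREATE INDEX meas_2019 ON measurements (logdate)
          WHERE logdate >= '2019-01-01' AND logdate < '2020-01-01';
      CREATE INDEX meas_2020 ON measurements (logdate)
          WHERE logdate >= '2020-01-01' AND logdate < '2021-01-01';
      -- The docs now advise against this; declarative partitioning (v10+) is the better tool.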
* Fix typo (Peter Eisentraut, 2020-02-19)

  Reported-by: Daniel Verite <daniel@manitou-mail.org>
* Fix confusion about event trigger vs. plain function in plpgsql. (Tom Lane, 2020-02-19)

  The function hash table keys made by compute_function_hashkey() failed to distinguish event-trigger call context from regular call context. This meant that once we'd successfully made a hash entry for an event trigger (either by validation, or by normal use as an event trigger), an attempt to call the trigger function as a plain function would find this hash entry and thereby bypass the you-can't-do-that check in do_compile(). Thus we'd attempt to execute the function, leading to strange errors or even crashes, depending on function contents and server version.

  To fix, add an isEventTrigger field to PLpgSQL_func_hashkey, paralleling the longstanding infrastructure for regular triggers. This fits into what had been pad space, so there's no risk of an ABI break, even assuming that any third-party code is looking at these hash keys. (I considered replacing isTrigger with a PLpgSQL_trigtype enum field, but felt that that carried some API/ABI risk. Maybe we should change it in HEAD though.)

  Per bug #16266 from Alexander Lakhin. This has been broken since event triggers were invented, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/16266-fcd7f838e97ba5d4@postgresql.org
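  A rough sketch of the call pattern that must be rejected (hypothetical function and trigger names; use EXECUTE PROCEDURE on branches before v11):

      CREATE FUNCTION log_ddl() RETURNS event_trigger LANGUAGE plpgsql AS $$
      BEGIN
          RAISE NOTICE 'DDL command: %', tg_tag;
      END $$;
      CREATE EVENT TRIGGER log_ddl_trig ON ddl_command_end EXECUTE FUNCTION log_ddl();
      -- Calling the function directly must be rejected; before the fix, once the
      -- event trigger had been compiled, a direct call could slip past the check.
      SELECT log_ddl();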
* Fix measurement of elapsed time during truncating heap in VACUUM. (Fujii Masao, 2020-02-19)

  VACUUM may truncate heap in several batches. The activity report is logged for each batch, and contains the number of pages in the table before and after the truncation, and also the elapsed time during the truncation. Previously the elapsed time reported in each batch was the total elapsed time since starting the truncation until finishing that batch. For example, if the truncation was processed in three batches, the second batch reported the accumulated time elapsed during both the first and second batches. This is strange and confusing because the number of pages in the table reported alongside is not cumulative. Instead, each batch should report the time elapsed during only that batch.

  The cause of this issue was that the resource usage snapshot was initialized only at the beginning of the truncation and was never reset later. This commit fixes the issue by changing VACUUM so that the resource usage snapshot is reset at each batch.

  Back-patch to all supported branches.

  Reported-by: Tatsuhito Kasahara
  Author: Tatsuhito Kasahara
  Reviewed-by: Masahiko Sawada, Fujii Masao
  Discussion: https://postgr.es/m/CAP0=ZVJsf=NvQuy+QXQZ7B=ZVLoDV_JzsVC1FRsF1G18i3zMGg@mail.gmail.com
* Stop demanding that top xact must be seen before subxact in decoding. (Amit Kapila, 2020-02-19)

  Manifested as

    ERROR: subtransaction logged without previous top-level txn record

  this check forbids legit behaviours like
   - First xl_xact_assignment record is beyond reading, i.e. earlier than restart_lsn.
   - After restart_lsn there is some change of a subxact.
   - After that, there is a second xl_xact_assignment (for another subxact) revealing the relationship between top and first subxact.

  Such a transaction won't be streamed anyway because we hadn't seen it in full. Saying for sure whether the xact of some record encountered after the snapshot was deserialized can be streamed or not requires knowing whether it wrote something before the deserialization point -- if yes, it hasn't been seen in full and can't be decoded. The snapshot doesn't have such info, so there is no easy way to relax the check.

  Reported-by: Hsu, John
  Diagnosed-by: Arseny Sher
  Author: Arseny Sher, Amit Kapila
  Reviewed-by: Amit Kapila, Dilip Kumar
  Backpatch-through: 9.5
  Discussion: https://postgr.es/m/AB5978B2-1772-4FEE-A245-74C91704ECB0@amazon.com
* Teach pg_dump to dump comments on RLS policy objects. (Tom Lane, 2020-02-17)

  This was unaccountably omitted in the original RLS patch. The SQL syntax is basically the same as for comments on triggers, so crib code from dumpTrigger().

  Per report from Marc Munro. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/1581889298.18009.15.camel@bloodnok.com
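  A hypothetical example of the kind of object comment that pg_dump previously omitted:

      CREATE TABLE accounts (id int, owner text);
      ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
      CREATE POLICY own_rows ON accounts USING (owner = current_user);
      COMMENT ON POLICY own_rows ON accounts IS 'Users may only see their own rows';
      -- pg_dump output now includes the COMMENT ON POLICY statement as well.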
* Add description about LogicalRewriteTruncate wait event into document. (Fujii Masao, 2020-02-17)

  Back-patch to v10 where commit 249cf070e3 introduced the LogicalRewriteTruncate wait event.

  Author: Fujii Masao
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/949931aa-4ed4-d867-a7b5-de9c02b2292b@oss.nttdata.com
* Doc: fix old oversights in GRANT/REVOKE documentation. (Tom Lane, 2020-02-12)

  The GRANTED BY clause in GRANT/REVOKE ROLE has been there since 2005 but was never documented. I'm not sure now whether that was just an oversight or was intentional (given the limited capability of the option). But seeing that pg_dumpall does emit code that uses this option, it seems like not documenting it at all is a bad idea.

  Also, when we upgraded the syntax to allow CURRENT_USER/SESSION_USER as the privilege recipient, the role form of GRANT was incorrectly not modified to show that, and REVOKE's docs weren't touched at all.

  Although I'm not that excited about GRANTED BY, the other oversight seems serious enough to justify a back-patch.

  Discussion: https://postgr.es/m/3070.1581526786@sss.pgh.pa.us
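  A sketch of the now-documented syntax (hypothetical role names):

      CREATE ROLE reporting;
      CREATE ROLE alice LOGIN;
      -- Role form of GRANT with the GRANTED BY clause; CURRENT_USER and
      -- SESSION_USER are also accepted as the recipient.
      GRANT reporting TO alice GRANTED BY CURRENT_USER;
      REVOKE reporting FROM alice;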
* Document the pg_upgrade -j/--jobs option as taking an argument (Peter Eisentraut, 2020-02-11)
* Stamp 11.7. (Tom Lane, 2020-02-10) [tag: REL_11_7]
* Last-minute updates for release notes. (Tom Lane, 2020-02-10)

  Security: CVE-2020-1720
* createuser: fix parsing of --connection-limit argument (Alvaro Herrera, 2020-02-10)

  The original coding failed to quote the argument properly.

  Reported-by: Daniel Gustafsson
  Discussion: 1B8AE66C-85AB-4728-9BB4-612E8E61C219@yesql.se
* Fix priv checks for ALTER <object> DEPENDS ON EXTENSION (Alvaro Herrera, 2020-02-10)

  Marking an object as dependent on an extension did not have any privilege check whatsoever; this allowed any user to mark objects as droppable by anyone able to DROP EXTENSION, which could be used to cause system-wide havoc. Disallow by checking that the calling user owns the mentioned object. (No constraints are placed on the extension.)

  Security: CVE-2020-1720
  Reported-by: Tom Lane
  Discussion: 31605.1566429043@sss.pgh.pa.us
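  A sketch of the operation now subject to an ownership check (hypothetical function name; any installed extension works):

      CREATE FUNCTION my_helper() RETURNS int LANGUAGE sql AS 'SELECT 1';
      -- After the fix this requires owning my_helper(); previously any user could
      -- mark the object so that DROP EXTENSION would automatically drop it.
      ALTER FUNCTION my_helper() DEPENDS ON EXTENSION pg_stat_statements;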
* Translation updates (Peter Eisentraut, 2020-02-10)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 85c682262712155b8026c05a3b09066e85a6af98
* Revert "pg_upgrade: Fix quoting of some arguments in pg_ctl command"Michael Paquier2020-02-10
| | | | | | | This reverts commit d1c0b61. The patch has some downsides that require more attention, as discussed with Noah Misch. Backpatch-through: 9.5
* doc: Spell checking (Amit Kapila, 2020-02-10)

  Reported-by: Justin Pryzby
  Author: Justin Pryzby
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/20200206021432.GA24549@telsasoft.com
* pg_upgrade: Fix quoting of some arguments in pg_ctl command (Michael Paquier, 2020-02-10)

  The previous coding forgot to apply shell quoting to the socket directory and the data folder, leading to failures when running pg_upgrade. This refactors the code generating the pg_ctl command that starts clusters to use more correct shell quoting. Failures are easier to trigger in 12 and newer versions by using a value of --socketdir that includes quotes, but it is also possible to cause failures with quotes included in the default socket directory used by pg_upgrade or the data folders of the clusters involved in the upgrade.

  As 9.4 is going to be EOL'd with the next minor release, nobody is likely to upgrade to it now, so that branch is not included in the set of branches fixed.

  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera, Noah Misch
  Backpatch-through: 9.5
* Revert "docs: change "default role" wording to "predefined role""Tom Lane2020-02-09
| | | | | | | | | This reverts commit 749f702d13de750b0adaf0aedbcc3c01f078c0c2. Per discussion, we can't change the section title without some web-site work, so revert this change temporarily. Discussion: https://postgr.es/m/157742545062.1149.11052653770497832538@wrigleys.postgresql.org
* Release notes for 12.2, 11.7, 10.12, 9.6.17, 9.5.21, 9.4.26. (Tom Lane, 2020-02-09)
* Store the deletion horizon XID for a deleted GIN page on the right page. (Tom Lane, 2020-02-09)

  Commit b10714080 moved the GinPageSetDeleteXid() call to a spot where the "page" variable was pointing to the wrong page, causing the XID to be inserted on a page that's not being deleted, thus allowing later GinPageIsRecyclable tests to recycle the deleted page too soon.

  It might be a good idea to stop using the single "page" variable for multiple purposes in this function. But for the moment I just moved the GinPageSetDeleteXid() call down beside the GinPageSetDeleted() call, which seems like a more logical place for it anyway.

  Back-patch to v11, as the faulty patch was. (Fortunately, the bug hasn't made it into any release yet.)

  Discussion: https://postgr.es/m/21620.1581098806@sss.pgh.pa.us
* Add note about access permission checks by inherited TRUNCATE and LOCK TABLE. (Fujii Masao, 2020-02-07)

  Inherited queries perform access permission checks on the parent table only. But there are two exceptions to this rule in v12 or before: TRUNCATE and LOCK TABLE commands through a parent table check the permissions on not only the parent table but also the children tables. Previously these exceptions were not documented. This commit adds a note about these exceptions to the documentation.

  Back-patch to v9.4. But we don't apply this commit to the master branch because commit e6f1e560e4 already got rid of the exception about inherited TRUNCATE, and an upcoming commit will do so for the exception about inherited LOCK TABLE.

  Author: Amit Langote
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CA+HiwqHfTnMU6SUkyHxCmpHUKk7ERLHCR3vZVq19ZOQBjPBLmQ@mail.gmail.com
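  A sketch of the documented exception (hypothetical tables and role):

      CREATE TABLE parent (id int);
      CREATE TABLE child () INHERITS (parent);
      GRANT TRUNCATE ON parent TO app_user;     -- no grant on child
      SET ROLE app_user;
      -- In v12 and earlier this also checks TRUNCATE privilege on child and
      -- therefore fails; the same exception applies to LOCK TABLE.
      TRUNCATE parent;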
* Fix typo. (Amit Kapila, 2020-02-06)

  Reported-by: Amit Langote
  Author: Amit Langote
  Backpatch-through: 9.6, where it was introduced
  Discussion: https://postgr.es/m/CA+HiwqFNADeukaaGRmTqANbed9Fd81gLi08AWe_F86_942Gspw@mail.gmail.com
* Fix bug in LWLock statistics mechanism. (Fujii Masao, 2020-02-06)

  Previously PostgreSQL built with -DLWLOCK_STATS could report more than one LWLock statistics entry for the same backend process and the same LWLock. This is strange and only one statistics entry should be output in that case, instead.

  The cause of this issue is that the key variable used for the LWLock stats hash table was not fully initialized. The key consists of two fields and they were initialized. But the following 4 bytes allocated in the key variable for alignment were not initialized. So even if the same key was specified, hash_search(HASH_ENTER) could not find the existing entry for that key and created a new one. This commit fixes this issue by initializing the key variable with zero.

  As a side effect of this commit, the volume of LWLock statistics output would be reduced very much.

  Back-patch to v10, where commit 3761fe3c20 introduced the issue.

  Author: Fujii Masao
  Reviewed-by: Julien Rouhaud, Kyotaro Horiguchi
  Discussion: https://postgr.es/m/26359edb-798a-568f-d93a-6aafac49752d@oss.nttdata.com
* Force tuple conversion when the source has missing attributes. (Andrew Gierth, 2020-02-05)

  Tuple conversion incorrectly concluded that no conversion was needed as long as all the attributes lined up. But if the source tuple has a missing attribute (from addition of a column with default), then the destination tupdesc might not reflect the same default. The typical symptom was that the affected columns would be unexpectedly NULL.

  Repair by always forcing conversion if the source has missing attributes, which will be filled in by the deform operation. (In theory we could optimize for when the destination has the same default, but that seemed overkill.)

  Backpatch to 11 where missing attributes were added.

  Per bug #16242. Vik Fearing (discovery, code, testing) and me (analysis, testcase).

  Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org
* ALTER SUBSCRIPTION / REFRESH docs: explain copy_data (Alvaro Herrera, 2020-02-05)

  The docs are ambiguous as to which tables would be copied over when the copy_data parameter is true in ALTER SUBSCRIPTION ... REFRESH PUBLICATION. Make it clear that it only applies to tables which are new in the publication.

  Author: David Christensen (reword by Álvaro Herrera)
  Discussion: https://postgr.es/m/95339420-7F09-4F8C-ACC0-8F1CFAAD9CD7@endpoint.com
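  A sketch of the clarified behavior (hypothetical publication, subscription and table names):

      -- On the publisher:
      ALTER PUBLICATION my_pub ADD TABLE new_table;
      -- On the subscriber: copy_data applies only to tables newly added to the
      -- publication, not to tables that were already being replicated.
      ALTER SUBSCRIPTION my_sub REFRESH PUBLICATION WITH (copy_data = true);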
* When a TAP file has non-zero exit status, retain temporary directories. (Noah Misch, 2020-02-05)

  PostgresNode already retained base directories in such cases. Stop using $SIG{__DIE__}, which is redundant with the exit status check, in lieu of proliferating it to TestLib. Back-patch to 9.6, where commit 88802e068017bee8cea7a5502a712794e761c7b5 introduced retention on failure.

  Reviewed by Daniel Gustafsson.

  Discussion: https://postgr.es/m/20200202170155.GA3264196@rfd.leadboat.com
* Add note about how each partition's default value is treated, into the doc. (Fujii Masao, 2020-02-05)

  Column defaults may be specified separately for each partition. But INSERT via a partitioned table ignores the partitions' default values. The former is documented, but the latter restriction is not. This commit adds a note about that restriction to the documentation.

  Back-patch to v10 where partitioning was introduced.

  Author: Fujii Masao
  Reviewed-by: Amit Langote
  Discussion: https://postgr.es/m/CAHGQGwEs-59omrfGF7hOHz9iMME3RbKy5ny+iftDx3LHTEn9sA@mail.gmail.com
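  A sketch of the documented restriction (hypothetical table names):

      CREATE TABLE events (id int, region text) PARTITION BY LIST (region);
      CREATE TABLE events_eu PARTITION OF events (id DEFAULT 42) FOR VALUES IN ('eu');
      INSERT INTO events (region) VALUES ('eu');     -- routed via parent: id is NULL, partition default ignored
      INSERT INTO events_eu (region) VALUES ('eu');  -- direct insert: id is 42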
* Handle lack of DSM slots in parallel btree build, take 2. (Thomas Munro, 2020-02-05)

  Commit 74618e77 added a new check intended to fix a bug, but put it in the wrong place so that parallel btree build was always disabled. Do the check after we've actually tried to create a DSM segment.

  Back-patch to 11, like the earlier commit.

  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-WzmDABkJzrNnvf%2BOULK-_A_j9gkYg_Dz-H62jzNv4eKQTw%40mail.gmail.com
* Fix handling of "Subplans Removed" field in EXPLAIN output. (Tom Lane, 2020-02-04)

  Commit 499be013d added this field in a rather poorly-thought-through manner, with the result being that rather than being a field of the Append or MergeAppend plan node as intended (and as it seems to be, in text format), it was actually an element of the "Plans" subgroup. At least in JSON format, that's flat out invalid syntax, because "Plans" is an array not an object.

  While it's not hard to move the generation of the field so that it appears where it's supposed to, this does result in a visible change in field order in text format, in cases where an Append or MergeAppend plan node has any InitPlans attached. That's slightly annoying to do in stable branches; but the alternative of continuing to emit broken non-text formats seems worse.

  Also, since the set of fields emitted is not supposed to be data-dependent in non-text formats, make sure that "Subplans Removed" appears in Append and MergeAppend nodes even when it's zero, in those formats. (The previous coding made it look like it could appear in some other node types such as BitmapAnd, but we don't actually support runtime pruning there, so don't emit it in those cases.)

  Per bug #16171 from Mahadevan Ramachandran. Fix by Daniel Gustafsson and Tom Lane, reviewed by Hamid Akhtar. Back-patch to v11 where this code came in.

  Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org
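  A rough sketch of how the corrected field can be observed (hypothetical tables; run-time pruning requires a generic plan, e.g. plan_cache_mode = force_generic_plan on v12, or repeated EXECUTEs on v11):

      CREATE TABLE t (d date) PARTITION BY RANGE (d);
      CREATE TABLE t_2019 PARTITION OF t FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
      CREATE TABLE t_2020 PARTITION OF t FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
      PREPARE q(date) AS SELECT * FROM t WHERE d = $1;
      -- With the fix, "Subplans Removed" is a field of the Append node itself in
      -- JSON output (and is emitted even when zero), not an element of "Plans".
      EXPLAIN (FORMAT JSON) EXECUTE q('2020-02-04');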
* Add missing break out seqscan loop in logical replication (Alvaro Herrera, 2020-02-03)

  When replica identity is FULL (an admittedly unusual case), the loop that searches for tuples in execReplication.c didn't stop scanning the table once a matching tuple was found. Add the missing 'break'.

  Note slight behavior change: we now return the first matching tuple rather than the last one. They are supposed to be indistinguishable anyway, so this shouldn't matter.

  Author: Konstantin Knizhnik
  Discussion: https://postgr.es/m/379743f6-ae91-b866-f7a2-5624e6d2b0a4@postgrespro.ru
* Revert commit a5b652f3a0. (Fujii Masao, 2020-02-03)

  This commit reverts the fix "Make inherited TRUNCATE perform access permission checks on parent table only" only in the back branches.

  It's not hard to imagine that there are some applications expecting the old behavior and that the fix breaks their security. To avoid this compatibility problem, we decided to apply the fix only in HEAD and revert it in all supported back branches.

  Discussion: https://postgr.es/m/21015.1580400165@sss.pgh.pa.us
* Fix memory leak on DSM slot exhaustion. (Thomas Munro, 2020-02-01)

  If we attempt to create a DSM segment when no slots are available, we should return the memory to the operating system. Previously we did that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was passed in, but we didn't do it if an error was raised. Repair.

  Back-patch to 9.4, where DSM segments arrived.

  Author: Thomas Munro
  Reviewed-by: Robert Haas
  Reported-by: Julian Backes
  Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
* Fix CheckAttributeType's handling of collations for ranges. (Tom Lane, 2020-01-31)

  Commit fc7695891 changed CheckAttributeType to recurse into ranges, but made it pass down the wrong collation (always InvalidOid, since ranges as such have no collation). This would result in guaranteed failure when considering a range type whose subtype is collatable.

  Embarrassingly, we lack any regression tests that would expose such a problem (but fortunately, somebody noticed before we shipped this bug in any release). Fix it to pass down the range's subtype collation property instead, and add some regression test cases to exercise collatable-subtype ranges a bit more.

  Back-patch to all supported branches, as the previous patch was.

  Report and patch by Julien Rouhaud, test cases tweaked by me

  Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
* Fix parallel pg_dump/pg_restore for failure to create worker processes. (Tom Lane, 2020-01-31)

  If we failed to fork a worker process, or create a communication pipe for one, WaitForTerminatingWorkers would suffer an assertion failure if assert-enabled, otherwise crash or go into an infinite loop. This was a consequence of not accounting for the startup condition where we've not yet forked all the workers.

  The original bug was that ParallelBackupStart would set workerStatus to WRKR_IDLE before it had successfully forked a worker. I made things worse in commit b7b8cc0cf by not understanding the undocumented fact that the WRKR_TERMINATED state was also meant to represent the case where a worker hadn't been started yet: I changed enum T_WorkerStatus so that *all* the worker slots were initially in WRKR_IDLE state. But this wasn't any more broken in practice, since even one slot in the wrong state would keep WaitForTerminatingWorkers from terminating.

  In v10 and later, introduce an explicit T_WorkerStatus value for worker-not-started, in hopes of preventing future oversights of the same ilk. Before that, just document that WRKR_TERMINATED is supposed to cover that case (partly because it wasn't actively broken, and partly because the enum is exposed outside parallel.c in those branches, so there's microscopically more risk involved in changing it). In all branches, introduce a WORKER_IS_RUNNING status test macro to hide which T_WorkerStatus values mean that, and be more careful not to access ParallelSlot fields till we're sure they're valid.

  Per report from Vignesh C, though this is my patch not his. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CALDaNm1Luv-E3sarR+-unz-BjchquHHyfP+YC+2FS2pt_J+wxg@mail.gmail.com
* Fix typo in recently-added TAP test for replication slots (Michael Paquier, 2020-01-31)

  Oversight in commit b0afdca.
* In jsonb_plpython.c, suppress warning message from gcc 10. (Tom Lane, 2020-01-30)

  Very recent gcc complains that PLyObject_ToJsonbValue could return a pointer to a local variable. I think it's wrong; but the coding is fragile enough, and the savings of one palloc() minimal enough, that it seems better to just do a palloc() all the time. (My other idea of tweaking the if-condition doesn't suppress the warning.)

  Back-patch to v11 where this code was introduced.

  Discussion: https://postgr.es/m/21547.1580170366@sss.pgh.pa.us
* Handle lack of DSM slots in parallel btree build. (Thomas Munro, 2020-01-31)

  If no DSM slots are available, a ParallelContext can still be created, but its seg pointer is NULL. Teach parallel btree build to cope with that by falling back to a regular non-parallel build, to avoid crashing with a segmentation fault.

  Back-patch to 11, where parallel CREATE INDEX landed.

  Reported-by: Nicola Contu
  Reviewed-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CA%2BhUKGJgJEBnkuODBVomyK3MWFvDBbMVj%3Dgdt6DnRPU-5sQ6UQ%40mail.gmail.com
* Make inherited TRUNCATE perform access permission checks on parent table only. (Fujii Masao, 2020-01-31)

  Previously, a TRUNCATE command through a parent table checked the permissions on not only the parent table but also the children tables inherited from it. This was a bug and inherited queries should perform access permission checks on the parent table only. This commit fixes that bug.

  Back-patch to all supported branches.

  Author: Amit Langote
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com
* Fix slot data persistency when advancing physical replication slots (Michael Paquier, 2020-01-30)

  Advancing a physical replication slot with pg_replication_slot_advance() did not mark the slot as dirty if any advancing was done, preventing the follow-up checkpoint from flushing the slot data to disk. This caused the advancing to be lost even on clean restarts. This does not happen for logical slots as any advancing marked the slot as dirty. Per discussion, the original feature has been implemented so that, in the event of a crash, the slot may move backwards to a past LSN. This property is kept and more documentation is added about that.

  This commit adds some new TAP tests to check the persistency of physical and logical slots after advancing across clean restarts.

  Author: Alexey Kondratov, Michael Paquier
  Reviewed-by: Andres Freund, Kyotaro Horiguchi, Craig Ringer
  Discussion: https://postgr.es/m/059cc53a-8b14-653a-a24d-5f867503b0ee@postgrespro.ru
  Backpatch-through: 11
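  A minimal sketch of the behavior covered by the new TAP tests (hypothetical slot name):

      SELECT pg_create_physical_replication_slot('my_slot', true);  -- reserve WAL immediately
      SELECT pg_replication_slot_advance('my_slot', pg_current_wal_lsn());
      -- Before the fix, the advanced restart_lsn of a physical slot could revert
      -- after a clean restart because the slot was never marked dirty.
      SELECT slot_name, restart_lsn FROM pg_replication_slots;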
* Avoid unnecessary shm writes in Parallel Hash Join. (Thomas Munro, 2020-01-27)

  Currently, Parallel Hash Join cannot be used for full/right joins, so there is no point in setting the match flag. It turns out that the cache coherence traffic generated by those writes slows down large systems running many-core joins, so let's stop doing that.

  In future, if we need to use match bits in parallel joins, we might want to consider setting them only if not already set.

  Back-patch to 11, where Parallel Hash Join arrived.

  Reported-by: Deng, Gang
  Discussion: https://postgr.es/m/0F44E799048C4849BAE4B91012DB910462E9897A%40SHSMSX103.ccr.corp.intel.com
* In postgres_fdw, don't try to ship MULTIEXPR updates to remote server. (Tom Lane, 2020-01-26)

  In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)", we'd conclude that the statement could be directly executed remotely, because the sub-SELECT is in a resjunk tlist item that's not examined for shippability. Currently that ends up crashing if the sub-SELECT contains any remote Vars. Prevent the crash by deeming MULTIEXPR Params to be unshippable.

  This is a bit of a brute-force solution, since if the sub-SELECT *doesn't* contain any remote Vars, the current execution technology would work; but that's not a terribly common use-case for this syntax, I think. In any case, we generally don't try to ship sub-SELECTs, so it won't surprise anybody that this doesn't end up as a remote direct update. I'd be inclined to see if that general limitation can be fixed before worrying about this case further.

  Per report from Lukáš Sobotka.

  Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform remote direct updates then, so the case didn't arise anyway.

  Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
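  A sketch of the statement shape in question (hypothetical foreign server and table; connection options omitted):

      CREATE EXTENSION postgres_fdw;
      CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw;
      CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv;
      CREATE FOREIGN TABLE remote_tab (id int, x int, y int) SERVER remote_srv;
      -- MULTIEXPR update whose sub-SELECT references remote columns; previously
      -- this could crash, now it is executed as a non-direct remote update.
      UPDATE remote_tab SET (x, y) = (SELECT x + 1, y + 1) WHERE id = 42;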
* Doc: Fix list of storage parameters available for ALTER TABLE (Michael Paquier, 2020-01-24)

  Only the parameter parallel_workers can be used directly with ALTER TABLE. Issue introduced in 6f3a13f, so backpatch down to 10.

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20200106025623.GA12066@telsasoft.com
  Backpatch-through: 10
* Fix an oversight in commit 4c70098ff. (Tom Lane, 2020-01-23)

  I had supposed that the from_char_seq_search() call sites were all passing the constant arrays you'd expect them to pass ... but on looking closer, the one for DY format was passing the days[] array not days_short[]. This accidentally worked because the day abbreviations in English are all the same as the first three letters of the full day names. However, once we took out the "maximum comparison length" logic, it stopped working.

  As penance for that oversight, add regression test cases covering this, as well as every other switch case in DCH_from_char() that was not reached according to the code coverage report.

  Also, fold the DCH_RM and DCH_rm cases into one --- now that seq_search is case independent, there's no need to pass different comparison arrays for those cases.

  Back-patch, as the previous commit was.
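  A small example exercising the DY case that the new regression tests cover (output value illustrative only):

      -- 'DY' must match day-name abbreviations; this only worked by accident
      -- before, because English abbreviations equal the first three letters of
      -- the full day names in days[].
      SELECT to_date('Tue 18 Feb 2020', 'DY DD Mon YYYY');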
* Clean up formatting.c's logic for matching constant strings. (Tom Lane, 2020-01-23)

  seq_search(), which is used to match input substrings to constants such as month and day names, had a lot of bizarre and unnecessary behaviors. It was mostly possible to avert our eyes from that before, but we don't want to duplicate those behaviors in the upcoming patch to allow recognition of non-English month and day names. So it's time to clean this up. In particular:

  * seq_search scribbled on the input string, which is a pretty dangerous thing to do, especially in the badly underdocumented way it was done here. Fortunately the input string is a temporary copy, but that was being made three subroutine levels away, making it something easy to break accidentally. The behavior is externally visible nonetheless, in the form of odd case-folding in error reports about unrecognized month/day names. The scribbling is evidently being done to save a few calls to pg_tolower, but that's such a cheap function (at least for ASCII data) that it's pretty pointless to worry about. In HEAD I switched it to be pg_ascii_tolower to ensure it is cheap in all cases; but there are corner cases in Turkish where this'd change behavior, so leave it as pg_tolower in the back branches.

  * seq_search insisted on knowing the case form (all-upper, all-lower, or initcap) of the constant strings, so that it didn't have to case-fold them to perform case-insensitive comparisons. This likewise seems like excessive micro-optimization, given that pg_tolower is certainly very cheap for ASCII data. It seems unsafe to assume that we know the case form that will come out of pg_locale.c for localized month/day names, so it's better just to define the comparison rule as "downcase all strings before comparing". (The choice between downcasing and upcasing is arbitrary so far as English is concerned, but it might not be in other locales, so follow citext's lead here.)

  * seq_search also had a parameter that'd cause it to report a match after a maximum number of characters, even if the constant string were longer than that. This was not actually used because no caller passed a value small enough to cut off a comparison. Replicating that behavior for localized month/day names seems expensive as well as useless, so let's get rid of that too.

  * from_char_seq_search used the maximum-length parameter to truncate the input string in error reports about not finding a matching name. This leads to rather confusing reports in many cases. Worse, it is outright dangerous if the input string isn't all-ASCII, because we risk truncating the string in the middle of a multibyte character. That'd lead either to delivering an illegible error message to the client, or to encoding-conversion failures that obscure the actual data problem. Get rid of that in favor of truncating at whitespace if any (a suggestion due to Alvaro Herrera).

  In addition to fixing these things, I const-ified the input string pointers of DCH_from_char and its subroutines, to make sure there aren't any other scribbling-on-input problems.

  The risk of generating a badly-encoded error message seems like enough of a bug to justify back-patching, so patch all supported branches.

  Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
* Fix concurrent indexing operations with temporary tables (Michael Paquier, 2020-01-22)

  Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY on a temporary relation with ON COMMIT actions triggered unexpected errors because those operations use multiple transactions internally to complete their work. Here is for example one confusing error when using ON COMMIT DELETE ROWS:

    ERROR: index "foo" already contains data

  Issues related to temporary relations and concurrent indexing are fixed in this commit by enforcing the non-concurrent path to be taken for temporary relations even if using CONCURRENTLY, transparently to the user. Using a non-concurrent path does not matter in practice as locks cannot be taken on a temporary relation by a session different than the one owning the relation, and the non-concurrent operation is more effective.

  The problem exists with REINDEX since v12 with the introduction of CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for those commands. In all supported versions, this caused only confusing error messages to be generated. Note that with REINDEX, it was also possible to issue a REINDEX CONCURRENTLY for a temporary relation owned by a different session, leading to a server crash.

  The idea to enforce transparently the non-concurrent code path for temporary relations comes originally from Andres Freund.

  Reported-by: Manuel Rigger
  Author: Michael Paquier, Heikki Linnakangas
  Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas
  Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com
  Backpatch-through: 9.4
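  A sketch of the previously failing sequence (hypothetical names; REINDEX CONCURRENTLY applies to v12 and later only):

      CREATE TEMP TABLE tmp_work (id int) ON COMMIT DELETE ROWS;
      -- Previously this could fail with an error such as
      --   ERROR: index "tmp_work_id_idx" already contains data
      -- now the non-concurrent path is used transparently for temporary relations.
      CREATE INDEX CONCURRENTLY tmp_work_id_idx ON tmp_work (id);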
* Fix edge case leading to agg transitions skipping ExecAggTransReparent() calls. (Andres Freund, 2020-01-20)

  The code checking whether an aggregate transition value needs to be reparented into the current context has always only compared the transition return value with the previous transition value by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value. But there's no hard requirement that that's the case. And it turns out, it's possible to hit this case (see discussion or reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that.

  Instead of adding another comparison of NULLness, instead have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids having to add an additional branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case), and when the new value is different, but not NULL.

  In branches since 69c3936a149, also deduplicate the reparenting code between the expression evaluation based transitions, and the path for ordered aggregates.

  Reported-By: Teodor Sigaev, Nikita Glukhov
  Author: Andres Freund
  Discussion: https://postgr.es/m/bd34e930-cfec-ea9b-3827-a8bc50891393@sigaev.ru
  Backpatch: 9.4-, this issue has existed since at least 7.4