path: root/src
* Remove some code related to 7.3 and older servers from tools of src/bin/  (Michael Paquier, 2019-05-07)

  This code was broken as of 582edc3, and is most likely not used anymore. Note that pg_dump supports servers down to 8.0, and psql has code to support servers down to 7.4.

  Author: Julien Rouhaud
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAOBaU_Y5y=zo3+2gf+2NJC1pvMYPcbRXoQaPXx=U7+C8Qh4CzQ@mail.gmail.com
* Stamp 11.3.  (tag: REL_11_3)  (Tom Lane, 2019-05-06)
* Revert "Make pg_dump emit ATTACH PARTITION instead of PARTITION OF"Alvaro Herrera2019-05-06
| | | | | | | | | ... and fallout (from branches 10, 11 and master). The change was ill-considered, and it broke a few normal use cases; since we don't have time to fix it, we'll try again after this week's minor releases. Reported-by: Rushabh Lathia Discussion: https://postgr.es/m/CAGPqQf0iQV=PPOv2Btog9J9AwOQp6HmuVd6SbGTR_v3Zp2XT1w@mail.gmail.com
* Translation updates  (Peter Eisentraut, 2019-05-06)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 96d81aab04631d76c9ca90a3b12885100c061775
* Fix tuple printing in error message of tuple routing for partitions  (Michael Paquier, 2019-05-06)

  With correctly crafted DDLs, this could lead to disclosure of arbitrary backend memory a user may have no right to access. This impacts only REL_11_STABLE, as the issue has been introduced by 34295b8. On HEAD, add regression tests to cover this issue in the future.

  Author: Michael Paquier
  Reviewed-by: Noah Misch
  Security: CVE-2019-10129
* Use checkAsUser for selectivity estimator checks, if it's set.  (Dean Rasheed, 2019-05-06)

  In examine_variable() and examine_simple_variable(), when checking the user's table and column privileges to determine whether to grant access to the pg_statistic data, use checkAsUser for the privilege checks, if it's set. This will be the case if we're accessing the table via a view, to indicate that we should perform privilege checks as the view owner rather than the current user.

  This change makes this planner check consistent with the check in the executor, so the planner will be able to make use of statistics if the table is accessible via the view. This fixes a performance regression introduced by commit e2d4ef8de8, which affects queries against non-security barrier views in the case where the user doesn't have privileges on the underlying table, but the view owner does.

  Note that it continues to provide the same safeguards controlling access to pg_statistic for direct table access (in which case checkAsUser won't be set) and for security barrier views, because of the nearby checks on rte->security_barrier and rte->securityQuals.

  Back-patch to all supported branches because e2d4ef8de8 was.

  Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.
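  A minimal sketch of the scenario this helps, using hypothetical object and role names (not taken from the commit): the view owner has privileges on the table, the querying role only on the view, and the planner can now consult the table's pg_statistic entries when estimating through the view.

      CREATE TABLE accounts (id int, region text);
      ANALYZE accounts;
      CREATE ROLE reporting_user;
      CREATE VIEW eu_accounts AS SELECT * FROM accounts;   -- owned by a role with rights on accounts
      GRANT SELECT ON eu_accounts TO reporting_user;       -- reporting_user has no rights on accounts
      -- Run as reporting_user: the WHERE clause can now be estimated from accounts' statistics.
      EXPLAIN SELECT * FROM eu_accounts WHERE region = 'EU';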
* Fix security checks for selectivity estimation functions with RLS.  (Dean Rasheed, 2019-05-06)

  In commit e2d4ef8de8, security checks were added to prevent user-supplied operators from running over data from pg_statistic unless the user has table or column privileges on the table, or the operator is leakproof. For a table with RLS, however, checking for table or column privileges is insufficient, since that does not guarantee that the user has permission to view all of the column's data.

  Fix this by also checking for securityQuals on the RTE, and insisting that the operator be leakproof if there are any. Thus the leakproofness check will only be skipped if there are no securityQuals and the user has table or column privileges on the table -- i.e., only if we know that the user has access to all the data in the column.

  Back-patch to 9.5 where RLS was added.

  Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.

  Security: CVE-2019-10130
* Remove reindex_catalog test from test schedules.  (Andres Freund, 2019-05-05)

  As the test currently causes occasional deadlocks (due to the schema cleanup from previous sessions potentially still running), and the patch from f912d7dec2 has gotten a fair bit of buildfarm coverage, remove the test from the test schedules. There's a set of minor releases coming up. Leave the tests in place, so they can be run manually using EXTRA_TESTS.

  For now also leave it in master, as there's no imminent release, and there's plenty of (re-)index related work in 12. But we'll have to disable it before long there too, unless somebody comes up with simple enough fixes for the deadlock (I'm about to post a vague idea to the list).

  Discussion: https://postgr.es/m/4622.1556982247@sss.pgh.pa.us
  Backpatch: 9.4-11 (no master!)
* Fix style violations in syscache lookups.  (Tom Lane, 2019-05-05)

  Project style is to check the success of SearchSysCacheN and friends by applying HeapTupleIsValid to the result. A tiny minority of calls creatively did it differently. Bring them into line with the rest.

  This is just cosmetic, since HeapTupleIsValid is indeed just a null check at the moment ... but that may not be true forever, and in any case it puts a mental burden on readers who may wonder why these call sites are not like the rest.

  Back-patch to v11 just to keep the branches in sync. (The bulk of these errors seem to have originated in v11 or v12, though a few are old.)

  Per searching to see if anyplace else had made the same error repaired in 62148c352.
* Add check for syscache lookup failure in update_relispartition().  (Tom Lane, 2019-05-05)

  Omitted in commit 05b38c7e6 (though it looks like the original blame belongs to 9e9befac4). A failure is admittedly unlikely, but if it did happen, SIGSEGV is not the approved method of reporting it. Per Coverity. Back-patch to v11 where the broken code originated.
* pg_verify_checksums: Fix message punctuation  (Peter Eisentraut, 2019-05-04)
* pg_dump: Fix newline in error message  (Peter Eisentraut, 2019-05-04)

  The newline was incorrectly dropped in a98c48debcd0620ab07608d53ee08fdb0e7a1edb.
* Fix reindexing of pg_class indexes some more.  (Tom Lane, 2019-05-02)

  Commits 3dbb317d3 et al failed under CLOBBER_CACHE_ALWAYS testing. Investigation showed that to reindex pg_class_oid_index, we must suppress accesses to the index (via SetReindexProcessing) before we call RelationSetNewRelfilenode, or at least before we do CommandCounterIncrement therein; otherwise, relcache reloads happening within the CCI may try to fetch pg_class rows using the index's new relfilenode value, which is as yet an empty file.

  Of course, the point of 3dbb317d3 was that that ordering didn't work either, because then RelationSetNewRelfilenode's own update of the index's pg_class row cannot access the index, should it need to. There are various ways we might have got around that, but Andres Freund came up with a brilliant solution: for a mapped index, we can really just skip the pg_class update altogether. The only fields it was actually changing were relpages etc, but it was just setting them to zeroes which is useless make-work. (Correct new values will be installed at the end of index build.) All pg_class indexes are mapped and probably always will be, so this eliminates the problem by removing work rather than adding it, always a pleasant outcome.

  Having taught RelationSetNewRelfilenode to do it that way, we can revert the code reordering in reindex_index. (But I left the moved setup code where it was; there seems no reason why it has to run without use of the old index. If you're trying to fix a busted pg_class index, you'll have had to disable system index use altogether to get this far.)

  Moreover, this means we don't need RelationSetIndexList at all, because reindex_relation's hacking to make "REINDEX TABLE pg_class" work is likewise now unnecessary. We'll leave that code in place in the back branches, but a follow-on patch will remove it in HEAD.

  In passing, do some minor cleanup for commit 5c1560606 (in HEAD only), notably removing a duplicate newrnode assignment.

  Patch by me, using a core idea due to Andres Freund. Back-patch to all supported branches, as 3dbb317d3 was.

  Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
* Run catalog reindexing test from 3dbb317d32 serially, to avoid deadlocks.  (Andres Freund, 2019-04-30)

  The tests turn out to cause deadlocks in some circumstances. Fairly reproducibly so with -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE. Some of the deadlocks may be hard to fix without disproportionate measures, but others probably should be fixed - but not in 12.

  We discussed removing the new tests until we can fix the issues underlying the deadlocks, but results from buildfarm animal markhor (which runs with CLOBBER_CACHE_ALWAYS) indicate that there might be a more severe, as yet undiagnosed, issue (including on stable branches) with reindexing catalogs. The failure is:

      ERROR: could not read block 0 in file "base/16384/28025": read only 0 of 8192 bytes

  Therefore it seems advisable to keep the tests.

  It's not certain that running the tests in isolation removes the risk of deadlocks. It's possible that additional locks are needed to protect against a concurrent auto-analyze or such.

  Per discussion with Tom Lane.

  Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
  Backpatch: 9.4-, like 3dbb317d3
* Fix unused variable compiler warning in !debug builds.  (Andres Freund, 2019-04-30)

  Introduced in 3dbb317d3. Fix by using the new local variable in more places.

  Reported-By: Bruce Momjian (off-list)
  Backpatch: 9.4-, like 3dbb317d3
* Clean up handling of constraint_exclusion and enable_partition_pruning.  (Tom Lane, 2019-04-30)

  The interaction of these parameters was a bit confused/confusing, and in fact v11 entirely misses the opportunity to apply partition constraints when a partition is accessed directly (rather than indirectly from its parent).

  In HEAD, establish the principle that enable_partition_pruning controls partition pruning and nothing else. When accessing a partition via its parent, we do partition pruning (if enabled by enable_partition_pruning) and then there is no need to consider partition constraints in the constraint_exclusion logic. When accessing a partition directly, its partition constraints are applied by the constraint_exclusion logic, only if constraint_exclusion = on.

  In v11, we can't have such a clean division of these GUCs' effects, partly because we don't want to break compatibility too much in a released branch, and partly because the clean coding requires inheritance_planner to have applied partition pruning to a partitioned target table, which it doesn't in v11. However, we can tweak things enough to cover the missed case, which seems like a good idea since it's potentially a performance regression from v10. This patch keeps v11's previous behavior in which enable_partition_pruning overrides constraint_exclusion for an inherited target table, though.

  In HEAD, also teach relation_excluded_by_constraints that it's okay to use inheritable constraints when trying to prune a traditional inheritance tree. This might not be thought worthy of effort given that that feature is semi-deprecated now, but we have enough infrastructure that it only takes a couple more lines of code to do it correctly.

  Amit Langote and Tom Lane

  Discussion: https://postgr.es/m/9813f079-f16b-61c8-9ab7-4363cab28d80@lab.ntt.co.jp
  Discussion: https://postgr.es/m/29069.1555970894@sss.pgh.pa.us
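  A minimal sketch of the direct-access case (hypothetical table names, not from the commit): with constraint_exclusion = on, a query against the partition itself can now be excluded outright by its partition constraint.

      CREATE TABLE events (ts date NOT NULL) PARTITION BY RANGE (ts);
      CREATE TABLE events_2019 PARTITION OF events
          FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
      SET constraint_exclusion = on;
      -- The WHERE clause contradicts events_2019's partition constraint,
      -- so the scan can be excluded even though the partition is queried directly.
      EXPLAIN SELECT * FROM events_2019 WHERE ts >= '2021-01-01';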
* Fix potential assertion failure when reindexing a pg_class index.  (Andres Freund, 2019-04-29)

  When reindexing individual indexes on pg_class it was possible to trigger an assertion failure:

      TRAP: FailedAssertion("!(!ReindexIsProcessingIndex(((index)->rd_id)))

  That's because reindex_index() called SetReindexProcessing() - which enables an assertion ensuring no index insertions happen into the index - before calling RelationSetNewRelfilenode(). That is not correct for indexes on pg_class, because RelationSetNewRelfilenode() updates the relevant pg_class row, which needs to update the indexes.

  There are two reasons this wasn't noticed earlier. Firstly, the bug doesn't trigger when reindexing all of pg_class, as reindex_relation has code "hiding" all yet-to-be-reindexed indexes. Secondly, the bug only triggers when the update to pg_class doesn't turn out to be a HOT update - otherwise there's no index insertion to trigger the bug. Most of the time there's enough space, making this bug hard to trigger.

  To fix, move RelationSetNewRelfilenode() to before the SetReindexProcessing() (and, together with some other code, to outside of the PG_TRY()).

  To make sure the error checking intended by SetReindexProcessing() is more robust, modify CatalogIndexInsert() to check ReindexIsProcessingIndex() even when the update is a HOT update.

  Also add a few regression tests for REINDEXing of system catalogs.

  The last two improvements would have prevented some of the issues fixed in 5c1560606dc4c from being introduced in the first place.

  Reported-By: Michael Paquier
  Diagnosed-By: Tom Lane and Andres Freund
  Author: Andres Freund
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
  Backpatch: 9.4-, the bug is present in all branches
* Fix potential catalog corruption with temporary identity columns  (Peter Eisentraut, 2019-04-29)

  If a temporary table with an identity column and ON COMMIT DROP is created in a single-statement transaction (not useful, but allowed), it would leave the catalog corrupted. We need to add a CommandCounterIncrement() so that PreCommit_on_commit_actions() sees the created dependency between table and sequence and can clean it up.

  The analogous and more useful case of doing this in a transaction block already runs some CommandCounterIncrement() before it gets to the on-commit cleanup, so it wasn't a problem in practical use.

  Several locations for placing the new CommandCounterIncrement() call were discussed. This patch places it at the end of standard_ProcessUtility(). That would also help if other commands were to create catalog entries that some on-commit action would like to see.

  Bug: #15631
  Reported-by: Serge Latyntsev <dnsl48@gmail.com>
  Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
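  A minimal sketch of the problematic statement (hypothetical table name), issued on its own outside any transaction block so that table creation and on-commit cleanup happen in a single-statement transaction:

      CREATE TEMPORARY TABLE t (id int GENERATED ALWAYS AS IDENTITY) ON COMMIT DROP;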
* Portability fix for zic.c.  (Tom Lane, 2019-04-26)

  Missed an inttypes.h dependency in previous patch. Per buildfarm.
* Sync our copy of the timezone library with IANA release tzcode2019a.  (Tom Lane, 2019-04-26)

  This corrects a small bug in zic that caused it to output an incorrect year-2440 transition in the Africa/Casablanca zone.

  More interestingly, zic has grown a "-r" option that limits the range of zone transitions that it will put into the output files. That might be useful to people who don't like the weird GMT offsets that tzdb likes to use for very old dates. It appears that for dates before the cutoff time specified with -r, zic will use the zone's standard-time offset as of the cutoff time. So for example one might do

      make install ZIC_OPTIONS='-r @-1893456000'

  to cause all dates before 1910-01-01 to be treated as though 1910 standard time prevailed indefinitely far back. (Don't blame me for the unfriendly way of specifying the cutoff time --- it's seconds since or before the Unix epoch. You can use extract(epoch ...) to calculate it.)

  As usual, back-patch to all supported branches.
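  For instance, the cutoff used above can be computed in SQL; a small sketch of the extract(epoch ...) calculation the message refers to:

      SELECT extract(epoch FROM timestamptz '1910-01-01 00:00:00+00');
      -- returns -1893456000, the value passed to zic's -r option above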
* Update time zone data files to tzdata release 2019a.  (Tom Lane, 2019-04-26)

  DST law changes in Palestine and Metlakatla. Historical corrections for Israel.

  Etc/UCT is now a backward-compatibility link to Etc/UTC, instead of being a separate zone that generates the abbreviation "UCT", which nowadays is typically a typo. Postgres will still accept "UCT" as an input zone name, but it won't output it.
* Apply stopgap fix for bug #15672.  (Tom Lane, 2019-04-26)

  Fix DefineIndex so that it doesn't attempt to pass down a to-be-reused index relfilenode to a child index creation, and fix TryReuseIndex to not think that reuse is sensible for a partitioned index.

  In v11, this fixes a problem where ALTER TABLE on a partitioned table could assign the same relfilenode to several different child indexes, causing very nasty catalog corruption --- in fact, attempting to DROP the partitioned table then leads not only to a database crash, but to inability to restart because the same crash will recur during WAL replay.

  Either of these two changes would be enough to prevent the failure, but since neither action could possibly be sane, let's put in both changes for future-proofing.

  In HEAD, no such bug manifests, but that's just an accidental consequence of having changed the pg_class representation of partitioned indexes to have relfilenode = 0. Both of these changes still seem like smart future-proofing.

  This is only a stop-gap because the code for ALTER TABLE on a partitioned table with a no-op type change still leaves a great deal to be desired. As the added regression tests show, it gets things wrong for comments on child indexes/constraints, and it is regenerating child indexes it doesn't have to. However, fixing those problems will take more work which may not get back-patched into v11. We need a fix for the corruption problem now.

  Per bug #15672 from Jianing Yang. Patch by me, regression test cases based on work by Amit Langote, who also did a lot of the investigative work.

  Discussion: https://postgr.es/m/15672-b9fa7db32698269f@postgresql.org
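  A rough sketch of the kind of statement involved (hypothetical names; not the regression test from the commit): an ALTER TYPE that doesn't require rewriting data, on a partitioned table whose index has per-partition children, exercises the index-reuse path that this fix hardens.

      CREATE TABLE orders (id int, note varchar(20)) PARTITION BY RANGE (id);
      CREATE TABLE orders_low  PARTITION OF orders FOR VALUES FROM (1) TO (1000);
      CREATE TABLE orders_high PARTITION OF orders FOR VALUES FROM (1000) TO (2000);
      CREATE INDEX orders_note_idx ON orders (note);
      ALTER TABLE orders ALTER COLUMN note TYPE varchar(40);  -- no table rewrite needed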
* Fix partitioned index attachment  (Alvaro Herrera, 2019-04-25)

  When an existing index in a partition is attached to a new index on its parent, we forgot to set the "relispartition" flag correctly, which meant that it was not possible to find the index in various operations, such as adding a foreign key constraint that references that partitioned table. One of four places that was assigning the parent index was forgetting to do that, so fix by shifting responsibility of updating the flag to the routine that changes the parent.

  Author: Amit Langote, Álvaro Herrera
  Reported-by: Hubert "depesz" Lubaczewski
  Discussion: https://postgr.es/m/CA+HiwqHMsRtRYRWYTWavKJ8x14AFsv7bmAV46mYwnfD3vy8goQ@mail.gmail.com
* Make pg_dump emit ATTACH PARTITION instead of PARTITION OF  (Alvaro Herrera, 2019-04-24)

  Using PARTITION OF can result in column ordering being changed from the database being dumped, if the partition uses a column layout different from the parent's. It's not pg_dump's job to editorialize on table definitions, so this is not acceptable; back-patch all the way back to pg10, where partitioned tables were introduced.

  This change also ensures that partitions end up in the correct tablespace, if different from the parent's; this is an oversight in ca4103025dfe (in pg12 only). Partitioned indexes (in pg11) don't have this problem, because they're already created as independent indexes and attached to their parents afterwards.

  This change also has the advantage that the partition is restorable from the dump (as a standalone table) even if its parent table isn't restored.

  Author: David Rowley
  Reviewed-by: Álvaro Herrera
  Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
  Discussion: https://postgr.es/m/20190423185007.GA27954@alvherre.pgsql
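  A rough sketch of the difference in dump output, with hypothetical table names, the two forms shown side by side only for comparison (note that a later entry above reverts this change for the minor releases): instead of a single CREATE TABLE ... PARTITION OF, the partition is dumped as an ordinary table and then attached, which preserves its own column order.

      CREATE TABLE measurement (logdate date NOT NULL) PARTITION BY RANGE (logdate);

      -- form previously emitted by pg_dump:
      CREATE TABLE measurement_y2019 PARTITION OF measurement
          FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

      -- form emitted after this change (partition kept as a standalone table):
      CREATE TABLE measurement_y2019 (logdate date NOT NULL);
      ALTER TABLE measurement ATTACH PARTITION measurement_y2019
          FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');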
* Fix some minor postmaster-state-machine issues.  (Tom Lane, 2019-04-24)

  In sigusr1_handler, don't ignore PMSIGNAL_ADVANCE_STATE_MACHINE based on pmState. The restriction is unnecessary (PostmasterStateMachine should work in any state), not future-proof (since it makes too many assumptions about why the signal might be sent), and broken even today because a race condition can make it necessary to respond to the signal in PM_WAIT_READONLY state. The race condition seems unlikely, but if it did happen, a hot-standby postmaster could fail to shut down after receiving a smart-shutdown request.

  In MaybeStartWalReceiver, don't clear the WalReceiverRequested flag if the fork attempt fails. Leaving it set allows us to try again in future iterations of the postmaster idle loop. (The startup process would eventually send a fresh request signal, but this change may allow us to retry the fork sooner.)

  Remove an obsolete comment and unnecessary test in PostmasterStateMachine's handling of PM_SHUTDOWN_2 state. It's not possible to have a live walreceiver in that state, and AFAICT has not been possible since commit 5e85315ea. This isn't a live bug, but the false comment is quite confusing to readers.

  In passing, rearrange sigusr1_handler's CheckPromoteSignal tests so that we don't uselessly perform stat() calls that we're going to ignore the results of.

  Add some comments clarifying the behavior of MaybeStartWalReceiver; I very nearly rearranged it in a way that'd reintroduce the race condition fixed in e5d494d78. Mea culpa for not commenting that properly at the time.

  Back-patch to all supported branches. The PMSIGNAL_ADVANCE_STATE_MACHINE change is the only one of even minor significance, but we might as well keep this code in sync across branches.

  Discussion: https://postgr.es/m/9001.1556046681@sss.pgh.pa.us
* Repair assorted issues in locale data extraction.  (Tom Lane, 2019-04-23)

  cache_locale_time (extraction of LC_TIME-related info) had never been taught the lessons we previously learned about extraction of info related to LC_MONETARY and LC_NUMERIC. Specifically, commit 95a777c61 taught PGLC_localeconv() that data coming out of localeconv() was in an encoding determined by the relevant locale, but we didn't realize that there's a similar issue with strftime(). And commit a4930e7ca hardened PGLC_localeconv() against errors occurring partway through, but failed to do likewise for cache_locale_time(). So, rearrange the latter function to perform encoding conversion and not risk failure while it's got the locales set to temporary values.

  This time around I also changed PGLC_localeconv() to treat it as FATAL if it can't restore the previous settings of the locale values. There is no reason (except possibly OOM) for that to fail, and proceeding with the wrong locale values seems like a seriously bad idea --- especially on Windows where we have to also temporarily change LC_CTYPE. Also, protect against the possibility that we can't identify the codeset reported for LC_MONETARY or LC_NUMERIC; rather than just failing, try to validate the data without conversion.

  The user-visible symptom this fixes is that if LC_TIME is set to a locale name that implies an encoding different from the database encoding, non-ASCII localized day and month names would be retrieved in the wrong encoding, leading to either unexpected encoding-conversion error reports or wrong output from to_char(). The other possible failure modes are unlikely enough that we've not seen reports of them, AFAIK.

  The encoding conversion problems do not manifest on Windows, since we'd already created special-case code to handle that issue there.

  Per report from Juan José Santamaría Flecha. Back-patch to all supported versions.

  Juan José Santamaría Flecha and Tom Lane

  Discussion: https://postgr.es/m/CAC+AXB22So5aZm2vZe+MChYXec7gWfr-n-SK-iO091R0P_1Tew@mail.gmail.com
* Fix detection of passwords hashed with MD5 or SCRAM-SHA-256  (Michael Paquier, 2019-04-23)

  This commit fixes a couple of issues related to the way password verifiers hashed with MD5 or SCRAM-SHA-256 are detected, leading to being able to store in catalogs passwords which do not follow the supported hash formats:
  - An MD5-hashed entry was checked based on whether its header uses "md5" and the string length matches what is expected. Unfortunately the code never checked if the hash only used hexadecimal characters, as reported by Tom Lane.
  - A SCRAM-hashed entry was checked based on only its header, which should be "SCRAM-SHA-256$", but it never checked for any fields afterwards, as reported by Jonathan Katz.

  Backpatch down to v10, which is where SCRAM has been introduced, and where password verifiers in plain format have been removed.

  Author: Jonathan Katz
  Reviewed-by: Tom Lane, Michael Paquier
  Discussion: https://postgr.es/m/016deb6b-1f0a-8e9f-1833-a8675b170aa9@postgresql.org
  Backpatch-through: 10
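  As a sketch of the formats being validated (illustrative strings and role name, not from the commit): a well-formed MD5 verifier is the literal "md5" followed by exactly 32 hexadecimal digits, and a SCRAM verifier starts with "SCRAM-SHA-256$" followed by iteration count, salt, stored key and server key. Strings that merely resemble these formats are treated as plain passwords and hashed, rather than being stored verbatim.

      -- well-formed MD5 verifier: 'md5' followed by 32 hex digits
      CREATE ROLE app_user PASSWORD 'md55e884898da28047151d0e56f8dc62927';
      -- the stored verifier can be inspected (superuser only) in pg_authid:
      SELECT rolname, left(rolpassword, 20) FROM pg_authid WHERE rolname = 'app_user';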
* Fix problems with auto-held portals.  (Tom Lane, 2019-04-19)

  HoldPinnedPortals() did things in the wrong order: it must not mark a portal autoHeld until it's been successfully held. Otherwise, a failure while persisting the portal results in a server crash because we think the portal is in a good state when it's not.

  Also add a check that portal->status is READY before attempting to hold a pinned portal. We have such a check before the only other use of HoldPortal(), so it seems unwise not to check it here.

  Lastly, rethink the responsibility for where to call HoldPinnedPortals. The comment for it imagined that it was optional for any individual PL to call it or not, but that cannot be the case: if some outer level of procedure has a pinned portal, failing to persist it when an inner procedure commits is going to be trouble. Let's have SPI do it instead of the individual PLs. That's not a complete solution, since in theory a PL might not be using SPI to perform commit/rollback, but such a PL is going to have to be aware of lots of related requirements anyway. (This change doesn't cause an API break for any external PLs that might be calling HoldPinnedPortals per the old regime, because calling it twice during a commit or rollback sequence won't hurt.)

  Per bug #15703 from Julian Schauder. Back-patch to v11 where this code came in.

  Discussion: https://postgr.es/m/15703-c12c5bc0ea34ba26@postgresql.org
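  A minimal sketch of the kind of usage that exercises auto-held portals (hypothetical procedure name): the FOR loop's cursor is a pinned portal, and committing inside the loop forces it to be held across the transaction boundary, which is the code path this fix addresses.

      CREATE PROCEDURE demo() LANGUAGE plpgsql AS $$
      DECLARE
        r record;
      BEGIN
        FOR r IN SELECT g AS i FROM generate_series(1, 3) g LOOP
          COMMIT;  -- the loop's pinned portal must be persisted here
        END LOOP;
      END;
      $$;
      CALL demo();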
* Fix handling of temp and unlogged tables in FOR ALL TABLES publications  (Peter Eisentraut, 2019-04-18)

  If a FOR ALL TABLES publication exists, temporary and unlogged tables are ignored for publishing changes. But CheckCmdReplicaIdentity() would still check in that case that such a table has a replica identity set before accepting updates. To fix, have GetRelationPublicationActions() return that such a table publishes no actions.

  Discussion: https://www.postgresql.org/message-id/f3f151f7-c4dd-1646-b998-f60bd6217dd3@2ndquadrant.com
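  A minimal sketch of the affected scenario (hypothetical names): an unlogged table without a replica identity can again be updated while a FOR ALL TABLES publication exists, since its changes are never published anyway.

      CREATE PUBLICATION allpub FOR ALL TABLES;
      CREATE UNLOGGED TABLE scratch (id int, val text);
      INSERT INTO scratch VALUES (1, 'a');
      UPDATE scratch SET val = 'b' WHERE id = 1;  -- previously rejected for lacking a replica identity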
* postgresql.conf.sample: add proper defaults for include actions  (Bruce Momjian, 2019-04-17)

  Previously, include actions include_dir, include_if_exists, and include listed commented-out values which were not the defaults, which is inconsistent with other entries. Instead, replace them with '', which is the default value.

  Reported-by: Emanuel Araújo
  Discussion: https://postgr.es/m/CAMuTAkYMx6Q27wpELDR3_v9aG443y7ZjeXu15_+1nGUjhMWOJA@mail.gmail.com
  Backpatch-through: 9.4
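  Roughly what the sample file's entries look like after the change (the comment text here is paraphrased, not copied from postgresql.conf.sample):

      #include_dir = ''          # include files ending in '.conf' from a directory
      #include_if_exists = ''    # include file only if it exists
      #include = ''              # include file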
* Fix unportable code in pgbench.  (Tom Lane, 2019-04-17)

  The buildfarm points out that UINT64_FORMAT might not work with sscanf; it's calibrated for our printf implementation, which might not agree with the platform-supplied sscanf. Fall back to just accepting an unsigned long, which is already more than the documentation promises.

  Oversight in e6c3ba7fb; back-patch to v11, as that was.
* Don't write to stdin of a test process that could have already exited.  (Noah Misch, 2019-04-15)

  Instead, close that stdin. Per buildfarm member conchuela. Back-patch to 9.6, where the test was introduced.

  Discussion: https://postgr.es/m/26478.1555373328@sss.pgh.pa.us
* Fix division by zero in _bt_vacuum_needs_cleanup()  (Alexander Korotkov, 2019-04-15)

  Checks inside _bt_vacuum_needs_cleanup() allow division by zero to happen when metad->btm_last_cleanup_num_heap_tuples == 0. This commit adjusts the expression so that no division by zero might happen.

  Reported-by: Piotr Stefaniak
  Discussion: https://postgr.es/m/DB8PR03MB5931C41F7787A95313F08322F22A0%40DB8PR03MB5931.eurprd03.prod.outlook.com
  Reviewed-by: Masahiko Sawada
  Backpatch-through: 11
* Fix SHOW ALL command for non-superusers with replication connection  (Michael Paquier, 2019-04-15)

  Since Postgres 10, SHOW commands can be triggered with replication connections in a WAL sender context; however, this missed that a transaction context is needed for syscache lookups. This commit makes sure that the syscache lookups can happen correctly by setting a transaction context when running SHOW commands in a WAL sender.

  Superuser-only parameters can be displayed using SHOW commands not only to superusers, but also to members of the system role pg_read_all_settings, which requires a syscache lookup to check if the connected role is a member of this system role or not, or the instance crashes. Superusers do not need to check the syscache, so it worked correctly in this case.

  New tests are added to cover this issue.

  Reported-by: Alexander Kukushkin
  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera
  Discussion: https://postgr.es/m/15734-2daa8761eeed8e20@postgresql.org
  Backpatch-through: 10
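  A rough sketch of the kind of setup that hit the crash (hypothetical role name): a non-superuser member of pg_read_all_settings issuing SHOW over a replication connection.

      CREATE ROLE monitor LOGIN REPLICATION;
      GRANT pg_read_all_settings TO monitor;
      -- then, from a shell, open a replication connection and run SHOW, e.g.:
      --   psql "dbname=postgres user=monitor replication=database" -c 'SHOW ALL;'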
* Test both 0.0.0.0 and 127.0.0.x addresses to find a usable port.  (Noah Misch, 2019-04-14)

  Commit c098509927f9a49ebceb301a2cb6a477ecd4ac3c changed PostgresNode::get_new_node() to probe 0.0.0.0 instead of 127.0.0.1, but the new test was less effective for Windows native Perl. This increased the failure rate of buildfarm members bowerbird and jacana. Instead, test 0.0.0.0 and concrete addresses. This restores the old level of defense, but the algorithm is still subject to its longstanding time of check to time of use race condition. Back-patch to 9.6, like the previous change.

  Discussion: https://postgr.es/m/GrdLgAdUK9FdyZg8VIcTDKVOkys122ZINEb3CjjoySfGj2KyPiMKTh1zqtRp0TAD7FJ27G-OBB3eplxIB5GhcQH5o8zzGZfp0MuJaXJxVxk=@yesql.se
* MSYS: Translate REGRESS_SHLIB to a Windows file name.  (Noah Misch, 2019-04-14)

  Per buildfarm member jacana. Back-patch to v11; earlier branches skip the affected test under msys.

  Discussion: https://postgr.es/m/GrdLgAdUK9FdyZg8VIcTDKVOkys122ZINEb3CjjoySfGj2KyPiMKTh1zqtRp0TAD7FJ27G-OBB3eplxIB5GhcQH5o8zzGZfp0MuJaXJxVxk=@yesql.se
* When Perl "kill(9, ...)" fails, try "pg_ctl kill".  (Noah Misch, 2019-04-13)

  Per buildfarm member jacana, the former fails under msys Perl 5.8.8. Back-patch to 9.6, like the code in question.

  Discussion: https://postgr.es/m/GrdLgAdUK9FdyZg8VIcTDKVOkys122ZINEb3CjjoySfGj2KyPiMKTh1zqtRp0TAD7FJ27G-OBB3eplxIB5GhcQH5o8zzGZfp0MuJaXJxVxk=@yesql.se
* Prevent memory leaks associated with relcache rd_partcheck structures.  (Tom Lane, 2019-04-13)

  The original coding of generate_partition_qual() just copied the list of predicate expressions into the global CacheMemoryContext, making it effectively impossible to clean up when the owning relcache entry is destroyed --- the relevant code in RelationDestroyRelation() only managed to free the topmost List header :-(. This resulted in a session-lifespan memory leak whenever a table partition's relcache entry is rebuilt. Fortunately, that's not normally a large data structure, and rebuilds shouldn't occur all that often in production situations; but this is still a bug worth fixing back to v10 where the code was introduced.

  To fix, put the cached expression tree into its own small memory context, as we do with other complicated substructures of relcache entries. Also, deal more honestly with the case that a partition has an empty partcheck list; while that probably isn't a case that's very interesting for production use, it's legal.

  In passing, clarify comments about how partitioning-related relcache data structures are managed, and add some Asserts that we're not leaking old copies when we overwrite these data fields.

  Amit Langote and Tom Lane

  Discussion: https://postgr.es/m/7961.1552498252@sss.pgh.pa.us
* Consistently test for in-use shared memory.  (Noah Misch, 2019-04-12)

  postmaster startup scrutinizes any shared memory segment recorded in postmaster.pid, exiting if that segment matches the current data directory and has an attached process. When the postmaster.pid file was missing, a starting postmaster used weaker checks. Change to use the same checks in both scenarios. This increases the chance of a startup failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1 postmaster.pid` && rm postmaster.pid && pg_ctl -w start".

  A postmaster will no longer stop if shmat() of an old segment fails with EACCES. A postmaster will no longer recycle segments pertaining to other data directories. That's good for production, but it's bad for integration tests that crash a postmaster and immediately delete its data directory. Such a test now leaks a segment indefinitely. No "make check-world" test does that. win32_shmem.c already avoided all these problems. In 9.6 and later, enhance PostgresNode to facilitate testing. Back-patch to 9.4 (all supported versions).

  Reviewed (in earlier versions) by Daniel Gustafsson and Kyotaro HORIGUCHI.

  Discussion: https://postgr.es/m/20190408064141.GA2016666@rfd.leadboat.com
* Fix off-by-one check that can lead to a memory overflow in ecpg.  (Michael Meskes, 2019-04-11)

  Patch by Liu Huailing <liuhuailing@cn.fujitsu.com>
* Fix backwards test in operator_precedence_warning logic.  (Tom Lane, 2019-04-10)

  Warnings about unary minus might have been wrong. It's a bit surprising that nobody noticed yet ... probably the precedence-warning feature hasn't really been used much in the field.

  Rikard Falkeborn

  Discussion: https://postgr.es/m/CADRDgG6fzA8A2oeygUw4=o7ywo4kvz26NxCSgpq22nMD73Bx4Q@mail.gmail.com
* Avoid counting transaction stats for parallel worker cooperating transaction.  (Amit Kapila, 2019-04-10)

  The transaction that is initiated by the parallel worker to cooperate with the actual transaction started by the main backend to complete the query execution should not be counted as a separate transaction. The other internal transactions started and committed by the parallel worker are still counted as separate transactions, as that is what we do in other places like autovacuum. This will partially fix the bloat in transaction stats due to additional transactions performed by parallel workers.

  For a complete fix, we need to decide how we want to show all the transactions that are started internally for various operations, and that is a matter for a separate patch.

  Reported-by: Haribabu Kommi
  Author: Haribabu Kommi
  Reviewed-by: Amit Kapila, Jamison Kirk and Rahila Syed
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/CAJrrPGc9=jKXuScvNyQ+VNhO0FZk7LLAShAJRyZjnedd2D61EQ@mail.gmail.com
* Define WIN32_STACK_RLIMIT throughout win32 and cygwin builds.  (Noah Misch, 2019-04-09)

  The MSVC build system already did this, and commit 617dc6d299c957e2784320382b3277ede01d9c63 used it in a second file. Back-patch to 9.4, like that commit.

  Discussion: https://postgr.es/m/CAA8=A7_1SWc3+3Z=-utQrQFOtrj_DeohRVt7diA2tZozxsyUOQ@mail.gmail.com
* Avoid "could not reattach" by providing space for concurrent allocation.Noah Misch2019-04-08
| | | | | | | | | | | | | | | | | We've long had reports of intermittent "could not reattach to shared memory" errors on Windows. Buildfarm member dory fails that way when PGSharedMemoryReAttach() execution overlaps with creation of a thread for the process's "default thread pool". Fix that by providing a second region to receive asynchronous allocations that would otherwise intrude into UsedShmemSegAddr. In pgwin32_ReserveSharedMemoryRegion(), stop trying to free reservations landing at incorrect addresses; the caller's next step has been to terminate the affected process. Back-patch to 9.4 (all supported versions). Reviewed by Tom Lane. He also did much of the prerequisite research; see commit bcbf2346d69f6006f126044864dd9383d50d87b4. Discussion: https://postgr.es/m/20190402135442.GA1173872@rfd.leadboat.com
* Fix improper interaction of FULL JOINs with lateral references.  (Tom Lane, 2019-04-08)

  join_is_legal() needs to reject forming certain outer joins in cases where that would lead the planner down a blind alley. However, it mistakenly supposed that the way to handle full joins was to treat them as applying the same constraints as for left joins, only to both sides. That doesn't work, as shown in bug #15741 from Anthony Skorski: given a lateral reference out of a join that's fully enclosed by a full join, the code would fail to believe that any join ordering is legal, resulting in errors like "failed to build any N-way joins".

  However, we don't really need to consider full joins at all for this purpose, because we effectively force them to be evaluated in syntactic order, and that order is always legal for lateral references. Hence, get rid of this broken logic for full joins and just ignore them instead. This seems to have been an oversight in commit 7e19db0c0.

  Back-patch to all supported branches, as that was.

  Discussion: https://postgr.es/m/15741-276f1f464b3f40eb@postgresql.org
* Fix EvalPlanQualStart to handle partitioned result rels correctly.  (Tom Lane, 2019-04-08)

  The es_root_result_relations array needs to be shallow-copied in the same way as the main es_result_relations array, else EPQ rechecks on partitioned result relations fail, as seen in bug #15677 from Norbert Benkocs.

  Amit Langote, isolation test case added by me

  Discussion: https://postgr.es/m/15677-0bf089579b4cd02d@postgresql.org
  Discussion: https://postgr.es/m/19321.1554567786@sss.pgh.pa.us
* Fix partition tuple routing with dropped attributes  (Michael Paquier, 2019-04-08)

  When trying to insert a tuple into a partitioned table, the routing to the correct partition was messed up by confusing the case where a tuple needs to be stored in an intermediate parent's slot with the case where it needs to be converted because of attribute changes between the immediate parent relation and the parent relation one level above that (the grandparent). This could trigger errors like the following:

      ERROR: cannot extract attribute from empty tuple slot
      SQL state: XX000

  This was not detected because regression tests with dropped attributes only included tests with two levels of partitioning, and this can be triggered with three levels or more.

  This fixes bug #15733, which has been introduced by 34295b8. The bug happens only on REL_11_STABLE, and HEAD gains the regression tests added for this bug.

  Reported-by: Petr Fedorov
  Author: Amit Langote, Michael Paquier
  Discussion: https://postgr.es/m/15733-7692379e310b80ec@postgresql.org
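  A rough sketch of a three-level hierarchy of the kind described (hypothetical names; not the regression test from the commit), where a dropped attribute makes tuple conversion necessary while routing through an intermediate parent:

      CREATE TABLE root (a int, dropme int, b int) PARTITION BY LIST (a);
      ALTER TABLE root DROP COLUMN dropme;
      CREATE TABLE mid PARTITION OF root FOR VALUES IN (1) PARTITION BY LIST (b);
      CREATE TABLE leaf PARTITION OF mid FOR VALUES IN (10);
      INSERT INTO root VALUES (1, 10);  -- routed root -> mid -> leaf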
* Avoid fetching past the end of the indoption array.  (Tom Lane, 2019-04-07)

  pg_get_indexdef_worker carelessly fetched indoption entries even for non-key index columns that don't have one. 99.999% of the time this would be harmless, since the code wouldn't examine the value ... but some fine day this will be a fetch off the end of memory, resulting in SIGSEGV. Detected through valgrind testing.

  Odd that the buildfarm's valgrind critters haven't noticed.
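  For context, a small sketch (hypothetical names) of an index with non-key columns, which are the entries lacking an indoption value; pg_get_indexdef is the user-facing function built on the code being fixed:

      CREATE TABLE items (id int, payload text);
      CREATE INDEX items_id_idx ON items (id) INCLUDE (payload);
      SELECT pg_get_indexdef('items_id_idx'::regclass);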
* Clean up side-effects of commits ab5fcf2b0 et al.  (Tom Lane, 2019-04-07)

  Before those commits, partitioning-related code in the executor could assume that ModifyTableState.resultRelInfo[] contains only leaf partitions. However, now a fully-pruned update results in a dummy ModifyTable that references the root partitioned table, and that breaks some stuff.

  In v11, this led to an assertion or core dump in the tuple routing code. Fix by disabling tuple routing, since we don't need that anyway. (I chose to do that in HEAD as well for safety, even though the problem doesn't manifest in HEAD as it stands.)

  In v10, this confused ExecInitModifyTable's decision about whether it needed to close the root table. But we can get rid of that altogether by being smarter about where to find the root table.

  Note that since the referenced commits haven't shipped yet, this isn't fixing any bug the field has seen.

  Amit Langote, per a report from me

  Discussion: https://postgr.es/m/20710.1554582479@sss.pgh.pa.us
* Fix failures in validateForeignKeyConstraint's slow path.  (Tom Lane, 2019-04-06)

  The foreign-key-checking loop in ATRewriteTables failed to ignore relations without storage (e.g., partitioned tables), unlike the initial loop. This accidentally worked as long as RI_Initial_Check succeeded, which it does in most practical cases (including all the ones exercised in the existing regression tests :-(). However, if that failed, as for instance when there are permissions issues, then we entered the slow fire-the-trigger-on-each-tuple path. And that would try to read from the referencing relation, and fail if it lacks storage.

  A second problem, recently introduced in HEAD, was that this loop had been broken by sloppy refactoring for the tableam API changes.

  Repair both issues, and add a regression test case so we have some coverage on this code path. Back-patch as needed to v11.

  (It looks like this code could do with additional bulletproofing, but let's get a working test case in place first.)

  Hadi Moshayedi, Tom Lane, Andres Freund

  Discussion: https://postgr.es/m/CAK=1=WrnNmBbe5D9sm3t0a6dnAq3cdbF1vXY816j1wsMqzC8bw@mail.gmail.com
  Discussion: https://postgr.es/m/19030.1554574075@sss.pgh.pa.us
  Discussion: https://postgr.es/m/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de