path: root/src
* pg_upgrade: make get_major_server_version() err msg consistent  (Bruce Momjian, 2020-03-19)

    This patch fixes the error message in get_major_server_version() to be
    "could not parse version file", and uses the full file path name, rather
    than just the data directory path.

    Also, commit 4109bb5de4 added the cause of the failure to the "could not
    open" error message, and improved quoting.  This patch backpatches the
    "could not open" cause to PG 12, where it was first widely used, and
    backpatches the quoting fix in that patch to all supported releases.

    Reported-by: Tom Lane
    Discussion: https://postgr.es/m/87pne2w98h.fsf@wibble.ilmari.org
    Author: Dagfinn Ilmari Mannsåker
    Backpatch-through: 9.5

* Add missing errcode() in a few ereport calls.  (Amit Kapila, 2020-03-18)

    This allows specifying the SQLSTATE error code for the errors in the
    places that were missing it.

    Reported-by: Sawada Masahiko
    Author: Sawada Masahiko
    Backpatch-through: 9.5
    Discussion: https://postgr.es/m/CA+fd4k6N8EjNvZpM8nme+y+05mz-SM8Z_BgkixzkA34R+ej0Kw@mail.gmail.com

* Avoid holding a directory FD open across assorted SRF calls.  (Tom Lane, 2020-03-16)

    This extends the fixes made in commit 085b6b667 to other SRFs with the
    same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
    pg_ls_dir(), and pg_tablespace_databases().

    Also adjust various comments and documentation to warn against expecting
    to clean up resources during a ValuePerCall SRF's final call.

    Back-patch to all supported branches, since these functions were all
    born broken.

    Justin Pryzby, with cosmetic tweaks by me

    Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com

* Plug memory leak  (Alvaro Herrera, 2020-03-16)

    Introduced by b08dee24a557.  Noted by Coverity.

* Restructure polymorphic-type resolution in funcapi.c.  (Tom Lane, 2020-03-14)

    resolve_polymorphic_tupdesc() and resolve_polymorphic_argtypes() failed
    to cover the case of having to resolve anyarray given only an anyrange
    input.  The bug was masked if anyelement was also used (as either input
    or output), which probably helps account for our not having noticed.

    While looking at this I noticed that resolve_generic_type() would
    produce the wrong answer if asked to make that same resolution.  ISTM
    that resolve_generic_type() is confusingly defined and overly complex,
    so rather than fix it, let's just make funcapi.c do the specific lookups
    it requires for itself.  With this change, resolve_generic_type() is not
    used anywhere, so remove it in HEAD.  In the back branches, leave it
    alone (complete with bug) just in case any external code is using it.

    While we're here, make some other refactoring adjustments in funcapi.c
    with an eye to upcoming future expansion of the set of polymorphic
    types:

    * Simplify quick-exit tests by adding an overall
      have_polymorphic_result flag.  This is about a wash now but will be a
      win when there are more flags.

    * Reduce duplication of code between resolve_polymorphic_tupdesc() and
      resolve_polymorphic_argtypes().

    * Don't bother to validate correct matching of anynonarray or anyenum;
      the parser should have done that, and even if it didn't, just doing
      "return false" here would lead to a very confusing, off-point error
      message.  (Really, "return false" in these two functions should only
      occur if the call_expr isn't supplied or we can't obtain data type
      info from it.)

    * For the same reason, throw an elog rather than "return false" if we
      fail to resolve a polymorphic type.

    The bug's been there since we added anyrange, so back-patch to all
    supported branches.

    Discussion: https://postgr.es/m/6093.1584202130@sss.pgh.pa.us

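    As an illustration (a minimal sketch with hypothetical names, not taken
    from the commit), a PL/pgSQL function whose only polymorphic input is
    anyrange but whose result type is anyarray exercises exactly the
    resolution case that was missing:

        CREATE FUNCTION range_bounds(r anyrange) RETURNS anyarray
        LANGUAGE plpgsql AS $$
        BEGIN
          -- lower()/upper() yield the range's subtype; the anyarray result
          -- type must be resolved from the anyrange argument alone.
          RETURN ARRAY[lower(r), upper(r)];
        END $$;

        SELECT range_bounds(int4range(1, 10));  -- expected: {1,10}
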
* Preserve replica identity index across ALTER TABLE rewrite  (Peter Eisentraut, 2020-03-13)

    If an index was explicitly set as replica identity index, this setting
    was lost when a table was rewritten by ALTER TABLE.  Because this
    setting is part of pg_index but actually controlled by ALTER TABLE (not
    part of CREATE INDEX, say), we have to do some extra work to restore it.

    Based-on-patch-by: Quan Zongliang <quanzongliang@gmail.com>
    Reviewed-by: Euler Taveira <euler.taveira@2ndquadrant.com>
    Discussion: https://www.postgresql.org/message-id/flat/c70fcab2-4866-0d9f-1d01-e75e189db342@gmail.com

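    For illustration, a minimal sketch with made-up object names (assuming a
    column type change that forces a table rewrite):

        CREATE TABLE widgets (id int NOT NULL, val text);
        CREATE UNIQUE INDEX widgets_id_idx ON widgets (id);
        ALTER TABLE widgets REPLICA IDENTITY USING INDEX widgets_id_idx;

        ALTER TABLE widgets ALTER COLUMN id TYPE bigint;  -- rewrites the table

        SELECT indisreplident FROM pg_index
        WHERE indexrelid = 'widgets_id_idx'::regclass;    -- previously lost, now preserved
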
* Fix nextXid tracking bug on standbys (9.5-11 only).  (Thomas Munro, 2020-03-12)

    RecordKnownAssignedTransactionIds() should never move nextXid backwards.
    Before this commit, that could happen if some other code path had
    advanced it without advancing latestObservedXid.

    One consequence is that a well timed XLOG_CHECKPOINT_ONLINE could cause
    hot standby feedback messages to get confused and report an xmin from a
    future epoch, potentially allowing vacuum to run too soon on the
    primary.

    Repair, by making sure RecordKnownAssignedTransactionIds() can only move
    nextXid forwards.

    In release 12 and master, this was already done by commit 2fc7af5e,
    which consolidated similar code and straightened out this bug.
    Back-patch to supported releases before that.

    Author: Eka Palamadai <ekanatha@amazon.com>
    Discussion: https://postgr.es/m/98BB4805-D0A2-48E1-96F4-15014313EADC@amazon.com

* Fix test case instability introduced in 085b6b667.  (Tom Lane, 2020-03-11)

    I forgot that the WAL directory might hold other files besides WAL
    segments, notably including new segments still being filled.  That means
    a blind test for the first file's size being 16MB can fail.  Restrict
    based on file name length to make it more robust.

    Per buildfarm.

* Add pg_dump support for ALTER obj DEPENDS ON EXTENSION  (Alvaro Herrera, 2020-03-11)

    pg_dump is oblivious to this kind of dependency, so such dependencies
    are lost on dump/restore (and pg_upgrade).  Have pg_dump emit ALTER
    lines so that they're preserved.  Also add some pg_dump tests for the
    whole thing.

    Reviewed-by: Tom Lane (offlist)
    Reviewed-by: Ibrar Ahmed
    Reviewed-by: Ahsan Hadi (who also reviewed commit 899a04f5ed61)
    Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql

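    For example, the dump output now carries a line of this form for each
    such dependency (the object and extension names here are hypothetical):

        ALTER FUNCTION public.refresh_cache() DEPENDS ON EXTENSION my_ext;
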
* Avoid holding a directory FD open across pg_ls_dir_files() calls.  (Tom Lane, 2020-03-11)

    This coding technique is undesirable because (a) it leaks the FD for the
    rest of the transaction if the SRF is not run to completion, and (b)
    allocated FDs are a scarce resource, but multiple interleaved uses of
    the relevant functions could eat many such FDs.

    In v11 and later, a query such as "SELECT pg_ls_waldir() LIMIT 1" yields
    a warning about the leaked FD, and the only reason there's no warning in
    earlier branches is that fd.c didn't whine about such leaks before
    commit 9cb7db3f0.  Even disregarding the warning, it wouldn't be too
    hard to run a backend out of FDs with careless use of these SQL
    functions.

    Hence, rewrite the function so that it reads the directory within a
    single call, returning the results as a tuplestore rather than via
    value-per-call mode.

    There are half a dozen other built-in SRFs with similar problems, but
    let's fix this one to start with, just to see if the buildfarm finds
    anything wrong with the code.

    In passing, fix bogus error report for stat() failure: it was whining
    about the directory when it should be fingering the individual file.
    Doubtless a copy-and-paste error.

    Back-patch to v10 where this function was added.

    Justin Pryzby, with cosmetic tweaks and test cases by me

    Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com

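    As an illustrative sketch of the leak scenario (assuming a v10+ server,
    where pg_ls_waldir() is available), abandoning the SRF early used to
    leave a directory FD open for the rest of the transaction:

        BEGIN;
        SELECT name FROM pg_ls_waldir() LIMIT 1;  -- formerly leaked a directory FD
        -- With the fix, the directory is read in full within a single call
        -- and returned as a tuplestore, so nothing stays open afterwards.
        COMMIT;
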
* Avoid duplicates in ALTER ... DEPENDS ON EXTENSION  (Alvaro Herrera, 2020-03-11)

    If the command is attempted for an extension that the object already
    depends on, silently do nothing.

    In particular, this means that if a database containing multiple such
    entries is dumped, the restore will silently do the right thing and
    record just the first one.  (At least, in a world where pg_dump does
    dump such entries -- which it doesn't currently, but it will.)

    Backpatch to 9.6, where this kind of dependency was introduced.

    Reviewed-by: Ibrar Ahmed, Tom Lane (offlist)
    Discussion: https://postgr.es/m/20200217225333.GA30974@alvherre.pgsql

* Fix pg_dump/pg_restore to restore event triggers later.  (Tom Lane, 2020-03-09)

    Previously, event triggers were restored just after regular triggers
    (and FK constraints, which are basically triggers).  This is risky since
    an event trigger, once installed, could interfere with subsequent
    restore commands.  Worse, because event triggers don't have any
    particular dependencies on any post-data objects, a parallel restore
    would consider them eligible to be restored the moment the post-data
    phase starts, allowing them to also interfere with restoration of a
    whole bunch of objects that would have been restored before them in a
    serial restore.

    There's no way to completely remove the risk of a misguided event
    trigger breaking the restore, since if nothing else it could break other
    event triggers.  But we can certainly push them to later in the process
    to minimize the hazard.

    To fix, tweak the RestorePass mechanism introduced by commit 3eb9a5e7c
    so that event triggers are handled as part of the post-ACL processing
    pass (renaming the "REFRESH" pass to "POST_ACL" to reflect its more
    general use).  This will cause them to restore after everything except
    matview refreshes, which seems OK since matview refreshes really ought
    to run in the post-restore state of the database.  In a parallel
    restore, event triggers and matview refreshes might be intermixed, but
    that seems all right as well.

    Also update the code and comments in pg_dump_sort.c so that its idea of
    how things are sorted agrees with what actually happens due to the
    RestorePass mechanism.  This is mostly cosmetic: it'll affect the order
    of objects in a dump's TOC, but not the actual restore order.  But not
    changing that would be quite confusing to somebody reading the code.

    Back-patch to all supported branches.

    Fabrízio de Royes Mello, tweaked a bit by me

    Discussion: https://postgr.es/m/CAFcNs+ow1hmFox8P--3GSdtwz-S3Binb6ZmoP6Vk+Xg=K6eZNA@mail.gmail.com

* Fix bug that caused "waiting" to be reported twice in the PS display, in hot standby.  (Fujii Masao, 2020-03-10)

    Previously "waiting" could appear twice via PS in case of lock conflict
    in hot standby mode.  Specifically this issue happened when the delay in
    WAL application determined by max_standby_archive_delay and
    max_standby_streaming_delay had passed but it took more than 500 msec to
    cancel all the conflicting transactions.  Especially we can observe this
    easily by setting those delay parameters to -1.

    The cause of this issue was that WaitOnLock() and
    ResolveRecoveryConflictWithVirtualXIDs() both added "waiting" to the
    process title in that case.  This commit prevents
    ResolveRecoveryConflictWithVirtualXIDs() from reporting waiting in case
    of lock conflict, to fix the bug.

    Back-patch to all back branches.

    Author: Masahiko Sawada
    Reviewed-by: Fujii Masao
    Discussion: https://postgr.es/m/CA+fd4k4mXWTwfQLS3RPwGr4xnfAEs1ysFfgYHvmmoUgv6Zxvmg@mail.gmail.com

* Avoid assertion failure with targeted recovery in standby mode.  (Fujii Masao, 2020-03-09)

    At the end of recovery, standby mode is turned off to re-fetch the last
    valid record from archive or pg_wal.  Previously, if the recovery target
    was reached and standby mode was turned off while the current WAL source
    was stream, recovery could unexpectedly try to retrieve the WAL file
    containing the last valid record from stream even though not in standby
    mode.  This caused an assertion failure: the assertion checks that a WAL
    file should not be retrieved from stream when standby mode is not
    enabled.

    This commit moves the current WAL source back to archive if it is stream
    even though not in standby mode, to avoid that assertion failure.

    This issue doesn't cause the server to crash when built with assertions
    disabled.  In that case, the attempt to retrieve the WAL file from
    stream while not in standby mode simply fails, and then recovery tries
    to retrieve the WAL file from archive or pg_wal.

    Back-patch to all supported branches.

    Author: Kyotaro Horiguchi
    Reviewed-by: Fujii Masao
    Discussion: https://postgr.es/m/20200227.124830.2197604521555566121.horikyota.ntt@gmail.com

* Fix typo  (Peter Eisentraut, 2020-03-07)

* Fix assertion failure with ALTER TABLE ATTACH PARTITION and indexes  (Michael Paquier, 2020-03-03)

    Using ALTER TABLE ATTACH PARTITION causes an assertion failure when
    attempting to work on a partitioned index, because partitioned indexes
    cannot have partition bounds.

    The grammar of ALTER TABLE ATTACH PARTITION requires partition bounds,
    but not ALTER INDEX, so mixing ALTER TABLE with partitioned indexes is
    confusing.  Hence, on HEAD, prevent ALTER TABLE from attaching a
    partition if the relation involved is a partitioned index.  On
    back-branches, as applications may rely on the existing behavior, just
    remove the culprit assertion.

    Reported-by: Alexander Lakhin
    Author: Amit Langote, Michael Paquier
    Discussion: https://postgr.es/m/16276-5cd1dcc8fb8be7b5@postgresql.org
    Backpatch-through: 11

* Correctly re-use hash tables in buildSubPlanHash().  (Tom Lane, 2020-02-29)

    Commit 356687bd8 omitted to remove leftover code for destroying a hashed
    subplan's hash tables, with the result that the tables were always
    rebuilt, not reused; this leads to severe memory leakage if a hashed
    subplan is re-executed enough times.  Moreover, the code for reusing the
    hashnulls table had a typo that would have made it do the wrong thing if
    it were reached.

    Looking at the code coverage report shows severe under-coverage of the
    potential callers of ResetTupleHashTable, so add some test cases that
    exercise them.

    Andreas Karlsson and Tom Lane, per reports from Ranier Vilela and Justin
    Pryzby.  Backpatch to v11, as the faulty commit was.

    Discussion: https://postgr.es/m/edb62547-c453-c35b-3ed6-a069e4d6b937@proxel.se
    Discussion: https://postgr.es/m/CAEudQAo=DCebm1RXtig9OH+QivpS97sMkikt0A9qHmMUs+g6ZA@mail.gmail.com
    Discussion: https://postgr.es/m/20200210032547.GA1412@telsasoft.com

* Avoid failure if autovacuum tries to access a just-dropped temp namespace.  (Tom Lane, 2020-02-28)

    Such an access became possible when commit 246a6c8f7 added more
    aggressive cleanup of orphaned temp relations by autovacuum.  Since
    autovacuum's snapshot might be slightly stale, it could attempt to
    access an already-dropped temp namespace, resulting in an assertion
    failure or null-pointer dereference.  (In practice, since we don't drop
    temp namespaces automatically but merely recycle them, this situation
    could only arise if a superuser does a manual drop of a temp namespace.
    Still, that should be allowed.)

    The core of the bug, IMO, is that isTempNamespaceInUse and its callers
    failed to think hard about whether to treat "temp namespace isn't there"
    differently from "temp namespace isn't in use".  In hopes of
    forestalling future mistakes of the same ilk, replace that function with
    a new one checkTempNamespaceStatus, which makes the same tests but
    returns a three-way enum rather than just a bool.  isTempNamespaceInUse
    is gone entirely in HEAD; but just in case some external code is relying
    on it, keep it in the back branches, as a bug-compatible wrapper around
    the new function.

    Per report originally from Prabhat Kumar Sahu, investigated by Mahendra
    Singh and Michael Paquier; the final form of the patch is my fault.
    This replaces the failed fix attempt in a052f6cbb.

    Backpatch as far as v11, as 246a6c8f7 was.

    Discussion: https://postgr.es/m/CAKYtNAr9Zq=1-ww4etHo-VCC-k120YxZy5OS01VkaLPaDbv2tg@mail.gmail.com

* Remove TAP test for createdb --lc-ctype  (Michael Paquier, 2020-02-27)

    OpenBSD falls back to "C" when using an incorrect input with setlocale()
    and LC_CTYPE, causing this test, introduced by 008cf04, to fail.  This
    removes the culprit test to avoid the portability issue.

    Per report from Robert Haas, via buildfarm member curculio.

    Discussion: https://postgr.es/m/CA+TgmoZ6ddh3mHD9gU8DvNYoFmuJaYYn1+4AvZNp25vTdRwCAQ@mail.gmail.com
    Backpatch-through: 11

* Skip foreign tablespaces when running pg_checksums/pg_verify_checksums  (Michael Paquier, 2020-02-27)

    Attempting to use pg_checksums (pg_verify_checksums in 11) on a data
    folder which includes tablespace paths used across multiple major
    versions would cause pg_checksums to scan all directories present in
    pg_tblspc, and not only the one named TABLESPACE_VERSION_DIRECTORY.
    This could lead to failures when, for example, running sanity checks on
    an upgraded instance with --check.  Even worse, it was possible to
    rewrite on-disk pages with --enable for a cluster that was potentially
    online.

    This commit makes pg_checksums skip any directories not named
    TABLESPACE_VERSION_DIRECTORY, similarly to what is done for base
    backups.

    Reported-by: Michael Banck
    Author: Michael Banck, Bernd Helmle
    Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
    Backpatch-through: 11

* createdb: Fix quoting of --encoding, --lc-ctype and --lc-collate  (Michael Paquier, 2020-02-27)

    The original coding failed to properly quote those arguments, leading to
    failures when quotes appeared in the values used.  As the quoting can be
    encoding-sensitive, the connection to the backend needs to be
    established before applying the correct quoting.

    Author: Michael Paquier
    Reviewed-by: Daniel Gustafsson
    Discussion: https://postgr.es/m/20200214041004.GB1998@paquier.xyz
    Backpatch-through: 9.5

* Add prefix checks in exclude lists for pg_rewind, pg_checksums and base backups  (Michael Paquier, 2020-02-24)

    An instance of PostgreSQL crashing at a bad time could leave behind
    temporary pg_internal.init files, potentially causing failures when
    verifying checksums.  As the same exclusion lists are used between
    pg_rewind, pg_checksums and basebackup.c, all those tools are extended
    with prefix checks to keep everything in sync, with dedicated checks
    added for pg_internal.init.

    Backpatch down to 11, where pg_checksums (pg_verify_checksums in 11) and
    checksum verification for base backups were introduced.

    Reported-by: Michael Banck
    Author: Michael Paquier
    Reviewed-by: Kyotaro Horiguchi, David Steele
    Discussion: https://postgr.es/m/62031974fd8e941dd8351fbc8c7eff60d59c5338.camel@credativ.de
    Backpatch-through: 11

* Remove extra word from comment.  (Etsuro Fujita, 2020-02-20)

* Fix typo  (Peter Eisentraut, 2020-02-19)

    Reported-by: Daniel Verite <daniel@manitou-mail.org>

* Fix confusion about event trigger vs. plain function in plpgsql.  (Tom Lane, 2020-02-19)

    The function hash table keys made by compute_function_hashkey() failed
    to distinguish event-trigger call context from regular call context.
    This meant that once we'd successfully made a hash entry for an event
    trigger (either by validation, or by normal use as an event trigger), an
    attempt to call the trigger function as a plain function would find this
    hash entry and thereby bypass the you-can't-do-that check in
    do_compile().  Thus we'd attempt to execute the function, leading to
    strange errors or even crashes, depending on function contents and
    server version.

    To fix, add an isEventTrigger field to PLpgSQL_func_hashkey, paralleling
    the longstanding infrastructure for regular triggers.  This fits into
    what had been pad space, so there's no risk of an ABI break, even
    assuming that any third-party code is looking at these hash keys.  (I
    considered replacing isTrigger with a PLpgSQL_trigtype enum field, but
    felt that that carried some API/ABI risk.  Maybe we should change it in
    HEAD though.)

    Per bug #16266 from Alexander Lakhin.  This has been broken since event
    triggers were invented, so back-patch to all supported branches.

    Discussion: https://postgr.es/m/16266-fcd7f838e97ba5d4@postgresql.org

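    A minimal reproduction sketch (hypothetical names; the exact error text
    varies by branch): once the function has been compiled in event-trigger
    context, calling it directly used to bypass the context check.

        CREATE FUNCTION log_ddl() RETURNS event_trigger
        LANGUAGE plpgsql AS $$
        BEGIN
          RAISE NOTICE 'ddl command: %', tg_tag;
        END $$;

        CREATE EVENT TRIGGER log_ddl_trg ON ddl_command_start
          EXECUTE PROCEDURE log_ddl();

        CREATE TABLE fires_trigger (a int);  -- fires the event trigger, caching the entry

        SELECT log_ddl();  -- must now be rejected, not run as a plain function
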
* Fix measurement of elapsed time during truncating heap in VACUUM.  (Fujii Masao, 2020-02-19)

    VACUUM may truncate the heap in several batches.  The activity report is
    logged for each batch, and contains the number of pages in the table
    before and after the truncation, and also the elapsed time during the
    truncation.  Previously the elapsed time reported in each batch was the
    total elapsed time since starting the truncation until finishing that
    batch.  For example, if the truncation was processed in three batches,
    the second batch reported the accumulated time elapsed during both the
    first and second batches.  This is strange and confusing because the
    numbers of pages reported alongside are not totals.  Instead, each batch
    should report the time elapsed during only that batch.

    The cause of this issue was that the resource usage snapshot was
    initialized only at the beginning of the truncation and was never reset
    later.  This commit fixes the issue by changing VACUUM so that the
    resource usage snapshot is reset at each batch.

    Back-patch to all supported branches.

    Reported-by: Tatsuhito Kasahara
    Author: Tatsuhito Kasahara
    Reviewed-by: Masahiko Sawada, Fujii Masao
    Discussion: https://postgr.es/m/CAP0=ZVJsf=NvQuy+QXQZ7B=ZVLoDV_JzsVC1FRsF1G18i3zMGg@mail.gmail.com

* Stop demanding that top xact must be seen before subxact in decoding.  (Amit Kapila, 2020-02-19)

    Manifested as

        ERROR: subtransaction logged without previous top-level txn record

    this check forbids legit behaviours like:

    - The first xl_xact_assignment record is beyond reading, i.e. earlier
      than restart_lsn.
    - After restart_lsn there is some change of a subxact.
    - After that, there is a second xl_xact_assignment (for another subxact)
      revealing the relationship between the top and first subxact.

    Such a transaction won't be streamed anyway because we hadn't seen it in
    full.  Saying for sure whether the xact of some record encountered after
    the snapshot was deserialized can be streamed or not requires knowing
    whether it wrote something before the deserialization point -- if yes,
    it hasn't been seen in full and can't be decoded.  The snapshot doesn't
    have such info, so there is no easy way to relax the check.

    Reported-by: Hsu, John
    Diagnosed-by: Arseny Sher
    Author: Arseny Sher, Amit Kapila
    Reviewed-by: Amit Kapila, Dilip Kumar
    Backpatch-through: 9.5
    Discussion: https://postgr.es/m/AB5978B2-1772-4FEE-A245-74C91704ECB0@amazon.com

* Teach pg_dump to dump comments on RLS policy objects.  (Tom Lane, 2020-02-17)

    This was unaccountably omitted in the original RLS patch.  The SQL
    syntax is basically the same as for comments on triggers, so crib code
    from dumpTrigger().

    Per report from Marc Munro.  Back-patch to all supported branches.

    Discussion: https://postgr.es/m/1581889298.18009.15.camel@bloodnok.com

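    For instance, comments of this kind (a sketch with hypothetical table
    and policy names) are now carried over by pg_dump:

        CREATE TABLE accounts (owner text, balance numeric);
        CREATE POLICY accounts_owner ON accounts USING (owner = current_user);
        COMMENT ON POLICY accounts_owner ON accounts
          IS 'each role sees only its own rows';
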
* Document the pg_upgrade -j/--jobs option as taking an argument  (Peter Eisentraut, 2020-02-11)

* Stamp 11.7.  (tag: REL_11_7)  (Tom Lane, 2020-02-10)

* createuser: fix parsing of --connection-limit argument  (Alvaro Herrera, 2020-02-10)

    The original coding failed to quote the argument properly.

    Reported-by: Daniel Gustafsson
    Discussion: 1B8AE66C-85AB-4728-9BB4-612E8E61C219@yesql.se

* Fix priv checks for ALTER <object> DEPENDS ON EXTENSION  (Alvaro Herrera, 2020-02-10)

    Marking an object as dependent on an extension did not have any
    privilege check whatsoever; this allowed any user to mark objects as
    droppable by anyone able to DROP EXTENSION, which could be used to cause
    system-wide havoc.  Disallow by checking that the calling user owns the
    mentioned object.

    (No constraints are placed on the extension.)

    Security: CVE-2020-1720
    Reported-by: Tom Lane
    Discussion: 31605.1566429043@sss.pgh.pa.us

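    Illustratively (hypothetical object names), the command is now rejected
    with an ownership error unless the caller owns the target object:

        -- as a role that does not own some_function:
        ALTER FUNCTION some_function() DEPENDS ON EXTENSION some_extension;
        -- previously recorded silently; now fails with a "must be owner" error
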
* Translation updates  (Peter Eisentraut, 2020-02-10)

    Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
    Source-Git-Hash: 85c682262712155b8026c05a3b09066e85a6af98

* Revert "pg_upgrade: Fix quoting of some arguments in pg_ctl command"  (Michael Paquier, 2020-02-10)

    This reverts commit d1c0b61.  The patch has some downsides that require
    more attention, as discussed with Noah Misch.

    Backpatch-through: 9.5

* pg_upgrade: Fix quoting of some arguments in pg_ctl command  (Michael Paquier, 2020-02-10)

    The previous coding forgot to apply shell quoting to the socket
    directory and the data folder, leading to failures when running
    pg_upgrade.  This refactors the code generating the pg_ctl command
    starting clusters to use more correct shell quoting.

    Failures are easier to trigger in 12 and newer versions by using a value
    of --socketdir that includes quotes, but it is also possible to cause
    failures with quotes included in the default socket directory used by
    pg_upgrade or the data folders of the clusters involved in the upgrade.

    As 9.4 is going to be EOL'd with the next minor release, nobody is
    likely to upgrade to it now, so that branch is not included in the set
    of branches fixed.

    Author: Michael Paquier
    Reviewed-by: Álvaro Herrera, Noah Misch
    Backpatch-through: 9.5

* Store the deletion horizon XID for a deleted GIN page on the right page.  (Tom Lane, 2020-02-09)

    Commit b10714080 moved the GinPageSetDeleteXid() call to a spot where
    the "page" variable was pointing to the wrong page, causing the XID to
    be inserted on a page that's not being deleted, thus allowing later
    GinPageIsRecyclable tests to recycle the deleted page too soon.

    It might be a good idea to stop using the single "page" variable for
    multiple purposes in this function.  But for the moment I just moved the
    GinPageSetDeleteXid() call down beside the GinPageSetDeleted() call,
    which seems like a more logical place for it anyway.

    Back-patch to v11, as the faulty patch was.  (Fortunately, the bug
    hasn't made it into any release yet.)

    Discussion: https://postgr.es/m/21620.1581098806@sss.pgh.pa.us

* Fix typo.  (Amit Kapila, 2020-02-06)

    Reported-by: Amit Langote
    Author: Amit Langote
    Backpatch-through: 9.6, where it was introduced
    Discussion: https://postgr.es/m/CA+HiwqFNADeukaaGRmTqANbed9Fd81gLi08AWe_F86_942Gspw@mail.gmail.com

* Fix bug in LWLock statistics mechanism.  (Fujii Masao, 2020-02-06)

    Previously PostgreSQL built with -DLWLOCK_STATS could report more than
    one LWLock statistics entry for the same backend process and the same
    LWLock.  This is strange and only one entry should be output in that
    case.

    The cause of this issue is that the key variable used for the LWLock
    stats hash table was not fully initialized.  The key consists of two
    fields, and while those were initialized, the following 4 bytes
    allocated in the key variable for alignment were not.  So even if the
    same key was specified, hash_search(HASH_ENTER) could not find the
    existing entry for that key and created a new one.  This commit fixes
    this issue by initializing the key variable with zero.

    As a side effect of this commit, the volume of LWLock statistics output
    is reduced very much.

    Back-patch to v10, where commit 3761fe3c20 introduced the issue.

    Author: Fujii Masao
    Reviewed-by: Julien Rouhaud, Kyotaro Horiguchi
    Discussion: https://postgr.es/m/26359edb-798a-568f-d93a-6aafac49752d@oss.nttdata.com

* Force tuple conversion when the source has missing attributes.  (Andrew Gierth, 2020-02-05)

    Tuple conversion incorrectly concluded that no conversion was needed as
    long as all the attributes lined up.  But if the source tuple has a
    missing attribute (from addition of a column with default), then the
    destination tupdesc might not reflect the same default.  The typical
    symptom was that the affected columns would be unexpectedly NULL.

    Repair by always forcing conversion if the source has missing
    attributes, which will be filled in by the deform operation.  (In theory
    we could optimize for when the destination has the same default, but
    that seemed overkill.)

    Backpatch to 11 where missing attributes were added.

    Per bug #16242.  Vik Fearing (discovery, code, testing) and me
    (analysis, testcase).

    Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org

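    As a sketch of how such a "missing attribute" tuple arises (hypothetical
    table; the conversion itself happens in code paths that map the row to
    another tuple descriptor):

        CREATE TABLE t (a int);
        INSERT INTO t VALUES (1);                   -- stored row has only column a
        ALTER TABLE t ADD COLUMN b int DEFAULT 42;  -- fast default, no table rewrite
        -- Before the fix, converting the pre-existing row to a matching row
        -- type could surface b as NULL instead of 42.
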
* When a TAP file has non-zero exit status, retain temporary directories.  (Noah Misch, 2020-02-05)

    PostgresNode already retained base directories in such cases.  Stop
    using $SIG{__DIE__}, which is redundant with the exit status check, in
    lieu of proliferating it to TestLib.  Back-patch to 9.6, where commit
    88802e068017bee8cea7a5502a712794e761c7b5 introduced retention on
    failure.

    Reviewed by Daniel Gustafsson.

    Discussion: https://postgr.es/m/20200202170155.GA3264196@rfd.leadboat.com

* Handle lack of DSM slots in parallel btree build, take 2.  (Thomas Munro, 2020-02-05)

    Commit 74618e77 added a new check intended to fix a bug, but put it in
    the wrong place so that parallel btree build was always disabled.  Do
    the check after we've actually tried to create a DSM segment.
    Back-patch to 11, like the earlier commit.

    Reviewed-by: Peter Geoghegan
    Discussion: https://postgr.es/m/CAH2-WzmDABkJzrNnvf%2BOULK-_A_j9gkYg_Dz-H62jzNv4eKQTw%40mail.gmail.com

* Fix handling of "Subplans Removed" field in EXPLAIN output.  (Tom Lane, 2020-02-04)

    Commit 499be013d added this field in a rather poorly-thought-through
    manner, with the result being that rather than being a field of the
    Append or MergeAppend plan node as intended (and as it seems to be, in
    text format), it was actually an element of the "Plans" subgroup.  At
    least in JSON format, that's flat out invalid syntax, because "Plans" is
    an array not an object.

    While it's not hard to move the generation of the field so that it
    appears where it's supposed to, this does result in a visible change in
    field order in text format, in cases where an Append or MergeAppend plan
    node has any InitPlans attached.  That's slightly annoying to do in
    stable branches; but the alternative of continuing to emit broken
    non-text formats seems worse.

    Also, since the set of fields emitted is not supposed to be
    data-dependent in non-text formats, make sure that "Subplans Removed"
    appears in Append and MergeAppend nodes even when it's zero, in those
    formats.  (The previous coding made it look like it could appear in some
    other node types such as BitmapAnd, but we don't actually support
    runtime pruning there, so don't emit it in those cases.)

    Per bug #16171 from Mahadevan Ramachandran.  Fix by Daniel Gustafsson
    and Tom Lane, reviewed by Hamid Akhtar.  Back-patch to v11 where this
    code came in.

    Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org

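    For illustration (a sketch with a hypothetical partitioned table;
    whether run-time pruning actually removes subplans depends on the
    pruning expression, so treat this only as a way to see where the field
    lives):

        -- measurements is partitioned by logdate; comparing against a stable
        -- expression such as now() keeps pruning from happening at plan time.
        EXPLAIN (FORMAT JSON)
        SELECT * FROM measurements WHERE logdate = now()::date;
        -- In the JSON output, "Subplans Removed" is now a key of the Append
        -- node object itself, not an element of its "Plans" array, and it is
        -- emitted even when zero.
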
* Add missing break out seqscan loop in logical replication  (Alvaro Herrera, 2020-02-03)

    When replica identity is FULL (an admittedly unusual case), the loop
    that searches for tuples in execReplication.c didn't stop scanning the
    table once a matching tuple was found.  Add the missing 'break'.

    Note slight behavior change: we now return the first matching tuple
    rather than the last one.  They are supposed to be indistinguishable
    anyway, so this shouldn't matter.

    Author: Konstantin Knizhnik
    Discussion: https://postgr.es/m/379743f6-ae91-b866-f7a2-5624e6d2b0a4@postgrespro.ru

* Revert commit a5b652f3a0.  (Fujii Masao, 2020-02-03)

    This commit reverts the fix "Make inherited TRUNCATE perform access
    permission checks on parent table only" only in the back branches.

    It's not hard to imagine that there are some applications expecting the
    old behavior and the fix breaks their security.  To avoid this
    compatibility problem, we decided to apply the fix only in HEAD and
    revert it in all supported back branches.

    Discussion: https://postgr.es/m/21015.1580400165@sss.pgh.pa.us

* Fix memory leak on DSM slot exhaustion.  (Thomas Munro, 2020-02-01)

    If we attempt to create a DSM segment when no slots are available, we
    should return the memory to the operating system.  Previously we did
    that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was passed in, but we
    didn't do it if an error was raised.  Repair.

    Back-patch to 9.4, where DSM segments arrived.

    Author: Thomas Munro
    Reviewed-by: Robert Haas
    Reported-by: Julian Backes
    Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com

* Fix CheckAttributeType's handling of collations for ranges.  (Tom Lane, 2020-01-31)

    Commit fc7695891 changed CheckAttributeType to recurse into ranges, but
    made it pass down the wrong collation (always InvalidOid, since ranges
    as such have no collation).  This would result in guaranteed failure
    when considering a range type whose subtype is collatable.

    Embarrassingly, we lack any regression tests that would expose such a
    problem (but fortunately, somebody noticed before we shipped this bug in
    any release).  Fix it to pass down the range's subtype collation
    property instead, and add some regression test cases to exercise
    collatable-subtype ranges a bit more.  Back-patch to all supported
    branches, as the previous patch was.

    Report and patch by Julien Rouhaud, test cases tweaked by me

    Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com

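    For illustration, a sketch of the failing case (hypothetical names,
    mirroring the kind of scenario the new regression tests cover):

        CREATE TYPE textrange AS RANGE (subtype = text, collation = "C");
        CREATE TABLE notes (body textrange);  -- previously failed the attribute check
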
* Fix parallel pg_dump/pg_restore for failure to create worker processes.  (Tom Lane, 2020-01-31)

    If we failed to fork a worker process, or create a communication pipe
    for one, WaitForTerminatingWorkers would suffer an assertion failure if
    assert-enabled, otherwise crash or go into an infinite loop.  This was a
    consequence of not accounting for the startup condition where we've not
    yet forked all the workers.

    The original bug was that ParallelBackupStart would set workerStatus to
    WRKR_IDLE before it had successfully forked a worker.  I made things
    worse in commit b7b8cc0cf by not understanding the undocumented fact
    that the WRKR_TERMINATED state was also meant to represent the case
    where a worker hadn't been started yet: I changed enum T_WorkerStatus so
    that *all* the worker slots were initially in WRKR_IDLE state.  But this
    wasn't any more broken in practice, since even one slot in the wrong
    state would keep WaitForTerminatingWorkers from terminating.

    In v10 and later, introduce an explicit T_WorkerStatus value for
    worker-not-started, in hopes of preventing future oversights of the same
    ilk.  Before that, just document that WRKR_TERMINATED is supposed to
    cover that case (partly because it wasn't actively broken, and partly
    because the enum is exposed outside parallel.c in those branches, so
    there's microscopically more risk involved in changing it).  In all
    branches, introduce a WORKER_IS_RUNNING status test macro to hide which
    T_WorkerStatus values mean that, and be more careful not to access
    ParallelSlot fields till we're sure they're valid.

    Per report from Vignesh C, though this is my patch not his.  Back-patch
    to all supported branches.

    Discussion: https://postgr.es/m/CALDaNm1Luv-E3sarR+-unz-BjchquHHyfP+YC+2FS2pt_J+wxg@mail.gmail.com

* Fix typo in recently-added TAP test for replication slots  (Michael Paquier, 2020-01-31)

    Oversight in commit b0afdca.

* Handle lack of DSM slots in parallel btree build.  (Thomas Munro, 2020-01-31)

    If no DSM slots are available, a ParallelContext can still be created,
    but its seg pointer is NULL.  Teach parallel btree build to cope with
    that by falling back to a regular non-parallel build, to avoid crashing
    with a segmentation fault.

    Back-patch to 11, where parallel CREATE INDEX landed.

    Reported-by: Nicola Contu
    Reviewed-by: Peter Geoghegan
    Discussion: https://postgr.es/m/CA%2BhUKGJgJEBnkuODBVomyK3MWFvDBbMVj%3Dgdt6DnRPU-5sQ6UQ%40mail.gmail.com

* Make inherited TRUNCATE perform access permission checks on parent table only.  (Fujii Masao, 2020-01-31)

    Previously, a TRUNCATE command through a parent table checked the
    permissions on not only the parent table but also the children tables
    inherited from it.  This was a bug and inherited queries should perform
    access permission checks on the parent table only.  This commit fixes
    that bug.

    Back-patch to all supported branches.

    Author: Amit Langote
    Reviewed-by: Fujii Masao
    Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com

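    Illustratively (hypothetical names; note the revert further up this log,
    which ultimately kept this change only in HEAD), the scenario is:

        CREATE TABLE parent (a int);
        CREATE TABLE child () INHERITS (parent);
        GRANT TRUNCATE ON parent TO app_role;
        -- executed as app_role:
        TRUNCATE parent;  -- with the fix, only parent's privileges are checked,
                          -- even though child is truncated as well
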