path: root/src
Commit message (Author, Date)
...
* Fix old bug with coercing the result of a COLLATE expression. (Tom Lane, 2021-04-12)

  There are hacks in parse_coerce.c to push down a requested coercion to below any CollateExpr that may appear. However, we did that even if the requested data type is non-collatable, leading to an invalid expression tree in which CollateExpr is applied to a non-collatable type. The fix is just to drop the CollateExpr altogether, reasoning that it's useless.

  This bug is ten years old, dating to the original addition of COLLATE support. The lack of field complaints suggests that there aren't a lot of user-visible consequences. We noticed the problem because it would trigger an assertion in DefineVirtualRelation if the invalid structure appears as an output column of a view; however, in a non-assert build, you don't see a crash, just a (subtly incorrect) complaint about applying collation to a non-collatable type. I found that by putting the incorrect structure further down in a view, I could make a view definition that would fail dump/reload, per the added regression test case. But CollateExpr doesn't do anything at run-time, so this likely doesn't lead to any really exciting consequences.

  Per report from Yulin Pei. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
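  A hedged sketch of the kind of expression involved (hypothetical view; the committed regression case differs): float8 is non-collatable, so the parser now drops the CollateExpr instead of building an invalid tree.

      -- illustrative only: coercing a collated expression to a non-collatable type
      CREATE VIEW v AS
        SELECT (('1.5'::text) COLLATE "C")::float8 AS x;  -- the COLLATE is discarded
      SELECT pg_get_viewdef('v');  -- definition now dumps and reloads cleanly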
* Fix out-of-bound memory access for interval -> char conversion (Michael Paquier, 2021-04-12)

  Using Roman numerals (via "RM" or "rm") for a conversion to calculate a number of months has never considered the case of negative numbers, where a conversion could easily cause out-of-bound memory accesses. The conversions in themselves were not completely consistent either, as specifying 12 would result in NULL, but it should mean XII.

  This commit reworks the conversion calculation to have a more consistent behavior:
  - If the number of months and years is 0, return NULL.
  - If the number of months is positive, return the exact month number.
  - If the number of months is negative, do a backward calculation, with -1 meaning December, -2 November, etc.

  Reported-by: Theodor Arsenij Larionov-Trichkin
  Author: Julien Rouhaud
  Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
  Backpatch-through: 9.6
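  Illustrative calls, derived from the rules above (not taken verbatim from the committed tests):

      SELECT to_char(interval '12 months', 'RM');  -- XII (previously NULL)
      SELECT to_char(interval '-2 months', 'RM');  -- XI, counting backward from December
      SELECT to_char(interval '0 months', 'RM');   -- NULL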
* Fix typo (Magnus Hagander, 2021-04-09)

  Author: Daniel Westermann
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/GV0P278MB0483A7AA85BAFCC06D90F453D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
* Don't add non-existent pages to bitmap from BRIN (Tomas Vondra, 2021-04-07)

  The code in bringetbitmap() simply added the whole matching page range to the TID bitmap, as determined by pages_per_range, even if some of the pages were beyond the end of the heap. The query then might fail with an error like this:

      ERROR: could not open file "base/20176/20228.2" (target block 262144): previous segment is only 131021 blocks

  In this case, the relation has 262093 pages (131072 and 131021 pages), but we're trying to access block 262144, i.e. the first block of the 3rd segment. At that point _mdfd_getseg() notices the preceding segment is incomplete, and fails.

  Hitting this in practice is rather unlikely, because:

  * Most indexes use power-of-two ranges, so segments and page ranges align perfectly (segment end is also a page range end).

  * The table size has to be just right, with the last segment being almost full - less than one page range from full segment, so that the last page range actually crosses the segment boundary.

  * Prefetch has to be enabled. The regular page access checks that pages are not beyond heap end, but prefetch does not. On older releases (before 12) the execution stops after hitting the first non-existent page, so the prefetch distance has to be sufficient to reach the first page in the next segment to trigger the issue. Since 12 it's enough to just have prefetch enabled; the prefetch distance does not matter.

  Fixed by not adding non-existent pages to the TID bitmap. Backpatch all the way back to 9.6 (BRIN indexes were introduced in 9.5, but that release is EOL).

  Backpatch-through: 9.6
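  A hedged sketch of the triggering conditions listed above (hypothetical table and values; actually reproducing this requires a carefully sized table):

      -- non-power-of-two range, so a page range can cross a segment boundary
      CREATE INDEX t_brin_idx ON t USING brin (a) WITH (pages_per_range = 3);
      SET effective_io_concurrency = 8;  -- enables prefetching in bitmap heap scans
      SELECT count(*) FROM t WHERE a BETWEEN 1 AND 1000;  -- bitmap scan reaching the last range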
* Fix potential rare failure in the kerberos TAP tests (Michael Paquier, 2021-04-07)

  Instead of writing a query to psql's stdin, which can fail if psql exits before the write completes (reporting a write failure with a broken pipe), this changes the logic to use -c. This was not seen in the buildfarm as no animals with a sensitive environment are running the kerberos tests, but let's be safe.

  HEAD is able to handle the situation as of 6d41dd0 for all the test suites doing connection checks. f44b9b6 has fixed the same problem for the LDAP tests.

  Discussion: https://postgr.es/m/YGu7ceWAiSNQDgH5@paquier.xyz
  Backpatch-through: 11
* Shut down transaction tracking at startup process exit. (Fujii Masao, 2021-04-06)

  Maxim Orlov reported that the shutdown of a standby server could result in the following assertion failure. The cause of this issue was that, when the shutdown caused the startup process to exit, recovery-time transaction tracking was not shut down even if it had already been initialized, and some locks the tracked transactions were holding could not be released. In this situation, if another process was started and the PGPROC entry that the startup process had used was assigned to it, it found such unreleased locks and hit the assertion failure during its initialization.

      TRAP: FailedAssertion("SHMQueueEmpty(&(MyProc->myProcLocks[i]))"

  This commit fixes this issue by making the startup process shut down transaction tracking and release all locks at its exit.

  Back-patch to all supported branches.

  Reported-by: Maxim Orlov
  Author: Fujii Masao
  Reviewed-by: Maxim Orlov
  Discussion: https://postgr.es/m/ad4ce692cc1d89a093b471ab1d969b0b@postgrespro.ru
* Fix more confusion in SP-GiST. (Tom Lane, 2021-04-04)

  spg_box_quad_leaf_consistent unconditionally returned the leaf datum as leafValue, even though in its usage for poly_ops that value is of completely the wrong type. In versions before 12, that was harmless because the core code did nothing with leafValue in non-index-only scans ... but since commit 2a6368343, if we were doing a KNN-style scan, spgNewHeapItem would unconditionally try to copy the value using the wrong datatype parameters. Said copying is a waste of time and space if we're not going to return the data, but it accidentally failed to fail until I fixed the datatype confusion in ac9099fc1.

  Hence, change spgNewHeapItem to not copy the datum unless we're actually going to return it later. This saves cycles and dodges the question of whether lossy opclasses are returning the right type. Also change spg_box_quad_leaf_consistent to not return data that might be of the wrong type, as insurance against somebody introducing a similar bug into the core code in future.

  It seems like a good idea to back-patch these two changes into v12 and v13, although I'm afraid to change spgNewHeapItem's mistaken idea of which datatype to use in those branches.

  Per buildfarm results from ac9099fc1.

  Discussion: https://postgr.es/m/3728741.1617381471@sss.pgh.pa.us
* Use macro MONTHS_PER_YEAR instead of '12' in /ecpg/pgtypeslib (Bruce Momjian, 2021-04-02)

  All other places already use MONTHS_PER_YEAR appropriately.

  Backpatch-through: 9.6
* pg_checksums: Fix progress reporting. (Fujii Masao, 2021-04-03)

  pg_checksums uses two counters, total size and current size, to calculate the progress. Previously the progress that pg_checksums reported could not reach 100% at the end. The cause of this issue was that only pages other than new ones in each file were counted toward the current size, while each file's entire size was counted toward the total size. That is, the total size of all new pages could end up as a permanent gap between the total size and the current size. This commit fixes this issue by making pg_checksums count the sizes of all pages, including new ones, in each file as the current size.

  Back-patch to v12 where progress reporting was added to pg_checksums.

  Reported-by: Shinya Kato
  Author: Shinya Kato
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com
* Improve stability of test with vacuum_truncate in reloptions.sql (Michael Paquier, 2021-04-02)

  This test has been using a simple VACUUM with pg_relation_size() to check if a relation gets physically truncated or not, but it overlooked the fact that some concurrent activity, like checkpoint buffer writes, could cause some pages to be skipped. The second test enabling vacuum_truncate could fail, seeing a non-empty relation. The first test would not have failed, but could end up testing a behavior different from the one aimed for. Both tests gain a FREEZE option, to make the vacuums more aggressive and prevent page skips.

  This is similar to the issues fixed in c2dc1a7.

  Author: Arseny Sher
  Reviewed-by: Masahiko Sawada
  Discussion: https://postgr.es/m/87tuotr2hh.fsf@ars-thinkpad
  Backpatch-through: 12
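  The gist of the change, sketched with a hypothetical table name:

      VACUUM (FREEZE) reloptions_test;             -- aggressive vacuum, no page skips
      SELECT pg_relation_size('reloptions_test');  -- truncation now observed reliably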
* Fix pg_restore's misdesigned code for detecting archive file format. (Tom Lane, 2021-04-01)

  Despite the clear comments pointing out that the duplicative code segments in ReadHead() and _discoverArchiveFormat() needed to be in sync, they were not: the latter did not bother to apply any of the sanity checks in the former. We'd missed noticing this partly because none of those checks would fail in scenarios we customarily test, and partly because the oversight would be masked if both segments execute, which they would in cases other than needing to autodetect the format of a non-seekable stdin source. However, in a case meeting all these requirements --- for example, trying to read a newer-than-supported archive format from non-seekable stdin --- pg_restore missed applying the version check and would likely dump core or otherwise misbehave.

  The whole thing is silly anyway, because there seems little reason to duplicate the logic beyond the one-line verification that the file starts with "PGDMP". There seems to have been an undocumented assumption that multiple major formats (major enough to require separate reader modules) would nonetheless share the first half-dozen fields of the custom-format header. This seems unlikely, so let's fix it by just nuking the duplicate logic in _discoverArchiveFormat().

  Also get rid of the pointless attempt to seek back to the start of the file after successful autodetection. That wastes cycles, and it means we have four behaviors to verify, not two.

  Per bug #16951 from Sergey Koposov. This has been broken for decades, so back-patch to all supported versions.

  Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org
* Fix ndistinct estimates with system attributes (Tomas Vondra, 2021-03-26)

  When estimating the number of groups using extended statistics, the code was discarding information about system attributes. This led to the strange situation that

      SELECT 1 FROM t GROUP BY ctid;

  could produce a higher estimate (equal to pg_class.reltuples) than

      SELECT 1 FROM t GROUP BY a, b, ctid;

  with extended statistics on (a,b).

  Fixed by retaining information about the system attribute.

  Backpatch all the way to 10, where extended statistics were introduced.

  Author: Tomas Vondra
  Backpatch-through: 10
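  A sketch of the scenario (hypothetical table and statistics names):

      CREATE STATISTICS ab_stats (ndistinct) ON a, b FROM t;
      ANALYZE t;
      EXPLAIN SELECT 1 FROM t GROUP BY a, b, ctid;  -- estimate no longer lower than GROUP BY ctid alone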
* Remove StoreSingleInheritance reimplementation (Alvaro Herrera, 2021-03-25)

  I introduced this duplicate code in commit 8b08f7d4820f for no good reason. Remove it, and backpatch to 11 where it was introduced.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
* Fix bug in WAL replay of COMMIT_TS_SETTS record. (Fujii Masao, 2021-03-25)

  Previously the WAL replay of a COMMIT_TS_SETTS record called TransactionTreeSetCommitTsData() with the argument write_xlog=true, which generated and wrote a new COMMIT_TS_SETTS record. This is not acceptable because it happens during recovery. This commit fixes the WAL replay of COMMIT_TS_SETTS records so that it calls TransactionTreeSetCommitTsData() with write_xlog=false and doesn't generate new WAL during recovery.

  Back-patch to all supported branches.

  Reported-by: lx zou <zoulx1982@163.com>
  Author: Fujii Masao
  Reviewed-by: Alvaro Herrera
  Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
* Fix psql's \connect command some more. (Tom Lane, 2021-03-23)

  Jasen Betts reported yet another unintended side effect of commit 85c54287a: reconnecting with "\c service=whatever" did not have the expected results. The reason is that starting from the output of PQconndefaults() effectively allows environment variables (such as PGPORT) to override entries in the service file, whereas the normal priority is the other way around. Not using PQconndefaults at all would require yet a third main code path in do_connect's parameter setup, so I don't really want to fix it that way. But we can have the logic effectively ignore all the default values for just a couple more lines of code.

  This patch doesn't change the behavior for "\c -reuse-previous=on service=whatever". That remains significantly different from before 85c54287a, because many more parameters will be re-used, and thus not be possible for service entries to replace. But I think this is (mostly?) intentional. In any case, since libpq does not report where it got parameter values from, it's hard to do differently.

  Per bug #16936 from Jasen Betts. As with the previous patches, back-patch to all supported branches. (9.5 is unfortunately now out of support, so this won't get fixed there.)

  Discussion: https://postgr.es/m/16936-3f524322a53a29f0@postgresql.org
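  An illustration of the fixed priority (hypothetical service entry; psql session):

      $ PGPORT=5433 psql
      postgres=> \c service=whatever
      -- previously PGPORT could override the port in the "whatever" service entry;
      -- now the service file wins, matching the priority of a fresh connection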
* Use correct spelling of statistics kind (Tomas Vondra, 2021-03-23)

  A couple of error messages and comments used 'statistic kind', not the correct 'statistics kind'. Fix and backpatch all the way back to 10, where extended statistics were introduced.

  Backpatch-through: 10
* pg_waldump: Fix bug in per-record statistics. (Fujii Masao, 2021-03-23)

  pg_waldump --stats=record identifies a record by a combination of the RmgrId and the four bits of the xl_info field of the record. But XACT records use the first of those four bits for an optional flag variable, and the following three bits for the opcode that identifies the record type. So previously the same type of XACT record could carry different four-bit values (the three opcode bits the same, but the first bit different), which could cause pg_waldump --stats=record to show two lines of per-record statistics for the same XACT record type. This is a bug.

  This commit changes pg_waldump --stats=record so that it processes only XACT records differently, i.e., it filters the flag bit out of xl_info and uses a combination of the RmgrId and the remaining three bits as the identifier of a record, only for XACT records. For other records, the four bits of the xl_info field are still used.

  Back-patch to all supported branches.

  Author: Kyotaro Horiguchi
  Reviewed-by: Shinya Kato, Fujii Masao
  Discussion: https://postgr.es/m/2020100913412132258847@highgo.ca
* Fix new TAP test for 2PC transactions and PITRs on Windows (Michael Paquier, 2021-03-22)

  The test added by 595b9cb forgot that on Windows it is necessary to set up pg_hba.conf (see PostgresNode::set_replication_conf) with a specific entry or base backups fail. Any node that needs to support replication just has to pass allows_streaming at initialization. This updates the test to do so. Simplify things a bit while on it.

  Per buildfarm member fairywren. Any Windows hosts running this test would have failed, and I have reproduced the problem as well.

  Backpatch-through: 10
* Fix timeline assignment in checkpoints with 2PC transactions (Michael Paquier, 2021-03-22)

  Any transactions found as still prepared by a checkpoint have their state data read from the WAL records generated by PREPARE TRANSACTION before being moved into their new location within pg_twophase/. While reading such records, the WAL reader uses the callback read_local_xlog_page() to read a page, a callback shared across various parts of the system. Since 1148e22a, this callback has updated ThisTimeLineID when reading a record while in recovery, which is potentially helpful in the context of cascading WAL senders.

  This update of ThisTimeLineID interacts badly with the checkpointer if a promotion happens while some 2PC data is read from its record: by changing ThisTimeLineID, any follow-up WAL records would be written to a timeline older than the promoted one. This results in consistency issues; for instance, a subsequent server restart could fail to find a valid checkpoint record, resulting in a PANIC.

  This commit changes the code reading the 2PC data to reset the timeline once the 2PC record has been read, to avoid messing up the static state of the checkpointer. It would be tempting to do the same thing directly in read_local_xlog_page(). However, based on the discussion that has led to 1148e22a, users may rely on the updates of ThisTimeLineID when a WAL record page is read in recovery, so changing this callback could break some cases that are working currently.

  A TAP test reproducing the issue is added, relying on a PITR to precisely trigger a promotion with a prepared transaction still tracked.

  Per discussion with Heikki Linnakangas, Kyotaro Horiguchi, Fujii Masao and myself.

  Author: Soumyadeep Chakraborty, Jimmy Yih, Kevin Yeap
  Discussion: https://postgr.es/m/CAE-ML+_EjH_fzfq1F3RJ1=XaaNG=-Jz-i3JqkNhXiLAsM3z-Ew@mail.gmail.com
  Backpatch-through: 10
* Fix memory leak when rejecting bogus DH parameters. (Tom Lane, 2021-03-20)

  While back-patching e0e569e1d, I noted that there were some other places where we ought to be applying DH_free(); namely, where we load some DH parameters from a file and then reject them as not being sufficiently secure. While it seems really unlikely that anybody would hit these code paths in production, let alone do so repeatedly, let's fix it for consistency.

  Back-patch to v10 where this code was introduced.

  Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
* Fix memory leak when initializing DH parameters in backend (Tom Lane, 2021-03-20)

  When loading DH parameters used for the generation of ephemeral DH keys in the backend, the code has never bothered releasing the memory used for the DH information loaded from a file or from libpq's default. This commit makes sure that the information is properly free()'d.

  Back-patch of e0e569e1d. We originally thought the leak was minor and not worth back-patching, but Jelte Fennema pointed out that repeated SIGHUPs can result in very serious bloat of the postmaster, which is then multiplied by being duplicated into each forked child.

  Back-patch to v10; the code looked different before c0a15e07c, and didn't have a leak in the actually-live code paths.

  Michael Paquier

  Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
* Don't leak malloc'd error string in libpqrcv_check_conninfo(). (Tom Lane, 2021-03-18)

  We leaked the error report from PQconninfoParse, when there was one. It seems unlikely that real usage patterns would repeat the failure often enough to create serious bloat, but let's back-patch anyway to keep the code similar in all branches.

  Found via valgrind testing. Back-patch to v10 where this code was added.

  Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
* Don't leak malloc'd strings when a GUC setting is rejected. (Tom Lane, 2021-03-18)

  Because guc.c prefers to keep all its string values in malloc'd not palloc'd storage, it has to be more careful than usual to avoid leaks. Error exits out of string GUC hook checks failed to clear the proposed value string, and error exits out of ProcessGUCArray() failed to clear the malloc'd results of ParseLongOption().

  Found via valgrind testing. This problem is ancient, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
* Don't leak compiled regex(es) when an ispell cache entry is dropped. (Tom Lane, 2021-03-18)

  The text search cache mechanisms assume that we can clean up an invalidated dictionary cache entry simply by resetting the associated long-lived memory context. However, that does not work for ispell affixes that make use of regular expressions, because the regex library deals in plain old malloc. Hence, we leaked compiled regex(es) any time we dropped such a cache entry. That could quickly add up, since even a fairly trivial regex can use up tens of kB, and a large one can eat megabytes. Add a memory context callback to ensure that a regex gets freed when its owning cache entry is cleared.

  Found via valgrind testing. This problem is ancient, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
* Don't run RelationInitTableAccessMethod in a long-lived context. (Tom Lane, 2021-03-18)

  Some code paths in this function perform syscache lookups, which can lead to table accesses and possibly leakage of cruft into the caller's context. If said context is CacheMemoryContext, we eventually will have visible bloat. But fixing this is no harder than moving one memory context switch step. (The other callers don't have a problem.)

  Andres Freund and I independently found this via valgrind testing. Back-patch to v12 where this code was added.

  Discussion: https://postgr.es/m/20210317023101.anvejcfotwka6gaa@alap3.anarazel.de
  Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
* Don't leak rd_statlist when a relcache entry is dropped. (Tom Lane, 2021-03-18)

  Although these lists are usually NIL, and even when not empty are unlikely to be large, constant relcache update traffic could eventually result in visible bloat of CacheMemoryContext.

  Found via valgrind testing. Back-patch to v10 where this field was added.

  Discussion: https://postgr.es/m/3816764.1616104288@sss.pgh.pa.us
* Fix function name in error hint (Magnus Hagander, 2021-03-18)

  pg_read_file() is the function that's in core; pg_file_read() is in adminpack. But when pg_file_read() is used in adminpack, it calls the C-level function pg_read_file() in core, which probably threw the original author off. The error hint should be about the SQL function, though.

  Reported-By: Sergei Kornilov
  Backpatch-through: 11
  Discussion: https://postgr.es/m/373021616060475@mail.yandex.ru
* Prevent buffer overrun in read_tablespace_map(). (Tom Lane, 2021-03-17)

  Robert Foggia of Trustwave reported that read_tablespace_map() fails to prevent an overrun of its on-stack input buffer. Since the tablespace map file is presumed trustworthy, this does not seem like an interesting security vulnerability, but still we should fix it just in the name of robustness.

  While here, document that pg_basebackup's --tablespace-mapping option doesn't work with tar-format output, because it doesn't. To make it work, we'd have to modify the tablespace_map file within the tarball sent by the server, which might be possible but I'm not volunteering. (Less-painful solutions would require changing the basebackup protocol so that the source server could adjust the map. That's not very appetizing either.)
* Revert "Fix race in Parallel Hash Join batch cleanup."Thomas Munro2021-03-18
| | | | | | This reverts commit 8fa2478b407ef867d501fafcdea45fd827f70799. Discussion: https://postgr.es/m/CA%2BhUKGJmcqAE3MZeDCLLXa62cWM0AJbKmp2JrJYaJ86bz36LFA%40mail.gmail.com
* Fix race in Parallel Hash Join batch cleanup. (Thomas Munro, 2021-03-17)

  With very unlucky timing and parallel_leader_participation off, PHJ could attempt to access per-batch state just as it was being freed. There was code intended to prevent that by checking for a cleared pointer, but it was buggy.

  Fix, by introducing an extra barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to find a batch to help with, and PHJ_BUILD_DONE means that it is too late. The last to detach will free the array of per-batch state as before, but now it will also atomically advance the phase at the same time, so that late attachers can avoid the hazard, without the data race. This mirrors the way per-batch hash tables are freed (see phases PHJ_BATCH_PROBING and PHJ_BATCH_DONE).

  Revealed by a one-off build farm failure, where BarrierAttach() failed a sanity check assertion, because the memory had been clobbered by dsa_free().

  Back-patch to 11, where the code arrived.

  Reported-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
* Avoid corner-case memory leak in SSL parameter processing. (Tom Lane, 2021-03-16)

  After reading the root cert list from the ssl_ca_file, immediately install it as client CA list of the new SSL context. That gives the SSL context ownership of the list, so that SSL_CTX_free will free it. This avoids a permanent memory leak if we fail further down in be_tls_init(), which could happen if bogus CRL data is offered.

  The leak could only amount to something if the CRL parameters get broken after server start (else we'd just quit) and then the server is SIGHUP'd many times without fixing the CRL data. That's rather unlikely perhaps, but it seems worth fixing, if only because the code is clearer this way.

  While we're here, add some comments about the memory management aspects of this logic.

  Noted by Jelte Fennema and independently by Andres Freund. Back-patch to v10; before commit de41869b6 it doesn't matter, since we'd not re-execute this code during SIGHUP.

  Discussion: https://postgr.es/m/16160-18367e56e9a28264@postgresql.org
* Fix race condition in psql \e's detection of file modification. (Tom Lane, 2021-03-12)

  psql's editing commands decide whether the user has edited the file by checking for change of modification timestamp. This is probably fine for a pre-existing file, but with a temporary file that is created within the command, it's possible for a fast typist to save-and-exit in less than the one-second granularity of stat(2) timestamps. On Windows FAT filesystems the granularity is even worse, 2 seconds, making the race a bit easier to hit.

  To fix, try to set the temp file's mod time to be two seconds ago. It's unlikely this would fail, but then again the race condition itself is unlikely, so just ignore any error. Also, we might as well check the file size as well as its mod time.

  While this is a difficult bug to hit, it still seems worth back-patching, to ensure that users' edits aren't lost.

  Laurenz Albe, per gripe from Jacob Champion; based on fix suggestions from Jacob and myself

  Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at
* Forbid marking an identity column as nullable. (Tom Lane, 2021-03-12)

  GENERATED ALWAYS AS IDENTITY implies NOT NULL, but the code failed to complain if you overrode that with "GENERATED ALWAYS AS IDENTITY NULL". One might think the old behavior was a feature, but it was inconsistent because the outcome varied depending on the order of the clauses, so it seems to have been just an oversight.

  Per bug #16913 from Pavel Boev. Back-patch to v10 where identity columns were introduced.

  Vik Fearing (minor tweaks by me)

  Discussion: https://postgr.es/m/16913-3b5198410f67d8c6@postgresql.org
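  A minimal example of what is now rejected, regardless of clause order:

      CREATE TABLE t (id int GENERATED ALWAYS AS IDENTITY NULL);  -- now an error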
* Restore vacuum_cleanup_index_scale_factor coverage. (Peter Geoghegan, 2021-03-11)

  Revert two recent commits that had btree_index.sql drop regression test indexes rather than leave them behind for pg_dump testing. This is intended to restore pg_upgrade coverage of indexes with the vacuum_cleanup_index_scale_factor storage parameter set on buildfarm member crake.

  Backpatch: 11-12 only
* Re-simplify management of inStart in pqParseInput3's subroutines. (Tom Lane, 2021-03-11)

  Commit 92785dac2 copied some logic related to advancement of inStart from pqParseInput3 into getRowDescriptions and getAnotherTuple, because it wanted to allow user-defined row processor callbacks to potentially longjmp out of the library, and inStart would have to be updated before that happened to avoid an infinite loop. We later decided that that API was impossibly fragile and reverted it, but we didn't undo all of the related code changes, and this bit of messiness survived. Undo it now so that there's just one place in pqParseInput3's processing where inStart is advanced; this will simplify addition of better tracing support.

  getParamDescriptions had grown similar processing somewhere along the way (not in 92785dac2; I didn't track down just when), but it's actually buggy because its handling of corrupt-message cases seems to have been copied from the v2 logic where we lacked a known message length. The cases where we "goto not_enough_data" should not simply return EOF, because then we won't consume the message, potentially creating an infinite loop. That situation now represents a definitively corrupt message, and we should report it as such.

  Although no field reports of getParamDescriptions getting stuck in a loop have been seen, it seems appropriate to back-patch that fix. I chose to back-patch all of this to keep the logic looking more alike in supported branches.

  Discussion: https://postgr.es/m/2217283.1615411989@sss.pgh.pa.us
* Drop other index behind pg_upgrade test issue. (Peter Geoghegan, 2021-03-10)

  Fix the test failure by dropping the index in question. Missed by commit 57ae7885.

  Per buildfarm member crake.

  Backpatch: 11-12 only
* Drop index behind pg_upgrade test issue. (Peter Geoghegan, 2021-03-10)

  The vacuum_cleanup_index_scale_factor storage parameter was set in a btree index that was previously left behind in the regression test database. As a result, the index gets tested within pg_dump and pg_restore tests, as well as pg_upgrade testing. This won't work when upgrading to Postgres 14, though, because the storage parameter was removed on that version by commit 9f3665fb.

  Fix the test failure by dropping the index in question.

  Per buildfarm member crake.

  Discussion: https://postgr.es/m/CAH2-WzmeXYBWdhF7BMhNjhq9exsk=E1ohqBFAwzPdXJZ1XDMUA@mail.gmail.com
  Backpatch: 11-12 only
* tutorial: land height is "elevation", not "altitude" (Bruce Momjian, 2021-03-10)

  This is a follow-on patch to 92c12e46d5. In that patch, we renamed "altitude" to "elevation" in the docs, based on these details:

      https://mapscaping.com/blogs/geo-candy/what-is-the-difference-between-elevation-relief-and-altitude

  This renames the tutorial SQL files to match the documentation.

  Reported-by: max1@inbox.ru
  Discussion: https://postgr.es/m/161512392887.1046.3137472627109459518@wrigleys.postgresql.org
  Backpatch-through: 9.6
* Validate the OID argument of pg_import_system_collations(). (Tom Lane, 2021-03-08)

  "SELECT pg_import_system_collations(0)" caused an assertion failure. With a random nonzero argument --- or indeed with zero, in non-assert builds --- it would happily make pg_collation entries with garbage values of collnamespace. These are harmless as far as I can tell (unless maybe the OID happens to become used for a schema, later on?). In any case this isn't a security issue, since the function is superuser-only. But it seems like a gotcha for unwary DBAs, so let's add a check that the given OID belongs to some schema.

  Back-patch to v10 where this function was introduced.
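  A sketch of the before/after behavior:

      SELECT pg_import_system_collations(0);             -- now an error: 0 is not a schema OID
      SELECT pg_import_system_collations('pg_catalog');  -- intended (superuser-only) usage still works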
* Use native path separators to pg_ctl in initdb (Alvaro Herrera, 2021-03-02)

  On Windows, CMD.EXE allegedly does not run a command that uses forward slashes, so let's convert the path to use backslashes instead.

  Backpatch to 10.

  Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
  Reviewed-by: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
  Discussion: https://postgr.es/m/CAMm1aWaNDuaPYFYMAqDeJrZmPtNvLcJRS++CcZWY8LT6KcoBZw@mail.gmail.com
* Fix duplicated test case in TAP tests of reindexdb (Michael Paquier, 2021-03-02)

  The same test for REINDEX (VERBOSE) was done twice, while it is clear that the second test should use --concurrently. Issue introduced in 5dc92b8, in what looks like a copy-paste mistake.

  Reviewed-by: Mark Dilger
  Discussion: https://postgr.es/m/A7AE97EA-F4B0-4CAB-8FFF-3FECD31F9D63@enterprisedb.com
  Backpatch-through: 12
* Fix use-after-free bug with AfterTriggersTableData.storeslot (Alvaro Herrera, 2021-02-27)

  AfterTriggerSaveEvent() wrongly allocates the slot in the execution-span memory context, whereas the correct thing is to allocate it in a transaction-span context, because that's where the enclosing AfterTriggersTableData instance belongs.

  Backpatch to 12 (and the test back to 11, where it works well with no code changes, and it's good to have it to confirm that the case was previously well supported); this bug seems introduced by commit ff11e7f4b9ae.

  Reported-by: Bertrand Drouvot <bdrouvot@amazon.com>
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/39a71864-b120-5a5c-8cc5-c632b6f16761@amazon.com
* Reinstate HEAP_XMAX_LOCK_ONLY|HEAP_KEYS_UPDATED as allowed (Alvaro Herrera, 2021-02-23)

  Commit 866e24d47db1 added an assert that HEAP_XMAX_LOCK_ONLY and HEAP_KEYS_UPDATED cannot appear together, on the faulty assumption that the latter necessarily referred to an update and not a tuple lock; but that's wrong, because SELECT FOR UPDATE can use precisely that combination, as evidenced by the amcheck test case added here.

  Remove the Assert(), and also patch amcheck's verify_heapam.c to not complain if the combination is found. Also, out of overabundance of caution, update (across all branches) README.tuplock to be more explicit about this.

  Author: Julien Rouhaud <rjuju123@gmail.com>
  Reviewed-by: Mahendra Singh Thalor <mahi6run@gmail.com>
  Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
  Discussion: https://postgr.es/m/20210124061758.GA11756@nol
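  The kind of statement that legitimately produces the combination (hypothetical table):

      BEGIN;
      SELECT * FROM t WHERE id = 1 FOR UPDATE;  -- a tuple lock: HEAP_XMAX_LOCK_ONLY, possibly with HEAP_KEYS_UPDATED
      COMMIT;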
* Fix psql's ON_ERROR_ROLLBACK so that it handles COMMIT AND CHAIN. (Fujii Masao, 2021-02-19)

  When ON_ERROR_ROLLBACK is enabled, psql releases a temporary savepoint if it's idle in a valid transaction block after executing a query. But psql doesn't do that after RELEASE or ROLLBACK is executed, because the temporary savepoint has already been destroyed in that case.

  This commit changes psql's ON_ERROR_ROLLBACK so that it also skips releasing the temporary savepoint when COMMIT AND CHAIN is executed. The savepoint doesn't need to be released in that case, because COMMIT AND CHAIN also destroys any savepoints defined within the transaction to commit. Otherwise psql tries to release a savepoint that COMMIT AND CHAIN has already destroyed, causing the error "ERROR: savepoint "pg_psql_temporary_savepoint" does not exist".

  Back-patch to v12 where transaction chaining was added.

  Reported-by: Arthur Nascimento
  Author: Arthur Nascimento
  Reviewed-by: Fujii Masao, Vik Fearing
  Discussion: https://postgr.es/m/16867-3475744069228158@postgresql.org
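  The failing sequence, roughly (psql session):

      \set ON_ERROR_ROLLBACK on
      BEGIN;
      SELECT 1;
      COMMIT AND CHAIN;
      -- previously: ERROR:  savepoint "pg_psql_temporary_savepoint" does not exist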
* Fix bug in COMMIT AND CHAIN command. (Fujii Masao, 2021-02-19)

  This commit fixes the COMMIT AND CHAIN command so that it starts the new transaction immediately even if savepoints are defined within the transaction to commit. Previously COMMIT AND CHAIN did not do so in that case, because commit 280a408b48 forgot to make CommitTransactionCommand() handle transaction chaining when the transaction state was TBLOCK_SUBCOMMIT.

  This commit also adds a regression test for COMMIT AND CHAIN when savepoints are defined.

  Back-patch to v12 where transaction chaining was added.

  Reported-by: Arthur Nascimento
  Author: Fujii Masao
  Reviewed-by: Arthur Nascimento, Vik Fearing
  Discussion: https://postgr.es/m/16867-3475744069228158@postgresql.org
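  The fixed behavior, sketched as a psql session:

      BEGIN;
      SAVEPOINT sp;
      COMMIT AND CHAIN;  -- now starts the chained transaction immediately
      COMMIT;            -- succeeds: we are inside the new transaction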
* Fix another ancient bug in parsing of BRE-mode regular expressions. (Tom Lane, 2021-02-18)

  While poking at the regex code, I happened to notice that the bug squashed in commit afcc8772e had a sibling: next() failed to return a specific value associated with the '}' token for a "\{m,n\}" quantifier when parsing in basic RE mode. Again, this could result in treating the quantifier as non-greedy, which it never should be in basic mode. For that to happen, the last character before "\}" that sets "nextvalue" would have to set it to zero, or it'd have to have accidentally been zero from the start. The failure can be provoked repeatably with, for example, a bound ending in digit "0".

  Like the previous patch, back-patch all the way.
* Make ExecGetInsertedCols() and friends more robust and improve comments. (Heikki Linnakangas, 2021-02-15)

  If ExecGetInsertedCols(), ExecGetUpdatedCols() or ExecGetExtraUpdatedCols() were called with a ResultRelInfo that's not in the range table and isn't a partition routing target, the functions would dereference a NULL pointer, relinfo->ri_RootResultRelInfo. Such ResultRelInfos are created when firing RI triggers in tables that are not modified directly. None of the current callers of these functions pass such relations, so this isn't a live bug, but let's make them more robust.

  Also update comment in ResultRelInfo; after commit 6214e2b228, ri_RangeTableIndex is zero for ResultRelInfos created for partition tuple routing.

  Noted by Coverity. Backpatch down to v11, like commit 6214e2b228.

  Reviewed-by: Tom Lane, Amit Langote
* Default to wal_sync_method=fdatasync on FreeBSD. (Thomas Munro, 2021-02-15)

  FreeBSD 13 gained O_DSYNC, which would normally cause wal_sync_method to choose open_datasync as its default value. That may not be a good choice for all systems, and performs worse than fdatasync in some scenarios. Let's preserve the existing default behavior for now.

  Like commit 576477e73c4, which did the same for Linux, back-patch to all supported releases.

  Discussion: https://postgr.es/m/CA%2BhUKGLsAMXBQrCxCXoW-JsUYmdOL8ALYvaX%3DCrHqWxm-nWbGA%40mail.gmail.com
* Hold interrupts while running dsm_detach() callbacks. (Thomas Munro, 2021-02-15)

  While cleaning up after a parallel query or parallel index creation that created temporary files, we could be interrupted by a statement timeout. The error handling path would then fail to clean up the files when it ran dsm_detach() again, because the callback was already popped off the list. Prevent this hazard by holding interrupts while the cleanup code runs.

  Thanks to Heikki Linnakangas for this suggestion, and also to Kyotaro Horiguchi, Masahiko Sawada, Justin Pryzby and Tom Lane for discussion of this and earlier ideas on how to fix the problem.

  Back-patch to all supported releases.

  Reported-by: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/20191212180506.GR2082@telsasoft.com
* pg_attribute_no_sanitize_alignment() macro (Tom Lane, 2021-02-13)

  Modern gcc and clang compilers offer alignment sanitizers, which help to detect pointer misalignment. However, our codebase already contains x86-specific crc32 computation code, which uses unaligned access. Thankfully, those compilers also support a function attribute that disables alignment sanitizers at the function level. This commit adds pg_attribute_no_sanitize_alignment(), which wraps this attribute, and applies it to the pg_comp_crc32c_sse42() function.

  Back-patch of commits 993bdb9f9 and ad2ad698a, to enable doing alignment testing in all supported branches.

  Discussion: https://postgr.es/m/CAPpHfdsne3%3DT%3DfMNU45PtxdhSL_J2PjLTeS8rwKnJzUR4YNd4w%40mail.gmail.com
  Discussion: https://postgr.es/m/475514.1612745257%40sss.pgh.pa.us
  Author: Alexander Korotkov, revised by Tom Lane
  Reviewed-by: Tom Lane