path: root/src
* Save Kerberos and LDAP daemon logs where the buildfarm can find them. (Tom Lane, 2019-08-06)
  src/test/kerberos and src/test/ldap try to run private authentication servers, which of course might fail. The logs from these servers were being dropped into the tmp_check/ subdirectory, but they should be put in tmp_check/log/, because the buildfarm will only capture log files in that subdirectory. Without the log output there's little hope of diagnosing buildfarm failures related to these servers.
  Backpatch to v11 where these test suites were added.
  Discussion: https://postgr.es/m/16017.1565047605@sss.pgh.pa.us
* Stamp 12beta3. [tag: REL_12_BETA3] (Tom Lane, 2019-08-05)
* Fix choice of comparison operators for cross-type hashed subplans. (Tom Lane, 2019-08-05)
  Commit bf6c614a2 rearranged the lookup of the comparison operators needed in a hashed subplan, and in so doing, broke the cross-type case: it caused the original LHS-vs-RHS operator to be used to compare hash table entries too (which of course are all of the RHS type). This leads to C functions being passed a Datum that is not of the type they expect, with the usual hazards of crashes and unauthorized server memory disclosure.
  For the set of hashable cross-type operators present in v11 core Postgres, this bug is nearly harmless on 64-bit machines, which may explain why it escaped earlier detection. But it is a live security hazard on 32-bit machines; and of course there may be extensions that add more hashable cross-type operators, which would increase the risk.
  Reported by Andreas Seltenreich. Back-patch to v11 where the problem came in.
  Security: CVE-2019-10209
* Require the schema qualification in pg_temp.type_name(arg). (Noah Misch, 2019-08-05)
  Commit aa27977fe21a7dfa4da4376ad66ae37cb8f0d0b5 introduced this restriction for pg_temp.function_name(arg); do likewise for types created in temporary schemas. Programs that this breaks should add "pg_temp." schema qualification or switch to arg::type_name syntax.
  Back-patch to 9.4 (all supported versions).
  Reviewed by Tom Lane. Reported by Tom Lane.
  Security: CVE-2019-10208
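  A minimal sketch of the new restriction, using a hypothetical temporary enum type (the names below are illustrative only, not from the commit):

      CREATE TYPE pg_temp.color AS ENUM ('red', 'green');

      SELECT color('red');          -- unqualified function-style cast: now rejected
      SELECT pg_temp.color('red');  -- accepted: explicit pg_temp. qualification
      SELECT 'red'::color;          -- accepted: the arg::type_name syntax suggested above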
* Translation updates (Peter Eisentraut, 2019-08-05)
  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: e255bc8b15d0f173f9de9048d3d6ad6e40085a48
* Fix tab completion for ALTER LANGUAGE in psql (Michael Paquier, 2019-08-05)
  OWNER_TO was used for the completion, which is not a supported grammar, but OWNER TO is. This error has been introduced by d37b816, so backpatch down to 9.6.
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com
  Backpatch-through: 9.6
* Revert "Add log_statement_sample_rate parameter"Tomas Vondra2019-08-04
| | | | | | | | | | | | | | This reverts commit 88bdbd3f746049834ae3cc972e6e650586ec3c9d. As committed, statement sampling used the existing duration threshold (log_min_duration_statement) when decide which statements to sample. The issue is that even the longest statements are subject to sampling, and so may not end up logged. An improvement was proposed, introducing a second duration threshold, but it would not be backwards compatible. So we've decided to revert this feature - the separate threshold should be part of the feature itself. Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
* Revert "Silence compiler warning"Tomas Vondra2019-08-04
| | | | | | | | | | | | | | This reverts commit 9dc122585551516309c9362e673effdbf3bd79bd. As committed, statement sampling used the existing duration threshold (log_min_duration_statement) when decide which statements to sample. The issue is that even the longest statements are subject to sampling, and so may not end up logged. An improvement was proposed, introducing a second duration threshold, but it would not be backwards compatible. So we've decided to revert this feature - the separate threshold should be part of the feature itself. Discussion: https://postgr.es/m/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com
* Avoid picking already-bound TCP ports in kerberos and ldap test suites. (Tom Lane, 2019-08-04)
  src/test/kerberos and src/test/ldap need to run a private authentication server of the relevant type, for which they need a free TCP port. They were just picking a random port number in 48K-64K, which works except when something's already using the particular port. Notably, the probability of failure rises dramatically if one simply runs those tests in a tight loop, because each test cycle leaves behind a bunch of high ports that are transiently in TIME_WAIT state.
  To fix, split out the code that PostgresNode.pm already had for identifying a free TCP port number, so that it can be invoked to choose a port for the KDC or LDAP server. This isn't 100% bulletproof, since conceivably something else on the machine could grab the port between the time we check and the time we actually start the server. But that's a pretty short window, so in practice this should be good enough.
  Back-patch to v11 where these test suites were added. Patch by me, reviewed by Andrew Dunstan.
  Discussion: https://postgr.es/m/3397.1564872168@sss.pgh.pa.us
* Improve pruning of a default partition (Alvaro Herrera, 2019-08-04)
  When querying a partitioned table containing a default partition, we were wrongly deciding to include it in the scan too early in the process, failing to exclude it in some cases. If we reinterpret the PruneStepResult.scan_default flag slightly, we can do a better job at detecting that it can be excluded. The change is that we avoid setting the flag for that pruning step unless the step absolutely requires the default partition to be scanned (in contrast with the previous arrangement, which was to set it unless the step was able to prune it). So get_matching_partitions() must explicitly check the partition that each returned bound value corresponds to in order to determine whether the default one needs to be included, rather than relying on the flag from the final step result.
  Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>
  Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
  Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp
* Fix representation of hash keys in Hash/HashJoin nodes. (Andres Freund, 2019-08-02)
  In 5f32b29c1819 I changed the creation of HashState.hashkeys to actually use HashState as the parent (instead of HashJoinState, which was incorrect, as they were executed below HashState), to fix the problem of hashkeys expressions otherwise relying on slot types appropriate for HashJoinState, rather than HashState as would be correct. That reliance was only introduced in 12, which is why it previously worked to use HashJoinState as the parent (although I'd be unsurprised if there were problematic cases).
  Unfortunately that's not a sufficient solution, because before this commit, the to-be-hashed expressions referenced inner/outer as appropriate for the HashJoin, not Hash. That didn't have obvious bad consequences, because the slots containing the tuples were put into ecxt_innertuple when hashing a tuple for HashState (even though Hash doesn't have an inner plan).
  There are less common cases where this can cause visible problems, however (rather than just confusion when inspecting such executor trees). E.g. "ERROR: bogus varno: 65000", when explaining queries containing a HashJoin where the subsidiary Hash node's hash keys reference a subplan. While normally hashkeys aren't displayed by EXPLAIN, if one of those expressions references a subplan, that subplan may be printed as part of the Hash node - which then failed because an inner plan was referenced, and Hash doesn't have that. It seems quite possible that there are other broken cases, too.
  Fix the problem by properly splitting the expression for the HashJoin and Hash nodes at plan time, and have them reference the proper subsidiary node. While other workarounds are possible, fixing this correctly seems easy enough. It was a pretty ugly hack to have ExecInitHashJoin put the expression into the already initialized HashState, in the first place.
  I decided to not just split inner/outer hashkeys inside make_hashjoin(), but also to separate out hashoperators and hashcollations at plan time. Otherwise we would have ended up having two very similar loops, one at plan time and the other during executor startup. The work seems to more appropriately belong to plan time, anyway.
  Reported-By: Nikita Glukhov, Alexander Korotkov
  Author: Andres Freund
  Reviewed-By: Tom Lane, in an earlier version
  Discussion: https://postgr.es/m/CAPpHfdvGVegF_TKKRiBrSmatJL2dR9uwFCuR+teQ_8tEXU8mxg@mail.gmail.com
  Backpatch: 12-
* Fix pg_dump's handling of dependencies for custom opclasses. (Tom Lane, 2019-07-31)
  Since pg_dump doesn't treat the member operators and functions of operator classes/families (that is, the pg_amop and pg_amproc entries, not the underlying operators/functions) as separate dumpable objects, it missed their dependency information. I think this was safe when the code was designed, because the default object sorting rule emits operators and functions before opclasses, and there were no dependency types that could mess that up. However, the introduction of range types in 9.2 broke it: now a type can have a dependency on an opclass, allowing dependency rules to push the opclass before the type and hence before custom operators. Lacking any information showing that it shouldn't do so, pg_dump emitted the objects in the wrong order.
  Fix by teaching getDependencies() to translate pg_depend entries for pg_amop/amproc rows to look like dependencies for their parent opfamily.
  I added a regression test for this in HEAD/v12, but not further back; life is too short to fight with 002_pg_dump.pl.
  Per bug #15934 from Tom Gottfried. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/15934-58b8c8ab7a09ea15@postgresql.org
* Remove superfluous newlines in function prototypes. (Andres Freund, 2019-07-31)
  These were introduced by pgindent due to fixes to broken indentation (c.f. 8255c7a5eeba8). Previously the mis-indentation of function prototypes was creatively used to reduce indentation in a few places.
  As that formatting only exists in master and REL_12_STABLE, it seems better to fix it in both, rather than having some odd indentation in v12 that somebody might copy for future patches or such.
  Author: Andres Freund
  Discussion: https://postgr.es/m/20190728013754.jwcbe5nfyt3533vx@alap3.anarazel.de
  Backpatch: 12-
* Allow table AM's to use rd_amcache, too. (Heikki Linnakangas, 2019-07-30)
  The rd_amcache allows an index AM to cache arbitrary information in a relcache entry. This commit moves the cleanup of rd_amcache so that it can also be used by table AMs. Nothing takes advantage of that yet, but I'm sure it'll come in handy for anyone writing new table AMs.
  Backpatch to v12, where the table AM interface was introduced.
  Reviewed-by: Julien Rouhaud
* Print WAL position correctly in pg_rewind error message. (Heikki Linnakangas, 2019-07-30)
  This has been wrong ever since pg_rewind was added. The if-branch just above this, where we print the same error with an extra message supplied by XLogReadRecord(), got this right, but the variable name was wrong in the else-branch. As a consequence, the error printed the WAL position as 0/0 if there was an error reading a WAL file.
  Backpatch to 9.5, where pg_rewind was added.
* Don't build extended statistics on inheritance trees (Tomas Vondra, 2019-07-30)
  When performing ANALYZE on inheritance trees, we collect two samples for each relation - one for the relation alone, and one for the inheritance subtree (relation and its child relations). And then we build statistics on each sample, so for each relation we get two sets of statistics.
  For regular (per-column) statistics this works fine, because the catalog includes a flag differentiating statistics built from those two samples. But we don't have such a flag in the extended statistics catalogs, and we ended up updating the same row twice, triggering this error:
      ERROR: tuple already updated by self
  The simplest solution is to disable extended statistics on inheritance trees, which is what this commit is doing. In the future we may need to do something similar to per-column statistics, but that requires adding a flag to the catalog - and that's not backpatchable. Moreover, the current selectivity estimation code only works with individual relations, so building statistics on inheritance trees would be pointless anyway.
  Author: Tomas Vondra
  Backpatch-to: 10-
  Discussion: https://postgr.es/m/20190618231233.GA27470@telsasoft.com
  Reported-by: Justin Pryzby
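  An illustrative shape of the failing scenario described above (table and statistics names are hypothetical, not taken from the commit):

      CREATE TABLE parent (a int, b int);
      CREATE TABLE child () INHERITS (parent);
      INSERT INTO parent SELECT i, i FROM generate_series(1, 1000) s(i);
      INSERT INTO child  SELECT i, i FROM generate_series(1, 1000) s(i);
      CREATE STATISTICS parent_stats (dependencies) ON a, b FROM parent;
      ANALYZE parent;  -- before this fix, could fail with "tuple already updated by self"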
* Fix busted logic for parallel lock grouping in TopoSort(). (Tom Lane, 2019-07-29)
  A "break" statement erroneously left behind by commit a1c1af2a1 caused TopoSort to do the wrong thing if a lock's wait list contained multiple members of the same locking group.
  Because parallel workers don't normally need any locks not already taken by their leader, this is very hard --- maybe impossible --- to hit in production. Still, if it did happen, the queries involved in an otherwise-resolvable deadlock would block until canceled.
  In addition to removing the bogus "break", add an Assert showing that the conflicting uses of the beforeConstraints[] array (for both counts and flags) don't overlap, and add some commentary explaining why not; because it's not obvious without explanation, IMHO.
  Original report and patch from Rui Hai Jiang; additional assert and commentary by me. Back-patch to 9.6 where the bug came in.
  Discussion: https://postgr.es/m/CAEri+mLd3bpHLyW+a9pSe1y=aEkeuJpwBSwvo-+m4n7-ceRmXw@mail.gmail.com
* Fix handling of expressions and predicates in REINDEX CONCURRENTLY (Michael Paquier, 2019-07-29)
  When copying the definition of an index rebuilt concurrently for the new entry, the index information was taken directly from the old index using the relation cache. In this case, predicates and expressions have some post-processing to prepare things for the planner, which loses some information including the collations added in any of them.
  This inconsistency can cause issues when attempting, for example, a table rewrite, and makes the new indexes rebuilt concurrently inconsistent with the old entries.
  In order to fix the problem, fetch expressions and predicates directly from the catalog of the old entry, and fill in IndexInfo for the new index with that. This makes the process more consistent with DefineIndex(), and the code is refactored with the addition of a routine to create an IndexInfo node.
  Reported-by: Manuel Rigger
  Author: Michael Paquier
  Discussion: https://postgr.es/m/CA+u7OA5Hp0ra235F3czPom_FyAd-3+XwSJmX95r1+sRPOJc9VQ@mail.gmail.com
  Backpatch-through: 12
* Avoid macro clash with LLVM 9. (Thomas Munro, 2019-07-29)
  Early previews of LLVM 9 reveal that our Min() macro causes compiler errors in LLVM headers reached by the #include directives in llvmjit_inline.cpp. Let's just undefine it. Per buildfarm animal seawasp.
  Back-patch to 11.
  Reviewed-by: Fabien Coelho, Tom Lane
  Discussion: https://postgr.es/m/20190606173216.GA6306%40alvherre.pgsql
* Don't uselessly escape a string that doesn't need escaping (Alvaro Herrera, 2019-07-26)
  Per gripe from Ian Barwick
  Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
  Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
* Tweak our special-case logic for the IANA "Factory" timezone. (Tom Lane, 2019-07-26)
  pg_timezone_names() tries to avoid showing the "Factory" zone in the view, mainly because that has traditionally had a very long "abbreviation" such as "Local time zone must be set--see zic manual page", so that showing it messes up psql's formatting of the whole view.
  Since tzdb version 2016g, IANA instead uses the abbreviation "-00", which is sane enough that there's no reason to discriminate against it. On the other hand, it emerges that FreeBSD and possibly other packagers are so wedded to backwards compatibility that they hack the IANA data to keep the old spelling --- and not just that old spelling, but even older spellings that IANA used back in the stone age. This caused the filter logic to fail to suppress "Factory" at all on such platforms, though the formatting problem is definitely real in that case.
  To solve both problems, get rid of the hard-wired assumption about exactly what Factory's abbreviation is, and instead reject abbreviations exceeding 31 characters. This will allow Factory to appear in the view if and only if it's using the modern abbreviation.
  In passing, simplify the code we add to zic.c to support "zic -P" to remove its now-obsolete hacks to not print the Factory zone's abbreviation. Unlike pg_timezone_names(), there's no reason for that code to support old/nonstandard timezone data.
  Since we generally prefer to keep timezone-related behavior the same in all branches, and since this is arguably a bug fix, back-patch to all supported branches.
  Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
* Avoid choosing "localtime" or "posixrules" as TimeZone during initdb.Tom Lane2019-07-26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Some platforms create a file named "localtime" in the system timezone directory, making it a copy or link to the active time zone file. If Postgres is built with --with-system-tzdata, initdb will see that file as an exact match to localtime(3)'s behavior, and it may decide that "localtime" is the most preferred spelling of the active zone. That's a very bad choice though, because it's neither informative, nor portable, nor stable if someone changes the system timezone setting. Extend the preference logic added by commit e3846a00c so that we will prefer any other zone file that matches localtime's behavior over "localtime". On the same logic, also discriminate against "posixrules", which is another not-really-a-zone file that is often present in the timezone directory. (Since we install "posixrules" but not "localtime", this change can affect the behavior of Postgres with or without --with-system-tzdata.) Note that this change doesn't prevent anyone from choosing these pseudo-zones if they really want to (i.e., by setting TZ for initdb, or modifying the timezone GUC later on). It just prevents initdb from preferring these zone names when there are multiple matches to localtime's behavior. Since we generally prefer to keep timezone-related behavior the same in all branches, and since this is arguably a bug fix, back-patch to all supported branches. Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
* Fix loss of fractional digits for large values in cash_numeric(). (Tom Lane, 2019-07-26)
  Money values exceeding about 18 digits (depending on lc_monetary) could be inaccurately converted to numeric, due to select_div_scale() deciding it didn't need to compute any fractional digits. Force its hand by setting the dscale of one division input to equal the number of fractional digits we need.
  In passing, rearrange the logic to not do useless work in locales where money values are considered integral.
  Per bug #15925 from Slawomir Chodnicki. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
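  A rough illustration of the symptom, assuming lc_monetary = 'C' (the exact cutoff depends on the locale's fractional digits):

      SELECT '92233720368547758.07'::money::numeric;
      -- before the fix, a value this large could come back with the fractional
      -- digits dropped; with it, the .07 is preserved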
* Fix LDAP test instability. (Thomas Munro, 2019-07-26)
  After starting slapd, wait until it can accept a connection before beginning the real test work. This avoids occasional test failures.
  Back-patch to 11, where the LDAP tests arrived.
  Author: Thomas Munro
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/20190719033013.GI1859%40paquier.xyz
* Add missing (COSTS OFF) to EXPLAIN added in previous commit. (Andres Freund, 2019-07-25)
  Backpatch: 12-, like the previous commit
* Fix slot type handling for Agg nodes performing internal sorts. (Andres Freund, 2019-07-25)
  Since 15d8f8312 we assert that - and since 7ef04e4d2cb2, 4da597edf1 rely on - the slot type for an expression's ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged as such. That allows us to either skip deforming (for a virtual tuple slot) or optimize the code for JIT accelerated deforming appropriately (for other known slot types).
  This assumption was sometimes violated for grouping sets, when nodeAgg.c internally uses tuplesorts, and the child node doesn't return a TTSOpsMinimalTuple type slot. Detect that case, and flag that the outer slot might not be "fixed".
  It's probably worthwhile to optimize this further in the future, and more granularly determine whether the slot is fixed. As we already instantiate per-phase transition and equal expressions, we could cheaply set the slot type appropriately for each phase. But that's a separate change from this bugfix.
  This commit does include a very minor optimization by not creating a slot for handling tuplesorts if no such sorts are performed. Previously we created that slot unnecessarily in the common case of computing all grouping sets via hashing. The code looked too confusing without that, as the conditions for needing a sort slot and flagging that the slot type isn't fixed, are the same.
  Reported-By: Ashutosh Sharma
  Author: Andres Freund
  Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
  Backpatch: 12-, where the bug was introduced in 15d8f8312
* Fix syntax error in commit 20e99cddd. (Tom Lane, 2019-07-25)
  Per buildfarm.
* Fix failures to ignore \r when reading Windows-style newlines. (Tom Lane, 2019-07-25)
  libpq failed to ignore Windows-style newlines in connection service files. This normally wasn't a problem on Windows itself, because fgets() would convert \r\n to just \n. But if libpq were running inside a program that changes the default fopen mode to binary, it would see the \r's and think they were data. In any case, it's project policy to ignore \r in text files unconditionally, because people sometimes try to use files with DOS-style newlines on Unix machines, where the C library won't hide that from us.
  Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of the line. In HEAD, go a little further and make it ignore all trailing whitespace, to match what it's always done with leading whitespace.
  In HEAD, also run around and fix up everyplace where we have newline-chomping code to make all those places look consistent and uniformly drop \r. It is not clear whether any of those changes are fixing live bugs. Most of the non-cosmetic changes are in places that are reading popen output, and the jury is still out as to whether popen on Windows can return \r\n. (The Windows-specific code in pipe_read_line seems to think so, but our lack of support for this elsewhere suggests maybe it's not a problem in practice.) Hence, I desisted from applying those changes to back branches, except in run_ssl_passphrase_command() which is new enough and little-tested enough that we'd probably not have heard about any problems there.
  Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha. Back-patch the parseServiceFile() change to all supported branches, and the run_ssl_passphrase_command() change to v11 where that was added.
  Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
* Honor MSVC WindowsSDKVersion if set (Andrew Dunstan, 2019-07-25)
  Add a line to the project file setting the target SDK. Otherwise, in VS2017 for example, if the default but optional 8.1 SDK is not installed, the build will fail.
  Patch from Peifeng Qiu, slightly edited by me.
  Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com
  Backpatch to all live branches.
* Fix system column accesses in ON CONFLICT ... RETURNING. (Andres Freund, 2019-07-24)
  After 277cb789836 ON CONFLICT ... SET ... RETURNING failed with
      ERROR: virtual tuple table slot does not have system attributes
  when taking the update path, as the slot used to insert into the table (and then process RETURNING) was defined to be a virtual slot in that commit. Virtual slots don't support system columns except for tableoid and ctid, as the other system columns are AM dependent.
  Fix that by using a slot of the table's type. Add tests for system column accesses in ON CONFLICT ... RETURNING.
  Reported-By: Roby, bisected to the relevant commit by Jeff Janes
  Author: Andres Freund
  Discussion: https://postgr.es/m/73436355-6432-49B1-92ED-1FE4F7E7E100@finefun.com.au
  Backpatch: 12-, where the bug was introduced in 277cb789836
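  A minimal sketch of the failing pattern described above (hypothetical table and column names, not taken from the commit's tests):

      CREATE TABLE t (k int PRIMARY KEY, v text);
      INSERT INTO t VALUES (1, 'a');
      INSERT INTO t VALUES (1, 'b')
        ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v
        RETURNING xmin;  -- update path plus a system column in RETURNING hit the error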
* Fix failure with pgperlcritic from the TAP test of synchronous replication (Michael Paquier, 2019-07-25)
  Oversight in 7d81bdc, which introduced a new routine in perl lacking a return clause. Per buildfarm member crake. Backpatch down to 9.6 like its parent.
  Reported-by: Andrew Dunstan
  Discussion: https://postgr.es/m/16da29fa-d504-1380-7095-40de586dc038@2ndQuadrant.com
  Backpatch-through: 9.6
* Fix infelicities in describeOneTableDetails' partitioned-table handling. (Tom Lane, 2019-07-24)
  describeOneTableDetails issued a partition-constraint-fetching query for every table, even ones it knows perfectly well are not partitions. To add insult to injury, it then proceeded to leak the empty PGresult if the table wasn't a partition. Doing that a lot of times might amount to a meaningful leak, so this seems like a back-patchable bug.
  Fix that, and also fix a related PGresult leak in the partition-parent case (though that leak would occur only if we got no row, which is unexpected).
  Minor code beautification too, to make this code look more like the pre-existing code around it.
  Back-patch the whole change into v12. However, the fact that we already know whether the table is a partition dates only to commit 1af25ca0c; back-patching the relevant changes from that is probably more churn than is justified in released branches. Hence, in v11 and v10, just do the minimum to fix the PGresult leaks.
  Noted while messing around with adjacent code for yesterday's \d improvements.
* Use full 64-bit XID for checking if a deleted GiST page is old enough. (Heikki Linnakangas, 2019-07-24)
  Otherwise, after a deleted page gets even older, it becomes unrecyclable again. B-tree has the same problem, and has had since time immemorial, but let's at least fix this in GiST, where this is new.
  Backpatch to v12, where GiST page deletion was introduced.
  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
* Refactor checks for deleted GiST pages. (Heikki Linnakangas, 2019-07-24)
  The explicit check in gistScanPage() isn't currently really necessary, as a deleted page is always empty, so the loop would fall through without doing anything, anyway. But it's a marginal optimization, and it gives a nice place to attach a comment to explain how it works.
  Backpatch to v12, where GiST page deletion was introduced.
  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
* Don't assume expr is available in pgbench tests (Andrew Dunstan, 2019-07-24)
  Windows hosts do not normally come with expr, so instead of using that to test the \setshell command, use echo instead, which is fairly universally available.
  Backpatch to release 11, where this came in.
  Problem found by me, patch by Fabien Coelho.
* Improve stability of TAP test for synchronous replication (Michael Paquier, 2019-07-24)
  Slow buildfarm machines have run into issues with this TAP test caused by a race condition related to the startup of a set of standbys, where it is possible to finish with an unexpected order in the WAL sender array of the primary.
  This closes the race condition by making sure that any standby started is registered into the WAL sender array of the primary before starting the next one based on lookups of pg_stat_replication.
  Backpatch down to 9.6 where the test has been introduced.
  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera, Noah Misch
  Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
  Backpatch-through: 9.6
* Check that partitions are not in use when dropping constraints (Alvaro Herrera, 2019-07-23)
  If the user creates a deferred constraint in a partition, and in a transaction they cause the constraint's trigger execution to be deferred until commit time *and* drop the constraint, then when commit time comes the queued trigger will fail to run because the trigger object will have been dropped.
  This is explained because when a constraint gets dropped in a partitioned table, the recursion to drop the ones in partitions is done by the dependency mechanism, not by ALTER TABLE traversing the recursion tree as in all other cases. In the non-partitioned case, this problem is avoided by checking that the table is not "in use" by alter-table; other alter-table subcommands that recurse to partitions do that check for each partition. But the dependency mechanism doesn't have a way to do that.
  Fix the problem by applying the same check to all partitions during ALTER TABLE's "prep" phase, which correctly raises the necessary error.
  Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
  Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com
* Install dependencies to prevent dropping partition key columns. (Tom Lane, 2019-07-22)
  The logic in ATExecDropColumn that rejects dropping partition key columns is quite an inadequate defense, because it doesn't execute in cases where a column needs to be dropped due to cascade from something that only the column, not the whole partitioned table, depends on. That leaves us with a badly broken partitioned table; even an attempt to load its relcache entry will fail.
  We really need to have explicit pg_depend entries that show that the column can't be dropped without dropping the whole table. Hence, add those entries. In v12 and HEAD, bump catversion to ensure that partitioned tables will have such entries. We can't do that in released branches of course, so in v10 and v11 this patch affords protection only to partitioned tables created after the patch is installed. Given the lack of field complaints (this bug was found by fuzz-testing not by end users), that's probably good enough.
  In passing, fix ATExecDropColumn and ATPrepAlterColumnType messages to be more specific about which partition key column they're complaining about.
  Per report from Manuel Rigger. Back-patch to v10 where partitioned tables were added.
  Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
  Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
* Use column collation for extended statistics (Tomas Vondra, 2019-07-20)
  The current extended statistics code was a bit confused about which collation to use. When building the statistics, the collations defined as default for the data types were used (since commit 5e0928005). The MCV code was however using the column collations for MCV serialization, and then DEFAULT_COLLATION_OID when computing estimates. So overall the code was using all three possible options, inconsistently.
  This uses the column collation everywhere - this makes it consistent with what 5e0928005 did for regular stats. We however do not track the collations in a catalog, because we can derive them from column-level information. This may need to change in the future, e.g. after allowing statistics on expressions.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12
* Rework examine_opclause_expression to use varonleft (Tomas Vondra, 2019-07-20)
  The examine_opclause_expression function needs to return information on which side of the operator we found the Var, but the variable was called "isgt" which is rather misleading (it assumes the operator is either less-than or greater-than, but it may be equality or something else). Other places in the planner use a variable called "varonleft" for this purpose, so just adopt the same convention here.
  The code also assumed we don't care about this flag for equality, as (Var = Const) and (Const = Var) should be the same thing. But that does not work for cross-type operators, in which case we need to pass the parameters to the procedure in the right order. So just use the same code for all types of expressions.
  This means we don't need to care about the selectivity estimation function anymore, at least not in this code. We should only get the supported cases here (thanks to statext_is_compatible_clause).
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12
* Silence compiler warning, hopefully. (Tom Lane, 2019-07-19)
  Absorb commit e5e04c962a5d12eebbf867ca25905b3ccc34cbe0 from upstream IANA code, in hopes of silencing warnings from MSVC about negating a bool value.
  Discussion: https://postgr.es/m/20190719035347.GJ1859@paquier.xyz
* Fix error in commit e6feef57. (Jeff Davis, 2019-07-18)
  I was careless passing a datum directly to DATE_NOT_FINITE without calling DatumGetDateADT() first.
  Backpatch-through: 9.4
* Fix daterange canonicalization for +/- infinity. (Jeff Davis, 2019-07-18)
  The values 'infinity' and '-infinity' are a part of the DATE type itself, so a bound of the date 'infinity' is not the same as an unbounded/infinite range. However, it is still wrong to try to canonicalize such values, because adding or subtracting one has no effect. Fix by treating 'infinity' and '-infinity' the same as unbounded ranges for the purposes of canonicalization (but not other purposes).
  Backpatch to all versions because it is inconsistent with the documented behavior. Note that this could be an incompatibility for applications relying on the behavior contrary to the documentation.
  Author: Laurenz Albe
  Reviewed-by: Thomas Munro
  Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
  Backpatch-through: 9.4
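  An illustrative consequence (this particular query is not taken from the commit; the behavior described is an inference from the message above): an explicit, inclusive 'infinity' bound should no longer be collapsed into an exclusive one during canonicalization, so it still contains 'infinity' itself.

      SELECT daterange('2019-01-01', 'infinity', '[]') @> 'infinity'::date;
      -- expected to return true with the fix; previously the ']' bound was
      -- "canonicalized" away, making the range exclude 'infinity'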
* Fix nbtree metapage cache upgrade bug. (Peter Geoghegan, 2019-07-18)
  Commit 857f9c36cda, which taught nbtree VACUUM to avoid unnecessary index scans, bumped the nbtree version number from 2 to 3, while adding the ability for nbtree indexes to be upgraded on-the-fly. Various assertions that assumed that an nbtree index was always on version 2 had to be changed to accept any supported version (version 2 or 3 on Postgres 11).
  However, a few assertions were missed in the initial commit, all of which were in code paths that cache a local copy of the metapage metadata, where the index had been expected to be on the current version (no longer version 2) as a generic sanity check. Rather than simply update the assertions, follow-up commit 0a64b45152b intentionally made the metapage caching code update the per-backend cached metadata version without changing the on-disk version at the same time. This could even happen when the planner needed to determine the height of a B-Tree for costing purposes. The assertions only fail on Postgres v12 when upgrading from v10, because they were adjusted to use the authoritative shared memory metapage by v12's commit dd299df8.
  To fix, remove the cache-only upgrade mechanism entirely, and update the assertions themselves to accept any supported version (go back to using the cached version in v12). The fix is almost a full revert of commit 0a64b45152b on the v11 branch.
  VACUUM only considers the authoritative metapage, and never bothers with a locally cached version, whereas everywhere else isn't interested in the metapage fields that were added by commit 857f9c36cda. It seems unlikely that this bug has affected any user on v11.
  Reported-By: Christoph Berg
  Bug: #15896
  Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
  Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.
* Simplify bitmap updates in multivariate MCV code (Tomas Vondra, 2019-07-18)
  When evaluating clauses on a multivariate MCV list, we build a bitmap tracking how the clauses match each item of the MCV list. When updating the bitmap we need to consider the current value (tracking how the item matches preceding clauses), the match for the current clause, and whether the clauses are connected by AND or OR.
  Until now the logic was copied in every place updating the bitmap, which was not quite readable. So just move it to a separate function and call it where needed.
  Backpatch to 12, where the code was introduced. While not a bugfix, this should make maintenance and future backpatches easier.
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
* Fix handling of NULLs in MCV items and constants (Tomas Vondra, 2019-07-18)
  There were two issues in how the extended statistics handled NULL values in opclauses. Firstly, the code was oblivious to the possibility that Const may be NULL (constisnull=true) in which case the constvalue is undefined. We need to treat this as a mismatch, and not call the proc.
  Secondly, the MCV item itself may contain NULL values too - the code already did check that, and updated the match bitmap accordingly, but failed to ensure we won't call the operator procedure anyway. It did work for AND-clauses, because in that case false in the bitmap stops evaluation of further clauses. But for OR-clauses it was not easy to get incorrect estimates or even trigger a crash.
  This fixes both issues by extending the existing check so that it looks at constisnull too, and making sure it skips calling the procedure.
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
* Fix handling of opclauses in extended statistics (Tomas Vondra, 2019-07-18)
  We expect opclauses to have exactly one Var and one Const, but the code was checking the Const by calling is_pseudo_constant_clause() which is incorrect - we need a proper constant.
  Fixed by using plain IsA(x,Const) to check type of the node. We need to do these checks in two places, so move it into a separate function that can be called in both places.
  Reported by Andreas Seltenreich, based on crash reported by sqlsmith. Backpatch to v12, where this code was introduced.
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12
* Remove unnecessary TYPECACHE_GT_OPR lookup (Tomas Vondra, 2019-07-18)
  The TYPECACHE_GT_OPR is not needed (it used to be in an older version of the MCV code), but the compiler failed to detect this as the result was used in a fmgr_info() call, populating a FmgrInfo entry.
  Backpatch to v12, where this code was introduced.
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12
* tableam: comment improvements. (Andres Freund, 2019-07-17)
  Author: Brad DeJong
  Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
  Backpatch: 12-
* Update time zone data files to tzdata release 2019b. (Tom Lane, 2019-07-17)
  Brazil no longer observes DST. Historical corrections for Palestine, Hong Kong, and Italy.