path: root/src
...
* Fix improper interaction of FULL JOINs with lateral references.  (Tom Lane, 2019-04-08)

join_is_legal() needs to reject forming certain outer joins in cases where that would lead the planner down a blind alley.  However, it mistakenly supposed that the way to handle full joins was to treat them as applying the same constraints as for left joins, only to both sides.  That doesn't work, as shown in bug #15741 from Anthony Skorski: given a lateral reference out of a join that's fully enclosed by a full join, the code would fail to believe that any join ordering is legal, resulting in errors like "failed to build any N-way joins".

However, we don't really need to consider full joins at all for this purpose, because we effectively force them to be evaluated in syntactic order, and that order is always legal for lateral references.  Hence, get rid of this broken logic for full joins and just ignore them instead.

This seems to have been an oversight in commit 7e19db0c0.  Back-patch to all supported branches, as that was.

Discussion: https://postgr.es/m/15741-276f1f464b3f40eb@postgresql.org
* Fix EvalPlanQualStart to handle partitioned result rels correctly.  (Tom Lane, 2019-04-08)

The es_root_result_relations array needs to be shallow-copied in the same way as the main es_result_relations array, else EPQ rechecks on partitioned result relations fail, as seen in bug #15677 from Norbert Benkocs.

Amit Langote, isolation test case added by me

Discussion: https://postgr.es/m/15677-0bf089579b4cd02d@postgresql.org
Discussion: https://postgr.es/m/19321.1554567786@sss.pgh.pa.us
* Fix partition tuple routing with dropped attributes  (Michael Paquier, 2019-04-08)

When trying to insert a tuple into a partitioned table, the routing to the correct partition was messed up by confusing when a tuple needs to be stored in an intermediate parent's slot with when a tuple needs to be converted because of attribute changes between the immediate parent relation and the parent relation one level above that (the grandparent).  This could trigger errors like the following:

    ERROR: cannot extract attribute from empty tuple slot
    SQL state: XX000

This was not detected because the regression tests with dropped attributes only covered two levels of partitioning, while the problem needs three levels or more to be triggered.

This fixes bug #15733, introduced by 34295b8.  The bug happens only on REL_11_STABLE; HEAD gains the regression tests added for this bug.

Reported-by: Petr Fedorov
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15733-7692379e310b80ec@postgresql.org
* Avoid fetching past the end of the indoption array.  (Tom Lane, 2019-04-07)

pg_get_indexdef_worker carelessly fetched indoption entries even for non-key index columns that don't have one.  99.999% of the time this would be harmless, since the code wouldn't examine the value ... but some fine day this will be a fetch off the end of memory, resulting in SIGSEGV.

Detected through valgrind testing.  Odd that the buildfarm's valgrind critters haven't noticed.
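A minimal sketch of the kind of guard involved, using hypothetical structure and field names rather than the actual pg_get_indexdef_worker code: only key columns carry indoption entries, so the array must never be read for included columns.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct IndexInfoExample
    {
        int      natts;        /* total columns, key + included */
        int      nkeyatts;     /* key columns only */
        int16_t *indoption;    /* per-KEY-column flags; length == nkeyatts */
    } IndexInfoExample;

    static void
    print_column_options(const IndexInfoExample *idx)
    {
        for (int keyno = 0; keyno < idx->natts; keyno++)
        {
            /* Non-key (included) columns have no indoption entry, so
             * reading idx->indoption[keyno] for them would run past
             * the end of the array. */
            if (keyno >= idx->nkeyatts)
            {
                printf("column %d: included column, no options\n", keyno);
                continue;
            }
            printf("column %d: option flags 0x%x\n",
                   keyno, (unsigned) idx->indoption[keyno]);
        }
    }

    int
    main(void)
    {
        int16_t opts[2] = {0x0001, 0x0000};
        IndexInfoExample idx = {.natts = 3, .nkeyatts = 2, .indoption = opts};
        print_column_options(&idx);
        return 0;
    }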
* Clean up side-effects of commits ab5fcf2b0 et al.  (Tom Lane, 2019-04-07)

Before those commits, partitioning-related code in the executor could assume that ModifyTableState.resultRelInfo[] contains only leaf partitions.  However, now a fully-pruned update results in a dummy ModifyTable that references the root partitioned table, and that breaks some stuff.

In v11, this led to an assertion or core dump in the tuple routing code.  Fix by disabling tuple routing, since we don't need that anyway.  (I chose to do that in HEAD as well for safety, even though the problem doesn't manifest in HEAD as it stands.)

In v10, this confused ExecInitModifyTable's decision about whether it needed to close the root table.  But we can get rid of that altogether by being smarter about where to find the root table.

Note that since the referenced commits haven't shipped yet, this isn't fixing any bug the field has seen.

Amit Langote, per a report from me

Discussion: https://postgr.es/m/20710.1554582479@sss.pgh.pa.us
* Fix failures in validateForeignKeyConstraint's slow path.  (Tom Lane, 2019-04-06)

The foreign-key-checking loop in ATRewriteTables failed to ignore relations without storage (e.g., partitioned tables), unlike the initial loop.  This accidentally worked as long as RI_Initial_Check succeeded, which it does in most practical cases (including all the ones exercised in the existing regression tests :-().  However, if that failed, as for instance when there are permissions issues, then we entered the slow fire-the-trigger-on-each-tuple path.  And that would try to read from the referencing relation, and fail if it lacks storage.

A second problem, recently introduced in HEAD, was that this loop had been broken by sloppy refactoring for the tableam API changes.

Repair both issues, and add a regression test case so we have some coverage on this code path.  Back-patch as needed to v11.

(It looks like this code could do with additional bulletproofing, but let's get a working test case in place first.)

Hadi Moshayedi, Tom Lane, Andres Freund

Discussion: https://postgr.es/m/CAK=1=WrnNmBbe5D9sm3t0a6dnAq3cdbF1vXY816j1wsMqzC8bw@mail.gmail.com
Discussion: https://postgr.es/m/19030.1554574075@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de
* Revert "Consistently test for in-use shared memory."Noah Misch2019-04-05
| | | | | | | | | This reverts commits 2f932f71d9f2963bbd201129d7b971c8f5f077fd, 16ee6eaf80a40007a138b60bb5661660058d0422 and 6f0e190056fe441f7cf788ff19b62b13c94f68f3. The buildfarm has revealed several bugs. Back-patch like the original commits. Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com
* Silence -Wimplicit-fallthrough in sysv_shmem.c.  (Noah Misch, 2019-04-03)

Commit 2f932f71d9f2963bbd201129d7b971c8f5f077fd added code that elicits a warning on buildfarm member flaviventris.  Back-patch to 9.4, like that commit.

Reported by Andres Freund.

Discussion: https://postgr.es/m/20190404020057.galelv7by75ekqrh@alap3.anarazel.de
* Make src/test/recovery/t/017_shm.pl safe for concurrent execution.  (Noah Misch, 2019-04-03)

Buildfarm members idiacanthus and komodoensis, which share a host, both executed this test in the same second.  That failed.  Back-patch to 9.6, where the test first appeared.

Discussion: https://postgr.es/m/20190404020543.GA1319573@rfd.leadboat.com
* Handle USE_MODULE_DB for all tests able to use an installed postmaster.  (Noah Misch, 2019-04-03)

When $(MODULES) and $(MODULE_big) are empty, derive the database name from the first element of $(REGRESS) instead of using a constant string.  When deriving the database name from $(MODULES), use its first element instead of the entire list; the earlier approach would fail if any multi-module directory had $(REGRESS) tests.  Treat isolation suites and src/pl correspondingly.

Under USE_MODULE_DB=1, installcheck-world and check-world no longer reuse any database name in a given postmaster.  Buildfarm members axolotl, mandrill and frogfish saw spurious "is being accessed by other users" failures that would not have happened without database name reuse.  (The CountOtherDBBackends() 5s deadline expired during DROP DATABASE; a backend for an earlier test suite had used the same database name and had not yet exited.)

Back-patch to 9.4 (all supported versions), except bits pertaining to isolation suites.

Concept reviewed by Andrew Dunstan, Andres Freund and Tom Lane.

Discussion: https://postgr.es/m/20190401135213.GE891537@rfd.leadboat.com
* Consistently test for in-use shared memory.  (Noah Misch, 2019-04-03)

postmaster startup scrutinizes any shared memory segment recorded in postmaster.pid, exiting if that segment matches the current data directory and has an attached process.  When the postmaster.pid file was missing, a starting postmaster used weaker checks.  Change to use the same checks in both scenarios.  This increases the chance of a startup failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1 postmaster.pid` && rm postmaster.pid && pg_ctl -w start".  A postmaster will no longer recycle segments pertaining to other data directories.  That's good for production, but it's bad for integration tests that crash a postmaster and immediately delete its data directory.  Such a test now leaks a segment indefinitely.  No "make check-world" test does that.  win32_shmem.c already avoided all these problems.  In 9.6 and later, enhance PostgresNode to facilitate testing.  Back-patch to 9.4 (all supported versions).

Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20130911033341.GD225735@tornado.leadboat.com
* Perform RLS subquery checks as the right user when going via a view.  (Dean Rasheed, 2019-04-02)

When accessing a table with RLS via a view, the RLS checks are performed as the view owner.  However, the code neglected to propagate that to any subqueries in the RLS checks.  Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b1717a@postgresql.org
* Update HINT for pre-existing shared memory block.  (Noah Misch, 2019-03-31)

One should almost always terminate an old process, not use a manual removal tool like ipcrm.  Removal of the ipcclean script eleven years ago (39627b1ae680cba44f6e56ca5facec4fdbfe9495) and its non-replacement corroborate that manual shm removal is now a niche goal.  Back-patch to 9.4 (all supported versions).

Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.

Discussion: https://postgr.es/m/20180812064815.GB2301738@rfd.leadboat.com
* Have pg_upgrade's Makefile honor NO_TEMP_INSTALL  (Andrew Dunstan, 2019-03-31)

Backpatch to 9.5, when pg_upgrade's location changed.

Discussion: https://postgr.es/m/5506b8fa-7dad-8483-053c-7ca7ef04f01a@2ndQuadrant.com
* Avoid crash in partitionwise join planning under GEQO.  (Tom Lane, 2019-03-30)

While trying to plan a partitionwise join, we may be faced with cases where one or both input partitions for a particular segment of the join have been pruned away.  In HEAD and v11, this is problematic because earlier processing didn't bother to make a pruned RelOptInfo fully valid.  With an upcoming patch to make partition pruning more efficient, this'll be even more problematic because said RelOptInfo won't exist at all.

The existing code attempts to deal with this by retroactively making the RelOptInfo fully valid, but that causes crashes under GEQO because join planning is done in a short-lived memory context.  In v11 we could probably have fixed this by switching to the planner's main context while fixing up the RelOptInfo, but that idea doesn't scale well to the upcoming patch.  It would be better not to mess with the base-relation data structures during join planning, anyway --- that's just a recipe for order-of-operations bugs.

In many cases, though, we don't actually need the child RelOptInfo, because if the input is certainly empty then the join segment's result is certainly empty, so we can skip making a join plan altogether.  (The existing code ultimately arrives at the same conclusion, but only after doing a lot more work.)  This approach works except when the pruned-away partition is on the nullable side of a LEFT, ANTI, or FULL join, and the other side isn't pruned.  But in those cases the existing code leaves a lot to be desired anyway --- the correct output is just the result of the unpruned side of the join, but we were emitting a useless outer join against a dummy Result.  Pending somebody writing code to handle that more nicely, let's just abandon the partitionwise-join optimization in such cases.

When the modified code skips making a join plan, it doesn't make a join RelOptInfo either; this requires some upper-level code to cope with nulls in part_rels[] arrays.  We would have had to have that anyway after the upcoming patch.

Back-patch to v11 since the crash is demonstrable there.

Discussion: https://postgr.es/m/8305.1553884377@sss.pgh.pa.us
* Fix off-by-one error in txid_status().  (Thomas Munro, 2019-03-27)

The transaction ID returned by GetNextXidAndEpoch() is in the future, so we can't attempt to access its status or we might try to read a CLOG page that doesn't exist.  The > vs >= confusion probably stemmed from the choice of a variable name containing the word "last" instead of "next", so fix that too.

Back-patch to 10 where the function arrived.

Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKG%2Buua_BV5cyfsioKVN2d61Lukg28ECsWTXKvh%3DBtN2DPA%40mail.gmail.com
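A hedged illustration of the boundary in question, with simplified types rather than the actual txid_status() code: the "next" transaction ID has not been assigned yet, so a status lookup must require strictly-less-than rather than less-than-or-equal.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t xid8;

    /*
     * Return true if 'xid' may be looked up in the commit log.
     * 'next_xid' is the first transaction ID that has NOT been assigned
     * yet, so only values strictly below it have any recorded status.
     * Using "<=" here would be the off-by-one: it could send us to a
     * CLOG page that does not exist yet.
     */
    static bool
    xid_in_lookupable_range(xid8 xid, xid8 oldest_xid, xid8 next_xid)
    {
        return xid >= oldest_xid && xid < next_xid;
    }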
* Track unowned relations in doubly-linked list  (Tomas Vondra, 2019-03-27)

Relations dropped in a single transaction are tracked in a list of unowned relations.  With a large number of dropped relations this resulted in poor performance at the end of a transaction, when the relations are removed from the singly linked list one by one.

Commit b4166911 attempted to address this issue (particularly when it happens during recovery) by removing the relations in a reverse order, resulting in O(1) lookups in the list of unowned relations.  This did not work reliably, though, and it was possible to trigger the O(N^2) behavior in various ways.

Instead of trying to remove the relations in a specific order with respect to the linked list, which seems rather fragile, switch to a regular doubly linked list.  That allows us to remove relations cheaply no matter where in the list they are.

As b4166911 was a bugfix, backpatched to all supported versions, do the same thing here.

Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/flat/80c27103-99e4-1d0c-642c-d9f3b94aaa0a%402ndquadrant.com
Backpatch-through: 9.4
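A self-contained sketch (not PostgreSQL's actual ilist.h code) of why an intrusive doubly-linked list makes removal O(1) regardless of position: each node already knows both neighbours, so unlinking needs no traversal.

    #include <stddef.h>

    typedef struct dnode
    {
        struct dnode *prev;
        struct dnode *next;
    } dnode;

    typedef struct dlist
    {
        dnode head;     /* circular: head.next is first, head.prev is last */
    } dlist;

    static void
    dlist_init(dlist *list)
    {
        list->head.prev = list->head.next = &list->head;
    }

    static void
    dlist_push(dlist *list, dnode *node)
    {
        node->next = list->head.next;
        node->prev = &list->head;
        node->next->prev = node;
        list->head.next = node;
    }

    /* O(1): no need to walk the list to find the predecessor. */
    static void
    dlist_delete(dnode *node)
    {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }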
* Fix partitioned index creation bug with dropped columns  (Alvaro Herrera, 2019-03-26)

ALTER INDEX .. ATTACH PARTITION fails if the partitioned table where the index is defined contains more dropped columns than its partition, with this message:

    ERROR: incorrect attribute map

The cause was that one caller of CompareIndexInfo was passing the number of attributes of the partition rather than the parent, which confused the length check.  Repair.

This can cause pg_upgrade to fail when used on such a database.  Leave some more objects around after regression tests, so that the case is detected by pg_upgrade test suite.

Remove some spurious empty lines noticed while looking for other cases of the same problem.

Discussion: https://postgr.es/m/20190326213924.GA2322@alvherre.pgsql
* psql: Schema-qualify typecast in one \d query  (Alvaro Herrera, 2019-03-26)

Bug introduced in my commit bc87f22ef6ef
* Fix WAL format incompatibility introduced by backpatching of 52ac6cd2d0  (Alexander Korotkov, 2019-03-24)

52ac6cd2d0 added a new field to ginxlogDeletePage and was backpatched to 9.4.  That led to problems when a patched postgres instance applies WAL records generated by a non-patched one.  WAL records generated by a non-patched instance don't contain the new field, which the patched one is expecting to see.

Thankfully, we can distinguish patched and non-patched WAL records by their data size.  If we see that a WAL record was generated by a non-patched instance, we skip processing of the new field.

This commit comes with some assertions.  In particular, if it appears that on some platform the struct data size didn't change, then a static assertion will trigger.

Reported-by: Simon Riggs
Discussion: https://postgr.es/m/CANP8%2Bj%2BK4whxf7ET7%2BgO%2BG-baC3-WxqqH%3DnV4X2CgfEPA3Yu3g%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Simon Riggs, Alvaro Herrera
Backpatch-through: 9.4
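A hedged sketch of the record-length trick described above, using hypothetical struct and field names rather than the actual GIN WAL code: whether the optional trailing field exists is inferred from the record's data length, and a static assertion guards against a platform where the two layouts could not be told apart.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct DeletePageOld
    {
        uint32_t parent_blkno;
        uint32_t left_blkno;
        uint32_t right_blkno;
    } DeletePageOld;

    typedef struct DeletePageNew
    {
        uint32_t parent_blkno;
        uint32_t left_blkno;
        uint32_t right_blkno;
        uint64_t delete_xid;        /* field added by the later patch */
    } DeletePageNew;

    /* If the two layouts had equal size we could not distinguish them. */
    static_assert(sizeof(DeletePageNew) > sizeof(DeletePageOld),
                  "cannot distinguish old and new WAL record formats by size");

    static uint64_t
    get_delete_xid(const char *rec_data, size_t rec_len, uint64_t fallback_xid)
    {
        if (rec_len >= sizeof(DeletePageNew))
        {
            DeletePageNew rec;
            memcpy(&rec, rec_data, sizeof(rec));
            return rec.delete_xid;      /* record from a patched server */
        }
        return fallback_xid;            /* old-format record: field absent */
    }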
* Make current_logfiles use permissions assigned to files in data directory  (Michael Paquier, 2019-03-24)

Since its introduction in 19dc233c, current_logfiles has been assigned the same permissions as a log file, which can be enforced with log_file_mode.  This setup can lead to incompatibility problems with group access permissions as current_logfiles is not located in the log directory, but at the root of the data folder.  Hence, if group permissions are used but log_file_mode is more restrictive, a backup with a user in the group having read access could fail even if the log directory is located outside of the data folder.

Per discussion with the folks mentioned below, we have concluded that current_logfiles should not be treated as a log file as it only stores metadata related to log files, and that it should use the same permissions as all other files in the data directory.  This solution has the merit to be simple and fixes all the interaction problems between group access and log_file_mode.

Author: Haribabu Kommi
Reviewed-by: Stephen Frost, Robert Haas, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com
Backpatch-through: 11, where group access has been added.
* Remove inadequate check for duplicate "xml" PI.  (Tom Lane, 2019-03-23)

I failed to think about PIs starting with "xml".  We don't really need this check at all, so just take it out.  Oversight in commit 8d1dadb25 et al.
* Ensure xmloption = content while restoring pg_dump output.  (Tom Lane, 2019-03-23)

In combination with the previous commit, this ensures that valid XML data can always be dumped and reloaded, whether it is "document" or "content".

Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
* Accept XML documents when xmloption = content, as required by SQL:2006+.  (Tom Lane, 2019-03-23)

Previously we were using the SQL:2003 definition, which doesn't allow this, but that creates a serious dump/restore gotcha: there is no setting of xmloption that will allow all valid XML data.  Hence, switch to the 2006 definition.

Since libxml doesn't accept <!DOCTYPE> directives in the mode we use for CONTENT parsing, the implementation is to detect <!DOCTYPE> in the input and switch to DOCUMENT parsing mode.  This should not cost much, because <!DOCTYPE> should be close to the front of the input if it's there at all.  It's possible that this causes the error messages for malformed input to be slightly different than they were before, if said input includes <!DOCTYPE>; but that does not seem like a big problem.

In passing, buy back a few cycles in parsing of large XML documents by not doing strlen() of the whole input in parse_xml_decl().

Back-patch because dump/restore failures are not nice.  This change shouldn't break any cases that worked before, so it seems safe to back-patch.

Chapman Flack (revised a bit by me)

Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
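A rough sketch of the detection idea only, and a simplification of it: the real parser must also skip an optional <?xml ...?> declaration and comments before the DOCTYPE.  If the input starts with a <!DOCTYPE directive, fall back to DOCUMENT parsing even when xmloption = content.

    #include <ctype.h>
    #include <string.h>

    typedef enum { XML_PARSE_DOCUMENT, XML_PARSE_CONTENT } XmlParseMode;

    /*
     * Decide which libxml parsing mode to use for CONTENT input.  Only the
     * leading part of the string is inspected, so this stays cheap even
     * for large documents.
     */
    static XmlParseMode
    choose_parse_mode(const char *str)
    {
        while (isspace((unsigned char) *str))
            str++;
        if (strncmp(str, "<!DOCTYPE", 9) == 0)
            return XML_PARSE_DOCUMENT;  /* content mode would reject DOCTYPE */
        return XML_PARSE_CONTENT;
    }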
* Restore RI trigger sanity check  (Alvaro Herrera, 2019-03-20)

I unnecessarily removed this check in 3de241dba86f because I misunderstood what the final representation of constraints across a partitioning hierarchy was to be.  Put it back (in both branches).

Discussion: https://postgr.es/m/201901222145.t6wws6t6vrcu@alvherre.pgsql
* Hack back-branch SSL tests to avoid intermittent buildfarm failures.  (Tom Lane, 2019-03-19)

Buildfarm member eelpout sometimes reports the wrong error message for an SSL connection failure.  In HEAD, this problem is believed to be solved by commit 1f39a1c06, but I'm as yet unwilling to back-patch that.  The problem seems fairly unlikely to be an issue in the field, since (as far as we can tell) it happens only during a failure of a local-loopback SSL connection, and it's improbable even then.  It seems better to just live with it for the time being; but let's tweak the regression test to accept the other error message as a "pass".

Needed in v11 only, since older branches didn't check the message text anyway.

Discussion: https://postgr.es/m/CAEepm=2n6Nv+5tFfe8YnkUm1fXgvxR0Mm1FoD+QKG-vLNGLyKg@mail.gmail.com
* Make checkpoint requests more robust.  (Tom Lane, 2019-03-19)

Commit 6f6a6d8b1 introduced a delay of up to 2 seconds if we're trying to request a checkpoint but the checkpointer hasn't started yet (or, much less likely, our kill() call fails).  However buildfarm experience shows that that's not quite enough for slow or heavily-loaded machines.  There's no good reason to assume that the checkpointer won't start eventually, so we may as well make the timeout much longer, say 60 sec.

However, if the caller didn't say CHECKPOINT_WAIT, it seems like a bad idea to be waiting at all, much less for as long as 60 sec.  We can remove the need for that, and make this whole thing more robust, by adjusting the code so that the existence of a pending checkpoint request is clear from the contents of shared memory, and making sure that the checkpointer process will notice it at startup even if it did not get a signal.  In this way there's no need for a non-CHECKPOINT_WAIT call to wait at all; if it can't send the signal, it can nonetheless assume that the checkpointer will eventually service the request.

A potential downside of this change is that "kill -INT" on the checkpointer process is no longer enough to trigger a checkpoint, should anyone be relying on something so hacky.  But there's no obvious reason to do it like that rather than issuing a plain old CHECKPOINT command, so we'll assume that nobody is.  There doesn't seem to be a way to preserve this undocumented quasi-feature without introducing race conditions.

Since a principal reason for messing with this is to prevent intermittent buildfarm failures, back-patch to all supported branches.

Discussion: https://postgr.es/m/27830.1552752475@sss.pgh.pa.us
* Fix error message in pg_verify_checksums  (Michael Paquier, 2019-03-19)

5864d24 introduced a new error message, and I failed to adapt the back-patched version correctly with respect to the tool name.
* Fix memory leak in printtup.c.  (Tom Lane, 2019-03-18)

Commit f2dec34e1 changed things so that printtup's output stringinfo buffer was allocated outside the per-row temporary context, not inside it.  This creates a need to free that buffer explicitly when the temp context is freed, but that was overlooked.  In most cases, this is all happening inside a portal or executor context that will go away shortly anyhow, but that's not always true.  Notably, the stringinfo ends up getting leaked when JDBC uses row-at-a-time fetches.  For a query that returns wide rows, that adds up after awhile.

Per bug #15700 from Matthias Otterbach.  Back-patch to v11 where the faulty code was added.

Discussion: https://postgr.es/m/15700-8c408321a87d56bb@postgresql.org
* Fix pg_rewind when rewinding new database with tables included  (Michael Paquier, 2019-03-18)

This fixes an issue introduced by 266b6ac, which has added filters to exclude file patterns on the target and source data directories to reduce the number of files transferred.  Filters get applied to both the target and source data files, and include pg_internal.init which is present for each database once relations are created on it.  However, if the target differed from the source with at least one new database with relations, the rewind would fail due to the exclusion filters applied on the target files, causing pg_internal.init to still be present on the target database folder, while its contents should have been completely removed so that nothing remains inside at the time of the folder deletion.

Applying exclusion filters on the source files is fine, because this way the amount of data copied from the source to the target is reduced.  And actually, not applying the filters on the target is what pg_rewind should do, because this causes such files to be automatically removed during the rewind on the target.  Exclusion filters apply to paths which are removed or recreated automatically at startup, so removing all those files on the target during the rewind is a win.

The existing set of TAP tests already stresses the rewind of databases, but it did not include any tables on those newly-created databases.  Creating extra tables in this case is enough to reproduce the failure, so the existing tests are extended to close the gap.

Reported-by: Mithun Cy
Author: Michael Paquier
Discussion: https://postgr.es/m/CADq3xVYt6_pO7ZzmjOqPgY9HWsL=kLd-_tNyMtdfjKqEALDyTA@mail.gmail.com
Backpatch-through: 11
* Error out in pg_verify_checksums on incompatible block size  (Michael Paquier, 2019-03-18)

pg_verify_checksums is compiled with a given block size and has a hard dependency on it per the way checksums are calculated via checksum_impl.h, and trying to use the tool on a data folder which does not have the same block size would result in incorrect checksum calculations and/or block read errors, meaning that the data folder would be reported as corrupted.  This is harmless as checksums are only checked now, but very confusing for the user, so issue an error properly if the block size used at compilation and the block size used in the data folder do not match.

Reported-by: Sergei Kornilov
Author: Michael Banck, Michael Paquier
Reviewed-by: Fabien Coelho, Magnus Hagander
Discussion: https://postgr.es/m/20190317054657.GA3357@paquier.xyz
Backpatch-through: 11
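A hedged sketch of the kind of check added, with a hypothetical control-file struct (the actual tool reads pg_control through the shared frontend code): refuse to scan a cluster whose block size differs from the one the binary was compiled for.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BLCKSZ 8192            /* block size this binary was built with */

    typedef struct ControlFileExample
    {
        uint32_t blcksz;           /* block size recorded in the data folder */
    } ControlFileExample;

    static void
    check_block_size(const ControlFileExample *cf)
    {
        if (cf->blcksz != BLCKSZ)
        {
            fprintf(stderr,
                    "error: data directory block size %u does not match "
                    "compiled-in block size %u\n",
                    cf->blcksz, (unsigned) BLCKSZ);
            exit(1);
        }
    }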
* Ensure dummy paths have correct required_outer if rel is parameterized.  (Tom Lane, 2019-03-14)

The assertions added by commits 34ea1ab7f et al found another problem: set_dummy_rel_pathlist and mark_dummy_rel were failing to label the dummy paths they create with the correct outer_relids, in case the relation is necessarily parameterized due to having lateral references in its tlist.  It's likely that this has no user-visible consequences in production builds, at the moment; but still an assertion failure is a bad thing, so back-patch the fix.

Per bug #15694 from Roman Zharkov (via Alexander Lakhin) and an independent report by Tushar Ahuja.

Discussion: https://postgr.es/m/15694-74f2ca97e7044f7f@postgresql.org
Discussion: https://postgr.es/m/7d72ab20-c725-3ce2-f99d-4e64dd8a0de6@enterprisedb.com
* Fix thinko when bumping on temporary directories in pg_verify_checksums  (Michael Paquier, 2019-03-14)

This fixes an oversight from 5c99513.  This has no actual consequence as PG_TEMP_FILE_PREFIX and PG_TEMP_FILES_DIR have the same value, so when bumping on a temporary path the directory scan was still moving on to the next entry instead of skipping the rest of the scan, but let's keep the logic correct.

Author: Michael Banck
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20190314.115417.58230569.horiguchi.kyotaro@lab.ntt.co.jp
Backpatch-through: 11
* Fix cross-version compatibility checks of pg_verify_checksums  (Michael Paquier, 2019-03-13)

pg_verify_checksums performs a read of the control file, and the data it fetches should be from a data folder compatible with the major version of Postgres the binary has been compiled with, but we never actually checked that compatibility.

Reported-by: Sergei Kornilov
Author: Michael Paquier
Reviewed-by: Sergei Kornilov
Discussion: https://postgr.es/m/155231347133.16480.11453587097036807558.pgcf@coridan.postgresql.org
Backpatch-through: 11
* Fix testing of parallel-safety of scan/join target.  (Etsuro Fujita, 2019-03-12)

In commit 960df2a971 ("Correctly assess parallel-safety of tlists when SRFs are used."), the testing of scan/join target was done incorrectly, which caused a plan-quality problem.  Backpatch through to v11 where the aforementioned commit went in, since this is a regression from v10.

Author: Etsuro Fujita
Reviewed-by: Robert Haas and Tom Lane
Discussion: https://postgr.es/m/5C75303E.8020303@lab.ntt.co.jp
* Fix potential memory access violation in ecpg if filename of include file is shorter than 2 characters.  (Michael Meskes, 2019-03-11)

Patch by: "Wu, Fei" <wufei.fnst@cn.fujitsu.com>
* Disallow NaN as a value for floating-point GUCs.  (Tom Lane, 2019-03-10)

None of the code that uses GUC values is really prepared for them to hold NaN, but parse_real() didn't have any defense against accepting such a value.  Treat it the same as a syntax error.

I haven't attempted to analyze the exact consequences of setting any of the float GUCs to NaN, but since they're quite unlikely to be good, this seems like a back-patchable bug fix.

Note: we don't need an explicit test for +-Infinity because those will be rejected by existing range checks.  I added a regression test for that in HEAD, but not older branches because the spelling of the value in the error message will be platform-dependent in branches where we don't always use port/snprintf.c.

Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
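A minimal sketch, assuming a strtod-based parser similar in spirit to parse_real() but not the actual GUC code: treat NaN the same as a syntax error, while infinities are left to the existing range checks.

    #include <math.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /*
     * Parse a floating-point setting.  Returns false (syntax error) for
     * anything that is not a plain number, and also for NaN, since no
     * float setting has a sensible behavior for NaN.
     */
    static bool
    parse_real_setting(const char *value, double *result)
    {
        char   *end;
        double  val = strtod(value, &end);

        if (end == value || *end != '\0')
            return false;       /* not a number at all, or trailing junk */
        if (isnan(val))
            return false;       /* reject NaN explicitly */
        *result = val;          /* +/-Infinity is caught by range checks later */
        return true;
    }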
* Fix handling of targetlist SRFs when scan/join relation is known empty.  (Tom Lane, 2019-03-07)

When we introduced separate ProjectSetPath nodes for application of set-returning functions in v10, we inadvertently broke some cases where we're supposed to recognize that the result of a subquery is known to be empty (contain zero rows).  That's because IS_DUMMY_REL was just looking for a childless AppendPath without allowing for a ProjectSetPath being possibly stuck on top.

In itself, this didn't do anything much worse than produce slightly worse plans for some corner cases.  Then in v11, commit 11cf92f6e rearranged things to allow the scan/join targetlist to be applied directly to partial paths before they get gathered.  But it inserted a short-circuit path for dummy relations that was a little too short: it failed to insert a ProjectSetPath node at all for a targetlist containing set-returning functions, resulting in bogus "set-valued function called in context that cannot accept a set" errors, as reported in bug #15669 from Madelaine Thibaut.

The best way to fix this mess seems to be to reimplement IS_DUMMY_REL so that it drills down through any ProjectSetPath nodes that might be there (and it seems like we'd better allow for ProjectionPath as well).  While we're at it, make it look at rel->pathlist not cheapest_total_path, so that it gives the right answer independently of whether set_cheapest has been done lately.  That dependency looks pretty shaky in the context of code like apply_scanjoin_target_to_paths, and even if it's not broken today it'd certainly bite us at some point.  (Nastily, unsafe use of the old coding would almost always work; the hazard comes down to possibly looking through a dangling pointer, and only once in a blue moon would you find something there that resulted in the wrong answer.)

It now looks like it was a mistake for IS_DUMMY_REL to be a macro: if there are any extensions using it, they'll continue to use the old inadequate logic until they're recompiled, after which they'll fail to load into server versions predating this fix.  Hopefully there are few such extensions.

Having fixed IS_DUMMY_REL, the special path for dummy rels in apply_scanjoin_target_to_paths is unnecessary as well as being wrong, so we can just drop it.

Also change a few places that were testing for partitioned-ness of a planner relation but not using IS_PARTITIONED_REL for the purpose; that seems unsafe as well as inconsistent, plus it required an ugly hack in apply_scanjoin_target_to_paths.

In passing, save a few cycles in apply_scanjoin_target_to_paths by skipping processing of pre-existing paths for partitioned rels, and do some cosmetic cleanup and comment adjustment in that function.

I renamed IS_DUMMY_PATH to IS_DUMMY_APPEND with the intention of breaking any code that might be using it, since in almost every case that would be wrong; IS_DUMMY_REL is what to be using instead.

In HEAD, also make set_dummy_rel_pathlist static (since it's no longer used from outside allpaths.c), and delete is_dummy_plan, since it's no longer used anywhere.

Back-patch as appropriate into v11 and v10.

Tom Lane and Julien Rouhaud

Discussion: https://postgr.es/m/15669-02fb3296cca26203@postgresql.org
* Disable dump_connstr test on Msys2  (Andrew Dunstan, 2019-03-05)

For some reason the dump test with names with high bits set fails on Msys2 (although not Msys1).  Disable the tests for now, so that other tests can run.
* Fix pgbench TAP test failure with funky file names (redux)  (Andrew Dunstan, 2019-03-05)

This test fails if the containing directory contains a funny character such as a space or some perl metacharacter.  To avoid that, we check for file names using readdir and a regex, rather than using a glob pattern.

Discussion: https://postgr.es/m/CAM6_UM6dGdU39PKAC24T+HD9ouy0jLN9vH6163K8QEEzr__iZw@mail.gmail.com
Author: Fabien COELHO
Reviewed-by: Raúl Marín Rodríguez
* Fix error handling of readdir() port implementation on first file lookup  (Michael Paquier, 2019-03-04)

The implementation of readdir() in src/port/ which gets used by MSVC has been added in 399a36a, and since the beginning it considers all errors on the first file lookup as ENOENT, setting errno accordingly and letting the routine caller think that the directory is empty.  While this is normally enough for the case of the backend, this can confuse callers of this routine on Windows as all errors would map to the same behavior.  So, for example, even permission errors would be thought as having an empty directory, while there could be contents in it.

This commit changes the error handling so that readdir() gets a behavior similar to native implementations: force errno=0 when seeing ERROR_FILE_NOT_FOUND as error and consider other errors as plain failures.

While looking at the patch, I noticed that MinGW does not enforce errno=0 when looking at the first file, but it gets enforced on the next file lookups.  A comment related to that was incorrect in the code.

Reported-by: Yuri Kurenkov
Diagnosed-by: Yuri Kurenkov, Grigory Smolkin
Author: Konstantin Knizhnik
Reviewed-by: Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/2cad7829-8d66-e39c-b937-ac825db5203d@postgrespro.ru
Backpatch-through: 9.4
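A hedged sketch of the intended error mapping, simplified from what a readdir() port built on FindFirstFile()/FindNextFile() would do; the specific errno values chosen here are illustrative assumptions, not the port's actual mapping.  Only "no such file" should mean an empty directory; any other Win32 error must surface as a real failure instead of being silently treated as ENOENT.

    #ifdef _WIN32
    #include <windows.h>
    #include <errno.h>

    /*
     * Translate the result of the first FindFirstFile() call in a
     * readdir() port.  Returns 0 for "directory is simply empty",
     * -1 for a genuine error with errno set.
     */
    static int
    map_first_lookup_error(void)
    {
        DWORD err = GetLastError();

        if (err == ERROR_FILE_NOT_FOUND)
        {
            errno = 0;          /* empty directory: end of scan, no error */
            return 0;
        }
        /* Anything else (e.g. permissions) is a real failure. */
        errno = (err == ERROR_ACCESS_DENIED) ? EACCES : EINVAL;
        return -1;
    }
    #endif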
* Further fixing for multi-row VALUES lists for updatable views.  (Dean Rasheed, 2019-03-03)

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order.  That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type.  As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE.  We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE.  That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE.  Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions.  There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
* Improve documentation of data_sync_retry  (Michael Paquier, 2019-02-28)

Reflecting an updated parameter value requires a server restart, which was not mentioned in the documentation and in postgresql.conf.sample.

Reported-by: Thomas Poty
Discussion: https://postgr.es/m/15659-0cd812f13027a2d8@postgresql.org
* Fix SCRAM authentication via SSL when mixing versions of OpenSSL  (Michael Paquier, 2019-02-28)

When using a libpq client linked with OpenSSL 1.0.1 or older to connect to a backend linked with OpenSSL 1.0.2 or newer, the server would send SCRAM-SHA-256-PLUS and SCRAM-SHA-256 as valid mechanisms for the SASL exchange, and the client would choose SCRAM-SHA-256-PLUS even if it does not support channel binding, leading to a confusing error.  In this case, what the client ought to do is switch to SCRAM-SHA-256 so that the authentication can move on and succeed.

So for a SCRAM authentication over SSL, here are all the cases present and how we deal with them using libpq:
1) Server supports channel binding, it sends SCRAM-SHA-256-PLUS and SCRAM-SHA-256 as allowed mechanisms.
1-1) Client supports channel binding, chooses SCRAM-SHA-256-PLUS.
1-2) Client does not support channel binding, chooses SCRAM-SHA-256.
2) Server does not support channel binding, sends SCRAM-SHA-256 as allowed mechanism.
2-1) Client supports channel binding, still it has no choice but to choose SCRAM-SHA-256.
2-2) Client does not support channel binding, it chooses SCRAM-SHA-256.
In all these scenarios the connection should succeed, and the one which was handled incorrectly prior to this commit is 1-2), causing the connection attempt to fail because the client chose SCRAM-SHA-256-PLUS over SCRAM-SHA-256.  A sketch of the selection rule follows this entry.

Reported-by: Hugh Ranalli
Diagnosed-by: Peter Eisentraut
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/CAAhbUMO89SqUk-5mMY+OapgWf-twF2NA5sCucbHEzMfGbvcepA@mail.gmail.com
Backpatch-through: 11
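A simplified sketch of the client-side selection rule described above, with hypothetical helper names rather than libpq's actual code: prefer SCRAM-SHA-256-PLUS only when the server offers it over SSL and the local SSL library can provide channel-binding data; otherwise fall back to SCRAM-SHA-256.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Pick a SASL mechanism from the server's advertised list.
     * Returns the chosen mechanism name, or NULL if none is acceptable.
     */
    static const char *
    choose_sasl_mechanism(const char *const *server_mechs, int nmechs,
                          bool ssl_in_use, bool have_channel_binding)
    {
        bool plus_offered = false;
        bool plain_offered = false;

        for (int i = 0; i < nmechs; i++)
        {
            if (strcmp(server_mechs[i], "SCRAM-SHA-256-PLUS") == 0)
                plus_offered = true;
            else if (strcmp(server_mechs[i], "SCRAM-SHA-256") == 0)
                plain_offered = true;
        }

        /* Case 1-1: channel binding possible on both ends. */
        if (plus_offered && ssl_in_use && have_channel_binding)
            return "SCRAM-SHA-256-PLUS";

        /* Cases 1-2, 2-1, 2-2: fall back to the non-channel-binding variant. */
        if (plain_offered)
            return "SCRAM-SHA-256";

        return NULL;
    }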
* Fix inconsistent out-of-memory error reporting in dsa.c.  (Thomas Munro, 2019-02-25)

Commit 16be2fd1 introduced the flag DSA_ALLOC_NO_OOM to control whether the DSA allocator would raise an error or return InvalidDsaPointer on failure to allocate.  One edge case was not handled correctly: if we fail to allocate an internal "span" object for a large allocation, we would always return InvalidDsaPointer regardless of the flag; a caller not expecting that could then dereference a null pointer.

This is a plausible explanation for a one-off report of a segfault.

Remove a redundant pair of braces so that all three stanzas that handle DSA_ALLOC_NO_OOM match in style, for visual consistency.  While fixing inconsistencies, if FreePageManagerGet() can't supply the pages that our book-keeping says it should be able to supply, then we should always report a FATAL error.  Previously we treated that as a regular allocation failure in one code path, but as a FATAL condition in another.

Back-patch to 10, where dsa.c landed.

Author: Thomas Munro
Reported-by: Jakub Glapa
Discussion: https://postgr.es/m/CAEepm=2oPqXxyWQ-1o60tpOLrwkw=VpgNXqqF1VN2EyO9zKGQw@mail.gmail.com
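A simplified sketch of the flag contract being made consistent, using a hypothetical allocator rather than the actual dsa.c code: with the no-OOM flag the caller gets a null result on any allocation-path failure, and without it every failure path raises an error.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ALLOC_NO_OOM 0x01

    /* Stand-ins for the allocator's internal steps. */
    static void *get_pages(size_t size) { return malloc(size); }
    static void *get_span_bookkeeping(void) { return malloc(64); }

    static void *
    shared_alloc(size_t size, int flags)
    {
        void *span = get_span_bookkeeping();
        if (span == NULL)
        {
            /* This failure path must honor the flag too, not just the one below. */
            if (flags & ALLOC_NO_OOM)
                return NULL;
            fprintf(stderr, "out of memory\n");
            abort();
        }

        void *pages = get_pages(size);
        if (pages == NULL)
        {
            free(span);
            if (flags & ALLOC_NO_OOM)
                return NULL;
            fprintf(stderr, "out of memory\n");
            abort();
        }
        return pages;
    }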
* Fix ecpg bugs caused by missing semicolons in the backend grammar.  (Tom Lane, 2019-02-24)

The Bison documentation clearly states that a semicolon is required after every grammar rule, and our scripts that generate ecpg's grammar from the backend's implicitly assumed this is true.  But it turns out that only ancient versions of Bison actually enforce that.  There have been a couple of rules without trailing semicolons in gram.y for some time, and as a consequence, ecpg's grammar was faulty and produced wrong output for the affected statements.

To fix, add the missing semis, and add some cross-checks to ecpg's scripts so that they'll bleat if we mess this up again.

The cases that were broken were:
* "SET variable = DEFAULT" (but not "SET variable TO DEFAULT"), as well as allied syntaxes such as ALTER SYSTEM SET ... DEFAULT.  These produced syntactically invalid output that the server would reject.
* Multiple type names in DROP TYPE/DOMAIN commands.  Only the first type name would be listed in the emitted command.

Per report from Daisuke Higuchi.  Back-patch to all supported versions.

Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905DB51CE@g01jpexmbkw24
* Tolerate EINVAL when calling fsync() on a directory.  (Thomas Munro, 2019-02-24)

Previously, we tolerated EBADF as a way for the operating system to indicate that it doesn't support fsync() on a directory.  Tolerate EINVAL too, for older versions of Linux CIFS.

Bug #15636.  Back-patch all the way.

Reported-by: John Klann
Discussion: https://postgr.es/m/15636-d380890dafd78fc6@postgresql.org
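A hedged sketch of the tolerance logic, not the actual fd.c code: when fsync'ing a directory, treat EBADF and EINVAL as "this filesystem cannot fsync directories" rather than as failures that threaten data durability.

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /*
     * fsync a directory fd, ignoring the errno values some filesystems
     * (and older Linux CIFS) use to say "directories cannot be fsync'd".
     * Returns 0 if the sync succeeded or was harmlessly unsupported,
     * -1 on a real error.
     */
    static int
    fsync_dir_tolerant(int dirfd, const char *path)
    {
        if (fsync(dirfd) != 0)
        {
            if (errno == EBADF || errno == EINVAL)
            {
                fprintf(stderr,
                        "debug: filesystem does not support fsync on \"%s\"\n",
                        path);
                return 0;
            }
            return -1;          /* genuine failure: caller should report it */
        }
        return 0;
    }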
* Tolerate ENOSYS failure from sync_file_range().  (Thomas Munro, 2019-02-24)

One unintended consequence of commit 9ccdd7f6 was that Windows WSL users started getting a panic whenever we tried to initiate data flushing with sync_file_range(), because WSL does not implement that system call.  Previously, they got a stream of periodic warnings, which was also undesirable but at least ignorable.

Prevent the panic by handling ENOSYS specially and skipping the panic promotion with data_sync_elevel().  Also suppress future attempts after the first such failure so that the pre-existing problem of noisy warnings is improved.

Back-patch to 9.6 (older branches were not affected in this way by 9ccdd7f6).

Author: Thomas Munro and James Sewell
Tested-by: James Sewell
Reported-by: Bruce Klein
Discussion: https://postgr.es/m/CA+mCpegfOUph2U4ZADtQT16dfbkjjYNJL1bSTWErsazaFjQW9A@mail.gmail.com
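A rough sketch of the behavior described, as a hypothetical wrapper rather than the actual code in fd.c: on the first ENOSYS the feature is remembered as unavailable, so later calls neither panic nor keep warning.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/types.h>

    /*
     * Ask the kernel to start writeback for a file range.  If the kernel
     * (e.g. Windows WSL) does not implement sync_file_range(), remember
     * that after the first failure and silently skip further attempts
     * instead of escalating the error.
     */
    static void
    request_flush(int fd, off_t offset, off_t nbytes)
    {
    #if defined(__linux__)
        static bool not_implemented = false;

        if (not_implemented)
            return;

        if (sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE) < 0)
        {
            if (errno == ENOSYS)
            {
                not_implemented = true;     /* warn once, then stay quiet */
                fprintf(stderr, "warning: sync_file_range() not supported here\n");
                return;
            }
            /* other errors are left to the caller's normal error policy */
            fprintf(stderr, "could not flush file range: %m\n");
        }
    #else
        (void) fd; (void) offset; (void) nbytes;
    #endif
    }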
* Fix plan created for inherited UPDATE/DELETE with all tables excluded.  (Tom Lane, 2019-02-22)

In the case where inheritance_planner() finds that every table has been excluded by constraints, it thought it could get away with making a plan consisting of just a dummy Result node.  While certainly there's no updating or deleting to be done, this had two user-visible problems: the plan did not report the correct set of output columns when a RETURNING clause was present, and if there were any statement-level triggers that should be fired, it didn't fire them.

Hence, rather than only generating the dummy Result, we need to stick a valid ModifyTable node on top, which requires a tad more effort here.

It's been broken this way for as long as inheritance_planner() has known about deleting excluded subplans at all (cf commit 635d42e9c), so back-patch to all supported branches.

Amit Langote and Tom Lane, per a report from Petr Fedorov.

Discussion: https://postgr.es/m/5da6f0f0-1364-1876-6978-907678f89a3e@phystech.edu
* Report correct name in autovacuum "work items" activity  (Alvaro Herrera, 2019-02-22)

We were reporting the database name instead of the relation name to pg_stat_activity.  Repair.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20190220185552.GR28750@telsasoft.com