path: root/contrib/postgres_fdw
Commit message / Author / Age
...
* Remove redundant grouping and DISTINCT columns.Tom Lane2023-01-18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Avoid explicitly grouping by columns that we know are redundant for sorting, for example we need group by only one of x and y in SELECT ... WHERE x = y GROUP BY x, y This comes up more often than you might think, as shown by the changes in the regression tests. It's nearly free to detect too, since we are just piggybacking on the existing logic that detects redundant pathkeys. (In some of the existing plans that change, it's visible that a sort step preceding the grouping step already didn't bother to sort by the redundant column, making the old plan a bit silly-looking.) To do this, build processed_groupClause and processed_distinctClause lists that omit any provably-redundant sort items, and consult those not the originals where relevant. This means that within the planner, one should usually consult root->processed_groupClause or root->processed_distinctClause if one wants to know which columns are to be grouped on; but to check whether grouping or distinct-ing is happening at all, check non-NIL-ness of parse->groupClause or parse->distinctClause. This is comparable to longstanding rules about handling the HAVING clause, so I don't think it'll be a huge maintenance problem. nodeAgg.c also needs minor mods, because it's now possible to generate AGG_PLAIN and AGG_SORTED Agg nodes with zero grouping columns. Patch by me; thanks to Richard Guo and David Rowley for review. Discussion: https://postgr.es/m/185315.1672179489@sss.pgh.pa.us
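A sketch with a hypothetical table t (not taken from the commit): the WHERE clause proves y redundant, so the planner can now group on x alone.

  CREATE TABLE t (x int, y int);
  SELECT x, y, count(*)
    FROM t
   WHERE x = y
   GROUP BY x, y;   -- grouping effectively happens on x only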
* Revert "Get rid of the "new" and "old" entries in a view's rangetable."Tom Lane2023-01-11
| | | | | | | | | | | This reverts commit 1b4d280ea1eb7ddb2e16654d5fa16960bb959566. It's broken the buildfarm members that run cross-version-upgrade tests, because they're not prepared to deal with cosmetic differences between CREATE VIEW commands emitted by older servers and HEAD. Even if we had a solution to that, which we don't, it'd take some time to roll it out to the affected animals. This improvement isn't valuable enough to justify addressing that problem on an emergency basis, so revert it for now.
* Get rid of the "new" and "old" entries in a view's rangetable.Tom Lane2023-01-11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rule system needs "old" and/or "new" pseudo-RTEs in rule actions that are ON INSERT/UPDATE/DELETE. Historically it's put such entries into the ON SELECT rules of views as well, but those are really quite vestigial. The only thing we've used them for is to carry the view's relid forward to AcquireExecutorLocks (so that we can re-lock the view to verify it hasn't changed before re-using a plan) and to carry its relid and permissions data forward to execution-time permissions checks. What we can do instead of that is to retain these fields of the RTE_RELATION RTE for the view even after we convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of extra complication in the planner and AcquireExecutorLocks, but on the other hand we can get rid of the logic that moves that data from one place to another. The principal immediate benefit of doing this, aside from a small saving in the pg_rewrite data for views, is that these pseudo-RTEs no longer trigger ruleutils.c's heuristic about qualifying variable names when the rangetable's length is more than 1. That results in quite a number of small simplifications in regression test outputs, which are all to the good IMO. Bump catversion because we need to dump a few more fields of RTE_SUBQUERY RTEs. While those will always be zeroes anyway in stored rules (because we'd never populate them until query rewrite) they are useful for debugging, and it seems like we'd better make sure to transmit such RTEs accurately in plans sent to parallel workers. I don't think the executor actually examines these fields after startup, but someday it might. Amit Langote Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
* Check relkind before using TABLESAMPLE in postgres_fdwTomas Vondra2023-01-07
Check the remote relkind before trying to use TABLESAMPLE to acquire a sample from the remote relation. Even if the remote server version has TABLESAMPLE support, the foreign table may point to an incompatible relkind (e.g. a view or a sequence). If the relkind does not support TABLESAMPLE, error out if TABLESAMPLE was requested specifically (as system/bernoulli), or fall back to random just like we do for old server versions.

We currently end up disabling sampling for such relkind values anyway, due to reltuples being -1 or 1, but that seems rather accidental, and might get broken by improving reltuples estimates, etc. So better to make the check explicit.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/951485.1672461744%40sss.pgh.pa.us
* Fix stale comment about sample_frac adjustmentTomas Vondra2023-01-06
A comment was left behind referencing the sample rate adjustment removed in 8ad51b5f44, so clean that up. While at it, also remove the sample rate clamping, which should not be necessary without that adjustment, and just check the value with an assert.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/951485.1672461744%40sss.pgh.pa.us
* Update copyright for 2023Bruce Momjian2023-01-02
| | | | Backpatch-through: 11
* Sample postgres_fdw tables remotely during ANALYZETomas Vondra2022-12-30
When collecting an ANALYZE sample on foreign tables, postgres_fdw fetched all rows and performed the sampling locally. For large tables this meant transferring and immediately discarding large amounts of data.

This commit allows the sampling to be performed on the remote server, transferring only the much smaller sample. The sampling is performed using the built-in TABLESAMPLE methods (system, bernoulli) or the random() function, depending on the remote server version.

Remote sampling can be enabled by the analyze_sampling option on the foreign server and/or foreign table, with supported values 'off', 'auto', 'system', 'bernoulli' and 'random'. The default value is 'auto', which uses either 'bernoulli' (TABLESAMPLE method) or 'random' (for remote servers without TABLESAMPLE support).
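As a sketch of how this might be configured (server and table names hypothetical):

  -- sample remotely using the bernoulli TABLESAMPLE method
  ALTER SERVER remote_srv OPTIONS (ADD analyze_sampling 'bernoulli');
  -- or override per foreign table; 'auto' picks bernoulli or random() as available
  ALTER FOREIGN TABLE remote_tbl OPTIONS (ADD analyze_sampling 'auto');
  ANALYZE remote_tbl;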
* Fix bug in translate_col_privs_multilevelDavid Rowley2022-12-24
Fix incorrect code which was trying to take a Bitmapset of columns with attnums according to a parent table and transform it into the equivalent Bitmapset with the attnums according to the given child table. This code is new as of a61b1f748 and was failing to do the correct translation when there was an intermediate parent table between 'rel' and 'top_parent_rel'.

Reported-by: Ranier Vilela
Author: Richard Guo, Amit Langote
Discussion: https://postgr.es/m/CAEudQArohfB_Gy%2BhcH2-bANUkxgjJiP%3DABq01_LgTNTbcNijag%40mail.gmail.com
* Add copyright notices to meson filesAndrew Dunstan2022-12-20
| | | | Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
* Allow batching of inserts during cross-partition updates.Etsuro Fujita2022-12-20
Commit 927f453a9 disallowed the batching added by commit b663a4136 from being used for the inserts performed as part of cross-partition updates of partitioned tables, mainly because the previous code in nodeModifyTable.c couldn't handle pending inserts into foreign-table partitions that are also UPDATE target partitions. But we don't have such a limitation anymore (cf. commit ffbb7e65a), so let's allow for this by removing from execPartition.c the restriction added by commit 927f453a9 that batching is only allowed if the query command type is CMD_INSERT.

In postgres_fdw, since commit 86dc90056 changed it to effectively disable cross-partition updates in the case where a foreign-table partition chosen to insert rows into is also an UPDATE target partition, allow batching in the case where a foreign-table partition chosen to do so is *not* also an UPDATE target partition. This is enabled by the "batch_size" option added by commit b663a4136, which is disabled by default.

This patch also adjusts the test case added by commit 927f453a9 to confirm that the inserts performed as part of a cross-partition update of a partitioned table indeed use batching.

Amit Langote, reviewed and/or tested by Georgios Kokolatos, Zhihong Yu, Bharath Rupireddy, Hou Zhijie, Vignesh C, and me.
Discussion: http://postgr.es/m/CA%2BHiwqH1Lz1yJmPs%3DaD-pzd_HLLynLHvq5iYeT9mB0bBV7oJ6w%40mail.gmail.com
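A hedged sketch of the effect (object names hypothetical): with batch_size set, the inserts that move rows into a foreign-table partition during a cross-partition UPDATE can now be flushed in batches, as long as that partition is not itself an UPDATE target.

  ALTER SERVER remote_srv OPTIONS (ADD batch_size '100');
  -- rows re-routed into a foreign-table partition may now be sent in batches
  UPDATE parted_tbl SET part_key = 2 WHERE part_key = 1;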
* C comment: fix wordingBruce Momjian2022-12-16
| | | | Backpatch-through: master
* Remove pessimistic cost penalization from Incremental SortDavid Rowley2022-12-16
When incremental sorts were added in v13, a 1.5x pessimism factor was added to the cost model. Seemingly this was done because the cost model only has an estimate of the total number of input rows and the number of presorted groups. It assumes that the input rows will be evenly distributed throughout the presorted groups. The 1.5x pessimism factor was added to slightly reduce the likelihood of incremental sorts being used, in the hope of avoiding performance regressions where an incremental sort plan was picked and turned out slower due to a large skew in the number of rows in the presorted groups.

An additional quirk with the path generation code meant that we could consider both a sort and an incremental sort on paths with presorted keys. This meant that with the pessimism factor, it was possible that we opted to perform a sort rather than an incremental sort when the given path had presorted keys.

Here we remove the 1.5x pessimism factor to allow incremental sorts to have a fairer chance at being chosen against a full sort. Previously we would generally create a sort path on the cheapest input path (if that wasn't sorted already) and incremental sort paths on any path which had presorted keys. This meant that if the cheapest input path wasn't completely sorted but happened to have presorted keys, we would create a full sort path *and* an incremental sort path on that input path. Here we change this logic so that if there are presorted keys, we only create an incremental sort path, and create sort paths only when a full sort is required.

Both the removal of the cost pessimism factor and the changes made to the path generation make it more likely that incremental sorts will now be chosen. That, of course, as with teaching the planner any new tricks, means an increased likelihood that the planner will perform an incremental sort when it's not the best method. Our standard escape hatch for these cases is an enable_* GUC; enable_incremental_sort already exists for this.

This came out of a report by Pavel Luzanov where he mentioned that the master branch was choosing to perform a Seq Scan -> Sort -> Group Aggregate for his query with an ORDER BY aggregate function. The v15 plan for his query performed an Index Scan -> Group Aggregate; of course, the aggregate performed the final sort internally in nodeAgg.c for the aggregate's ORDER BY. The ideal plan would have been to use the index, which provided partially sorted input, and then use an incremental sort to provide the aggregate with the sorted input. This was not being chosen due to the pessimism in the incremental sort cost model, so here we remove that and rationalize the path generation so that sort and incremental sort plans don't have to needlessly compete. We assume that it's senseless to ever use a full sort on a given input path where an incremental sort can be performed.

Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
* Rework query relation permission checkingAlvaro Herrera2022-12-06
Currently, information about the permissions to be checked on relations mentioned in a query is stored in their range table entries. So the executor must scan the entire range table looking for relations that need to have permissions checked. This can make the permission checking part of the executor initialization needlessly expensive when many inheritance children are present in the range table. While the permissions need not be checked on the individual child relations, the executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range table entries into a new plan node called RTEPermissionInfo. Every top-level (inheritance "root") RTE_RELATION entry in the range table gets one, and a list of those is maintained alongside the range table. This new list is initialized by the parser when initializing the range table. The rewriter can add more entries to it as rules/views are expanded. Finally, the planner combines the lists of the individual subqueries into one flat list that is passed to the executor for checking.

To make it quick to find the RTEPermissionInfo entry belonging to a given relation, RangeTblEntry gets a new Index field 'perminfoindex' that stores the corresponding RTEPermissionInfo's index in the query's list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the signature is now:

  typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
                                                List *rtePermInfos,
                                                bool ereport_on_violation);

The first argument is no longer used by any in-core uses of the hook, but we leave it in place because there may be other implementations that do. Implementations should likely scan the rtePermInfos list to determine which operations to allow or deny.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
* Stop accessing checkAsUser via RTE in some casesAlvaro Herrera2022-11-30
| | | | | | | | | | | | | | | | | | | | A future commit will move the checkAsUser field from RangeTblEntry to a new node that, unlike RTEs, will only be created for tables mentioned in the query but not for the inheritance child relations added to the query by the planner. So, checkAsUser value for a given child relation will have to be obtained by referring to that for its ancestor mentioned in the query. In preparation, it seems better to expand the use of RelOptInfo.userid during planning in place of rte->checkAsUser so that there will be fewer places to adjust for the above change. Given that the child-to-ancestor mapping is not available during the execution of a given "child" ForeignScan node, add a checkAsUser field to ForeignScan to carry the child relation's RelOptInfo.userid. Author: Amit Langote <amitlangote09@gmail.com> Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
* Fix handling of pending inserts in nodeModifyTable.c.Etsuro Fujita2022-11-25
| | | | | | | | | | | | | | | | | | | | | | | | | | | Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added to nodeModifyTable.c code to flush pending inserts to the foreign-table result relation(s) before completing processing of the ModifyTable node, but the code failed to take into account the case where the INSERT query has modifying CTEs, leading to incorrect results. Also, that commit failed to flush pending inserts before firing BEFORE ROW triggers so that rows are visible to such triggers. In that commit we scanned through EState's es_tuple_routing_result_relations or es_opened_result_relations list to find the foreign-table result relations to which pending inserts are flushed, but that would be inefficient in some cases. So to fix, 1) add a List member to EState to record the insert-pending result relations, and 2) modify nodeModifyTable.c so that it adds the foreign-table result relation to the list in ExecInsert() if appropriate, and flushes pending inserts properly using the list where needed. While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(), which was added by that commit. Back-patch to v14 where that commit appeared. Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
* Avoid making commutatively-duplicate clauses in EquivalenceClasses.Tom Lane2022-10-27
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When we decide we need to make a derived clause equating a.x and b.y, we already will re-use a previously-made clause "a.x = b.y". But we might instead have "b.y = a.x", which is perfectly usable because equivclass.c has never promised anything about the operand order in clauses it builds. Saving construction of a new RestrictInfo doesn't matter all that much in itself --- but because we cache selectivity estimates and so on per-RestrictInfo, there's a possibility of saving a fair amount of duplicative effort downstream. Hence, check for commutative matches as well as direct ones when seeing if we have a pre-existing clause. This changes the visible clause order in several regression test cases, but they're all clearly-insignificant changes. Checking for the reverse operand order is simple enough, but if we wanted to check for operator OID match we'd need to call get_commutator here, which is not so cheap. I concluded that we don't really need the operator check anyway, so I just removed it. It's unlikely that an opfamily contains more than one applicable operator for a given pair of operand datatypes; and if it does they had better give the same answers, so there seems little need to insist that we use exactly the one select_equality_operator chose. Using the current core regression suite as a test case, I see this change reducing the number of new join clauses built by create_join_clause from 9673 to 5142 (out of 26652 calls). So not quite 50% savings, but pretty close to it. Discussion: https://postgr.es/m/78062.1666735746@sss.pgh.pa.us
* Fix confusion about havingQual vs hasHavingQual in planner.Tom Lane2022-10-18
| | | | | | | | | | | | | | | | | | | | | | | Preprocessing of the HAVING clause will reduce havingQual to NIL if the clause is constant-TRUE. This is one case where that convention is rather unfortunate, because "HAVING TRUE" is not at all the same as not having any HAVING clause at all. (Per the SQL spec, it still forces the query to be grouped.) The planner deals with this by having a boolean hasHavingQual that records whether havingQual was originally nonempty; places that just want to check whether HAVING was specified are supposed to consult that. I found three places that got that wrong. Fortunately, these could only affect cost estimates not correctness. It'd be hard even to demonstrate the errors; for example, the one in allpaths.c would only matter in a query that has HAVING TRUE but no GROUP BY and no aggregates, which would require a completely variable-free SELECT list, making the case probably of only academic interest. Hence, while these are worth fixing before someone copies the incorrect coding somewhere more critical, they don't seem worth back-patching. I didn't bother trying to devise regression tests, either. Discussion: https://postgr.es/m/2503888.1666042643@sss.pgh.pa.us
* Rename SetSingleFuncCall() to InitMaterializedSRF()Michael Paquier2022-10-18
| | | | | | | | | | | | | | | | | | Per discussion, the existing routine name able to initialize a SRF function with materialize mode is unpopular, so rename it. Equally, the flags of this function are renamed, as of: - SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC - SRF_SINGLE_BLESS -> MAT_SRF_BLESS The previous function and flags introduced in 9e98583 are kept around for compatibility purposes, so as any extension code already compiled with v15 continues to work as-is. The declarations introduced here for compatibility will be removed from HEAD in a follow-up commit. The new names have been suggested by Andres Freund and Melanie Plageman. Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de Backpatch-through: 15
* Disallow MERGE cleanly for foreign partitionsAlvaro Herrera2022-10-15
While directly targeting a foreign table with MERGE was already expressly forbidden, we failed to catch the case of a partitioned table that has a foreign table as a partition, and the result if you try is an incomprehensible error. Fix that by adding a specific check.

Backpatch to 15.

Reported-by: Tatsuhiro Nakamori <bt22nakamorit@oss.nttdata.com>
Discussion: https://postgr.es/m/bt22nakamorit@oss.nttdata.com
* Allow batch insertion during COPY into a foreign table.Etsuro Fujita2022-10-13
| | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 3d956d956 allowed the COPY, but it's done by inserting individual rows to the foreign table, so it can be inefficient due to the overhead caused by each round-trip to the foreign server. To improve performance of the COPY in such a case, this patch allows batch insertion, by extending the multi-insert machinery in CopyFrom() to the foreign-table case so that we insert multiple rows to the foreign table at once using the FDW callback routine added by commit b663a4136. This patch also allows this for postgres_fdw. It is enabled by the "batch_size" option added by commit b663a4136, which is disabled by default. When doing batch insertion, we update progress of the COPY command after performing the FDW callback routine, to count rows not suppressed by the FDW as well as a BEFORE ROW INSERT trigger. For consistency, this patch changes the timing of updating it for plain tables: previously, we updated it immediately after adding each row to the multi-insert buffer, but we do so only after writing the rows stored in the buffer out to the table using table_multi_insert(), which I think would be consistent even with non-batching mode, because in that mode we update it after writing each row out to the table using table_tuple_insert(). Andrey Lepikhov, heavily revised by me, with review from Ian Barwick, Andrey Lepikhov, and Zhihong Yu. Discussion: https://postgr.es/m/bc489202-9855-7550-d64c-ad2d83c24867%40postgrespro.ru
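A minimal sketch, assuming a foreign table remote_tbl already exists (names hypothetical):

  ALTER FOREIGN TABLE remote_tbl OPTIONS (ADD batch_size '100');
  -- rows are now sent to the remote server in batches of up to 100
  COPY remote_tbl FROM STDIN WITH (FORMAT csv);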
* Fix final warnings produced by -Wshadow=compatible-localDavid Rowley2022-10-07
| | | | | | | | I thought I had these in d8df67bb1, but per report from Andres Freund, I missed some. Reviewed-by: Andres Freund Discussion: https://postgr.es/m/20221005214052.c4tkudawyp5wxt3c@awork3.anarazel.de
* meson: Add windows resource filesAndres Freund2022-10-05
The generated resource files aren't exactly the same as the ones the old build systems generate. Previously "InternalName" and "OriginalFileName" were mostly wrong / not set (despite being required), but that was hard to fix in at least the make build. Additionally, the meson build falls back to an "auto-generated" description when one is not set, and doesn't set it in a few cases; it is unlikely that anybody looks at these descriptions in detail.

Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
* Revert "Optimize order of GROUP BY keys".Tom Lane2022-10-03
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit db0d67db2401eb6238ccc04c6407a4fd4f985832 and several follow-on fixes. The idea of making a cost-based choice of the order of the sorting columns is not fundamentally unsound, but it requires cost information and data statistics that we don't really have. For example, relying on procost to distinguish the relative costs of different sort comparators is pretty pointless so long as most such comparator functions are labeled with cost 1.0. Moreover, estimating the number of comparisons done by Quicksort requires more than just an estimate of the number of distinct values in the input: you also need some idea of the sizes of the larger groups, if you want an estimate that's good to better than a factor of three or so. That's data that's often unknown or not very reliable. Worse, to arrive at estimates of the number of calls made to the lower-order-column comparison functions, the code needs to make estimates of the numbers of distinct values of multiple columns, which are necessarily even less trustworthy than per-column stats. Even if all the inputs are perfectly reliable, the cost algorithm as-implemented cannot offer useful information about how to order sorting columns beyond the point at which the average group size is estimated to drop to 1. Close inspection of the code added by db0d67db2 shows that there are also multiple small bugs. These could have been fixed, but there's not much point if we don't trust the estimates to be accurate in-principle. Finally, the changes in cost_sort's behavior made for very large changes (often a factor of 2 or so) in the cost estimates for all sorting operations, not only those for multi-column GROUP BY. That naturally changes plan choices in many situations, and there's precious little evidence to show that the changes are for the better. Given the above doubts about whether the new estimates are really trustworthy, it's hard to summon much confidence that these changes are better on the average. Since we're hard up against the release deadline for v15, let's revert these changes for now. We can always try again later. Note: in v15, I left T_PathKeyInfo in place in nodes.h even though it's unreferenced. Removing it would be an ABI break, and it seems a bit late in the release cycle for that. Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
* Harmonize parameter names in contrib code.Peter Geoghegan2022-09-22
| | | | | | | | | | | Make sure that function declarations use names that exactly match the corresponding names from function definitions in contrib code. Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
* meson: Add initial version of meson based build systemAndres Freund2022-09-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Autoconf is showing its age, fewer and fewer contributors know how to wrangle it. Recursive make has a lot of hard to resolve dependency issues and slow incremental rebuilds. Our home-grown MSVC build system is hard to maintain for developers not using Windows and runs tests serially. While these and other issues could individually be addressed with incremental improvements, together they seem best addressed by moving to a more modern build system. After evaluating different build system choices, we chose to use meson, to a good degree based on the adoption by other open source projects. We decided that it's more realistic to commit a relatively early version of the new build system and mature it in tree. This commit adds an initial version of a meson based build system. It supports building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD, Solaris and Windows (however only gcc is supported on aix, solaris). For Windows/MSVC postgres can now be built with ninja (faster, particularly for incremental builds) and msbuild (supporting the visual studio GUI, but building slower). Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM bitcode generation, documentation adjustments) are done in subsequent commits requiring further review. Other aspects (e.g. not installing test-only extensions) are not yet addressed. When building on Windows with msbuild, builds are slower when using a visual studio version older than 2019, because those versions do not support MultiToolTask, required by meson for intra-target parallelism. The plan is to remove the MSVC specific build system in src/tools/msvc soon after reaching feature parity. However, we're not planning to remove the autoconf/make build system in the near future. Likely we're going to keep at least the parts required for PGXS to keep working around until all supported versions build with meson. Some initial help for postgres developers is at https://wiki.postgresql.org/wiki/Meson With contributions from Thomas Munro, John Naylor, Stone Tickle and others. Author: Andres Freund <andres@anarazel.de> Author: Nazir Bilal Yavuz <byavuz81@gmail.com> Author: Peter Eisentraut <peter@eisentraut.org> Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com> Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
* postgres_fdw: Remove useless DO block in testPeter Eisentraut2022-09-16
| | | | Discussion: https://www.postgresql.org/message-id/flat/b1f9f399-3a1a-b554-283f-4ae7f34608e2@enterprisedb.com
* Adjust assorted hint messages that list all valid options.Peter Eisentraut2022-09-16
| | | | | | | | | | Instead of listing all valid options, we now try to provide one that looks similar. Since this may be useful elsewhere, this change introduces a new set of functions that can be reused for similar purposes. Author: Nathan Bossart <nathandbossart@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/b1f9f399-3a1a-b554-283f-4ae7f34608e2@enterprisedb.com
* postgres_fdw: Avoid 'variable not found in subplan target list' error.Etsuro Fujita2022-09-14
The tlist of the EvalPlanQual outer plan for a ForeignScan node is adjusted to produce a tuple whose descriptor matches the scan tuple slot for the ForeignScan node. But in the case where the outer plan contains an extra Sort node, if the new tlist contained columns required only for evaluating PlaceHolderVars or columns required only for evaluating local conditions, this would cause setrefs.c to fail with that error. The cause of this is that when creating the outer plan by injecting the Sort node into an alternative local join plan that could emit such extra columns as well, we fail to arrange for the outer plan to propagate them up through the Sort node, causing setrefs.c to fail to match them up in the new tlist to what is available from the outer plan. Repair.

Per report from Alexander Pyhalov.

Richard Guo and Etsuro Fujita, reviewed by Alexander Pyhalov and Tom Lane.
Backpatch to all supported versions.
Discussion: http://postgr.es/m/cfb17bf6dfdf876467bd5ef533852d18%40postgrespro.ru
* postgres_fdw: Disable batch insertion when there are WCO constraints.Etsuro Fujita2022-08-05
When inserting into a view referencing a foreign table that has WITH CHECK OPTION constraints, in single-insert mode postgres_fdw retrieves the data that was actually inserted on the remote side, so that the WITH CHECK OPTION constraints are enforced locally against that data. But in batch-insert mode it cannot currently retrieve the data (except for the row first inserted through the view), so it ends up enforcing the WITH CHECK OPTION constraints against the data passed from the core (except for the first-inserted row). This led to incorrect results when inserting into a view referencing a foreign table on which a remote BEFORE ROW INSERT trigger changes the rows inserted through the view so that they violate the view's WITH CHECK OPTION constraint. Also, the query inserting into the view caused an assertion failure in assert-enabled builds.

Fix these by disabling batch insertion when inserting into such a view.

Back-patch to v14 where batch insertion was added.

Discussion: https://postgr.es/m/CAPmGK17LpbTZs4m4a_6THP54UBeK9fHvX8aVVA%2BC6yEZDZwQcg%40mail.gmail.com
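A sketch of the affected setup (all names hypothetical; remote_tbl is assumed to have an integer column a): inserts through such a view now fall back to single-row mode so the constraint is checked against what was actually stored remotely.

  CREATE VIEW checked_view AS
    SELECT * FROM remote_tbl WHERE a > 0
    WITH CHECK OPTION;
  -- not batched, even if batch_size is set on remote_tbl or its server
  INSERT INTO checked_view (a) VALUES (1), (2), (3);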
* Improve performance of ORDER BY / DISTINCT aggregatesDavid Rowley2022-08-02
ORDER BY / DISTINCT aggregates have, since they were implemented in Postgres, been executed by always performing a sort in nodeAgg.c to sort the tuples in the current group into the correct order before calling the transition function on the sorted tuples. This was not great as often there might be an index that could have provided pre-sorted input and allowed the transition functions to be called as the rows come in, rather than having to store them in a tuplestore in order to sort them once all the tuples for the group have arrived.

Here we change the planner so it requests a path with a sort order which supports the largest number of ORDER BY / DISTINCT aggregate functions and add new code to the executor to allow it to support the processing of ORDER BY / DISTINCT aggregates where the tuples are already sorted in the correct order.

Since there can be many ORDER BY / DISTINCT aggregates in any given query level, it's very possible that we can't find an order that suits all of these aggregates. The sort order that the planner chooses is simply the one that suits the most aggregate functions. We take the most strictly sorted variation of each order and see how many aggregate functions can use that, then we try again with the order of the remaining aggregates to see if another order would suit more aggregate functions. For example:

  SELECT agg(a ORDER BY a), agg2(a ORDER BY a, b) ...

would request the sort order to be {a, b} because {a} is a subset of the sort order of {a, b}, but

  SELECT agg(a ORDER BY a), agg2(a ORDER BY c) ...

would just pick a plan ordered by {a} (we give precedence to aggregates which are earlier in the targetlist).

  SELECT agg(a ORDER BY a), agg2(a ORDER BY b), agg3(a ORDER BY b) ...

would choose to order by {b} since two aggregates suit that vs just one that requires input ordered by {a}.

Author: David Rowley
Reviewed-by: Ronan Dunklau, James Coleman, Ranier Vilela, Richard Guo, Tom Lane
Discussion: https://postgr.es/m/CAApHDvpHzfo92%3DR4W0%2BxVua3BUYCKMckWAmo-2t_KiXN-wYH%3Dw%40mail.gmail.com
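An illustrative sketch (hypothetical table and index): an index delivering rows already ordered by a lets the transition function consume rows as they arrive instead of sorting them inside nodeAgg.c.

  CREATE TABLE t (a int, b int);
  CREATE INDEX t_a_idx ON t (a);
  -- the planner can now request input sorted by a for this aggregate
  SELECT array_agg(a ORDER BY a) FROM t;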
* postgres_fdw: Fix bug in checking of return value of PQsendQuery().Fujii Masao2022-07-22
When postgres_fdw begins an asynchronous data fetch, it submits a FETCH query by using PQsendQuery(). If PQsendQuery() fails and returns 0, postgres_fdw should report an error. But, previously, postgres_fdw reported an error only when the return value was less than 0, even though PQsendQuery() never returns values other than 0 and 1. Therefore postgres_fdw could not handle a failure to send the FETCH query in an asynchronous data fetch.

This commit fixes postgres_fdw so that it reports an error when PQsendQuery() returns 0.

Back-patch to v14 where asynchronous execution was supported in postgres_fdw.

Author: Fujii Masao
Reviewed-by: Japin Li, Tom Lane
Discussion: https://postgr.es/m/b187a7cf-d4e3-5a32-4d01-8383677797f3@oss.nttdata.com
* Tweak detail and hint messages to be consistent with project policyMichael Paquier2022-07-20
| | | | | | | | | | | Detail and hint messages should be full sentences and should end with a period, but some of the messages newly-introduced in v15 did not follow that. Author: Justin Pryzby Reviewed-by: Álvaro Herrera Discussion: https://postgr.es/m/20220719120948.GF12702@telsasoft.com Backpatch-through: 15
* Remove now superfluous declarations of dlsym()ed symbols.Andres Freund2022-07-17
| | | | | | | | The prior commit declared them centrally. Author: Andres Freund <andres@anarazel.de> Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/20211101020311.av6hphdl6xbjbuif@alap3.anarazel.de
* postgres_fdw: be more wary about shippability of reg* constants.Tom Lane2022-07-17
| | | | | | | | | | | | | | | | | | | | | | | | | | Don't consider a constant of regconfig or other reg* types to be shippable unless it refers to a built-in object, or an object in an extension that's been marked shippable. Without this restriction, we're too likely to send a constant that will fail to parse on the remote server. For the regconfig type only, consider OIDs up to 16383 to be "built in", rather than the normal cutoff of 9999. Otherwise the initdb-created text search configurations will be considered unshippable, which is unlikely to make anyone happy. It's possible that this new restriction will de-optimize queries that were working satisfactorily before. Users can restore any lost performance by making sure that objects that can be expected to exist on the remote side are in shippable extensions. However, that's not a change that people are likely to be happy about having to make after a minor-release update. Between that consideration and the lack of field complaints, let's just change this in HEAD. Noted while fixing bug #17483, although this is not precisely the problem that that report complained about. Discussion: https://postgr.es/m/1423433.1652722406@sss.pgh.pa.us
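A hedged sketch of the remedy the commit suggests (server, extension, and table names hypothetical): list the extension owning the remote-side objects as shippable so constants of its reg* types can be sent again.

  ALTER SERVER remote_srv OPTIONS (ADD extensions 'my_fts_extension');
  -- regconfig constants naming initdb-created configurations remain shippable
  SELECT * FROM remote_docs
   WHERE to_tsvector('english', body) @@ to_tsquery('english', 'example');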
* postgres_fdw: set search_path to 'pg_catalog' while deparsing constants.Tom Lane2022-07-17
| | | | | | | | | | | | | The motivation for this is to ensure successful transmission of the values of constants of regconfig and other reg* types. The remote will be reading them with search_path = 'pg_catalog', so schema qualification is necessary when referencing objects in other schemas. Per bug #17483 from Emmanuel Quincerot. Back-patch to all supported versions. (There's some other stuff to do here, but it's less back-patchable.) Discussion: https://postgr.es/m/1423433.1652722406@sss.pgh.pa.us
* Attempt to fix compiler warning on old compilerPeter Eisentraut2022-07-16
| | | | | A couple more like b449afb582bb9015bfbb85abc10ce122aef9ec70, per complaints from lapwing.
* Replace many MemSet calls with struct initializationPeter Eisentraut2022-07-16
| | | | | | | | | | | | | | This replaces all MemSet() calls with struct initialization where that is easily and obviously possible. (For example, some cases have to worry about padding bits, so I left those.) (The same could be done with appropriate memset() calls, but this patch is part of an effort to phase out MemSet(), so it doesn't touch memset() calls.) Reviewed-by: Ranier Vilela <ranier.vf@gmail.com> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://www.postgresql.org/message-id/9847b13c-b785-f4e2-75c3-12ec77a3b05c@enterprisedb.com
* Support TRUNCATE triggers on foreign tables.Fujii Masao2022-07-12
Now that some foreign data wrappers support the TRUNCATE command, it's useful to support TRUNCATE triggers on foreign tables, for audit logging or for preventing undesired truncation.

Author: Yugo Nagata
Reviewed-by: Fujii Masao, Ian Lawrence Barwick
Discussion: https://postgr.es/m/20220630193848.5b02e0d6076b86617a915682@sraoss.co.jp
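A minimal sketch (function and foreign-table names hypothetical); TRUNCATE triggers are statement-level only:

  CREATE FUNCTION audit_truncate() RETURNS trigger
    LANGUAGE plpgsql AS $$
  BEGIN
    RAISE NOTICE 'truncating %', TG_TABLE_NAME;
    RETURN NULL;
  END; $$;

  CREATE TRIGGER remote_tbl_truncate
    BEFORE TRUNCATE ON remote_tbl
    FOR EACH STATEMENT EXECUTE FUNCTION audit_truncate();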
* postgres_fdw: Fix grammar.Etsuro Fujita2022-07-07
| | | | Oversight in commit 4036bcbbb; back-patch to v15 where that appeared.
* Remove redundant null pointer checks before PQclear and PQconninfoFreePeter Eisentraut2022-07-03
| | | | | | | | These functions already had the free()-like behavior of handling null pointers as a no-op. But it wasn't documented, so add it explicitly to the documentation, too. Discussion: https://www.postgresql.org/message-id/flat/dac5d2d0-98f5-94d9-8e69-46da2413593d%40enterprisedb.com
* Pre-beta mechanical code beautification.Tom Lane2022-05-12
| | | | | Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.
* postgres_fdw: Update comments in make_new_connection().Etsuro Fujita2022-05-12
| | | | | | | | | | | | | | Expand the comment about the parallel_commit option to mention that the default is false. Also, since the comment about alteration of the keep_connections option, which was located above the expanded comment, holds true for the parallel_commit option, rewrite it to reflect this, and move it to after the expanded comment. Follow-up for commit 04e706d42. Discussion: https://postgr.es/m/CAPmGK16Kg2Bf90sqzcZ4YM5cN_G-4h7wFUS01qQpqNB%2B2BG5_w%40mail.gmail.com
* Disable asynchronous execution if using gating Result nodes.Etsuro Fujita2022-04-28
| | | | | | | | | | | | | | | | | | | | | | | | | | | | mark_async_capable_plan(), which is called from create_append_plan() to determine whether subplans are async-capable, failed to take into account that the given subplan created from a given subpath might include a gating Result node if the subpath is a SubqueryScanPath or ForeignPath, causing a segmentation fault there when the subplan created from a SubqueryScanPath includes the Result node, or causing ExecAsyncRequest() to throw an error about an unrecognized node type when the subplan created from a ForeignPath includes the Result node, because in the latter case the Result node was unintentionally considered as async-capable, but we don't currently support executing Result nodes asynchronously. Fix by modifying mark_async_capable_plan() to disable asynchronous execution in such cases. Also, adjust code in the ProjectionPath case in mark_async_capable_plan(), for consistency with other cases, and adjust/improve comments there. is_async_capable_path() added in commit 27e1f1456, which was rewritten to mark_async_capable_plan() in a later commit, has the same issue, causing the error at execution mentioned above, so back-patch to v14 where the aforesaid commit went in. Per report from Justin Pryzby. Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby. Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
* postgres_fdw: Disable batch insert when BEFORE ROW INSERT triggers exist.Etsuro Fujita2022-04-21
| | | | | | | | | | | Previously, we allowed this, but such triggers might query the table to insert into and act differently if the tuples that have already been processed and prepared for insertion are not there, so disable it in such cases. Back-patch to v14 where batch insert was added. Discussion: https://postgr.es/m/CAPmGK16_uPqsmgK0-LpLSUk54_BoK13bPrhxhfjSoSTVz414hA%40mail.gmail.com
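A sketch of the newly handled case (names hypothetical): a local BEFORE ROW INSERT trigger on the foreign table now switches its inserts back to single-row mode.

  CREATE FUNCTION before_ins() RETURNS trigger
    LANGUAGE plpgsql AS $$ BEGIN RETURN NEW; END; $$;
  CREATE TRIGGER remote_tbl_before_ins
    BEFORE INSERT ON remote_tbl
    FOR EACH ROW EXECUTE FUNCTION before_ins();
  -- INSERTs into remote_tbl now run row by row, even if batch_size is set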
* Remove extraneous blank lines before block-closing bracesAlvaro Herrera2022-04-13
| | | | | | | | | These are useless and distracting. We wouldn't have written the code with them to begin with, so there's no reason to keep them. Author: Justin Pryzby <pryzby@telsasoft.com> Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
* Allow asynchronous execution in more cases.Etsuro Fujita2022-04-06
| | | | | | | | | | | | | | | | In commit 27e1f1456, create_append_plan() only allowed the subplan created from a given subpath to be executed asynchronously when it was an async-capable ForeignPath. To extend coverage, this patch handles cases when the given subpath includes some other Path types as well that can be omitted in the plan processing, such as a ProjectionPath directly atop an async-capable ForeignPath, allowing asynchronous execution in partitioned-scan/partitioned-join queries with non-Var tlist expressions and more UNION queries. Andrey Lepikhov and Etsuro Fujita, reviewed by Alexander Pyhalov and Zhihong Yu. Discussion: https://postgr.es/m/659c37a8-3e71-0ff2-394c-f04428c76f08%40postgrespro.ru
* Fix postgres_fdw to check shippability of sort clauses properly.Tom Lane2022-03-31
| | | | | | | | | | | | | | | | | | | | | | | | | postgres_fdw would push ORDER BY clauses to the remote side without verifying that the sort operator is safe to ship. Moreover, it failed to print a suitable USING clause if the sort operator isn't default for the sort expression's type. The net result of this is that the remote sort might not have anywhere near the semantics we expect, which'd be disastrous for locally-performed merge joins in particular. We addressed similar issues in the context of ORDER BY within an aggregate function call in commit 7012b132d, but failed to notice that query-level ORDER BY was broken. Thus, much of the necessary logic already existed, but it requires refactoring to be usable in both cases. Back-patch to all supported branches. In HEAD only, remove the core code's copy of find_em_expr_for_rel, which is no longer used and really should never have been pushed into equivclass.c in the first place. Ronan Dunklau, per report from David Rowley; reviews by David Rowley, Ranier Vilela, and myself Discussion: https://postgr.es/m/CAApHDvr4OeC2DBVY--zVP83-K=bYrTD7F8SZDhN4g+pj2f2S-A@mail.gmail.com
* Optimize order of GROUP BY keysTomas Vondra2022-03-31
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When evaluating a query with a multi-column GROUP BY clause using sort, the cost may be heavily dependent on the order in which the keys are compared when building the groups. Grouping does not imply any ordering, so we're allowed to compare the keys in arbitrary order, and a Hash Agg leverages this. But for Group Agg, we simply compared keys in the order as specified in the query. This commit explores alternative ordering of the keys, trying to find a cheaper one. In principle, we might generate grouping paths for all permutations of the keys, and leave the rest to the optimizer. But that might get very expensive, so we try to pick only a couple interesting orderings based on both local and global information. When planning the grouping path, we explore statistics (number of distinct values, cost of the comparison function) for the keys and reorder them to minimize comparison costs. Intuitively, it may be better to perform more expensive comparisons (for complex data types etc.) last, because maybe the cheaper comparisons will be enough. Similarly, the higher the cardinality of a key, the lower the probability we’ll need to compare more keys. The patch generates and costs various orderings, picking the cheapest ones. The ordering of group keys may interact with other parts of the query, some of which may not be known while planning the grouping. E.g. there may be an explicit ORDER BY clause, or some other ordering-dependent operation, higher up in the query, and using the same ordering may allow using either incremental sort or even eliminate the sort entirely. The patch generates orderings and picks those minimizing the comparison cost (for various pathkeys), and then adds orderings that might be useful for operations higher up in the plan (ORDER BY, etc.). Finally, it always keeps the ordering specified in the query, on the assumption the user might have additional insights. This introduces a new GUC enable_group_by_reordering, so that the optimization may be disabled if needed. The original patch was proposed by Teodor Sigaev, and later improved and reworked by Dmitry Dolgov. Reviews by a number of people, including me, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed and Zhihong Yu. Author: Dmitry Dolgov, Teodor Sigaev, Tomas Vondra Reviewed-by: Tomas Vondra, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed, Zhihong Yu Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru Discussion: https://postgr.es/m/CA%2Bq6zcW_4o2NC0zutLkOJPsFt80megSpX_dVRo6GK9PC-Jx_Ag%40mail.gmail.com
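The escape hatch named above, as a one-line sketch (note this work was later reverted; see the 2022-10-03 entry):

  -- disable cost-based reordering of GROUP BY keys for the session
  SET enable_group_by_reordering = off;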
* postgres_fdw: Minor cleanup for pgfdw_abort_cleanup().Etsuro Fujita2022-03-25
| | | | | | | | | | | | | | Commit 85c696112 introduced this function to deduplicate code in the transaction callback functions, but the SQL command passed as an argument to it was useless when it returned before aborting a remote transaction using the command. Modify pgfdw_abort_cleanup() so that it constructs the command when/if necessary, as before, removing the argument from it. Also update comments in pgfdw_abort_cleanup() and one of the calling functions. Etsuro Fujita, reviewed by David Zhang. Discussion: https://postgr.es/m/CAPmGK158hrd%3DZfXmgkmNFHivgh18e4oE2Gz151C2Q4OBDjZ08A%40mail.gmail.com
* Simplify SRFs using materialize mode in contrib/ modulesMichael Paquier2022-03-08
9e98583 introduced a helper to centralize building the state needed by such SRFs (tuplestore, tuple descriptors, etc.), checking for any errors. This commit updates all places in contrib/ that can be switched to use SetSingleFuncCall() as a drop-in replacement, resulting in the removal of a lot of boilerplate code in all the modules updated by this commit.

Per analysis, some places remain as they are:
- pg_logdir_ls() in adminpack/ has historically used TYPEFUNC_RECORD as its return type, and I suspect that changing it may cause issues at run-time with some of its past versions, down to 1.0.
- dblink/ uses a wrapper function doing exactly the work of SetSingleFuncCall(). Here the switch should be possible, but it is rather invasive, so it does not seem worth the extra backpatch maintenance cost.
- tablefunc/, similarly, uses multiple helper functions with portions of SetSingleFuncCall() spread across the code paths of this module.

Author: Melanie Plageman
Discussion: https://postgr.es/m/CAAKRu_bvDPJoL9mH6eYwvBpPtTGQwbDzfJbCM-OjkSZDu5yTPg@mail.gmail.com