path: root/contrib
* Avoid using ambiguous word "non-negative" in error messages. (Fujii Masao, 2021-07-28)

The error messages using the word "non-negative" are confusing because it's ambiguous about whether it accepts zero or not. This commit improves those error messages by replacing it with less ambiguous words like "greater than zero" or "greater than or equal to zero".

Also this commit adds a note about the word "non-negative" to the error message style guide, to help with writing new error messages.

When the postgres_fdw option fetch_size was set to zero, previously the error message "fetch_size requires a non-negative integer value" was reported. This error message was outright buggy. Therefore back-patch to all supported versions where such a buggy error message could be thrown.

Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com

* Use the "pg_temp" schema alias in EXPLAIN and related output. (Tom Lane, 2021-07-27)

This patch causes EXPLAIN output to refer to objects that are in the current session's temp schema with the "pg_temp" schema alias rather than that schema's actual name. This is useful for our own testing purposes since it will stabilize EXPLAIN VERBOSE output for such cases, allowing us to use that in regression tests. It should be less confusing for end users too.

Since ruleutils.c needs to change behavior for this, the change also leaks into a few other users of ruleutils.c, for example pg_get_viewdef(). AFAICS that won't cause any problems. We did find that aggressively trying to change this behavior across-the-board would cause issues, but as long as "pg_temp" only appears within generated SQL text, I think it'll be fine.

Along the way, make get_namespace_name_or_temp conform to the same API as get_namespace_name, ie that it returns a palloc'd string or NULL. The current behavior hasn't caused any bugs since no callers attempt to pfree the result, but if it gets more widespread usage that could become a problem.

Amul Sul, reviewed and extended by me

Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com

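For illustration, a minimal sketch of the effect (table name and plan shape are made up; only the schema alias is the point):

    CREATE TEMP TABLE tmp_t (a int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT a FROM tmp_t;
    -- Previously the scan line named the session's actual temp schema,
    -- e.g. "Seq Scan on pg_temp_3.tmp_t"; with this change it reads
    -- "Seq Scan on pg_temp.tmp_t", which is stable across sessions.
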
* Harden pg_stat_statements tests against CLOBBER_CACHE_ALWAYS. (Tom Lane, 2021-07-25)

Turns out the buildfarm hasn't been testing this, which will soon change.

Julien Rouhaud, per report from me

Discussion: https://postgr.es/m/42557.1627229005@sss.pgh.pa.us

* Add support for prepared transactions to built-in logical replication. (Amit Kapila, 2021-07-14)

To add support for streaming transactions at prepare time into the built-in logical replication, we need to do the following things:

* Modify the output plugin (pgoutput) to implement the new two-phase API callbacks, by leveraging the extended replication protocol.

* Modify the replication apply worker, to properly handle two-phase transactions by replaying them on prepare.

* Add a new SUBSCRIPTION option "two_phase" to allow users to enable two-phase transactions. We enable two_phase once the initial data sync is over. (See the sketch after this entry.)

We however must explicitly disable replication of two-phase transactions during replication slot creation, even if the plugin supports it. We don't need to replicate the changes accumulated during this phase, and moreover, we don't have a replication connection open so we don't know where to send the data anyway.

The streaming option is not allowed with this new two_phase option. This can be done as a separate patch.

We don't allow toggling the two_phase option of a subscription because it can lead to an inconsistent replica. For the same reason, we don't allow refreshing the publication once two_phase is enabled for a subscription unless the copy_data option is false.

Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-by: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com

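A minimal example of enabling the new option; the connection string, publication, and subscription names are hypothetical:

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (two_phase = on);
    -- Prepared transactions on the publisher are then replayed on the
    -- subscriber at PREPARE TRANSACTION time rather than only at
    -- COMMIT PREPARED.
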
* Change the name of the Result Cache node to Memoize (David Rowley, 2021-07-14)

"Result Cache" was never a great name for this node, but nobody managed to come up with another name that anyone liked enough. That was until David Johnston mentioned "Node Memoization", which Tom Lane revised to just "Memoize". People seem to like "Memoize", so let's do the rename.

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced

* Rename debug_invalidate_system_caches_always to debug_discard_caches. (Tom Lane, 2021-07-13)

The name introduced by commit 4656e3d66 was agreed to be unreasonably long. To match this change, rename initdb's recently-added --clobber-cache option to --discard-caches.

Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us

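A sketch of using the renamed knob; this assumes a build with cache-discard support compiled in (e.g. an assert-enabled build), and the data directory path is hypothetical:

    -- per-session, as superuser
    SET debug_discard_caches = 1;
    -- initdb's corresponding option was renamed as well:
    --   initdb --discard-caches -D /path/to/datadir
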
* Replace RelationOpenSmgr() with RelationGetSmgr(). (Tom Lane, 2021-07-12)

The idea behind this patch is to design out bugs like the one fixed by commit 9d523119f. Previously, once one did RelationOpenSmgr(rel), it was considered okay to access rel->rd_smgr directly for some not-very-clear interval. But since that pointer will be cleared by relcache flushes, we had bugs arising from overreliance on a previous RelationOpenSmgr call still being effective.

Now, very little code except that in rel.h and relcache.c should ever touch the rd_smgr field directly. The normal coding rule is to use RelationGetSmgr(rel) and not expect the result to be valid for longer than one smgr function call. There are a couple of places where using the function every single time seemed like overkill, but they are now annotated with large warning comments.

Amul Sul, after an idea of mine.

Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com

* pageinspect: Improve page_header() for pages of 32kB (Michael Paquier, 2021-07-12)

pd_upper, pd_lower, pd_special and the page size have been using smallint as the return type, which could cause those fields to return negative values in certain cases for builds configured with a page size of 32kB.

Bump pageinspect to 1.10. page_header() is able to handle the correct return type of those fields at runtime when using an older version of the extension, and some tests are added to cover that.

Author: Quan Zongliang
Reviewed-by: Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/8b8ec36e-61fe-14f9-005d-07bc85aa4eed@yeah.net

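A quick way to look at the affected fields (the relation inspected here is arbitrary):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT lower, upper, special, pagesize
      FROM page_header(get_raw_page('pg_class', 0));
    -- With a 32kB block size these offsets can exceed 32767, which is
    -- why the old smallint return type could make them appear negative.
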
* Improve error messages about mismatching relkind (Peter Eisentraut, 2021-07-08)

Most error messages about a relkind that was not supported or appropriate for the command were of the pattern

    "relation \"%s\" is not a table, foreign table, or materialized view"

This style can become verbose and tedious to maintain. Moreover, it's not very helpful: if I'm trying to create a comment on a TOAST table, which is not supported, then the information that I could have created a comment on a materialized view is pointless.

Instead, make the primary error message shorter, saying more directly that what was attempted is not possible. Then, in the detail message, explain that the operation is not supported for the relkind the object was. To simplify that, add a new function errdetail_relkind_not_supported() that does this.

In passing, make use of RELKIND_HAS_STORAGE() where appropriate, instead of listing out the relkinds individually.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com

* Fix crash in postgres_fdw for provably-empty remote UPDATE/DELETE. (Tom Lane, 2021-07-07)

In 86dc90056, I'd written find_modifytable_subplan with the assumption that if the immediate child of a ModifyTable is a Result, it must be a projecting Result with a subplan. However, if the UPDATE or DELETE has a provably-constant-false WHERE clause, that's not so: we'll generate a dummy subplan with a childless Result. Add the missing null-check so we don't crash on such cases.

Per report from Alexander Pyhalov.

Discussion: https://postgr.es/m/b9a6f53549456b2f3e2fd150dcd79d72@postgrespro.ru

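The kind of statement that reaches this path is simply one whose WHERE clause the planner can prove false; the foreign table name here is hypothetical:

    UPDATE remote_tab SET x = 1 WHERE false;
    -- The constant-false qual lets the planner prove the statement
    -- affects no rows, so the ModifyTable's child becomes a childless
    -- (dummy) Result rather than a projecting Result with a subplan.
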
* postgres_fdw: Tighten up allowed values for batch_size, fetch_size options. (Fujii Masao, 2021-07-07)

Previously, values such as '100$%$#$#' or '9,223,372,' were accepted and treated as valid integers for the postgres_fdw options batch_size and fetch_size, whereas this is not the case with the fdw_startup_cost and fdw_tuple_cost options, for which an error is thrown. This was because endptr was not used while converting strings to integers using strtol.

This commit changes the logic so that it uses the parse_int function instead of strtol, as it serves the purpose by returning false when it is unable to convert the string to an integer. Note that this function also rounds off values such as '100.456' to 100, and '100.567' or '100.678' to 101.

While on this, use parse_real for the fdw_startup_cost and fdw_tuple_cost options.

Since parse_int and parse_real are already used for reloptions and GUCs, it is more appropriate to use them in postgres_fdw than to use strtol and strtod directly.

Back-patch to v14.

Author: Bharath Rupireddy
Reviewed-by: Ashutosh Bapat, Tom Lane, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA@mail.gmail.com

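For example, a malformed option value like the ones above is now rejected at the point the option is set; the server name here is hypothetical:

    ALTER SERVER loopback OPTIONS (ADD fetch_size '100$%$#$#');
    -- before this change: silently accepted as 100 (strtol stopped at
    -- the first non-digit); now: rejected with an invalid-value error
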
* Avoid doing catalog lookups in postgres_fdw's conversion_error_callback. (Tom Lane, 2021-07-06)

As in 50371df26, this is a bad idea since the callback can't really know what error is being thrown and thus whether or not it is safe to attempt catalog accesses. Rather than pushing said accesses into the mainline code where they'd usually be a waste of cycles, we can look at the query's rangetable instead.

This change does mean that we'll be printing query aliases (if any were used) rather than the table or column's true name. But that doesn't seem like a bad thing: it's certainly a more useful definition in self-join cases, for instance. In any case, it seems unlikely that any applications would be depending on this detail, so it seems safe to change.

Patch by me. Original complaint by Andres Freund; Bharath Rupireddy noted the connection to conversion_error_callback.

Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de

* Further stabilize postgres_fdw test. (Tom Lane, 2021-06-24)

The queries involving ft1_nopw don't stably return the same row anymore. I surmise that an autovacuum hitting "S 1"."T 1" right after the updates introduced by f61db909d/5843659d0 freed some space, changing where subsequent insertions get stored. It's only by good luck that these results were stable before, though, since a LIMIT without ORDER BY isn't well defined, and it's not like we've ever treated that table as append-only in this test script.

Since we only really care whether these commands succeed or not, just replace "SELECT *" with "SELECT 1".

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2021-06-23%2019%3A52%3A08

* Allow non-quoted identifiers as isolation test session/step names. (Tom Lane, 2021-06-23)

For no obvious reason, isolationtester has always insisted that session and step names be written with double quotes. This is fairly tedious and does little for test readability, especially since the names that people actually choose almost always look like normal identifiers. Hence, let's tweak the lexer to allow SQL-like identifiers, not only double-quoted strings.

(They're SQL-like, not exactly SQL, because I didn't add any case-folding logic. Also there's no provision for U&"..." names, not that anyone's likely to care.)

There is one incompatibility introduced by this change: if you write "foo""bar" with no space, that used to be taken as two identifiers, but now it's just one identifier with an embedded quote mark.

I converted all the src/test/isolation/ specfiles to remove unnecessary double quotes, but stopped there because my eyes were glazing over already.

Like 741d7f104, back-patch to all supported branches, so that this isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/759113.1623861959@sss.pgh.pa.us

* Improve display of query results in isolation tests. (Tom Lane, 2021-06-23)

Previously, isolationtester displayed SQL query results using some ad-hoc code that clearly hadn't had much effort expended on it. Field values longer than 14 characters weren't separated from the next field, and usually caused misalignment of the columns too. Also there was no visual separation of a query's result from subsequent isolationtester output. This made test result files confusing and hard to read.

To improve matters, let's use libpq's PQprint() function. Although that's long since unused by psql, it's still plenty good enough for the purpose here.

Like 741d7f104, back-patch to all supported branches, so that this isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us

* Use annotations to reduce instability of isolation-test results. (Tom Lane, 2021-06-22)

We've long contended with isolation test results that aren't entirely stable. Some test scripts insert long delays to try to force stable results, which is not terribly desirable; but other erratic failure modes remain, causing unrepeatable buildfarm failures. I've spent a fair amount of time trying to solve this by improving the server-side support code, without much success: that way is fundamentally unable to cope with diffs that stem from chance ordering of arrival of messages from different server processes.

We can improve matters on the client side, however, by annotating the test scripts themselves to show the desired reporting order of events that might occur in different orders. This patch adds three types of annotations to deal with (a) test steps that might or might not complete their waits before the isolationtester can see them waiting; (b) test steps in different sessions that can legitimately complete in either order; and (c) NOTIFY messages that might arrive before or after the completion of a step in another session. We might need more annotation types later, but this seems to be enough to deal with the instabilities we've seen in the buildfarm. It also lets us get rid of all the long delays that were previously used, cutting more than a minute off the runtime of the isolation tests.

Back-patch to all supported branches, because the buildfarm instabilities affect all the branches, and because it seems desirable to keep isolationtester's capabilities the same across all branches to simplify possible future back-patching of tests.

Discussion: https://postgr.es/m/327948.1623725828@sss.pgh.pa.us

* amcheck: Fix code comments (Peter Eisentraut, 2021-06-21)

Code comments were claiming that verify_heapam() was checking privileges on the relation it was operating on, but it didn't actually do that. Perhaps earlier versions of the patch did that, but now the access is regulated by privileges on the function. Remove the wrong comments.

* Stabilize test case added by commit f61db909d. (Tom Lane, 2021-06-20)

Buildfarm members ayu and tern have sometimes shown a different plan than expected for this query. I'd been unable to reproduce that before today, but I finally realized what is happening. If there is a concurrent open transaction (probably an autovacuum run in the buildfarm, but this can also be arranged manually), then the index entries for the rows removed by the DELETE a few lines up are not killed promptly, causing a change in the planner's estimate of the extremal value of ft2.c1, which moves the rowcount estimate for "c1 > 1100" by enough to change the join plan from nestloop to hash. To fix, change the query condition to "c1 > 1000", causing the hash plan to be preferred whether or not a concurrent open transaction exists. Since this UPDATE is tailored to be a no-op, nothing else changes.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2021-06-09%2022%3A45%3A48
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2021-06-13%2022%3A38%3A18
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-06-20%2004%3A55%3A36

* Centralize the logic for protective copying of utility statements. (Tom Lane, 2021-06-18)

In the "simple Query" code path, it's fine for parse analysis or execution of a utility statement to scribble on the statement's node tree, since that'll just be thrown away afterwards. However it's not fine if the node tree is in the plan cache, as then it'd be corrupted for subsequent executions. Up to now we've dealt with that by having individual utility-statement functions apply copyObject() if they were going to modify the tree. But that's prone to errors of omission. Bug #17053 from Charles Samborski shows that CREATE/ALTER DOMAIN didn't get this memo, and can crash if executed repeatedly from plan cache.

In the back branches, we'll just apply a narrow band-aid for that, but in HEAD it seems prudent to have a more principled fix that will close off the possibility of other similar bugs in future. Hence, let's hoist the responsibility for doing copyObject up into ProcessUtility from its children, thus ensuring that it happens for all utility statement types.

Also, modify ProcessUtility's API so that its callers can tell it whether a copy step is necessary. It turns out that in all cases, the immediate caller knows whether the node tree is transient, so this doesn't involve a huge amount of code thrashing. In this way, while we lose a little bit in the execute-from-cache code path due to sometimes copying node trees that wouldn't be mutated anyway, we gain something in the simple-Query code path by not copying throwaway node trees. Statements that are complex enough to be expensive to copy are almost certainly ones that would have to be copied anyway, so the loss in the cache code path shouldn't be much.

(Note that this whole problem applies only to utility statements. Optimizable statements don't have the issue because we long ago made the executor treat Plan trees as read-only. Perhaps someday we will make utility statement execution act likewise, but I'm not holding my breath.)

Discussion: https://postgr.es/m/931771.1623893989@sss.pgh.pa.us
Discussion: https://postgr.es/m/17053-3ca3f501bbc212b4@postgresql.org

* Fix copying data into slots with FDW batching (Tomas Vondra, 2021-06-16)

Commit b676ac443b optimized handling of tuple slots with bulk inserts into foreign tables, so that the slots are initialized only once and reused for all batches. The data was however copied into the slots only after the initialization, inserting duplicate values when the slot gets reused. Fixed by moving the ExecCopySlot outside the init branch.

The existing postgres_fdw tests failed to catch this due to inserting data into foreign tables without unique indexes, and then checking only the number of inserted rows. This adds a new test with both a unique index and a check of inserted values.

Reported-by: Alexander Pyhalov
Discussion: https://postgr.es/m/7a8cf8d56b3d18e5c0bccd6cd42d04ac%40postgrespro.ru

* Adjust batch size in postgres_fdw to not use too many parameters (Tomas Vondra, 2021-06-08)

The FE/BE protocol identifies parameters with an Int16 index, which limits the maximum number of parameters per query to 65535. With batching added to postgres_fdw this limit is much easier to hit, because the whole batch is essentially a single query. The failures are a bit unpredictable, since they also depend on the number of columns in the query. So instead of just failing, this patch tweaks the batch_size to not exceed the maximum number of parameters.

Reported-by: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com

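A rough worked example of the clamp; the table name and numbers are illustrative, not taken from the patch:

    -- A foreign table with 10 columns and batch_size = 10000 would need
    -- 10 * 10000 = 100000 parameters in one batched INSERT, exceeding
    -- the protocol limit of 65535.  The effective batch size is
    -- therefore capped at roughly 65535 / 10 = 6553 rows per batch.
    ALTER FOREIGN TABLE wide_ft OPTIONS (ADD batch_size '10000');
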
* Stabilize contrib/seg regression test. (Tom Lane, 2021-06-07)

If autovacuum comes along just after we fill table test_seg with some data, it will update the stats to the point where we prefer a plain indexscan over a bitmap scan, breaking the expected output (as well as the point of the test case). To fix, just force a bitmap scan to be chosen here.

This has evidently been wrong since commit de1d042f5. It's not clear why we just recently saw any buildfarm failures due to it; but prairiedog has failed twice on this test in the past week. Hence, backpatch to v11 where this test case came in.

* Fix rescanning of async-aware Append nodes. (Etsuro Fujita, 2021-06-07)

In cases where run-time pruning isn't required, the synchronous and asynchronous subplans for an async-aware Append node determined using classify_matching_subplans() should be re-used when rescanning the node. The previous code instead re-determined them with that function each time the node was rescanned, leading to incorrect results in a normal build and an Assert failure in an Assert-enabled build, as that function doesn't assume that it's called repeatedly in such cases. Fix the code as mentioned above.

My oversight in commit 27e1f1456.

While at it, initialize async-related pointers/variables to NULL/zero explicitly in ExecInitAppend() and ExecReScanAppend(), just to be sure. (The variables would have been set to zero before we get to the latter function, but let's do so.)

Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/CAPmGK16Q4B2_KY%2BJH7rb7wQbw54AUprp7TMekGTd2T1B62yysQ%40mail.gmail.com

* Fix postgres_fdw failure with whole-row Vars of type RECORD. (Tom Lane, 2021-06-04)

Commit 86dc90056 expects that FDWs can cope with whole-row Vars for their tables, even if the Vars are marked with vartype RECORDOID. Previously, whole-row Vars generated by the planner had vartype equal to the relevant table's rowtype OID. (The point behind this change is to enable sharing of resjunk columns across inheritance child tables.)

It turns out that postgres_fdw fails to cope with this, though through bad fortune none of its test cases exposed that. Things mostly work, but when we try to read back a value of such a Var, the expected rowtype is not available to record_in(). Fortunately, it's not difficult to hack up the tupdesc that controls this process to substitute the foreign table's rowtype for RECORDOID. Thus we can solve the runtime problem while still sharing the resjunk column with other tables.

Per report from Alexander Pyhalov.

Discussion: https://postgr.es/m/7817fb9ebd6661cdf9b67dec6e129a78@postgrespro.ru

* Adjust locations which have an incorrect copyright year (David Rowley, 2021-06-04)

A few patches committed after ca3b37487 mistakenly forgot to make the copyright year 2021. Fix these.

Discussion: https://postgr.es/m/CAApHDvqyLmd9P2oBQYJ=DbrV8QwyPRdmXtCTFYPE08h+ip0UJw@mail.gmail.com

* Fix planner's row-mark code for inheritance from a foreign table. (Tom Lane, 2021-06-02)

Commit 428b260f8 broke planning of cases where row marks are needed (SELECT FOR UPDATE, etc) and one of the query's tables is a foreign table that has regular table(s) as inheritance children. We got the reverse case right, but apparently were thinking that foreign tables couldn't be inheritance parents. Not so; so we need to be able to add a CTID junk column while adding a new child, not only a wholerow junk column.

Back-patch to v12 where the faulty code came in.

Amit Langote

Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com

* Fix assertion during streaming of multi-insert toast changes. (Amit Kapila, 2021-05-27)

While decoding multi-insert WAL we can't clean the toast until we get the last insert of that WAL record. Now if we stream the changes before we get the last change, the memory for toast chunks won't be released, and we expect the txn to have streamed all changes after streaming. This restriction is mainly to ensure the correctness of streamed transactions, and it doesn't seem worth uplifting such a restriction just to allow this case, because anyway we will stream the transaction once such an insert is complete.

Previously we were using two different flags (one for toast tuples and another for speculative inserts) to indicate partial changes. Now instead we replace both of them with a single flag to indicate partial changes.

Reported-by: Pavan Deolasee
Author: Dilip Kumar
Reviewed-by: Pavan Deolasee, Amit Kapila
Discussion: https://postgr.es/m/CABOikdN-_858zojYN-2tNcHiVTw-nhxPwoQS4quExeweQfG1Ug@mail.gmail.com

* Allow compute_query_id to be set to 'auto' and make it default (Alvaro Herrera, 2021-05-15)

Allowing only on/off meant that either all existing configuration guides would become obsolete if we disabled it by default, or that we would have to accept a performance loss in the default config if we enabled it by default. By allowing 'auto' as a middle ground, the performance cost is only paid by those who enable pg_stat_statements and similar modules.

I only edited the release notes to comment out a paragraph that is now factually wrong; further edits are probably needed to describe the related change in more detail.

Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210513002623.eugftm4nk2lvvks3@nol

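A brief sketch of how the new default behaves; treat the session shown here as illustrative rather than exact output:

    SHOW compute_query_id;   -- auto (the new default)
    -- With 'auto', query IDs are computed only when a module asks for
    -- them, e.g. after pg_stat_statements has been loaded via
    -- shared_preload_libraries and the server restarted:
    SELECT queryid, calls FROM pg_stat_statements LIMIT 1;
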
* Prevent asynchronous execution of direct foreign-table modifications. (Etsuro Fujita, 2021-05-13)

Commits 27e1f1456 and 86dc90056, which were independently discussed, cause a crash when executing an inherited foreign UPDATE/DELETE query with asynchronous execution enabled, where children of an Append node that is the direct/indirect child of the ModifyTable node are rewritten so as to modify foreign tables directly by postgresPlanDirectModify(); as in that case the direct modifications are executed asynchronously, which is not currently supported by asynchronous execution. Fix by disabling asynchronous execution of the direct modifications in that function.

Author: Etsuro Fujita
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com

* Fix tests for replication slots stats. (Amit Kapila, 2021-05-13)

Some of the tests were not considering that the slot's spill stats could be received by the stats collector after we have reset the stats. Remove those tests, and don't check total bytes decoded and sent to the output plugin in the spilled stats test, as we can send the spilled stats to the stats collector before actually sending the changes to the output plugin.

Reported-by: Tom Lane as per buildfarm
Author: Vignesh C, Sawada Masahiko
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* Initial pgindent and pgperltidy run for v14. (Tom Lane, 2021-05-12)

Also "make reformat-dat-files".

The only change worthy of note is that pgindent messed up the formatting of launcher.c's struct LogicalRepWorkerId, which led me to notice that that struct wasn't used at all anymore, so I just took it out.

* Fix EXPLAIN ANALYZE for async-capable nodes. (Etsuro Fujita, 2021-05-12)

EXPLAIN ANALYZE for an async-capable ForeignScan node associated with postgres_fdw is done just by using instrumentation for ExecProcNode() called from the node's callbacks, causing the following problems:

1) If the remote table to scan is empty, the node is incorrectly considered as "never executed" by the command even if the node is executed, as ExecProcNode() isn't called from the node's callbacks at all in that case.

2) The command fails to collect timings for things other than ExecProcNode() done in the node, such as creating a cursor for the node's remote query.

To fix these problems, add instrumentation for async-capable nodes, and modify postgres_fdw accordingly.

My oversight in commit 27e1f1456.

While at it, update a comment for the AsyncRequest struct in execnodes.h and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix typos in comments in nodeAppend.c.

Per report from Andrey Lepikhov, though I didn't use his patch.

Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru

* Copy the INSERT query in postgres_fdw (Tomas Vondra, 2021-05-07)

When executing an INSERT with batching, we may need to rebuild the query when the batch size changes, in which case we pfree the current string. We must not release the original string, stored in fdw_private, because that may be needed in EXPLAIN ANALYZE. So make a copy of the SQL, but only for INSERT queries.

Reported-by: Pavel Stehule
Discussion: https://postgr.es/m/CAFj8pRCL_Rjw-MCR6J7VX9OF7MR6PA5K8qUbrMvprW_e-aHkfQ%40mail.gmail.com

* Remove extraneous newlines added by perl copyright patch (Andrew Dunstan, 2021-05-07)

* Add a copyright notice to perl files lacking one. (Andrew Dunstan, 2021-05-07)

* Disable cache clobber to avoid breaking postgres_fdw termination test. (Tom Lane, 2021-05-04)

Commit 93f414614 improved a pre-existing test case so that it would show whether or not termination of the "remote" worker process happened. This soon exposed that, when debug_invalidate_system_caches_always (nee CLOBBER_CACHE_ALWAYS) is enabled, no such termination occurs. That's because cache invalidation forces postgres_fdw connections to be dropped at end of transaction, so that there's no worker to terminate. There's a race condition as to whether the worker will manage to get out of the BackendStatusArray before we look, but at least on buildfarm member hyrax, it's failed twice in two attempts.

Rather than re-lobotomizing the test, let's fix this by transiently disabling debug_invalidate_system_caches_always. (Hooray for that being just a GUC nowadays, rather than a compile-time option.) If this proves not to be enough to make the test stable, we can do the other thing instead.

Discussion: https://postgr.es/m/3854538.1620081771@sss.pgh.pa.us

* Update query_id computation (Bruce Momjian, 2021-05-03)

Properly fix:

- the "ONLY" in FROM [ONLY] isn't hashed
- the agglevelsup field in GROUPING isn't hashed
- WITH TIES not being hashed (new in PG 13)
- "DISTINCT" in "GROUP BY [DISTINCT]" isn't hashed (new in PG 14)

Reported-by: Julien Rouhaud
Discussion: https://postgr.es/m/20210425081119.ulyzxqz23ueh3wuj@nol

* amcheck: Improve some confusing reports about TOAST problems. (Robert Haas, 2021-05-03)

Don't phrase reports in terms of the number of tuples thus-far returned by the index scan, but rather in terms of the chunk_seq values found inside the tuples.

Patch by me, reviewed by Mark Dilger.

Discussion: http://postgr.es/m/CA+TgmoZUONCkdcdR778EKuE+f1r5Obieu63db2OgMPHaEvEPTQ@mail.gmail.com

* Revert use singular for -1 (commits 9ee7d533da and 5da9868ed9) (Bruce Momjian, 2021-05-01)

Turns out you can specify negative values using plurals:

    https://english.stackexchange.com/questions/9735/is-1-followed-by-a-singular-or-plural-noun

so the previous code was correct enough, and consistent with other usage in our code. Also add a comment in the two places where this could be confused.

Reported-by: Noah Misch
Diagnosed-by: 20210425115726.GA2353095@rfd.leadboat.com

* Another try to fix the test case added by commit f5fc2f5b23. (Amit Kapila, 2021-04-30)

As per analysis, it appears that the 'drop slot' message from the previous test and the 'create slot' message of the new test are either missed or not yet delivered to the stats collector, due to which we still see the stats from the old slot. This can happen rarely, which could be the reason that we are seeing some random failures in the buildfarm. To avoid that, use a different slot name for the tests in test_decoding/sql/stats.sql.

Reported-by: Tom Lane based on buildfarm reports
Author: Sawada Masahiko
Reviewed-by: Amit Kapila, Vignesh C
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* Don't pass "ONLY" options specified in TRUNCATE to foreign data wrapper.Fujii Masao2021-04-27
| | | | | | | | | | | | | | | | | | | | | | Commit 8ff1c94649 allowed TRUNCATE command to truncate foreign tables. Previously the information about "ONLY" options specified in TRUNCATE command were passed to the foreign data wrapper. Then postgres_fdw constructed the TRUNCATE command to issue the remote server and included "ONLY" options in it based on the passed information. On the other hand, "ONLY" options specified in SELECT, UPDATE or DELETE have no effect when accessing or modifying the remote table, i.e., are not passed to the foreign data wrapper. So it's inconsistent to make only TRUNCATE command pass the "ONLY" options to the foreign data wrapper. Therefore this commit changes the TRUNCATE command so that it doesn't pass the "ONLY" options to the foreign data wrapper, for the consistency with other statements. Also this commit changes postgres_fdw so that it always doesn't include "ONLY" options in the TRUNCATE command that it constructs. Author: Fujii Masao Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi, Justin Pryzby, Zhihong Yu Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
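In concrete terms, after this change (the foreign table name below is hypothetical):

    TRUNCATE ONLY my_foreign_tab;
    -- ONLY is no longer forwarded to the FDW; postgres_fdw sends a plain
    -- TRUNCATE for the remote table, matching how ONLY is treated for
    -- SELECT/UPDATE/DELETE on foreign tables.
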
* Use HTAB for replication slot statistics. (Amit Kapila, 2021-04-27)

Previously, we used an array of size max_replication_slots to store stats for replication slots. But that had two problems in the cases where a message for dropping a slot gets lost: 1) the stats for the new slot are not recorded if the array is full, and 2) we write beyond the end of the array if the user reduces max_replication_slots.

This commit uses HTAB for replication slot statistics, resolving both problems. Now, pgstat_vacuum_stat() searches for all the dead replication slots in the stats hashtable and tells the collector to remove them. To avoid showing the stats for already-dropped slots, the pg_stat_replication_slots view searches slot stats by the slot name taken from pg_replication_slots.

Also, we send a message for creating a slot at slot creation, initializing the stats. This reduces the possibility that the stats are accumulated into the old slot stats when a message for dropping a slot gets lost.

Reported-by: Andres Freund
Author: Sawada Masahiko, test case by Vignesh C
Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* amcheck: MAXALIGN() nbtree special area offset. (Peter Geoghegan, 2021-04-23)

This isn't strictly necessary, but in theory it might matter if in the future the width of the nbtree special area changes -- its total size might not be an even number of MAXALIGN() quantums, even with padding. PageInit() MAXALIGN()s all special area offsets, but amcheck uses the offset to perform initial basic validation of line pointers, so we don't rely on the offset from the page header.

The real reason to do this is to set a good example for new code that adds amcheck coverage for other index AMs.

Reported-By: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUMqTR9nErh99FbOBmzCXE9=gXNqhBiwYOhejJJS1LXqQ@mail.gmail.com

* Remove use of [U]INT64_FORMAT in some translatable strings (Michael Paquier, 2021-04-23)

%lld with (long long), or %llu with (unsigned long long), are more appropriate. This is similar to 3286065.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210421.200000.1462448394029407895.horikyota.ntt@gmail.com

* Minor code cleanup in asynchronous execution support. (Etsuro Fujita, 2021-04-23)

This is cleanup for commit 27e1f1456:

* ExecAppendAsyncEventWait(), which was modified a bit further by commit a8af856d3, duplicated the same nevents calculation. Simplify the code a little bit to avoid the duplication. Update comments there.
* Add an assertion to ExecAppendAsyncRequest().
* Update a comment about merging the async_capable options from input relations in merge_fdw_options(), per complaint from Kyotaro Horiguchi.
* Add a comment for fetch_more_data_begin().

Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK1637W30Wx3MnrReewhafn6F_0J76mrJGoFXFnpPq4QfvA%40mail.gmail.com

* Fix test case added by commit f5fc2f5b23. (Amit Kapila, 2021-04-19)

In the new test, after resetting the stats we were not waiting for the stats message to be delivered. Also, we need to decode the results for the new test, otherwise it will show the old stats.

In passing,
a. Change the docs added by commit f5fc2f5b23 as per a suggestion by Justin Pryzby.
b. Bump PGSTAT_FILE_FORMAT_ID, as commit f5fc2f5b23 changed the file format of stats.

Reported-by: Tom Lane based on buildfarm reports
Author: Vignesh C, Justin Pryzby
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* Fix typos and grammar in comments and docs (Michael Paquier, 2021-04-19)

Author: Justin Pryzby
Discussion: https://postgr.es/m/20210416070310.GG3315@telsasoft.com

* Replace magic constants for seek() calls in perl scripts (Michael Paquier, 2021-04-19)

A couple of tests have been using 0 as a magic constant where SEEK_SET could be used instead. This makes the code easier to understand, and more consistent with the changes done in 3c5b068.

Per discussion with Andrew Dunstan.

Discussion: https://postgr.es/m/YHrc24AgJQ6tQ1q0@paquier.xyz

* Add information of total data processed to replication slot stats. (Amit Kapila, 2021-04-16)

This adds statistics about the total transaction count and total transaction data logically sent to the decoding output plugin from ReorderBuffer. Users can query the pg_stat_replication_slots view to check these stats.

Suggested-by: Andres Freund
Author: Vignesh C and Amit Kapila
Reviewed-by: Sawada Masahiko, Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

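For example, to look at the new counters (column names as I understand the view; verify against your server version):

    SELECT slot_name, total_txns, total_bytes
      FROM pg_stat_replication_slots;
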
* Revert "psql: Show all query results by default"Peter Eisentraut2021-04-15
| | | | | | | | | This reverts commit 3a5130672296ed4e682403a77a9a3ad3d21cef75. Per discussion, this patch had too many issues to resolve at this point of the development cycle. We'll try again in the future. Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre