path: root/src
* Use croak instead of die in Perl code when appropriate (Peter Eisentraut, 2020-10-22)

* Optimize a few list_delete_ptr calls (David Rowley, 2020-10-22)
  There are a handful of places where we called list_delete_ptr() to remove some element from a List. In many of these places we know, or with very little additional effort can know, the index of the ListCell that we need to remove. Here we change all of those places to instead use one of list_delete_nth_cell(), foreach_delete_current(), or list_delete_last(). Each of these saves us from having to iterate over the list to search for the element to remove by its pointer value.

  There are some small performance gains to be had by doing this, but in the general case, none of these lists are likely to be very large, so the lookup was probably never that expensive anyway. However, some of the calls are in fairly hot code paths, e.g. process_equivalence(), so any small gains there are useful.

  Author: Zhijie Hou and David Rowley
  Discussion: https://postgr.es/m/b3517353ec7c4f87aa560678fbb1034b@G08CNEXMBPEKD05.g08.fujitsu.local

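  A minimal sketch of the pattern this change applies, assuming the pg_list.h API of this era (foreach_delete_current() arrived in v13); drop_matching_items() and its matches() callback are hypothetical names used only for illustration:

    /* Delete by known position instead of re-scanning with list_delete_ptr(). */
    #include "postgres.h"
    #include "nodes/pg_list.h"

    static List *
    drop_matching_items(List *items, bool (*matches) (void *))
    {
        ListCell   *lc;

        foreach(lc, items)
        {
            void       *item = lfirst(lc);

            /* Before: items = list_delete_ptr(items, item), which searches the list again */
            if (matches(item))
                items = foreach_delete_current(items, lc);  /* deletes the current cell in place */
        }
        return items;
    }
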
* Fix connection string handling in psql's \connect command. (Tom Lane, 2020-10-21)
  psql's \connect claims to be able to re-use previous connection parameters, but in fact it only re-uses the database name, user name, host name (and possibly hostaddr, depending on version), and port. This is problematic for assorted use cases. Notably, pg_dump[all] emits "\connect databasename" commands which we would like to have re-use all other parameters. If such a script is loaded in a psql run that initially had "-d connstring" with some non-default parameters, those other parameters would be lost, potentially causing connection failure. (Thus, this is the same kind of bug addressed in commits a45bc8a4f and 8e5793ab6, although the details are much different.)

  To fix, redesign do_connect() so that it pulls out all properties of the old PGconn using PQconninfo(), and then replaces individual properties in that array. In the case where we don't wish to re-use anything, get libpq's default settings using PQconndefaults() and replace entries in that, so that we don't need different code paths for the two cases.

  This does result in an additional behavioral change for cases where the original connection parameters allowed multiple hosts, say "psql -h host1,host2", and the \connect request allows re-use of the host setting. Because the previous coding relied on PQhost(), it would only permit reconnection to the same host originally selected. Although one can think of scenarios where that's a good thing, there are others where it is not. Moreover, that behavior doesn't seem to meet the principle of least surprise, nor was it documented; nor is it even clear it was intended, since that coding long pre-dates the addition of multi-host support to libpq. Hence, this patch is content to drop it and re-use the host list as given.

  Per Peter Eisentraut's comments on bug #16604. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org

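  A hedged sketch of the libpq facility the new do_connect() coding relies on: pull every parameter of an existing connection with PQconninfo(), override selected entries, and reconnect with PQconnectdbParams(). This is illustrative application code under those assumptions, not psql's actual implementation; reconnect_with_new_dbname() is a hypothetical helper and error handling is elided:

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    PGconn *
    reconnect_with_new_dbname(PGconn *old_conn, const char *new_dbname)
    {
        PQconninfoOption *opts = PQconninfo(old_conn);
        int         n = 0;
        int         i,
                    j;
        const char **keys;
        const char **vals;
        PGconn     *new_conn;

        for (i = 0; opts[i].keyword != NULL; i++)
            n++;
        keys = malloc((n + 1) * sizeof(*keys));
        vals = malloc((n + 1) * sizeof(*vals));

        for (i = 0, j = 0; i < n; i++)
        {
            const char *val = opts[i].val;

            if (strcmp(opts[i].keyword, "dbname") == 0)
                val = new_dbname;   /* replace just this one property */
            if (val == NULL)
                continue;           /* skip options that are not set */
            keys[j] = opts[i].keyword;
            vals[j] = val;
            j++;
        }
        keys[j] = NULL;
        vals[j] = NULL;

        new_conn = PQconnectdbParams(keys, vals, 0);

        free(keys);
        free(vals);
        PQconninfoFree(opts);
        return new_conn;
    }
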
* Use fast checkpoint in PostgresNode::backup() (Alvaro Herrera, 2020-10-21)
  Should cause tests to be a bit faster.

* Remove the option to build thread_test.c outside configure. (Tom Lane, 2020-10-21)
  Theoretically one could go into src/test/thread and build/run this program there. In practice, that hasn't worked since 96bf88d52, and probably much longer on some platforms (likely including just the sort of hoary leftovers where this test might be of interest). While it wouldn't be too hard to repair the breakage, the fact that nobody has noticed for two years shows that there is zero usefulness in maintaining this build pathway. Let's get rid of it and decree that thread_test.c is *only* meant to be built/used in configure.

  Given that decision, it makes sense to put thread_test.c under config/ and get rid of src/test/thread altogether, so that's what I did.

  In passing, update src/test/README, which had been ignored by some not-so-recent additions of subdirectories.

  Discussion: https://postgr.es/m/227659.1603041612@sss.pgh.pa.us

* Remove obsolete ifdefs (Peter Eisentraut, 2020-10-21)
  Commit 8dace66e0735ca39b779922d02c24ea2686e6521 added #ifdefs for a number of errno symbols because they were not present on Windows. Later, commit 125ad539a275db5ab8f4647828b80a16d02eabd2 added replacement #defines for some of those symbols. So some of the changes from the first commit are made dead code by the second commit and can now be removed.

  Discussion: https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033@2ndquadrant.com

* Fix -Wcast-function-type warnings on Windows/MinGW (Peter Eisentraut, 2020-10-21)
  After de8feb1f3a23465b5737e8a8c160e8ca62f61339, some warnings remained that were only visible when using GCC on Windows. Fix those as well.

  Note that the ecpg test source files don't use the full pg_config.h, so we can't use pg_funcptr_t there but have to do it the long way.

* Review format of code generated by PerfectHash.pm (Michael Paquier, 2020-10-21)
  80f8eb7 added to the normalization quick check headers some code generated by PerfectHash.pm that is incompatible with the gitattributes settings of this repository, as whitespace followed a set of tabs for the first element of a line in the table. Instead of adding a new exception to gitattributes, rework the generated format so that right padding with spaces is used instead of left padding. This keeps the generated table in a readable shape with its set of columns, making an update of gitattributes unnecessary.

  Reported-by: Peter Eisentraut
  Author: John Naylor
  Discussion: https://postgr.es/m/d601b3b5-a3c7-5457-2f84-3d6513d690fc@2ndquadrant.com

* Fix ALTER TABLE .. ENABLE/DISABLE TRIGGER recursion (Alvaro Herrera, 2020-10-20)
  More precisely, correctly handle the ONLY flag indicating not to recurse. This was implemented in 86f575948c77 by recursing in trigger.c, but that's the wrong place; use ATSimpleRecursion instead, which behaves properly. However, because legacy inheritance has never recursed in that situation, make sure to do that only for new-style partitioning.

  I noticed this problem while testing a fix for another bug in the vicinity.

  This has been wrong all along, so backpatch to 11.

  Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql

* Change the attribute name in pg_stat_replication_slots view. (Amit Kapila, 2020-10-20)
  Change the attribute 'name' to 'slot_name' in the pg_stat_replication_slots view to make it clearer and consistent with other places, such as the pg_stat_wal_receiver view, where we display the same attribute.

  In passing, fix a typo in one of the macros in the related code.

  Bump the catversion as we have modified the name in the catalog as well.

  Reported-by: Noriyoshi Shinoda
  Author: Noriyoshi Shinoda
  Reviewed-by: Sawada Masahiko and Amit Kapila
  Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com

* Fix connection string handling in src/bin/scripts/ programs. (Tom Lane, 2020-10-19)
  When told to process all databases, clusterdb, reindexdb, and vacuumdb would reconnect by replacing their --maintenance-db parameter with the name of the target database. If that parameter is a connstring (which has been allowed for a long time, though we failed to document that before this patch), we'd lose any other options it might specify, for example SSL or GSS parameters, possibly resulting in failure to connect. Thus, this is the same bug as commit a45bc8a4f fixed in pg_dump and pg_restore. We can fix it in the same way, by using libpq's rules for handling multiple "dbname" parameters to add the target database name separately. I chose to apply the same refactoring approach as in that patch, with a struct to handle the command line parameters that need to be passed through to connectDatabase. (Maybe someday we can unify the very similar functions here and in pg_dump/pg_restore.)

  Per Peter Eisentraut's comments on bug #16604. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org

* Fix list-munging bug that broke SQL function result coercions. (Tom Lane, 2020-10-19)
  Since commit 913bbd88d, check_sql_fn_retval() can either insert type coercion steps in-line in the Query that produces the SQL function's results, or generate a new top-level Query to perform the coercions, if modifying the Query's output in-place wouldn't be safe. However, it appears that the latter case has never actually worked, because the code tried to inject the new Query back into the query list it was passed ... which is not the list that will be used for later processing when we execute the SQL function "normally" (without inlining it). So we ended up with no coercion happening at run-time, leading to wrong results or crashes depending on the datatypes involved.

  While the regression tests look like they cover this area well enough, through a huge bit of bad luck all the test cases that exercise the separate-Query path were checking either inline-able cases (which accidentally didn't have the bug) or cases that are no-ops at runtime (e.g., varchar to text), so that the failure to perform the coercion wasn't obvious. The fact that the cases that don't work weren't allowed at all before v13 probably contributed to not noticing the problem sooner, too.

  To fix, get rid of the separate "flat" list of Query nodes and instead pass the real two-level list that is going to be used later. I chose to make the same change in check_sql_fn_statements(), although that has no actual bug, just so that we don't need that data structure at all.

  This is an API change, as evidenced by the adjustments needed to callers outside functions.c. That's a bit scary to be doing in a released branch, but so far as I can tell from a quick search, there are no outside callers of these functions (and they are sufficiently specific to our semantics for SQL-language functions that it's not apparent why any extension would need to call them). In any case, v13 already changed the API of check_sql_fn_retval() compared to prior branches.

  Per report from pinker. Back-patch to v13 where this code came in.

  Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com

* Remove PartitionRoutingInfo struct. (Heikki Linnakangas, 2020-10-19)
  The extra indirection needed to access its members via its enclosing ResultRelInfo seems pointless. Move all the fields from PartitionRoutingInfo to ResultRelInfo.

  Author: Amit Langote
  Reviewed-by: Alvaro Herrera
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com

* Revise child-to-root tuple conversion map management. (Heikki Linnakangas, 2020-10-19)
  Store the tuple conversion map to convert a tuple from a child table's format to the root format in a new ri_ChildToRootMap field in ResultRelInfo. It is initialized if transition tuple capture for FOR STATEMENT triggers or INSERT tuple routing on a partitioned table is needed. Previously, ModifyTable kept the maps in the per-subplan ModifyTableState->mt_per_subplan_tupconv_maps array, or when tuple routing was used, in ResultRelInfo->ri_Partitioninfo->pi_PartitionToRootMap. The new field replaces both of those.

  Now that the child-to-root tuple conversion map is always available in ResultRelInfo (when needed), remove the TransitionCaptureState.tcs_map field. The callers of Exec*Trigger() functions no longer need to set or save it, which is much less confusing and bug-prone. Also, as a future optimization, this will allow us to delay creating the map for a given result relation until the relation is actually processed during execution.

  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqHtCWLdK-LO%3DNEsvOdHx%2B7yv4mE_zYK0i3BH7dXb-wxog%40mail.gmail.com

* Clean up code to resolve the "root target relation" in nodeModifyTable.c (Heikki Linnakangas, 2020-10-19)
  When executing DML on a partitioned table or on a table with inheritance children, statement-level triggers must be fired against the table given in the original statement. The code to look that up was a bit messy and duplicative. Commit 501ed02cf6 added a helper function, getASTriggerResultRelInfo() (later renamed to getTargetResultRelInfo()) for it, but for some reason it was only used when firing AFTER STATEMENT triggers, and the code to fire BEFORE STATEMENT triggers duplicated the logic.

  Determine the target relation in ExecInitModifyTable(), and set it always in ModifyTableState. Code that used to call getTargetResultRelInfo() can now use ModifyTableState->rootResultRelInfo directly.

  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com

* Avoid invalid alloc size error in shm_mq (Peter Eisentraut, 2020-10-19)
  In shm_mq_receive(), a huge payload could trigger an unjustified "invalid memory alloc request size" error due to the way the buffer size is increased. Add error checks (documenting the upper limit) and avoid the error by limiting the allocation size to MaxAllocSize.

  Author: Markus Wanner <markus.wanner@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/3bb363e7-ac04-0ac4-9fe8-db1148755bfa%402ndquadrant.com

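  A rough sketch of the general fix pattern described above, not the actual shm_mq.c code; ensure_buffer_capacity() is a hypothetical helper, assuming the usual palloc/MaxAllocSize machinery:

    #include "postgres.h"
    #include "utils/memutils.h"     /* MaxAllocSize */

    static void
    ensure_buffer_capacity(char **buf, Size *buflen, Size needed)
    {
        /* Reject payloads that could never be allocated, with a clear message. */
        if (needed > MaxAllocSize)
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("message size %lu exceeds the allocation limit",
                            (unsigned long) needed)));

        if (needed > *buflen)
        {
            Size        newlen = Max(*buflen, 1024);

            /* Grow geometrically, but never ask palloc for more than MaxAllocSize. */
            while (newlen < needed)
                newlen *= 2;
            newlen = Min(newlen, MaxAllocSize);

            pfree(*buf);
            *buf = palloc(newlen);
            *buflen = newlen;
        }
    }
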
* Improve whitespace (Peter Eisentraut, 2020-10-19)
  The SQL file was using a mix of tabs and spaces inconsistently. Convert all to spaces and adjust indentation accordingly.

* Prevent overly large and NaN row estimates in relations (David Rowley, 2020-10-19)
  Given a query with enough joins, it was possible that, after multiplying the row estimates by the join selectivity, the query planner's estimated number of rows would exceed the limits of the double data type and become infinite. To give an indication of how extreme a case is required to hit this, the particular example case reported required 379 joins to a table without any statistics, which resulted in 1.0/DEFAULT_NUM_DISTINCT being used for the join selectivity. This eventually caused the row estimates to go infinite and resulted in an assert failure in initial_cost_mergejoin() where the infinite row estimate was multiplied by an outerstartsel of 0.0, resulting in NaN. The failing assert verified that NaN <= Inf, which is false.

  To get around this we use clamp_row_est() to cap row estimates at a maximum of 1e100. This value is thought to be low enough that costs derived from it would remain within the bounds of what the double type can represent.

  Aside from fixing the failing Assert, this also has the added benefit of making it so add_path() will still receive proper numerical values as costs, which will allow it to make more sane choices when determining the cheaper path in extreme cases such as the one described above.

  Additionally, we also get rid of the isnan() checks in the join costing functions. The actual case which originally triggered those checks to be added in the first place never made it to the mailing lists. It seems likely that the new code being added to clamp_row_est() will make those checks redundant, so just remove them.

  The fairly harmless assert failure problem does also exist in the backbranches; however, a more minimalistic fix will be applied there.

  Reported-by: Onder Kalaci
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/DM6PR21MB1211FF360183BCA901B27F04D80B0@DM6PR21MB1211.namprd21.prod.outlook.com

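  An illustrative sketch of a row-estimate clamp along the lines described above; the 1e100 cap comes from the commit text, while the function name and exact structure here are assumptions rather than the verbatim costsize.c code:

    #include <math.h>

    #define MAXIMUM_ROWCOUNT 1e100

    static double
    clamp_row_estimate(double nrows)
    {
        /* NaN or absurdly large products from many joins get capped here. */
        if (isnan(nrows) || nrows > MAXIMUM_ROWCOUNT)
            nrows = MAXIMUM_ROWCOUNT;
        else if (nrows <= 1.0)
            nrows = 1.0;            /* never estimate fewer than one row */
        else
            nrows = rint(nrows);    /* keep estimates integral */
        return nrows;
    }
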
* Update the Winsock API version requested by libpq. (Tom Lane, 2020-10-18)
  According to Microsoft's documentation, 2.2 has been the current version since Windows 98 or so. Moreover, that's what the Postgres backend has been requesting since 2004 (cf commit 4cdf51e64). So there seems no reason for libpq to keep asking for 1.1.

  Bring thread_test along, too, so that we're uniformly asking for 2.2 in all our WSAStartup calls.

  It's not clear whether there's any point in back-patching this, so for now I didn't.

  Discussion: https://postgr.es/m/132799.1602960277@sss.pgh.pa.us

* In pg_restore's dump_lo_buf(), work a little harder on error handling. (Tom Lane, 2020-10-18)
  Failure to write data to a large object during restore led to an ugly and uninformative error message. To add insult to injury, it then fatal'd out, where other SQL-level errors usually result in pressing on.

  Report the underlying error condition, rather than just giving not-very-useful byte counts, and use warn_or_exit_horribly() so as to adhere to pg_restore's general policy about whether to continue or not. Also recognize that lo_write() returns int not size_t.

  Per report from Justin Pryzby, though I didn't use his patch. Given the lack of comparable complaints, I'm not sure this is worth back-patching.

  Discussion: https://postgr.es/m/20201018010232.GF9241@telsasoft.com

* In libpq for Windows, call WSAStartup once and WSACleanup not at all. (Tom Lane, 2020-10-17)
  The Windows documentation insists that every WSAStartup call should have a matching WSACleanup call. However, if that ever had actual relevance, it wasn't in this century. Every remotely-modern Windows kernel is capable of cleaning up when a process exits without doing that, and must be so to avoid resource leaks in case of a process crash. Moreover, Postgres backends have done WSAStartup without WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any indication of a problem with that.

  libpq's habit of doing WSAStartup during connection start and WSACleanup during shutdown is also rather inefficient, since a series of non-overlapping connection requests leads to repeated, quite expensive DLL unload/reload cycles. We document a workaround for that (having the application call WSAStartup for itself), but that's just a kluge. It's also worth noting that it's far from uncommon for applications to exit without doing PQfinish, and we've not heard reports of trouble from that either.

  However, the real reason for acting on this is that recent experiments by Alexander Lakhin suggest that calling WSACleanup during PQfinish might be triggering the symptom we occasionally see that a process using libpq fails to emit expected stdio output.

  Therefore, let's change libpq so that it calls WSAStartup only once per process, during the first connection attempt, and never calls WSACleanup at all.

  While at it, get rid of the only other WSACleanup call in our code tree, in pg_dump/parallel.c; that presumably is equally useless.

  If this proves to suppress the fairly-common ecpg test failures we see on Windows, I'll back-patch, but for now let's just do it in HEAD and see what happens.

  Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru

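  A hedged sketch of the "initialize Winsock once, never clean up" pattern these two commits adopt for libpq; ensure_winsock_started() is a hypothetical helper, not the actual fe-connect.c code:

    #ifdef WIN32
    #include <winsock2.h>

    static int  wsa_initialized = 0;

    static int
    ensure_winsock_started(void)
    {
        WSADATA     wsaData;

        if (!wsa_initialized)
        {
            /* Request Winsock 2.2, matching the backend's WSAStartup call. */
            if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
                return -1;
            wsa_initialized = 1;
        }
        /* Deliberately no WSACleanup(): the OS reclaims this at process exit. */
        return 0;
    }
    #endif
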
* Update time zone data files to tzdata release 2020c. (Tom Lane, 2020-10-16)
  DST law changes in Morocco, Canadian Yukon, Fiji, Macquarie Island, Casey Station (Antarctica). Historical corrections for France, Hungary, Monaco.

* Sync our copy of the timezone library with IANA release tzcode2020c. (Tom Lane, 2020-10-16)
  This changes zic's default output format from "-b fat" to "-b slim". We were already using "slim" in v13/HEAD, so those branches drop the explicit -b switch in the Makefiles. Instead, add an explicit "-b fat" in v12 and before, so that we don't change the output file format in those branches. (This is perhaps excessively conservative, but we decided not to do so in a12079109, and I'll stick with that.)

  Other non-cosmetic changes are to drop support for zic's long-obsolete "-y" switch, and to ensure that strftime() does not change errno unless it fails.

  As usual with tzcode changes, back-patch to all supported branches.

* llvmjit: Work around bug in LLVM 3.9 causing crashes after 72559438f92. (Andres Freund, 2020-10-15)
  Unfortunately in LLVM 3.9 LLVMGetAttributeCountAtIndex(func, index) crashes when called with an index that has 0 attributes. Since there's no way to work around this in the C API, add a small C++ wrapper doing so.

  The only reason this didn't fail before 72559438f92 is that there always are function attributes...

  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20201016001254.w2nfj7gd74jmb5in@alap3.anarazel.de
  Backpatch: 11-, like 72559438f92

* pg_upgrade: remove C99 compiler req. from commit 3c0471b5fd (Bruce Momjian, 2020-10-15)
  Commit 3c0471b5fd required support for inline variable definitions, which is not a project requirement.

  RELEASE NOTE AUTHOR: the author of commit 3c0471b5fd (pg_upgrade/tablespaces) was Justin Pryzby, not me.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20201016001959.h24fkywfubkv2pc5@alap3.anarazel.de
  Backpatch-through: 9.5

* pg_upgrade: generate check error for left-over new tablespace (Bruce Momjian, 2020-10-15)
  Previously, if pg_upgrade failed, and the user recreated the cluster but did not remove the new cluster tablespace directory, a later pg_upgrade would fail since the new tablespace directory would already exist. This adds error reporting for this during check.

  Reported-by: Justin Pryzby
  Discussion: https://postgr.es/m/20200925005531.GJ23631@telsasoft.com
  Backpatch-through: 9.5

* llvmjit: Also copy parameter / return value attributes from template functions. (Andres Freund, 2020-10-15)
  Previously we only copied the function attributes. That caused problems at least on s390x: because we didn't copy the 'zeroext' attribute for ExecAggTransReparent()'s *IsNull parameters, expressions invoking it didn't ensure that the upper bytes of the registers were zeroed. In the relatively rare cases where they were not, ExecAggTransReparent() wrongly ended up in the newValueIsNull branch due to the register not being zero, subsequently causing a crash.

  It's quite possible that this would cause problems on other platforms, and in other places than just ExecAggTransReparent() on s390x.

  Thanks to Christoph (and the Debian project) for providing me with access to a s390x machine, allowing me to debug this.

  Reported-By: Christoph Berg
  Author: Andres Freund
  Discussion: https://postgr.es/m/20201015083246.kie5726xerdt3ael@alap3.anarazel.de
  Backpatch: 11-, where JIT was added

* doc: improve description of synchronous_commit modes (Bruce Momjian, 2020-10-15)
  Previously it wasn't clear exactly what each of the synchronous_commit modes accomplished. This clarifies that, and adds a table describing it. Only backpatched through 9.6 since 9.5 doesn't have all the options.

  Reported-by: kghost0@gmail.com
  Discussion: https://postgr.es/m/159741195522.14321.13812604195366728976@wrigleys.postgresql.org
  Backpatch-through: 9.6

* Revert "Remove pointless HeapTupleHeaderIndicatesMovedPartitions calls"Alvaro Herrera2020-10-15
| | | | | This reverts commit 85adb5e91ec2. It was not intended for commit just yet.
* Remove pointless HeapTupleHeaderIndicatesMovedPartitions calls (Alvaro Herrera, 2020-10-15)
  Pavan Deolasee recently noted that a few of the HeapTupleHeaderIndicatesMovedPartitions calls added by commit 5db6df0c0117 are useless, since they are done after comparing t_self with t_ctid. But because t_self can never be set to the magical values that indicate that the tuple moved partition, this can never succeed: if the first test fails (so we know t_self equals t_ctid), necessarily the second test will also fail.

  So these checks can be removed and no harm is done.

  Discussion: https://postgr.es/m/20200929164411.GA15497@alvherre.pgsql

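  A hedged sketch of the redundant pattern being removed, with a hypothetical wrapper function and simplified structure (the real call sites live in the heap access method code):

    #include "postgres.h"
    #include "access/htup_details.h"

    static bool
    tuple_was_updated(HeapTuple tup)
    {
        /*
         * If the first test fails, t_ctid equals t_self; and since t_self never
         * holds the "moved to another partition" magic values, the second test
         * can then never succeed, which is why the commit drops such checks.
         */
        return !ItemPointerEquals(&tup->t_self, &tup->t_data->t_ctid) ||
            HeapTupleHeaderIndicatesMovedPartitions(tup->t_data);
    }
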
* Install pg_isolation_regress and isolationtester (Alvaro Herrera, 2020-10-15)
  We already install assorted tools for testing extensions, but these two were missing. Having them installed, and after ISOLATION support was added to PGXS's makefiles by d3c09b9b1307, helps third-party modules usefully include isolation tests. Compare c3a0818460a8.

  Author: Craig Ringer <craig.ringer@enterprisedb.com>
  Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/CAMsr+YFsCMH3B4uOPFE+2qWM6k=o=hf9LGiPNCfwqKdUPz_BsQ@mail.gmail.com

* Review logical replication tablesync code (Alvaro Herrera, 2020-10-15)
  Most importantly, remove the optimization in LogicalRepSyncTableStart that skips the normal walrcv_startstreaming/endstreaming dance. The optimization is not critically important for production uses anyway, since it only fires in cases with no activity, and saves an uninteresting amount of work even then. Critically, it obscures bugs by hiding the interesting code path from test cases.

  Also: in GetSubscriptionRelState, remove a pointless relation open; access pg_subscription_rel->srsubstate with GETSTRUCT as is typical rather than SysCacheGetAttr; remove the unused 'missing_ok' argument. In wait_for_relation_state_change, use explicit catalog snapshot invalidation rather than obscurely (and expensively) through GetLatestSnapshot.

  In various places: sprinkle comments more liberally and rewrite a number of them. Other cosmetic code improvements.

  No backpatch, since no bug is being fixed here.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Petr Jelínek <petr.jelinek@2ndquadrant.com>
  Discussion: https://postgr.es/m/20201010190637.GA5774@alvherre.pgsql

* Refactor code for cross-partition updates to a separate function. (Heikki Linnakangas, 2020-10-15)
  ExecUpdate() is very long, so extract the part of it that deals with cross-partition updates to a separate function to make it more readable. Per Andres Freund's suggestion.

  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqEUgb5RdUgxR7Sqco4S09jzJstHiaT2vnCRPGR4JCAPqA%40mail.gmail.com

* Fix query in new test to check tables are synced (Alvaro Herrera, 2020-10-15)
  Rather than looking for tablesync workers, it is more reliable to see the sync state of the tables. Per note from Amit Kapila.

  Discussion: https://postgr.es/m/CAA4eK1JSSD7FVwq+_rOme86jUZTQFzjsNU06hQ4-LiRt1xFmSg@mail.gmail.com

* Replace calls of htonl()/ntohl() with pg_bswap.h for GSSAPI encryption (Michael Paquier, 2020-10-15)
  The in-core equivalents can make use of compiler built-in functions where supported, making optimizations possible. 0ba99c8 replaced all the calls existing in the code base at that time, but b0b39f7 (GSSAPI encryption) forgot to do the switch.

  Discussion: https://postgr.es/m/20201014055303.GG3349@paquier.xyz

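  A hedged sketch of the substitution this commit makes, using the pg_bswap.h helpers in place of htonl()/ntohl(); the framing functions here are hypothetical, not the actual GSSAPI encryption code:

    #include "postgres.h"
    #include "port/pg_bswap.h"

    /* Write a length word in network byte order into an output buffer. */
    static void
    frame_packet_length(char *netbuf, uint32 payload_len)
    {
        uint32      netlen = pg_hton32(payload_len);    /* was: htonl(payload_len) */

        memcpy(netbuf, &netlen, sizeof(netlen));
    }

    /* Read a length word in network byte order back into host order. */
    static uint32
    read_packet_length(const char *netbuf)
    {
        uint32      netlen;

        memcpy(&netlen, netbuf, sizeof(netlen));
        return pg_ntoh32(netlen);   /* was: ntohl(netlen) */
    }
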
* Improve tab-completion for FETCH/MOVE. (Fujii Masao, 2020-10-15)
  Author: Naoki Nakamichi
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/d05a46b599634ca0d94144387507f4b4@oss.nttdata.com

* Fixup some appendStringInfo and appendPQExpBuffer calls (David Rowley, 2020-10-15)
  A number of places were using appendStringInfo() when they could have been using appendStringInfoString() instead. While there's no functionality change there, it's just more efficient to use appendStringInfoString() when no formatting is required. Likewise for some appendStringInfoString() calls which were just appending a single char. We can just use appendStringInfoChar() for that.

  Additionally, many places were using appendPQExpBuffer() when they could have used appendPQExpBufferStr(). Change those too.

  Patch by Zhijie Hou, but further searching by me found significantly more places that deserved the same treatment.

  Author: Zhijie Hou, David Rowley
  Discussion: https://postgr.es/m/cb172cf4361e4c7ba7167429070979d4@G08CNEXMBPEKD05.g08.fujitsu.local

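  A minimal sketch of the substitutions described above; build_message() and its arguments are made up for illustration:

    #include "postgres.h"
    #include "lib/stringinfo.h"

    static void
    build_message(StringInfo buf, const char *relname)
    {
        /* was: appendStringInfo(buf, "%s", relname); -- no formatting needed */
        appendStringInfoString(buf, relname);

        /* was: appendStringInfoString(buf, ";"); -- a single character suffices */
        appendStringInfoChar(buf, ';');
    }
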
* Add documentation link to attributes supported by Clang (Peter Eisentraut, 2020-10-15)

* Handle EACCES errors from kevent() better. (Thomas Munro, 2020-10-15)
  While registering for postmaster exit events, we have to handle a couple of edge cases where the postmaster is already gone. Commit 815c2f09 missed one: EACCES must surely imply that PostmasterPid no longer belongs to our postmaster process (or alternatively an unexpected permissions model has been imposed on us). Like ESRCH, this should be treated as a WL_POSTMASTER_DEATH event, rather than being raised with ereport().

  No known problems reported in the wild. Per code review from Tom Lane. Back-patch to 13.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/3624029.1602701929%40sss.pgh.pa.us

* Execute invalidation messages for each XLOG_XACT_INVALIDATIONS message during logical decoding (Amit Kapila, 2020-10-15)
  Prior to commit c55040ccd0 we had no way of knowing the invalidations before commit. So, while decoding, we used to execute all the invalidations at each command end, as we had no way of knowing which invalidations happened before that command. Due to this, transactions involving large amounts of DDL used to take more time and also led to high CPU usage. But now we know the specific invalidations at each command end, so we execute only the required invalidations.

  It has been observed that decoding of a transaction containing truncation of a table with 1000 partitions would be finished in 1s whereas before this patch it used to take 4-5 minutes.

  Author: Dilip Kumar
  Reviewed-by: Amit Kapila and Keisuke Kuroda
  Discussion: https://postgr.es/m/CANDwggKYveEtXjXjqHA6RL3AKSHMsQyfRY6bK+NqhAWJyw8psQ@mail.gmail.com

* Restore replication protocol's duplicate command tags (Alvaro Herrera, 2020-10-14)
  I removed the duplicate command tags for START_REPLICATION inadvertently in commit 07082b08cc5d, but the replication protocol requires them. The fact that the replication protocol was broken was not noticed because all our test cases use an optimized code path that exits early, failing to verify that the behavior is correct for non-optimized cases. Put them back.

  Also document this protocol quirk.

  Add a test case that shows the failure. It might still succeed even without the patch when run on a fast enough server, but it suffices to show the bug in enough cases that it would be noticed in buildfarm.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Henry Hinze <henry.hinze@gmail.com>
  Reviewed-by: Petr Jelínek <petr.jelinek@2ndquadrant.com>
  Discussion: https://postgr.es/m/16643-eaadeb2a1a58d28c@postgresql.org

* Make WL_POSTMASTER_DEATH level-triggered on kqueue builds. (Thomas Munro, 2020-10-15)
  If WaitEventSetWait() reports that the postmaster has gone away, later calls to WaitEventSetWait() should continue to report that. Otherwise further waits that occur in the proc_exit() path after we already noticed the postmaster's demise could block forever.

  Back-patch to 13, where the kqueue support landed.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/3624029.1602701929%40sss.pgh.pa.us

* Remove es_result_relation_info from EState. (Heikki Linnakangas, 2020-10-14)
  Maintaining 'es_result_relation_info' correctly at all times has become cumbersome, especially with partitioning where each partition gets its own result relation info. Having to set and reset it across arbitrary operations has caused bugs in the past.

  This changes all the places that used 'es_result_relation_info' to receive the currently active ResultRelInfo via function parameters instead.

  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com

* Include result relation info in direct modify ForeignScan nodes. (Heikki Linnakangas, 2020-10-14)
  FDWs that can perform an UPDATE/DELETE remotely using the "direct modify" set of APIs need to access the ResultRelInfo of the target table. That's currently available in EState.es_result_relation_info, but the next commit will remove that field.

  This commit adds a new resultRelation field in ForeignScan, to store the target relation's RT index, and the corresponding ResultRelInfo in ForeignScanState. The FDW's PlanDirectModify callback is expected to set 'resultRelation' along with 'operation'. The core code doesn't need them for anything; they are for the convenience of FDW's Begin- and IterateDirectModify callbacks.

  Authors: Amit Langote, Etsuro Fujita
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com

* Use https for gnu.org links (Peter Eisentraut, 2020-10-14)
  Mostly already done, but there were some stragglers.

* Correct error message (Peter Eisentraut, 2020-10-14)
  Apparently copy-and-pasted from nearby.

* Paper over regression failures in infinite_recurse() on PPC64 Linux. (Tom Lane, 2020-10-13)
  Our infinite_recurse() test to verify sane stack-overrun behavior is affected by a bug of the Linux kernel on PPC64: it will get SIGSEGV if it receives a signal when the stack depth is (a) over 1MB and (b) within a few kB of filling the current physical stack allocation. See https://bugzilla.kernel.org/show_bug.cgi?id=205183.

  Since this test is a bit time-consuming and we run it in parallel with test scripts that do a lot of DDL, it can be expected to get an sinval catchup interrupt at some point, leading to failure if the timing is wrong. This has caused more than 100 buildfarm failures over the past year or so.

  While a fix exists for the kernel bug, it might be years before that propagates into all production kernels, particularly in some of the older distros we have in the buildfarm. For now, let's just back off and not run this test on Linux PPC64; that loses nothing in test coverage so far as our own code is concerned.

  To do that, split this test into a new script infinite_recurse.sql and skip the test when the platform name is powerpc64...-linux-gnu.

  Back-patch to v12. Branches before that have not been seen to get this failure. No doubt that's because the "errors" test was not run in parallel with other tests before commit 798070ec0, greatly reducing the odds of an sinval catchup being necessary. I also back-patched 3c8553547 into v12, just so the new regression script would look the same in all branches having it.

  Discussion: https://postgr.es/m/3479046.1602607848@sss.pgh.pa.us
  Discussion: https://postgr.es/m/20190723162703.GM22387%40telsasoft.com

* Create ResultRelInfos later in InitPlan, index them by RT index. (Heikki Linnakangas, 2020-10-13)
  Instead of allocating all the ResultRelInfos upfront in one big array, allocate them in ExecInitModifyTable(). es_result_relations is now an array of ResultRelInfo pointers, rather than an array of structs, and it is indexed by the RT index.

  This simplifies things: we get rid of the separate concept of a "result rel index", and don't need to set it in setrefs.c anymore. This also allows follow-up optimizations (not included in this commit yet) to skip initializing ResultRelInfos for target relations that were not needed at runtime, and removal of the es_result_relation_info pointer.

  The EState arrays of regular result rels and root result rels are merged into one array. Similarly, the resultRelations and rootResultRelations lists in PlannedStmt are merged into one. It's not actually clear to me why they were kept separate in the first place, but now that the es_result_relations array is indexed by RT index, it certainly seems pointless.

  The PlannedStmt->resultRelations list is now only needed for ExecRelationIsTargetRelation(). One visible effect of this change is that ExecRelationIsTargetRelation() will now return 'true' also for the partition root, if a partitioned table is updated. That seems like a good thing, although the function isn't used in core code, and I don't see any reason for an FDW to call it on a partition root.

  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com

* Fix GiST buffering build to work when there are included columns. (Tom Lane, 2020-10-12)
  gistRelocateBuildBuffersOnSplit did not get the memo about which attribute count to use. This could lead to a crash if there were included columns and buffering build was chosen. (Because there are random page-split decisions elsewhere in GiST index build, the crashes are not entirely deterministic.)

  Back-patch to v12 where GiST gained support for included columns.

  Pavel Borisov
  Discussion: https://postgr.es/m/CALT9ZEECCV5m7wvxg46PC-7x-EybUmnpupBGhSFMoAAay+r6HQ@mail.gmail.com

* Re-allow testing of GiST buffered builds. (Tom Lane, 2020-10-12)
  Commit 16fa9b2b3 broke the ability to reliably test GiST buffered builds, because it caused sorted builds to be done instead if sortsupport is available, regardless of any attempt to override that. While a would-be test case could try to work around that by choosing an opclass that has no sortsupport function, coverage would be silently lost the moment someone decides it'd be a good idea to add a sortsupport function.

  Hence, rearrange the logic in gistbuild() so that if "buffering = on" is specified in CREATE INDEX, we will use that method, sortsupport or no. Also document the interaction between sorting and the buffering parameter, as 16fa9b2b3 failed to do.

  (Note that in fact we still lack any test coverage of buffered builds, but this is a prerequisite to adding a non-fragile test.)

  Discussion: https://postgr.es/m/3249980.1602532990@sss.pgh.pa.us