path: root/src
* Add exception for unicode_east_asian_fw_table.h to cpluspluscheck (John Naylor, 2021-09-23)
  unicode_east_asian_fw_table.h should not be compiled standalone, similarly
  to unicode_combining_table.h, but cpluspluscheck did not get the memo.
  Oversight in bab982161.
  Per report from Tom Lane.
* Split macros from visibilitymap.h into a separate header (Alexander Korotkov, 2021-09-23)
  That allows including just visibilitymapdefs.h from file.c and, in turn,
  removing the include of postgres.h from relcache.h.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/20210913232614.czafiubr435l6egi%40alap3.anarazel.de
  Author: Alexander Korotkov
  Reviewed-by: Andres Freund, Tom Lane, Alvaro Herrera
  Backpatch-through: 13
* Release memory allocated by dependency_degree (Tomas Vondra, 2021-09-23)
  Calculating the degree of a functional dependency may allocate a lot of
  memory - we have released most of the explicitly allocated memory, but
  e.g. detoasted varlena values were left behind. That may be an issue,
  because we consider a lot of dependencies (all combinations), and the
  detoasting may happen for each one again.

  Fixed by calling dependency_degree() in a dedicated context, and resetting
  it after each call. We only need the calculated dependency degree, so we
  don't need to copy anything.

  Backpatch to PostgreSQL 10, where extended statistics were introduced.

  Backpatch-through: 10
  Discussion: https://www.postgresql.org/message-id/20210915200928.GP831%40telsasoft.com
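  For illustration only, a minimal sketch of the reset-per-call pattern this
  entry describes, with hypothetical helper names (compute_one_degree,
  record_degree); it is not the actual patch:

      /* Requires postgres.h, utils/memutils.h and nodes/pg_list.h. */
      static void
      compute_dependencies(List *combinations)
      {
          /* dedicated context, reset after each degree calculation */
          MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                    "dependency degree",
                                                    ALLOCSET_DEFAULT_SIZES);
          ListCell   *lc;

          foreach(lc, combinations)
          {
              MemoryContext oldcxt = MemoryContextSwitchTo(cxt);

              /* result is a plain double, so nothing needs to be copied out */
              double      degree = compute_one_degree(lfirst(lc));

              MemoryContextSwitchTo(oldcxt);
              MemoryContextReset(cxt);    /* frees detoasted values etc. */

              record_degree(degree);
          }

          MemoryContextDelete(cxt);
      }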
* Free memory after building each statistics object (Tomas Vondra, 2021-09-23)
  Until now, all extended statistics on a given relation were built in the
  same memory context, without resetting. Some of the memory was released
  explicitly, but not all of it - for example memory allocated while
  detoasting values is hard to free. This is how it worked since extended
  statistics were introduced in PostgreSQL 10, but adding support for
  extended stats on expressions made the issue somewhat worse as it
  increases the number of statistics to build.

  Fixed by adding a memory context which gets reset after building each
  statistics object (all the statistics kinds included in it). Resetting it
  after building each statistics kind would be even better, but it would
  require more invasive changes and copying of results, making it harder to
  backpatch.

  Backpatch to PostgreSQL 10, where extended statistics were introduced.

  Author: Justin Pryzby
  Reported-by: Justin Pryzby
  Reviewed-by: Tomas Vondra
  Backpatch-through: 10
  Discussion: https://www.postgresql.org/message-id/20210915200928.GP831%40telsasoft.com
* Document issue with heapam line pointer truncation. (Peter Geoghegan, 2021-09-22)
  Checking that an offset number isn't past the end of a heap page's line
  pointer array was just a defensive sanity check for HOT-chain traversal
  code before commit 3c3b8a4b. It's strictly necessary now, though. Add
  comments that reference the issue to code in heapam that needs to get it
  right.

  Per suggestion from Alexander Lakhin.

  Discussion: https://postgr.es/m/f76a292c-9170-1aef-91a0-59d9443b99a3@gmail.com
* Make use of PG_INT64_MAX/PG_INT64_MIN (Peter Eisentraut, 2021-09-22)
  This code was written before those symbols were introduced, but now we can
  simplify it.
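  As a hedged illustration (not taken from the patch), the symbols let range
  checks be written without hand-rolled limits:

      #include "c.h"              /* defines PG_INT64_MAX and PG_INT64_MIN */

      /* Clamp a double into the int64 range using the named limits. */
      static int64
      clamp_to_int64(double v)
      {
          if (v >= (double) PG_INT64_MAX)
              return PG_INT64_MAX;
          if (v <= (double) PG_INT64_MIN)
              return PG_INT64_MIN;
          return (int64) v;
      }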
* Invalidate all partitions for a partitioned table in publication. (Amit Kapila, 2021-09-22)
  Updates/Deletes on a partition were allowed even without replica identity
  after the parent table was added to a publication. This would later lead
  to an error on subscribers. The reason was that we were not invalidating
  the partition's relcache, so the publication information for partitions
  was not getting rebuilt. Similarly, we were not invalidating the
  partitions' relcache after dropping a partitioned table from a
  publication, which would keep prohibiting Updates/Deletes on its
  partitions without replica identity even once the table was no longer in
  any publication.

  Reported-by: Haiying Tang
  Author: Hou Zhijie and Vignesh C
  Reviewed-by: Vignesh C and Amit Kapila
  Backpatch-through: 13
  Discussion: https://postgr.es/m/OS0PR01MB6113D77F583C922F1CEAA1C3FBD29@OS0PR01MB6113.jpnprd01.prod.outlook.com
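  A hedged sketch of the invalidation idea (the partition list is walked
  with find_all_inheritors(); the helper name is hypothetical and this is
  not the literal patch):

      /* Force cached publication info for a partitioned table and all of
       * its partitions to be rebuilt on next access. */
      static void
      invalidate_partition_relcache(Oid parentrelid)
      {
          List       *relids = find_all_inheritors(parentrelid, NoLock, NULL);
          ListCell   *lc;

          foreach(lc, relids)
              CacheInvalidateRelcacheByRelid(lfirst_oid(lc));
      }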
* Add parent table name in an error in reorderbuffer.c. (Amit Kapila, 2021-09-22)
  This can help in troubleshooting the cause of a particular error that can
  occur during decoding.

  Author: Jeremy Schneider
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/808ed65b-994c-915a-361c-577f088b837f@amazon.com
* Fix "single value strategy" index deletion issue.Peter Geoghegan2021-09-21
| | | | | | | | | | | | | | | | | | | | | | | | It is not appropriate for deduplication to apply single value strategy when triggered by a bottom-up index deletion pass. This wastes cycles because later bottom-up deletion passes will overinterpret older duplicate tuples that deduplication actually just skipped over "by design". It also makes bottom-up deletion much less effective for low cardinality indexes that happen to cross a meaningless "index has single key value per leaf page" threshold. To fix, slightly narrow the conditions under which deduplication's single value strategy is considered. We already avoided the strategy for a unique index, since our high level goal must just be to buy time for VACUUM to run (not to buy space). We'll now also avoid it when we just had a bottom-up pass that reported failure. The two cases share the same high level goal, and already overlapped significantly, so this approach is quite natural. Oversight in commit d168b666, which added bottom-up index deletion. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-WznaOvM+Gyj-JQ0X=JxoMDxctDTYjiEuETdAGbF5EUc3MA@mail.gmail.com Backpatch: 14-, where bottom-up deletion was introduced.
* Fix some issues with TAP tests for postgres -C (Michael Paquier, 2021-09-22)
  This addresses two issues with the tests added in 0c39c292 for runtime
  GUCs:
  - Re-enable the test on Msys. The test could fail because of \r\n
    generated by Msys perl. 0d91c52a has taken care of this issue.
  - Allow the test to run in the context of a privileged account. CIs
    running under privileged accounts would fail on permission failures, as
    reported by Andres Freund. This issue is fixed by wrapping the postgres
    command within pg_ctl as the latter will take care of any permissions
    needed.

  The test checking a failure of postgres -C for a runtime parameter with an
  instance running is removed, as pg_ctl produces an unstable error code (no
  need for a CI to reproduce that).

  Discussion: https://postgr.es/m/20210921032040.lyl4lcax37aedx2x@alap3.anarazel.de
* Fix places in TestLib.pm in need of adaptation to the output of Msys perl (Michael Paquier, 2021-09-22)
  Contrary to the output of native perl, Msys perl generates output with
  CRLF characters. There are already places in the TAP code where CRLFs
  (\r\n) are automatically converted to LF (\n) on Msys, but we missed a
  couple of places when running commands and using their output for
  comparison, which would lead to failures.

  This problem has been found thanks to the test added in 5adb067 using
  TestLib::command_checks_all(), but after a closer look more code paths
  were missing a filter.

  This is backpatched all the way down to prevent any surprises if a new
  test is introduced in stable branches.

  Reviewed-by: Andrew Dunstan, Álvaro Herrera
  Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
  Backpatch-through: 9.6
* Fix misevaluation of STABLE parameters in CALL within plpgsql. (Tom Lane, 2021-09-21)
  Before commit 84f5c2908, a STABLE function in a plpgsql CALL statement's
  argument list would see an up-to-date snapshot, because exec_stmt_call
  would push a new snapshot. I got rid of that because the possibility of
  the snapshot disappearing within COMMIT made it too hard to manage a
  snapshot across the CALL statement. That's fine so far as the procedure
  itself goes, but I forgot to think about the possibility of STABLE
  functions within the CALL argument list. As things now stand, those'll be
  executed with the Portal's snapshot as ActiveSnapshot, keeping them from
  seeing updates more recent than Portal startup.

  (VOLATILE functions don't have a problem because they take their own
  snapshots; which indeed is also why the procedure itself doesn't have a
  problem. There are no STABLE procedures.)

  We can fix this by pushing a new snapshot transiently within
  ExecuteCallStmt itself. Popping the snapshot before we get into the
  procedure proper eliminates the management problem. The possibly-useless
  extra snapshot-grab is slightly annoying, but it's no worse than what
  happened before 84f5c2908.

  Per bug #17199 from Alexander Nawratil. Back-patch to v11, like the
  previous patch.

  Discussion: https://postgr.es/m/17199-1ab2561f0d94af92@postgresql.org
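  In outline, the fix brackets evaluation of the CALL argument list with a
  transient snapshot, roughly like this sketch (simplified; see
  ExecuteCallStmt for the real logic):

      /* Give STABLE functions in the argument list a fresh snapshot, then
       * pop it before entering the procedure, so COMMIT inside the
       * procedure has no snapshot to manage across the call. */
      PushActiveSnapshot(GetTransactionSnapshot());
      /* ... evaluate the CALL statement's argument expressions here ... */
      PopActiveSnapshot();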
* Document XLOG_INCLUDE_XID a little better (Alvaro Herrera, 2021-09-21)
  I noticed that commit 0bead9af484c left this flag undocumented in
  XLogSetRecordFlags, which led me to discover that the flag doesn't
  actually do what the one comment on it said it does. Improve the situation
  by adding some more comments.

  Backpatch to 14, where the aforementioned commit appears.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/202109212119.c3nhfp64t2ql@alvherre.pgsql
* Introduce GUC shared_memory_size_in_huge_pages (Michael Paquier, 2021-09-21)
  This runtime-computed GUC shows the number of huge pages required for the
  server's main shared memory area, taking advantage of the work done in
  0c39c29 and 0bd305e. This is useful for users to estimate the number of
  huge pages required for a server, as it becomes possible to do an
  estimation without having to start the server and potentially allocate a
  large chunk of shared memory.

  The number of huge pages is calculated based on the existing GUC
  huge_page_size if set, or by using the system's default by looking at
  /proc/meminfo on Linux. There is nothing new here as this commit reuses
  the existing calculation methods, and just exposes this information
  directly to the user. The routine calculating the huge page size is
  refactored to limit the number of files with platform-specific flags.

  This new GUC's name was the most popular choice based on the discussion
  done. This is only supported on Linux.

  I have taken the time to test the change on Linux, Windows and MacOS,
  though large pages are not supported on the last two. On Linux, the number
  of pages is calculated correctly depending on the existing GUC
  huge_page_size or the system's default.

  Thanks to Andres Freund, Robert Haas, Kyotaro Horiguchi, Tom Lane, Justin
  Pryzby (and anybody forgotten here) for the discussion.

  Author: Nathan Bossart
  Discussion: https://postgr.es/m/F2772387-CE0F-46BF-B5F1-CC55516EB885@amazon.com
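  The underlying arithmetic is just a rounding-up division; a minimal
  sketch, assuming both sizes are already known in bytes:

      /* Round the main shared memory area up to whole huge pages.  The huge
       * page size comes from the GUC huge_page_size if set, otherwise from
       * the system default (e.g. Hugepagesize in /proc/meminfo on Linux). */
      static int64
      estimate_huge_pages(int64 shmem_bytes, int64 huge_page_bytes)
      {
          return (shmem_bytes + huge_page_bytes - 1) / huge_page_bytes;
      }

  For example, 147 MB of shared memory with 2 MB huge pages would report 74
  pages.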
* Remove overzealous index deletion assertion. (Peter Geoghegan, 2021-09-20)
  A broken HOT chain is not an unexpected condition, even when the offset
  number points past the end of the page's line pointer array.
  heap_prune_chain() does not (and never has) treated this condition as
  unexpected, so derivative code in heap_index_delete_tuples() shouldn't do
  so either.

  Oversight in commit 4228817449.

  The assertion can probably only fail on Postgres 14 and master. Earlier
  releases don't have commit 3c3b8a4b, which taught VACUUM to truncate the
  line pointer array of heap pages. Backpatch all the same, just to be
  consistent.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/17197-9438f31f46705182@postgresql.org
  Backpatch: 12-, just like commit 4228817449.
* pgstat: Prepare to use mechanism for truncated rels also for dropped rels. (Andres Freund, 2021-09-20)
  The upcoming shared memory stats patch drops stats for dropped objects in
  a transactional manner, rather than removing them later as part of vacuum.
  This means that stats for DROP inside a transaction need to handle aborted
  (sub-)transactions similar to TRUNCATE: the stats up to the DROP should be
  restored.

  Rename the existing infrastructure in preparation.

  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210405092914.mmxqe7j56lsjfsej@alap3.anarazel.de
* pgstat: Split out relation stats handling from AtEO[Sub]Xact_PgStat() etc. (Andres Freund, 2021-09-20)
  An upcoming patch will add additional work to these functions. To avoid
  the functions getting too complicated / doing too many things at once,
  split out sub-tasks into their own functions.

  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210405092914.mmxqe7j56lsjfsej@alap3.anarazel.de
* Doc: add glossary term for "auxiliary process" (Alvaro Herrera, 2021-09-20)
  Add entries for existing processes that were not yet documented, too, and
  adjust existing definitions for consistency.

  Per question from Bharath Rupireddy.

  Author: Justin Pryzby <pryzby@telsasoft.com>
  Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/CALj2ACVpYCT0M+k8zqrAa4ZQZV+ce5s6G=yajwoS1m=h-jj8NQ@mail.gmail.com
* Disallow extended statistics on system columns (Tomas Vondra, 2021-09-20)
  Since introduction of extended statistics, we've disallowed references to
  system columns. So for example

      CREATE STATISTICS s ON ctid FROM t;

  would fail. But with extended statistics on expressions, it was possible
  to work around this limitation quite easily:

      CREATE STATISTICS s ON (ctid::text) FROM t;

  This is an oversight in a4d75c86bf, fixed by adding a simple check.

  Backpatch to PostgreSQL 14, where support for extended statistics on
  expressions was introduced.

  Backpatch-through: 14
  Discussion: https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
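  A hedged sketch of what such a check can look like, assuming the
  expression's attribute numbers are collected with pull_varattnos()
  (illustrative, not the literal patch):

      Bitmapset  *attnums = NULL;
      int         k = -1;

      /* collect all attributes referenced by the statistics expression */
      pull_varattnos(expr, 1, &attnums);

      while ((k = bms_next_member(attnums, k)) >= 0)
      {
          AttrNumber  attnum = k + FirstLowInvalidHeapAttributeNumber;

          if (attnum <= 0)        /* system column or whole-row reference */
              ereport(ERROR,
                      (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                       errmsg("statistics creation on system columns is not supported")));
      }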
* process startup: Split single user code out of PostgresMain(). (Andres Freund, 2021-09-17)
  It was harder than necessary to understand PostgresMain() because the code
  for a normal backend was interspersed with single-user mode specific code.

  Split most of the single-user mode code into its own function
  PostgresSingleUserMain(), that does all the necessary setup for
  single-user mode, and then hands off after that to PostgresMain().

  There still is some single-user mode code in InitPostgres(), and it'd
  likely be worth moving at least some of it out. But that's for later.

  Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
* Improve some check logic in pg_receivewal (Michael Paquier, 2021-09-18)
  The following things are improved:
  - Fetch the system identifier from the source server before any WAL
    streaming loop. This triggers extra checks to make sure that
    pg_receivewal is still connected to a server with the same system ID
    with a correct timeline.
  - Switch umask() (for file creation mode mask) and RetrieveWalSegSize()
    (to fetch the size of WAL segments) a bit later, before the initial
    stream attempt. If the connection was done with a database,
    pg_receivewal would fail but those commands were still executed, which
    was a waste. The slot creation and drop are now done before retrieving
    the segment size.

  Author: Bharath Rupireddy
  Reviewed-by: Ronan Dunklau, Michael Paquier
  Discussion: https://postgr.es/m/CALj2ACX00YYeyBfoi55Cy=NrP-FcfMgiYYx1qRUEib3yjCVoaA@mail.gmail.com
* Fix pull_varnos to cope with translated PlaceHolderVars. (Tom Lane, 2021-09-17)
  Commit 55dc86eca changed pull_varnos to use (if possible) the associated
  ph_eval_at for a PlaceHolderVar. I missed a fine point though: we might be
  looking at a PHV in the quals or tlist of a child appendrel, in which case
  we need to compute a ph_eval_at value that's been translated in the same
  way that the PHV itself has been (cf. adjust_appendrel_attrs).
  Fortunately, enough info is available in the PlaceHolderInfo to make such
  translation possible without additional outside data, so we don't need
  another round of uglification of planner APIs. This is a little bit
  complicated, but since it's a hard-to-hit corner case, I'm not much
  worried about adding cycles here.

  Per report from Jaime Casanova. Back-patch to v12, like the previous
  commit.

  Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
* Clarify some errors in pg_receivewal when closing WAL segments (Michael Paquier, 2021-09-17)
  A WAL segment closed during a WAL stream for pg_receivewal would generate
  incorrect error messages depending on the context, as the file name used
  when referring to a WAL segment ignored partial files or the compression
  method used. In such cases, the error message generated (failure on close,
  seek or rename) would not match a physical file name. The same code paths
  are used by pg_basebackup, but it uses no partial suffix so it is not
  impacted.

  7fbe0c8 has introduced in walmethods.c a callback to get the exact
  physical file name used for a given context; this commit makes use of it
  to improve those error messages. This could be extended to more code paths
  of pg_basebackup/ in the future, if necessary.

  Extracted from a larger patch by the same author.

  Author: Georgios Kokolatos
  Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
* Disable test for postgres -C on Msys (Michael Paquier, 2021-09-17)
  The output generated on Msys is incorrect because of the different way
  IPC::Run processes outputs with native Perl (converts natively \r\n to \n)
  and Msys perl (\r\n kept as-is), causing this test to fail. For now, just
  disable the test to bring the buildfarm to a green state.

  I think that the correct long-term solution would be to tweak all the
  routines command_checks_* in PostgresNode.pm to handle this output like
  psql does when using Msys, by discarding \r automatically before comparing
  it.

  Per report from jacana and fairywren. Thanks to Tom Lane for the ping.

  Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
* Fix EXPLAIN to handle SEARCH BREADTH FIRST queries. (Tom Lane, 2021-09-16)
  The rewriter transformation for SEARCH BREADTH FIRST produces a
  FieldSelect on a Var of type RECORD, where the Var references the
  recursive union's worktable output. EXPLAIN VERBOSE failed to handle this
  case, because it only expected such Vars to appear in CteScans not
  WorkTableScans.

  Fix that, and add some test cases exercising EXPLAIN on SEARCH and CYCLE
  queries.

  In principle this oversight is an old bug, but it seems that the case is
  unreachable without SEARCH BREADTH FIRST, because the parser fails when
  attempting to create such a reference manually. So for today I'll just
  patch HEAD/v14. Someday we might find that the code portion of this patch
  needs to be back-patched further.

  Per report from Atsushi Torikoshi.

  Discussion: https://postgr.es/m/5bafa66ad529e11860339565c9e7c166@oss.nttdata.com
* Message style improvements (Peter Eisentraut, 2021-09-16)
* process startup: Do InitProcess() at the same time regardless of EXEC_BACKEND. (Andres Freund, 2021-09-16)
  An upcoming patch splits single user mode into its own function. This
  makes that easier. Split out for easier review / testing.

  Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
* Fix performance regression from session statistics. (Andres Freund, 2021-09-16)
  Session statistics, as introduced by 960869da08, had several shortcomings:

  - an additional GetCurrentTimestamp() call that also impaired the accuracy
    of the data collected

    This can be avoided by passing the current timestamp we already have in
    pgstat_report_stat().

  - an additional statistics UDP packet sent every 500ms

    This is solved by adding the new statistics to PgStat_MsgTabstat. This
    is conceptually ugly, because session statistics are not table
    statistics. But the struct already contains data unrelated to tables, so
    there is not much damage done.

    Connection and disconnection are reported in separate messages, which
    reduces the number of additional messages to two messages per session
    and a slight increase in PgStat_MsgTabstat size (but the same number of
    table stats fit).

  - Session time computation could overflow on systems where long is 32 bit.

  Reported-By: Andres Freund <andres@anarazel.de>
  Author: Andres Freund <andres@anarazel.de>
  Author: Laurenz Albe <laurenz.albe@cybertec.at>
  Discussion: https://postgr.es/m/20210801205501.nyxzxoelqoo4x2qc%40alap3.anarazel.de
  Backpatch: 14-, where the feature was introduced.
* Fix variable shadowing in procarray.c. (Fujii Masao, 2021-09-16)
  ProcArrayGroupClearXid function has a parameter named "proc", but the same
  name was used for its local variables. This commit fixes this variable
  shadowing, to improve code readability.

  Back-patch to all supported versions, to make future back-patching easy
  though this patch is classified as refactoring only.

  Reported-by: Ranier Vilela
  Author: Ranier Vilela, Aleksander Alekseev
  https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
* Use int instead of size_t in procarray.c. (Fujii Masao, 2021-09-16)
  All size_t variables declared in procarray.c are actually int ones. Let's
  use int instead of size_t for those variables, which reduces Wsign-compare
  compiler warnings.

  Back-patch to v14 where commit 941697c3c1 added size_t variables in
  procarray.c, to make future back-patching easy though this patch is
  classified as refactoring only.

  Reported-by: Ranier Vilela
  Author: Ranier Vilela, Aleksander Alekseev
  https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
* Support "postgres -C" with runtime-computed GUCsMichael Paquier2021-09-16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Until now, the -C option of postgres was handled before a small subset of GUCs computed at runtime are initialized, leading to incorrect results as GUC machinery would fall back to default values for such parameters. For example, data_checksums could report "off" for a cluster as the control file is not loaded yet. Or wal_segment_size would show a segment size at 16MB even if initdb --wal-segsize used something else. Worse, the command would fail to properly report the recently-introduced shared_memory, that requires to load shared_preload_libraries as these could ask for a chunk of shared memory. Support for runtime GUCs comes with a limitation, as the operation is now allowed on a running server. One notable reason for this is that _PG_init() functions of loadable libraries are called before all runtime-computed GUCs are initialized, and this is not guaranteed to be safe to do on running servers. For the case of shared_memory_size, where we want to know how much memory would be used without allocating it, this limitation is fine. Another case where this will help is for huge pages, with the introduction of a different GUC to evaluate the amount of huge pages required for a server before starting it, without having to allocate large chunks of memory. This feature is controlled with a new GUC flag, and four parameters are classified as runtime-computed as of this change: - data_checksums - shared_memory_size - data_directory_mode - wal_segment_size Some TAP tests are added to provide some coverage here, using data_checksums in the tests of pg_checksums. Per discussion with Andres Freund, Justin Pryzby, Magnus Hagander and more. Author: Nathan Bossart Discussion: https://postgr.es/m/F2772387-CE0F-46BF-B5F1-CC55516EB885@amazon.com
* process startup: Initialize PgStartTime earlier in single user mode. (Andres Freund, 2021-09-15)
  An upcoming patch splits single user mode handling out of PostgresMain().
  The startup time only needs to be determined in single user mode.
  Currently the initialization happens late, which makes the split a bit
  harder. As postmaster determines the time earlier it makes sense to move
  the time for single user mode to a roughly similar point in time.

  Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Author: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
* Remove arbitrary 64K-or-so limit on rangetable size. (Tom Lane, 2021-09-15)
  Up to now the size of a query's rangetable has been limited by the
  constants INNER_VAR et al, which mustn't be equal to any real rangetable
  index. 65000 doubtless seemed like enough for anybody, and it still is
  orders of magnitude larger than the number of joins we can realistically
  handle. However, we need a rangetable entry for each child partition that
  is (or might be) processed by a query. Queries with a few thousand
  partitions are getting more realistic, so that the day when that limit
  becomes a problem is in sight, even if it's not here yet. Hence, let's
  raise the limit.

  Rather than just increase the values of INNER_VAR et al, this patch adopts
  the approach of making them small negative values, so that rangetables
  could theoretically become as long as INT_MAX.

  The bulk of the patch is concerned with changing Var.varno and some
  related variables from "Index" (unsigned int) to plain "int". This is
  basically cosmetic, with little actual effect other than to help debuggers
  print their values nicely. As such, I've only bothered with changing
  places that could actually see INNER_VAR et al, which the parser and most
  of the planner don't. We do have to be careful in places that are
  performing less/greater comparisons on varnos, but there are very few such
  places, other than the IS_SPECIAL_VARNO macro itself.

  A notable side effect of this patch is that while it used to be possible
  to add INNER_VAR et al to a Bitmapset, that will now draw an error. I
  don't see any likelihood that it wouldn't be a bug to include these fake
  varnos in a bitmapset of real varnos, so I think this is all to the good.

  Although this touches outfuncs/readfuncs, I don't think a catversion bump
  is required, since stored rules would never contain Vars with these fake
  varnos.

  Andrey Lepikhov and Tom Lane, after a suggestion by Peter Eisentraut

  Discussion: https://postgr.es/m/43c7f2f5-1e27-27aa-8c65-c91859d15190@postgrespro.ru
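  Schematically, the change to the sentinel varnos looks like this (a sketch
  of the idea; see primnodes.h for the authoritative definitions):

      /* Old scheme: sentinels had to stay above any plausible rangetable
       * index, e.g. INNER_VAR was 65000. */

      /* New scheme: real varnos are positive, sentinels are negative, so
       * the rangetable can in principle grow to INT_MAX entries. */
      #define INNER_VAR   (-1)    /* reference to an inner subplan's output */
      #define OUTER_VAR   (-2)    /* reference to an outer subplan's output */
      #define INDEX_VAR   (-3)    /* reference to an index column */

      #define IS_SPECIAL_VARNO(varno)     ((int) (varno) < 0)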
* Add Cardinality typedef (Peter Eisentraut, 2021-09-15)
  Similar to Cost and Selectivity, this is just a double, which can be used
  in path and plan nodes to give some hint about the meaning of a field.

  Discussion: https://www.postgresql.org/message-id/c091e5cd-45f8-69ee-6a9b-de86912cc7e7@enterprisedb.com
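  In the same spirit as the existing Cost and Selectivity typedefs, this is
  documentation at the type level; a short sketch (the struct below is
  hypothetical, purely for illustration):

      typedef double Cost;            /* execution cost */
      typedef double Selectivity;     /* fraction of tuples a qual will pass */
      typedef double Cardinality;     /* (estimated) number of rows */

      /* A field's meaning is now visible from its type alone: */
      typedef struct SamplePathInfo
      {
          Cardinality rows;           /* estimated row count */
          Cost        total_cost;     /* estimated total cost */
          Selectivity sel;            /* estimated qual selectivity */
      } SamplePathInfo;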
* Disallow LISTEN in background workers. (Tom Lane, 2021-09-15)
  It's possible to execute user-defined SQL in some background processes;
  for example, logical replication workers can fire triggers. This opens the
  possibility that someone would try to execute LISTEN in such a context.
  But since only regular backends ever call ProcessNotifyInterrupt, no
  messages would actually be received, and thus the registered listener
  would simply prevent the message queue from being cleaned. Eventually
  NOTIFY would stop working, which is bad.

  Perhaps someday somebody will invent infrastructure to make listening in a
  background worker actually useful. In the meantime, forbid it.

  Back-patch to v13, which is where we introduced the MyBackendType
  variable. It'd be a lot harder to implement the check without that, and it
  doesn't seem worth the trouble.

  Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
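  The guard itself is conceptually a few lines; a hedged sketch of the
  check, with illustrative error wording (the real message and placement may
  differ):

      /* Only regular backends ever run ProcessNotifyInterrupt(), so a
       * listener registered by any other process type would just pin the
       * notification queue forever. */
      if (MyBackendType != B_BACKEND)
          ereport(ERROR,
                  (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                   errmsg("LISTEN is not supported in this kind of process")));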
* Make node output prefix match node structure name (Peter Eisentraut, 2021-09-15)
  In most cases, the prefix string in a node output is the upper case of the
  node structure name, e.g., MergeAppend -> MERGEAPPEND. There were a few
  exceptions that for either no apparent reason or perhaps minor aesthetic
  reasons deviated from this. In order to simplify this and perhaps allow
  automatic generation without having to deal with exception cases, make
  them all match.

  Discussion: https://www.postgresql.org/message-id/c091e5cd-45f8-69ee-6a9b-de86912cc7e7@enterprisedb.com
* Fix hash_array (Peter Eisentraut, 2021-09-15)
  Commit a3d2b1bbe904b0ca8d9fdde20f25295ff3e21f79 neglected to initialize
  the type_id field of the synthesized type cache entry, so it would make a
  new one on every call.

  Also, better use the per-function memory context for this; otherwise it
  leaks memory.

  Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
* Fix incorrect format placeholders (Peter Eisentraut, 2021-09-15)
  Also remove obsolete comments about why the 64-bit integers need to be
  printed in a separate buffer. The reason used to be portability, but now
  the remaining reason is that we need the string lengths for the progress
  displays. That is evident by looking at the code right below, so a new
  comment doesn't seem necessary.
* Update Unicode data to Unicode 14.0.0 (Peter Eisentraut, 2021-09-15)
* Update README for resource owners about the resource types supported (Michael Paquier, 2021-09-15)
  All the types supported were listed directly in the README, but it was
  very outdated. Rather than listing all the types supported in the README,
  this commit adds a reference to look at ResourceOwnerData in resowner.c to
  get this information. The order of the paragraphs is reworked a bit for
  clarity.

  Author: Amit Langote
  Discussion: https://postgr.es/m/CA+HiwqHtfT9z=4H5+F7DOy0OyNHAaVwuRcakt9b2t2uADOaiag@mail.gmail.com
* Improve log messages from pg_import_system_collations(). (Tom Lane, 2021-09-14)
  pg_import_system_collations() was a bit inconsistent about how it reported
  locales (names output by "locale -a") that it didn't make pg_collation
  entries for. IMV we should print some suitable message for every locale
  that we reject, except when it matches a pre-existing pg_collation entry.
  (This is all at DEBUG1 log level, though, so as not to create noise during
  initdb.)

  Add messages for the two cases that were previously not logged, namely
  unrecognized encoding and client-only encoding. Re-word the existing
  messages to have a consistent style.

  Anton Voloshin and Tom Lane

  Discussion: https://postgr.es/m/429d64ee-188d-3ce1-106a-53a8b45c4fce@postgrespro.ru
* Send NOTIFY signals during CommitTransaction. (Tom Lane, 2021-09-14)
  Formerly, we sent signals for outgoing NOTIFY messages within
  ProcessCompletedNotifies, which was also responsible for sending relevant
  ones of those messages to our connected client. It therefore had to run
  during the main-loop processing that occurs just before going idle. This
  arrangement had two big disadvantages:

  * Now that procedures allow intra-command COMMITs, it would be useful to
    send NOTIFYs to other sessions immediately at COMMIT (though, for
    reasons of wire-protocol stability, we still shouldn't forward them to
    our client until end of command).

  * Background processes such as replication workers would not send NOTIFYs
    at all, since they never execute the client communication loop. We've
    had requests to allow triggers running in replication workers to send
    NOTIFYs, so that's a problem.

  To fix these things, move transmission of outgoing NOTIFY signals into
  AtCommit_Notify, where it will happen during CommitTransaction. Also move
  the possible call of asyncQueueAdvanceTail there, to ensure we don't bloat
  the async SLRU if a background worker sends many NOTIFYs with no one
  listening.

  We can also drop the call of asyncQueueReadAllNotifications, allowing
  ProcessCompletedNotifies to go away entirely. That's because commit
  790026972 added a call of ProcessNotifyInterrupt adjacent to
  PostgresMain's call of ProcessCompletedNotifies, and that does its own
  call of asyncQueueReadAllNotifications, meaning that we were uselessly
  doing two such calls (inside two separate transactions) whenever inbound
  notify signals coincided with an outbound notify. We need only set
  notifyInterruptPending to ensure that ProcessNotifyInterrupt runs, and
  we're done.

  The existing documentation suggests that custom background workers should
  call ProcessCompletedNotifies if they want to send NOTIFY messages. To
  avoid an ABI break in the back branches, reduce it to an empty routine
  rather than removing it entirely. Removal will occur in v15.

  Although the problems mentioned above have existed for awhile, I don't
  feel comfortable back-patching this any further than v13. There was quite
  a bit of churn in adjacent code between 12 and 13. At minimum we'd have to
  also backpatch 51004c717, and a good deal of other adjustment would also
  be needed, so the benefit-to-risk ratio doesn't look attractive.

  Per bug #15293 from Michael Powers (and similar gripes from others).

  Artur Zakirov and Tom Lane

  Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
* Fix planner error with multiple copies of an AlternativeSubPlan. (Tom Lane, 2021-09-14)
  It's possible for us to copy an AlternativeSubPlan expression node into
  multiple places, for example the scan quals of several partition children.
  Then it's possible that we choose a different one of the alternatives as
  optimal in each place. Commit 41efb8340 failed to consider this scenario,
  so its attempt to remove "unused" subplans could remove subplans that were
  still used elsewhere.

  Fix by delaying the removal logic until we've examined all the
  AlternativeSubPlans in a given query level. (This does assume that
  AlternativeSubPlans couldn't get copied to other query levels, but for the
  foreseeable future that's fine; cf qual_is_pushdown_safe.)

  Per report from Rajkumar Raghuwanshi. Back-patch to v14 where the faulty
  logic came in.

  Discussion: https://postgr.es/m/CAKcux6==O3NNZC3bZ2prRYv3cjm3_Zw1GfzmOjEVqYN4jub2+Q@mail.gmail.com
* Add WRITE_INDEX_ARRAY (Peter Eisentraut, 2021-09-14)
  We have a few WRITE_{name of type}_ARRAY macros, but the one case using
  the Index type was hand-coded. Wrap it into a macro as well.

  This also changes the behavior slightly: Before, the field name was
  skipped if the length was zero. Now it prints the field name even in that
  case. This is more consistent with how other array fields are handled.

  Reviewed-by: Jacob Champion <pchampion@vmware.com>
  Discussion: https://www.postgresql.org/message-id/c091e5cd-45f8-69ee-6a9b-de86912cc7e7@enterprisedb.com
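  A hedged sketch of what such an outfuncs.c macro looks like (not
  necessarily the committed definition):

      /* Write an array of Index values as " :fldname n1 n2 ...". */
      #define WRITE_INDEX_ARRAY(fldname, len) \
          do { \
              appendStringInfoString(str, " :" CppAsString(fldname) " "); \
              for (int i = 0; i < len; i++) \
                  appendStringInfo(str, " %u", node->fldname[i]); \
          } while (0)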
* Add COPY_ARRAY_FIELD and COMPARE_ARRAY_FIELD (Peter Eisentraut, 2021-09-14)
  These handle node fields that are inline arrays (as opposed to dynamically
  allocated arrays handled by COPY_POINTER_FIELD and COMPARE_POINTER_FIELD).
  These cases were hand-coded until now.

  Reviewed-by: Jacob Champion <pchampion@vmware.com>
  Discussion: https://www.postgresql.org/message-id/c091e5cd-45f8-69ee-6a9b-de86912cc7e7@enterprisedb.com
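  By analogy with COPY_POINTER_FIELD, a sketch of the shape such macros take
  for inline (fixed-size) array members (see copyfuncs.c and equalfuncs.c
  for the real definitions):

      /* The array is part of the struct itself, so sizeof() the member
       * gives the number of bytes to copy or compare. */
      #define COPY_ARRAY_FIELD(fldname) \
          memcpy(newnode->fldname, from->fldname, sizeof(newnode->fldname))

      #define COMPARE_ARRAY_FIELD(fldname) \
          do { \
              if (memcmp(a->fldname, b->fldname, sizeof(a->fldname)) != 0) \
                  return false; \
          } while (0)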
* Remove T_Expr (Peter Eisentraut, 2021-09-14)
  This is an abstract node that shouldn't have a node tag defined.

  Reviewed-by: Jacob Champion <pchampion@vmware.com>
  Discussion: https://www.postgresql.org/message-id/c091e5cd-45f8-69ee-6a9b-de86912cc7e7@enterprisedb.com
* jit: Do not try to shut down LLVM state in case of LLVM triggered errors. (Andres Freund, 2021-09-13)
  If an allocation failed within LLVM it is not safe to call back into LLVM
  as LLVM is not generally safe against exceptions / stack-unwinding. Thus
  errors while in LLVM code are promoted to FATAL. However llvm_shutdown()
  did call back into LLVM even in such cases, while llvm_release_context()
  was careful not to do so.

  We cannot generally skip shutting down LLVM, as that can break profiling.
  But it's OK to do so if there was an error from within LLVM.

  Reported-By: Jelte Fennema <Jelte.Fennema@microsoft.com>
  Author: Andres Freund <andres@anarazel.de>
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/AM5PR83MB0178C52CCA0A8DEA0207DC14F7FF9@AM5PR83MB0178.EURPRD83.prod.outlook.com
  Backpatch: 11-, where jit was introduced
* Remove code duplication for permission checks with replication slots (Michael Paquier, 2021-09-14)
  Two functions, both named check_permissions(), used the same checks to
  verify if a user had required privileges to work on replication slots.
  This commit removes the duplication, and moves the function doing the
  checks to slot.c to be centralized.

  Author: Bharath Rupireddy
  Reviewed-by: Nathan Bossart, Euler Taveira
  Discussion: https://postgr.es/m/CALj2ACUPpVw1u7sQocFVWrSs0n10pt_G_4NPZKSxXK6cW1dErw@mail.gmail.com
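  The centralized check boils down to a few lines; a hedged sketch (the
  function name and message wording here are illustrative, not quoted from
  the commit):

      /* Require superuser or a role with REPLICATION to work on slots. */
      static void
      CheckSlotPermissions(void)
      {
          if (!superuser() && !has_rolreplication(GetUserId()))
              ereport(ERROR,
                      (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                       errmsg("must be superuser or replication role to use replication slots")));
      }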
* Clear conn->errorMessage at successful completion of PQconnectdb(). (Tom Lane, 2021-09-13)
  Commits ffa2e4670 and 52a10224e caused libpq's connection-establishment
  functions to usually leave a nonempty string in the connection's
  errorMessage buffer, even after a successful connection. While that was
  intentional on my part, more sober reflection says that it wasn't a great
  idea: the string would be a bit confusing. Also this broke at least one
  application that checked for connection success by examining the
  errorMessage, instead of using PQstatus() as documented.

  Let's clear the buffer at success exit, restoring the pre-v14 behavior.

  Discussion: https://postgr.es/m/4170264.1620321747@sss.pgh.pa.us
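  As the message notes, the documented way for an application to test for
  success is PQstatus(), not peeking at the error message; a short libpq
  usage sketch:

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn *conn = PQconnectdb("dbname=postgres");

          if (PQstatus(conn) != CONNECTION_OK)    /* the documented check */
          {
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
              PQfinish(conn);
              return 1;
          }

          /* ... use the connection ... */
          PQfinish(conn);
          return 0;
      }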
* Fix EXIT out of outermost block in plpgsql. (Tom Lane, 2021-09-13)
  Ordinarily, using EXIT this way would draw "control reached end of
  function without RETURN". However, if the function is one where we don't
  require an explicit RETURN (such as a DO block), that should not happen.
  It did anyway, because add_dummy_return() neglected to account for the
  case.

  Per report from Herwig Goemans. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/868ae948-e3ca-c7ec-95a6-83cfc08ef750@gmail.com