path: root/src
...
* Fix typo in connection limits test (Daniel Gustafsson, 2024-10-11)
  Spotted while doing post-commit review.
* Use deconstruct_array_builtin instead of deconstruct_array (Álvaro Herrera, 2024-10-11)
  Commit 062a84442424 introduced use of deconstruct_array when deconstruct_array_builtin can be used instead. Do that to save some code.

  Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
  Discussion: https://postgr.es/m/Zwi5g2GzlUX1NqxR@ip-10-97-1-34.eu-west-3.compute.internal
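
  For context, a minimal sketch of the simplification this enables, assuming a text[] input (illustrative call sites, not the actual hunk from 062a84442424):

      Datum      *elems;
      bool       *nulls;
      int         nelems;

      /* Before: the caller must spell out text's typlen/typbyval/typalign */
      deconstruct_array(arr, TEXTOID, -1, false, TYPALIGN_INT,
                        &elems, &nulls, &nelems);

      /* After: the _builtin variant looks those up for known built-in types */
      deconstruct_array_builtin(arr, TEXTOID, &elems, &nulls, &nelems);
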
* pgbench: Improve result outputs related to failed transactions. (Tatsuo Ishii, 2024-10-11)
  Previously, per-script statistics were never output when all transactions failed due to serialization or deadlock errors. However, it is reasonable to report such information even when there are no successful transactions, since failed transactions are now part of what gets reported.

  Meanwhile, if the total number of successful, skipped, and failed transactions is zero, the number of failed transactions need not be reported (just like the number of skipped transactions), which avoids printing "NaN%" in the lines reporting failed transactions.

  Also, the number of transactions in per-script results now includes skipped and failed transactions. This prevents printing "total of NaN%" when no transactions are successfully processed. The number of transactions actually processed per script, and the TPS based on it, are now output explicitly on a separate line.

  Author: Yugo Nagata
  Reviewed-by: Tatsuo Ishii
  Discussion: https://postgr.es/m/20240921003544.2436ef8da9c5c8cb963c651b%40sraoss.co.jp
* Adjust EXPLAIN's output for disabled nodes (David Rowley, 2024-10-11)
  c01743aa4 added EXPLAIN output to display the plan node's disabled_nodes count whenever that count is above 0. Seemingly, there weren't many people who liked that output, as each parent of a disabled node would also have a "Disabled Nodes" output due to the way disabled_nodes is accumulated towards the root plan node. It was often hard and sometimes impossible to figure out which nodes were disabled from looking at EXPLAIN. You might think it would be possible to manually add up the numbers from the "Disabled Nodes" output of a given node's children to figure out if that node has a higher disabled_nodes count than its children, but that wouldn't have worked for Append and Merge Append nodes if some disabled child nodes were run-time pruned during init plan. Those children are not displayed in EXPLAIN.

  Here we attempt to improve this output by showing "Disabled: true" against only the nodes which are explicitly disabled themselves. That seems to be the output desired by most of the people who voiced their opinion. This is done by summing up the disabled_nodes of the given node's children and checking if that number is less than the disabled_nodes of the current node.

  This commit also fixes a bug in make_sort() which was neglecting to set the Sort's disabled_nodes field. This should have copied what was done in cost_sort(), but it hadn't been updated. With the new output, the choice to not maintain that field properly was clearly wrong, as the disabled-ness of the node was attributed to the Sort's parent instead.

  Reviewed-by: Laurenz Albe, Alena Rybakina
  Discussion: https://postgr.es/m/9e4ad616bebb103ec2084bf6f724cfc739e7fabb.camel@cybertec.at
* Don't hard-code the input file name in gen_tabcomplete.pl's output. (Tom Lane, 2024-10-10)
  Use $ARGV[0], that is the specified input file name, in #line directives generated by gen_tabcomplete.pl. This makes code coverage reports work properly in the meson build system (where the input file name will be a relative path).

  Also fix up brain fade in the meson build rule for tab-complete.c: we only need to write the input file name once not twice.

  Jacob Champion (some cosmetic adjustments by me)

  Discussion: https://postgr.es/m/CAOYmi+=+oWAoi8pqnH0MJQqsSn4ddzqDhqRQJvyiN2aJSWvw2w@mail.gmail.com
* Avoid possible segfault in psql's tab completion. (Tom Lane, 2024-10-10)
  Fix oversight in bd1276a3c: the "words_after_create" stanza in psql_completion() requires previous_words_count > 0, since it uses prev_wd. This condition was formerly assured by the if-else chain above it, but no more. If there were no previous words then we'd dereference an uninitialized pointer, possibly causing a segfault.

  Report and patch by Anthonin Bonnefoy.

  Discussion: https://postgr.es/m/CAO6_XqrSRE7c_i+D7Hm07K3+6S0jTAmMr60RY41XzaA29Ae5uA@mail.gmail.com
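
  A minimal sketch of the kind of guard described here (the completion condition is a placeholder, not the actual bd1276a3c code):

      /* Only run the stanza when there is a previous word; prev_wd is
         uninitialized otherwise. */
      if (previous_words_count > 0 && TailMatches("CREATE", MatchAny))
      {
          /* ... completions that inspect prev_wd ... */
      }
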
* Unbreak overflow test for attinhcount/coninhcount (Álvaro Herrera, 2024-10-10)
  Commit 90189eefc1e1 narrowed pg_attribute.attinhcount and pg_constraint.coninhcount from 32 to 16 bits, but kept other related structs with 32-bit wide fields: ColumnDef and CookedConstraint contain an int 'inhcount' field which is itself checked for overflow on increments, but there's no check that the values aren't above INT16_MAX before assigning to the catalog columns. This means that a creative user can get an inconsistent table definition and override some protections. Fix it by changing those other structs to also use int16.

  Also, modernize style by using pg_add_s16_overflow for overflow testing instead of checking for negative values.

  We also have Constraint.inhcount, which is here removed completely. This was added by commit b0e96f311985 and not removed by its revert at 6f8bb7c1e961. It is not needed by the upcoming not-null constraints patch.

  This is mostly academic, so we agreed not to backpatch to avoid ABI problems. Bump catversion because of the changes to parse nodes.

  Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Co-authored-by: 何建 (jian he) <jian.universality@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/202410081611.up4iyofb5ie7@alvherre.pgsql
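
  For context, a hedged sketch of the overflow-checked increment style this switches to (the variable name is made up; pg_add_s16_overflow() is the real helper from src/include/common/int.h):

      #include "common/int.h"

      int16       newcount;

      /* Reports overflow via the return value instead of relying on a
         wrapped-negative check afterwards. */
      if (pg_add_s16_overflow(coldef_inhcount, 1, &newcount))
          ereport(ERROR,
                  (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                   errmsg("too many inheritance parents")));
      coldef_inhcount = newcount;
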
* Improve descriptions of some pg_stat_checkpointer functions in pg_proc.dat. (Fujii Masao, 2024-10-11)
  Previously, the descriptions of pg_stat_get_checkpointer_num_requested(), pg_stat_get_checkpointer_restartpoints_requested(), and pg_stat_get_checkpointer_restartpoints_performed() in pg_proc.dat referred to "backend". This was misleading because these functions also report the number of checkpoints or restartpoints requested or performed by processes other than backends. This commit removes "backend" from these descriptions to avoid confusion.

  Bump catalog version.

  Idea from Anton A. Melnikov
  Author: Fujii Masao
  Reviewed-by: Anton A. Melnikov
  Discussion: https://postgr.es/m/8e5f353f-8b31-4a8e-9cfa-c037f22b4aee@postgrespro.ru
* Avoid crash in estimate_array_length with null root pointer. (Tom Lane, 2024-10-09)
  Commit 9391f7152 added a "PlannerInfo *root" parameter to estimate_array_length, but failed to consider the possibility that NULL would be passed for that, leading to a null pointer dereference.

  We could rectify the particular case shown in the bug report by fixing simplify_function/inline_function to pass through the root pointer. However, as long as eval_const_expressions is documented to accept NULL for root, similar hazards would remain. For now, let's just do the narrow fix of hardening estimate_array_length to not crash. Its behavior with NULL root will be the same as it was before 9391f7152, so this is not too awful.

  Per report from Fredrik Widlert (via Paul Ramsey). Back-patch to v17 where 9391f7152 came in.

  Discussion: https://postgr.es/m/518339E7-173E-45EC-A0FF-9A4A62AA4F40@cleverelephant.ca
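
  A rough sketch of what such hardening typically looks like (hypothetical shape and default value; not the actual selfuncs.c code):

      /* Tolerate being called without a PlannerInfo. */
      if (root == NULL)
          return 10;          /* fall back to a fixed default guess */

      /* ... otherwise consult statistics reachable through root ... */
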
* Apply GUC name from central table in more places of guc.c (Michael Paquier, 2024-10-09)
  The name extracted from the record of the GUC tables is applied to more internal places of guc.c. This change has the advantage of simplifying parse_and_validate_value(), where the "name" was only used in elog messages while being required to match the name from the GUC record.

  pg_parameter_aclcheck() now passes the name of the GUC from its record in two places rather than the caller's argument. The value given to this function goes through convert_GUC_name_for_parameter_acl() that does a simple ASCII downcasing.

  A few GUCs mix character casing in core; one test is added for one of these code paths with "IntervalStyle".

  Author: Peter Smith, Michael Paquier
  Discussion: https://postgr.es/m/ZwNh4vkc2NHJHnND@paquier.xyz
* Allow pushdown of HAVING clauses with grouping sets (Richard Guo, 2024-10-09)
  In some cases, we may want to transfer a HAVING clause into WHERE in hopes of eliminating tuples before aggregation instead of after.

  Previously, we couldn't do this if there were any nonempty grouping sets, because we didn't have a way to tell if the HAVING clause referenced any columns that were nullable by the grouping sets, and moving such a clause into WHERE could potentially change the results.

  Now, with expressions marked nullable by grouping sets with the RT index of the RTE_GROUP RTE, it is much easier to identify those clauses that reference any nullable-by-grouping-sets columns: we just need to check if the RT index of the RTE_GROUP RTE is present in the clause. For other HAVING clauses, they can be safely pushed down.

  Author: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs4-NpzPgtKU=hgnvyn+J-GanxQCjrUi7piNzZ=upiCV=2Q@mail.gmail.com
* Consider explicit incremental sort for mergejoins (Richard Guo, 2024-10-09)
  For a mergejoin, if the given outer path or inner path is not already well enough ordered, we need to do an explicit sort. Currently, we only consider explicit full sort and do not account for incremental sort.

  In this patch, for the outer path of a mergejoin, we choose to use explicit incremental sort if it is enabled and there are presorted keys. For the inner path, though, we cannot use incremental sort because it does not support mark/restore at present.

  The rationale is based on the assumption that incremental sort is always faster than full sort when there are presorted keys, a premise that has been applied in various parts of the code. In addition, the current cost model tends to favor incremental sort as being cheaper than full sort in the presence of presorted keys, making it reasonable not to consider full sort in such cases.

  One might ask what happens if a mergejoin with an incremental sort as the outer path is selected as the inner path of another mergejoin. This should not be a problem, because mergejoin itself does not support mark/restore either, and we will add a Material node on top of it anyway in this case (see final_cost_mergejoin).

  There is one ensuing plan change in the regression tests, and we have to modify that test case to ensure that it continues to test what it is intended to.

  No backpatch as this could result in plan changes.

  Author: Richard Guo
  Reviewed-by: David Rowley, Tomas Vondra
  Discussion: https://postgr.es/m/CAMbWs49x425QrX7h=Ux05WEnt8GS757H-jOP3_xsX5t1FoUsZw@mail.gmail.com
* Remove incorrect function import from pgindent (Daniel Gustafsson, 2024-10-09)
  Commit 149ac7d4559, which re-implemented pgindent in Perl, explicitly imported the devnull function from File::Spec, but the module does not export anything. In recent versions of Perl, calling a missing import function causes a warning, which combined with warnings being fatal causes pgindent to error out.

  Backpatch to all supported versions.

  Author: Erik Wienhold <ewie@ewie.name>
  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/2372cd74-11b0-46f9-b28e-8f9627215d19@ewie.name
  Backpatch-through: v12
* Allow roles created by new test to log in under SSPI. (Tom Lane, 2024-10-08)
  Semi-blind attempt to fix 6a1d0d470 to work on Windows, along the same lines as a70f2a57f. Per buildfarm.
* Introduce two fields in EState to track parallel worker activity (Michael Paquier, 2024-10-09)
  These fields can be set by executor nodes to record how many parallel workers were planned to be launched and how many of them have actually been launched within the number initially planned. This data can give an approximation of the parallel worker drought a system is facing, making it easier to tune the related configuration parameters.

  These fields will be used by some follow-up patches to populate other parts of the system with their data.

  Author: Guillaume Lelarge, Benoit Lobréau
  Discussion: https://postgr.es/m/783bc7f7-659a-42fa-99dd-ee0565644e25@dalibo.com
  Discussion: https://postgr.es/m/CAECtzeWtTGOK0UgKXdDGpfTVSa5bd_VbUt6K6xn8P7X+_dZqKw@mail.gmail.com
* Silence assorted annoying test output. (Tom Lane, 2024-10-08)
  Remove unnecessary chatter about "checking if IO::Socket::UNIX works"; our tests should never print anything on stderr unless there's a problem.

  Add .gitignore entry for temporary directory now being left behind in src/test/postmaster.
* Add min and max aggregates for bytea type. (Tom Lane, 2024-10-08)
  Similar to a0f1fce80, although we chose to duplicate logic rather than invoke byteacmp, primarily to avoid repeat detoasting.

  Marat Buharov, Aleksander Alekseev

  Discussion: https://postgr.es/m/CAPCEVGXiASjodos4P8pgyV7ixfVn-ZgG9YyiRZRbVqbGmfuDyg@mail.gmail.com
* Use aux process resource owner in walsender (Andres Freund, 2024-10-08)
  AIO will need a resource owner to do IO. Right now we create a resowner on-demand during basebackup, and we could do the same for AIO. But it seems easier to just always create an aux process resowner.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
* bufmgr/smgr: Don't cross segment boundaries in StartReadBuffers() (Andres Freund, 2024-10-08)
  With real AIO it doesn't make sense to cross segment boundaries with one IO. Add smgrmaxcombine() to allow upper layers to query which buffers can be merged.

  We could continue to cross segment boundaries when not using AIO, but it doesn't really make sense, because md.c will never be able to perform the read across the segment boundary in one system call. That means we'd mark more buffers as undergoing IO than really makes sense: if another backend desires to read the same blocks, it'll be blocked longer than necessary. So it seems better to just never cross the boundary.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
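
  To illustrate the boundary restriction itself, a standalone sketch of the arithmetic (the helper name is made up; smgrmaxcombine() is the real API mentioned above):

      /* How many blocks can one combined read cover, starting at start_block,
         without crossing a boundary between segments of seg_blocks blocks? */
      static int
      blocks_until_segment_boundary(unsigned int start_block, int requested,
                                    unsigned int seg_blocks)
      {
          unsigned int remaining = seg_blocks - (start_block % seg_blocks);

          return (requested < (int) remaining) ? requested : (int) remaining;
      }
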
* bufmgr: Return early in ScheduleBufferTagForWriteback() if fsync=off (Andres Freund, 2024-10-08)
  As pg_flush_data() doesn't do anything with fsync disabled, there's no point in tracking the buffer for writeback.

  Arguably the better fix would be to change pg_flush_data() to flush data even with fsync off, but that's a behavioral change, whereas this is just a small optimization.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/1f6b50a7-38ef-4d87-8246-786d39f46ab9@iki.fi
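
  Conceptually the change is an early-return guard of this shape (a sketch, not the actual bufmgr.c hunk; enableFsync is the existing backing variable for the fsync GUC):

      static void
      schedule_writeback_sketch(void)     /* the real function takes a WritebackContext and a BufferTag */
      {
          if (!enableFsync)
              return;     /* pg_flush_data() would be a no-op, so don't track the buffer */

          /* ... otherwise record the tag for a later combined writeback ... */
      }
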
* Silence buildfarm warning chatter from bd1276a3c. (Tom Lane, 2024-10-08)
  Buildfarm members using -Wextra complained about "warning: suggest braces around empty body in an 'if' statement". Do it gcc's way, though I see no actual readability benefit in this.
* Fix typo and run pgperltidy on newly-added test (Heikki Linnakangas, 2024-10-08)
  From commit 85ec945b78.
* Use a shmem_exit callback to remove backend from PMChildFlags on exit (Heikki Linnakangas, 2024-10-08)
  This seems nicer than having to duplicate the logic between InitProcess() and ProcKill() for which child processes have a PMChildFlags slot.

  Move the MarkPostmasterChildActive() call earlier in InitProcess(), out of the section protected by the spinlock.

  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
* Add test for dead-end backends (Heikki Linnakangas, 2024-10-08)
  The code path for launching a dead-end backend because we're out of slots was not covered by any tests, so add one. (Some tests did hit the case of launching a dead-end backend because the server is still starting up, though, so the gap in our test coverage wasn't as big as it sounds.)

  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
* Add test for connection limits (Heikki Linnakangas, 2024-10-08)
  Reviewed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
* Move check for binary mode and on_error option to the appropriate location. (Fujii Masao, 2024-10-08)
  Commit 9e2d870119 placed the check for binary mode and on_error before default values were inserted, which was not ideal. This commit moves the check to a more appropriate position after default values are set.

  Additionally, the comment incorrectly mentioned two checks before inserting defaults, when there are actually three. This commit corrects that comment.

  Author: Atsushi Torikoshi
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/8830518a-28ac-43a2-8a11-1676d9a3cdf8@oss.nttdata.com
* Add REJECT_LIMIT option to the COPY command. (Fujii Masao, 2024-10-08)
  Previously, when ON_ERROR was set to 'ignore', the COPY command would skip all rows with data type conversion errors, with no way to limit the number of skipped rows before failing. This commit introduces the REJECT_LIMIT option, allowing users to specify the maximum number of erroneous rows that can be skipped.

  If more rows encounter data type conversion errors than allowed by REJECT_LIMIT, the COPY command will fail with an error, even when ON_ERROR = 'ignore'.

  Author: Atsushi Torikoshi
  Reviewed-by: Junwang Zhao, Kirill Reshke, jian he, Fujii Masao
  Discussion: https://postgr.es/m/63f99327aa6b404cc951217fa3e61fe4@oss.nttdata.com
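
  A hedged usage sketch via libpq (table, file path, and limit are made up; the option spelling follows the description above):

      #include <libpq-fe.h>

      PGconn     *conn = PQconnectdb("dbname=postgres");     /* illustrative connection string */

      /* Skip up to 10 rows with data type conversion errors, then fail. */
      PGresult   *res = PQexec(conn,
                               "COPY my_table FROM '/tmp/data.csv' "
                               "WITH (FORMAT csv, ON_ERROR ignore, REJECT_LIMIT 10)");
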
* Improve style of two code paths (Michael Paquier, 2024-10-08)
  In execGrouping.c, execTuplesMatchPrepare() was doing a memory allocation that was not necessary when the number of columns was 0. In foreign.c, pg_options_to_table() was assigning the same value to a variable twice.

  Author: Ranier Vilela
  Discussion: https://postgr.es/m/CAEudQAqup0agbSzMjSLSTn=OANyCzxENF1+HrSYnr3WyZib7=Q@mail.gmail.com
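
  The execGrouping.c half amounts to a guard of this shape (a sketch with assumed variable names, not the actual diff):

      /* Don't allocate a zero-length equality-function array. */
      Oid        *eqFunctions = NULL;

      if (numCols > 0)
          eqFunctions = (Oid *) palloc(numCols * sizeof(Oid));
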
* Fix search_path cache initialization. (Jeff Davis, 2024-10-07)
  The cache needs to be available very early, so don't rely on InitializeSearchPath() to initialize it.

  Reported-by: Murat Efendioğlu
  Discussion: https://postgr.es/m/CACbCzujQ4zS8MM1bx-==+tr+D3Hk5G1cjN4XkUQ+Q=cEpwhzqg@mail.gmail.com
  Backpatch-through: 17
* Fix test for password hash length limit. (Nathan Bossart, 2024-10-07)
  In commit 8275325a06, I forgot to update password_1.out (an alternative expected test output file added by commit 3c44e7d8d4), so this test began failing on machines with FIPS mode enabled.
* vacuumdb: Schema-qualify operator in catalog query's WHERE clause. (Nathan Bossart, 2024-10-07)
  Commit 1ab67c9dfa, which modified this catalog query so that it doesn't return temporary relations, forgot to schema-qualify the operator. A comment earlier in the function implores us to fully qualify everything in the query:

      * Since we execute the constructed query with the default search_path
      * (which could be unsafe), everything in this query MUST be fully
      * qualified.

  This commit fixes that. While at it, add a newline for consistency with surrounding code.

  Reviewed-by: Noah Misch
  Discussion: https://postgr.es/m/ZwQJYcuPPUsF0reU%40nathan
  Backpatch-through: 12
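
  For reference, schema-qualifying an operator in a query string looks like this (an illustrative appendPQExpBufferStr() call with a made-up predicate, not the actual vacuumdb query):

      /* OPERATOR(pg_catalog.!~) pins the operator to pg_catalog regardless of
         the active search_path. */
      appendPQExpBufferStr(&catalog_query,
                           " AND ns.nspname OPERATOR(pg_catalog.!~) '^pg_temp_'\n");
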
* Fix Y2038 issues with MyStartTime. (Nathan Bossart, 2024-10-07)
  Several places treat MyStartTime as a "long", which is only 32 bits wide on some platforms. In reality, MyStartTime is a pg_time_t, i.e., a signed 64-bit integer. This will lead to interesting bugs on the aforementioned systems in 2038 when signed 32-bit integers are no longer sufficient to store Unix time (e.g., "pg_ctl start" hanging). To fix, ensure that MyStartTime is handled as a 64-bit value everywhere. (Of course, users will need to ensure that time_t is 64 bits wide on their system, too.)

  Co-authored-by: Max Johnson
  Discussion: https://postgr.es/m/CO1PR07MB905262E8AC270FAAACED66008D682%40CO1PR07MB9052.namprd07.prod.outlook.com
  Backpatch-through: 12
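
  The general fix pattern is to stop funneling the value through "long" when formatting it, e.g. (a sketch, not a specific hunk from the commit):

      char        startbuf[32];

      /* Before (truncates where long is 32 bits):
       *     snprintf(startbuf, sizeof(startbuf), "%ld", (long) MyStartTime);
       */
      snprintf(startbuf, sizeof(startbuf), "%lld", (long long) MyStartTime);
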
* Convert tab-complete's long else-if chain to a switch statement. (Tom Lane, 2024-10-07)
  Rename tab-complete.c to tab-complete.in.c, create the preprocessor script gen_tabcomplete.pl, and install Makefile/meson.build rules to create tab-complete.c from tab-complete.in.c. The preprocessor converts match_previous_words' else-if chain into a switch and populates tcpatterns[] with the data needed by the driver loop.

  The initial HeadMatches/TailMatches/Matches test in each else-if arm is now performed in a table-driven loop. Where we get a match, the corresponding switch case is invoked to see if the match succeeds. (It might not, if there were additional conditions in the original else-if test.)

  The total number of string comparisons done is just about the same as it was in the previous coding; however, now that we have table-driven logic underlying the handmade rules, there is room to improve that. For now I haven't bothered because tab completion is still plenty fast enough for human use. If the number of rules keeps increasing, we might someday need to do more in that area.

  The immediate benefit of all this thrashing is that C compilers frequently don't deal well with long else-if chains. On gcc 8.5.0, this reduces the compile time of tab-complete.c by about a factor of four, while MSVC is reported to crash outright with the previous coding.

  Discussion: https://postgr.es/m/2208466.1720729502@sss.pgh.pa.us
* Prepare tab-complete.c for preprocessing. (Tom Lane, 2024-10-07)
  Separate out psql_completion's giant else-if chain of *Matches tests into a new function. Add the infrastructure needed for table-driven checking of the initial match of each completion rule. As-is, however, the code continues to operate as it did. The new behavior applies only if SWITCH_CONVERSION_APPLIED is #defined, which it is not here. (The preprocessor added in the next patch will add a #define for that.)

  The first and last couple of bits of psql_completion are not based on HeadMatches/TailMatches/Matches tests, so they stay where they are; they won't become part of the switch.

  This patch also fixes up a couple of if-conditions that didn't meet the conditions enumerated in the comment for match_previous_words(). Those restrictions exist to simplify the preprocessor.

  Discussion: https://postgr.es/m/2208466.1720729502@sss.pgh.pa.us
* Invent "MatchAnyN" option for tab-complete.c's Matches/MatchesCS.Tom Lane2024-10-07
| | | | | | | | | | | | | | | | | | | | | | | | | | This argument matches any number (including zero) of previous words. Use it to replace the common coding pattern if (HeadMatches("A", "B") && TailMatches("X", "Y")) with if (Matches("A", "B", MatchAnyN, "X", "Y")) In itself this feature doesn't do much except (arguably) make the code slightly shorter and more readable. However, it reduces the number of complex if-condition patterns that have to be dealt with in the next commits in this series. While here, restructure the *Matches implementation functions so that the actual work is done in functions that take a char ** array of pattern strings, and the versions taking variadic arguments are thin wrappers around the array ones. This simplifies the new Matches logic considerably. At the end of this patch series, the array functions will be the only ones that are material to performance, so having the variadic ones be wrappers makes sense. Discussion: https://postgr.es/m/2208466.1720729502@sss.pgh.pa.us
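
  A concrete (hypothetical) completion rule rewritten in the new style, following the pattern above:

      /* Before: head and tail tested separately */
      else if (HeadMatches("ALTER", "TABLE") && TailMatches("SET", "TABLESPACE"))
          COMPLETE_WITH_QUERY(Query_for_list_of_tablespaces);

      /* After: MatchAnyN stands in for any run of intervening words */
      else if (Matches("ALTER", "TABLE", MatchAnyN, "SET", "TABLESPACE"))
          COMPLETE_WITH_QUERY(Query_for_list_of_tablespaces);
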
* Restrict password hash length. (Nathan Bossart, 2024-10-07)
  Commit 6aa44060a3 removed pg_authid's TOAST table because the only varlena column is rolpassword, which cannot be de-TOASTed during authentication because we haven't selected a database yet and cannot read pg_class. Since that change, attempts to set password hashes that require out-of-line storage will fail with a "row is too big" error. This error message might be confusing to users.

  This commit places a limit on the length of password hashes so that attempts to set long password hashes will fail with a more user-friendly error. The chosen limit of 512 bytes should be sufficient to avoid "row is too big" errors independent of BLCKSZ, but it should also be lenient enough for all reasonable use-cases (or at least all the use-cases we could imagine).

  Reviewed-by: Tom Lane, Jonathan Katz, Michael Paquier, Jacob Champion
  Discussion: https://postgr.es/m/89e8649c-eb74-db25-7945-6d6b23992394%40gmail.com
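
  The new guard is conceptually a length check of this shape (a sketch; the constant name and message wording are assumptions, only the 512-byte figure comes from the commit text):

      #define MAX_ENCRYPTED_PASSWORD_LEN 512

      if (strlen(shadow_pass) > MAX_ENCRYPTED_PASSWORD_LEN)
          ereport(ERROR,
                  (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                   errmsg("encrypted password is too long"),
                   errdetail("Encrypted passwords must be no longer than %d bytes.",
                             MAX_ENCRYPTED_PASSWORD_LEN)));
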
* Fix fetching default toast value during decoding of in-progress transactions. (Amit Kapila, 2024-10-07)
  During logical decoding of in-progress transactions, we perform the toast table scan while fetching the default toast value for an attribute. We forgot to initialize the flag during this scan to indicate that the system table scan is in progress. We need this flag to ensure that during logical decoding we never directly access the tableam or heap APIs, because we check for concurrent aborts only in systable_* APIs.

  Reported-by: Alexander Lakhin
  Author: Takeshi Ideriha, Hou Zhijie
  Reviewed-by: Amit Kapila, Hou Zhijie
  Backpatch-through: 14
  Discussion: https://postgr.es/m/18641-6687273b7f15269d@postgresql.org
* Use camel case for "DateStyle" in some error messages (Michael Paquier, 2024-10-07)
  This GUC is written in camel case in most of the documentation and the GUC table (but not postgresql.conf.sample), while two error messages hardcoded it with lower-case characters. Let's use a more consistent style. Most of the noise comes from the regression tests, updated to reflect the GUC name in these error messages.

  Author: Peter Smith
  Reviewed-by: Peter Eisentraut, Álvaro Herrera
  Discussion: https://postgr.es/m/CAHut+Pv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w@mail.gmail.com
* Ignore not-yet-defined Portals in pg_cursors view. (Tom Lane, 2024-10-06)
  pg_cursor() supposed that any Portal it finds in the hash table must have sourceText set up, but there's an edge case where that is not so. A newly-created Portal has sourceText = NULL, and that doesn't change until PortalDefineQuery is called. In SPI_cursor_open_internal, we perform GetCachedPlan between CreatePortal and PortalDefineQuery, and it's possible for user-defined code to execute during that planning and cause a fetch from the pg_cursors view, resulting in a null-pointer-dereference crash. (It looks like the same could happen in exec_bind_message, but I've not tried to provoke a failure there.)

  I considered trying to fix this by setting sourceText sooner, but there may be instances of this same calling pattern in extensions, and we couldn't be sure they'd get the memo promptly. It seems better to redefine pg_cursor as not showing Portals that have not yet had PortalDefineQuery called on them, which we can do by just skipping them if sourceText is still NULL.

  (Before a1c692358, pg_cursor would instead return a row with NULL in the statement column. We could revert to that behavior but it doesn't really seem like a better definition, especially since our documentation doesn't suggest that the column could be NULL.)

  Per report from PetSerAl. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAKygsHTBXLXjwV43kpZa+Cs+XTiaeeJiZdL4cPBm9f4MTdw7wg@mail.gmail.com
* Move Cluster.pm initialization code to a more obvious place (Andrew Dunstan, 2024-10-06)
  Commit 460c0076e8 added some module initialization code to set signal handlers. However, that code has now become somewhat buried, as later commits added new subroutines. Therefore, move the initialization code to the module's INIT block where it won't become obscured.
* libpq: Discard leading and trailing spaces for parameters and values in URIs (Michael Paquier, 2024-10-06)
  Integer values applied a parsing rule through pqParseIntParam() that made URIs like this one work, even though they include spaces around values:

      "postgresql://localhost:5432/postgres?keepalives=1 &keepalives_idle=1 "

  This commit changes the parsing so that spaces before and after parameters and values are discarded, offering more consistency with the parsing that already applied to libpq for integer values in URIs. Note that %20 can be used in a URI for a space character. ECPGconnect() has been discarding leading and trailing spaces around parameters and values like that for a long time, as well.

  Like f22e84df1dea, this is done as a HEAD-only change.

  Reviewed-by: Yuto Sasaki
  Discussion: https://postgr.es/m/Zv3oWOfcrHTph7JK@paquier.xyz
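
  A hedged usage sketch: with this change, a URI like the one quoted above is accepted through the standard libpq entry point (the connection details are illustrative):

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          /* Spaces around the keywords and values in the query string are discarded. */
          PGconn *conn = PQconnectdb(
              "postgresql://localhost:5432/postgres?keepalives=1 &keepalives_idle=1 ");

          printf("%s\n", PQstatus(conn) == CONNECTION_OK ? "connected" : PQerrorMessage(conn));
          PQfinish(conn);
          return 0;
      }
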
* Use generateClonedIndexStmt to propagate CREATE INDEX to partitions. (Tom Lane, 2024-10-05)
  When instantiating an existing partitioned index for a new child partition, we use generateClonedIndexStmt to build a suitable IndexStmt to pass to DefineIndex. However, when DefineIndex needs to recurse to instantiate a newly created partitioned index on an existing child partition, it was doing copyObject on the given IndexStmt and then applying a bunch of ad-hoc fixups. This has a number of problems, primarily that it implies fresh lookups of referenced objects such as opclasses and collations. Since commit 2af07e2f7 caused DefineIndex to restrict search_path internally, those lookups could fail or deliver different results than the original one. We can avoid those problems and save a few dozen lines of code by using generateClonedIndexStmt in this code path too.

  Another thing this fixes is incorrect propagation of parent-index comments to child indexes (because the copyObject approach copies the idxcomment field while generateClonedIndexStmt doesn't). I had noticed this in connection with commit c01eb619a, but not run the problem to ground.

  I'm tempted to back-patch this further than v17, but the only thing it's known to fix in older branches is the comment issue, which is pretty minor and doesn't seem worth the risk of introducing new issues in stable branches. (If anyone does care about that, clearing idxcomment in the copied IndexStmt would be a safer fix.)

  Per bug #18637 from usamoi. Back-patch to v17 where the search_path change came in.

  Discussion: https://postgr.es/m/18637-f51e314546e3ba2a@postgresql.org
* Clean up WaitLatch calls that passed latch without WL_LATCH_SET (Heikki Linnakangas, 2024-10-05)
  The 'latch' argument is ignored if WL_LATCH_SET is not given. Clarify these calls by not pointlessly passing MyLatch.

  Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c@iki.fi
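
  Illustrative before/after of the call style being cleaned up (timeout and wait-event values are placeholders):

      /* Before: MyLatch passed even though WL_LATCH_SET is not requested */
      (void) WaitLatch(MyLatch, WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       1000L, WAIT_EVENT_PG_SLEEP);

      /* After: the latch argument is ignored here, so pass NULL */
      (void) WaitLatch(NULL, WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       1000L, WAIT_EVENT_PG_SLEEP);
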
* Remove unneeded #include (Heikki Linnakangas, 2024-10-05)
  Unneeded since commit d72731a704.

  Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c@iki.fi
* Remove unused latch (Heikki Linnakangas, 2024-10-05)
  It was left unused by commit bc971f4025, which replaced the latch usage with a condition variable.

  Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c@iki.fi
* Reject non-ASCII locale names. (Thomas Munro, 2024-10-05)
  Commit bf03cfd1 started scanning all available BCP 47 locale names on Windows. This caused an abort/crash in the Windows runtime library if the default locale name contained non-ASCII characters, because of our use of the setlocale() save/restore pattern with "char" strings. After switching to another locale with a different encoding, the saved name could no longer be understood, and setlocale() would abort.

  "Turkish_Türkiye.1254" is the example from recent reports, but there are other examples of countries and languages with non-ASCII characters in their names, and they appear in Windows' (old style) locale names.

  To defend against this:

  1. In initdb, reject non-ASCII locale names given explicitly on the command line, or returned by the operating system environment with setlocale(..., ""), or "canonicalized" by the operating system when we set it.

  2. In initdb only, perform the save-and-restore with Windows' non-standard wchar_t variant of setlocale(), so that it is not subject to round-trip failures stemming from char string encoding confusion.

  3. In the backend, we don't have to worry about the save-and-restore problem because we have already vetted the defaults, so we just have to make sure that CREATE DATABASE also rejects non-ASCII names in any new databases. SET lc_XXX doesn't suffer from the problem, but the ban applies to it too because it uses check_locale(). CREATE COLLATION doesn't suffer from the problem either, but it doesn't use check_locale() so it is not included in the new ban for now, to minimize the change.

  Anyone who encounters the new error message should either create a new duplicated locale with an ASCII-only name using Windows Locale Builder, or consider using BCP 47 names like "tr-TR".

  Users already couldn't initialize a cluster with "Turkish_Türkiye.1254" on PostgreSQL 16+, but the new failure mode is an error message that explains why, instead of a crash.

  Back-patch to 16, where bf03cfd1 landed. Older versions are affected in theory too, but only 16 and later are causing crash reports.

  Reviewed-by: Andrew Dunstan <andrew@dunslane.net> (the idea, not the patch)
  Reported-by: Haifang Wang (Centific Technologies Inc) <v-haiwang@microsoft.com>
  Discussion: https://postgr.es/m/PH8PR21MB3902F334A3174C54058F792CE5182%40PH8PR21MB3902.namprd21.prod.outlook.com
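
  A minimal sketch of the kind of all-ASCII test such a ban relies on (the helper name is hypothetical; IS_HIGHBIT_SET is the existing macro from c.h):

      /* Return true only if the locale name contains no bytes with the high bit set. */
      static bool
      locale_name_is_ascii(const char *name)
      {
          for (const char *p = name; *p != '\0'; p++)
          {
              if (IS_HIGHBIT_SET(*p))
                  return false;
          }
          return true;
      }
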
* ecpg: avoid adding whitespace around '&' in connection URLs. (Tom Lane, 2024-10-04)
  The preprocessor really should not have done this to begin with. The space after '&' was masked by ECPGconnect's skipping spaces before option keywords, and the space before by dint of libpq being (mostly) insensitive to trailing space in option values. We fixed the one known problem with that in 920d51979. Hence this patch is mostly cosmetic, and we'll just change it in HEAD.

  Discussion: https://postgr.es/m/TY2PR01MB36286A7B97B9A15793335D18C1772@TY2PR01MB3628.jpnprd01.prod.outlook.com
* Rename PageData to GenericXLogPageData (Peter Eisentraut, 2024-10-04)
  In the PostgreSQL C type naming schema, the type PageData should be what the pointer of type Page points to. But in this case it's actually an unrelated type local to generic_xlog.c. Rename that to a more specific name. This makes room to possibly add a PageData type with the mentioned meaning, but that is not done here.

  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://www.postgresql.org/message-id/flat/001d457e-c118-4219-8132-e1846c2ae3c9%40eisentraut.org
* Speed up numeric division by always using the "fast" algorithm. (Dean Rasheed, 2024-10-04)
  Formerly there were two internal functions in numeric.c to perform numeric division, div_var() and div_var_fast(). div_var() performed division exactly to a specified rscale using Knuth's long division algorithm, while div_var_fast() used the algorithm from the "FM" library, which approximates each quotient digit using floating-point arithmetic, and computes a truncated quotient with DIV_GUARD_DIGITS extra digits. div_var_fast() could be many times faster than div_var(), but did not guarantee correct results in all cases, and was therefore only suitable for use in transcendental functions, where small errors are acceptable.

  This commit merges div_var() and div_var_fast() together into a single function with an extra "exact" boolean parameter, which can be set to false if the caller is OK with an approximate result. The new function uses the faster algorithm from the "FM" library, except that when "exact" is true, it does not truncate the computation with DIV_GUARD_DIGITS extra digits, but instead performs the full-precision computation, subtracting off complete multiples of the divisor for each quotient digit. However, it is able to retain most of the performance benefits of div_var_fast(), by delaying the propagation of carries, allowing the inner loop to be auto-vectorized.

  Since this may still lead to an inaccurate result, when "exact" is true, it then inspects the remainder and uses that to adjust the quotient, if necessary, to make it correct. In practice, the quotient rarely needs to be adjusted, and never by more than one in the final digit, though it's difficult to prove that, so the code allows for larger adjustments, just in case.

  In addition, use base-NBASE^2 arithmetic and a 64-bit dividend array, similar to mul_var(), so that the number of iterations of the outer loop is roughly halved. Together with the faster algorithm, this makes div_var() up to around 20 times as fast as the old Knuth algorithm when "exact" is true, and up to 2 or 3 times as fast as the old div_var_fast() function when "exact" is false.

  Dean Rasheed, reviewed by Joel Jacobson.

  Discussion: https://postgr.es/m/CAEZATCVHR10BPDJSANh0u2+Sg6atO3mD0G+CjKDNRMD-C8hKzQ@mail.gmail.com
* Remove assertion checking query ID in execMain.c (Michael Paquier, 2024-10-04)
  This assertion has been added by 24f520594809, but Alexander Lakhin has proved that the ExecutorRun() one can be broken by using a PL function that manipulates compute_query_id and track_activities, while the ones in ExecutorFinish() and ExecutorEnd() could be triggered when cleaning up portals at the beginning of a new query execution.

  Discussion: https://postgr.es/m/b37d8e6c-e83d-e157-8865-1b2460a6aef2@gmail.com