path: root/src/backend
* Fix EXPLAIN Bitmap heap scan to count pages with no visible tuples (Heikki Linnakangas, 2024-03-18)

  Previously, bitmap heap scans only counted lossy and exact pages for explain when there was at least one visible tuple on the page. heapam_scan_bitmap_next_block() returned true only if there was a "valid" page with tuples to be processed. However, the lossy and exact page counters in EXPLAIN should count the number of pages represented in a lossy or non-lossy way in the constructed bitmap, regardless of whether or not the pages ultimately contained visible tuples.

  Backpatch to all supported versions.

  Author: Melanie Plageman
  Discussion: https://www.postgresql.org/message-id/CAAKRu_ZwCwWFeL_H3ia26bP2e7HiKLWt0ZmGXPVwPO6uXq0vaA@mail.gmail.com
  Discussion: https://www.postgresql.org/message-id/CAAKRu_bxrXeZ2rCnY8LyeC2Ls88KpjWrQ%2BopUrXDRXdcfwFZGA@mail.gmail.com
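  As an illustration of the counters this affects, a sketch of a query pushed toward a lossy bitmap (table, index, and settings below are hypothetical, and the plan choice is only encouraged, not guaranteed):

      CREATE TABLE bitmap_demo (id int, val int);
      CREATE INDEX ON bitmap_demo (val);
      INSERT INTO bitmap_demo SELECT g, g % 1000 FROM generate_series(1, 500000) g;
      ANALYZE bitmap_demo;
      SET work_mem = '64kB';        -- a tiny work_mem encourages lossy bitmap pages
      SET enable_seqscan = off;     -- nudge the planner toward a bitmap heap scan
      EXPLAIN (ANALYZE, COSTS OFF)
      SELECT count(*) FROM bitmap_demo WHERE val < 100;
      -- look for a line like:  Heap Blocks: exact=... lossy=...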
* Move code for backend startup to separate file (Heikki Linnakangas, 2024-03-18)

  This is code that runs in the backend process after forking, rather than postmaster. Move it out of postmaster.c for clarity.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Refactor postmaster child process launching (Heikki Linnakangas, 2024-03-18)

  Introduce new postmaster_child_launch() function that deals with the differences in EXEC_BACKEND mode.

  Refactor the mechanism of passing information from the parent to child process. Instead of using different command-line arguments when launching the child process in EXEC_BACKEND mode, pass a variable-length blob of startup data along with all the global variables. The contents of that blob depend on the kind of child process being launched. In !EXEC_BACKEND mode, we use the same blob, but it's simply inherited from the parent to child process.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Move some functions from postmaster.c to a new source file (Heikki Linnakangas, 2024-03-18)

  This just moves the functions, with no other changes, to make the next commits smaller and easier to review. The moved functions are related to launching postmaster child processes in EXEC_BACKEND mode.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Split registration of Win32 deadchild callback to separate function (Heikki Linnakangas, 2024-03-18)

  The next commit will move the internal_forkexec() function to a different source file, but it makes sense to keep all the code related to the win32 waitpid() emulation in postmaster.c. Split it off to a separate function now, to make the next commit more mechanical.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Initialize variables to placate compiler. (Nathan Bossart, 2024-03-17)

  Since commit 012460ee93, some compilers have been warning that a couple of variables may be used uninitialized. There doesn't appear to be any actual risk, so let's just initialize these variables to 0 to silence the compiler warnings.

  Discussion: https://postgr.es/m/20240317192927.GA3978212%40nathanxps13
* Mark hash_corrupted() as pg_attribute_noreturn. (Tom Lane, 2024-03-17)

  Coverity started complaining about this after cc5ef90ed. The code's not really different from before, but might as well clarify its intent.
* Add RETURNING support to MERGE. (Dean Rasheed, 2024-03-17)

  This allows a RETURNING clause to be appended to a MERGE query, to return values based on each row inserted, updated, or deleted. As with plain INSERT, UPDATE, and DELETE commands, the returned values are based on the new contents of the target table for INSERT and UPDATE actions, and on its old contents for DELETE actions. Values from the source relation may also be returned.

  As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be used as the source relation for other operations such as WITH queries and COPY commands.

  Additionally, a special function merge_action() is provided, which returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action executed for each row. The merge_action() function can be used anywhere in the RETURNING list, including in arbitrary expressions and subqueries, but it is an error to use it anywhere outside of a MERGE query's RETURNING list.

  Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera, Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut, and Wolfgang Walther.

  Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
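  A sketch of the new clause based on the description above (table names are hypothetical); merge_action() labels each returned row with the action that produced it:

      MERGE INTO products p
      USING stock_updates s ON p.id = s.id
      WHEN MATCHED AND s.qty = 0 THEN
          DELETE
      WHEN MATCHED THEN
          UPDATE SET qty = s.qty
      WHEN NOT MATCHED THEN
          INSERT (id, qty) VALUES (s.id, s.qty)
      RETURNING merge_action(), p.id, p.qty;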
* Add attstattarget to FormExtraData_pg_attribute (Peter Eisentraut, 2024-03-17)

  This allows setting attstattarget when a relation is created. We make use of this by having index_concurrently_create_copy() copy over the attstattarget values when the new index is created, instead of having index_concurrently_swap() fix it up later.

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
* Generalize handling of nullable pg_attribute columns in DDL (Peter Eisentraut, 2024-03-17)

  DDL code uses tuple descriptors to pass around pg_attribute values during table and index creation. But tuple descriptors don't include the variable-length/nullable columns of pg_attribute, so they have to be handled separately.

  Right now, the attoptions field is handled in a one-off way with a separate argument passed to InsertPgAttributeTuples(). The other affected fields of pg_attribute are right now not needed at relation creation time.

  The goal of this patch is to generalize this to allow handling additional variable-length/nullable columns of pg_attribute in a similar manner. For that, create a new struct FormExtraData_pg_attribute, which is to be passed around in parallel to the tuple descriptor and optionally supplies the additional columns. Right now, this struct only contains one field for attoptions, so no functionality is actually changed by this.

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
* Make stxstattarget nullable (Peter Eisentraut, 2024-03-17)

  To match attstattarget change (commit 4f622503d6d). The logic inside CreateStatistics() is clarified a bit compared to that previous patch, and so here we also update ATExecSetStatistics() to match.

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
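  For context, a sketch of the user-level commands behind CreateStatistics() and ATExecSetStatistics() whose target ends up in this catalog column (object and table names are hypothetical):

      CREATE STATISTICS st_ab (dependencies) ON a, b FROM some_table;
      ALTER STATISTICS st_ab SET STATISTICS 500;   -- stored in pg_statistic_ext.stxstattarget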
* Fix EXPLAIN output for subplans in MERGE. (Dean Rasheed, 2024-03-17)

  Given a subplan in a MERGE query, EXPLAIN would sometimes fail to properly display expressions involving Params referencing variables in other parts of the plan tree.

  This would affect subplans outside the topmost join plan node, for which expansion of Params would go via the top-level ModifyTable plan node. The problem was that "inner_tlist" for the ModifyTable node's deparse_namespace was set to the join node's targetlist, but "inner_plan" was set to the ModifyTable node itself, rather than the join node, leading to incorrect results when descending to the referenced variable.

  Fix and backpatch to v15, where MERGE was introduced.

  Discussion: https://postgr.es/m/CAEZATCWAv-sZuH%2BwG5xJ-%2BGt7qGNGX8wUQd3XYydMFDKgRB9nw%40mail.gmail.com
* Separate equalRowTypes() from equalTupleDescs() (Peter Eisentraut, 2024-03-17)

  This introduces a new function equalRowTypes() that is effectively a subset of equalTupleDescs() but only compares the number of attributes and attribute name, type, typmod, and collation. This is enough for most existing uses of equalTupleDescs(), which are changed to use the new function. The only remaining callers of equalTupleDescs() are those that really want to check the full tuple descriptor as such, without concern about record or row type semantics.

  The existing function hashTupleDesc() is renamed to hashRowType(), because it now corresponds more to equalRowTypes().

  The purpose of this change is to be clearer about the semantics of the equality asked for by each caller. (At least one caller had a comment that questioned whether equalTupleDescs() was too restrictive.) For example, 4f622503d6d removed attstattarget from the tuple descriptor structure. It was not fully clear at the time how this should affect equalTupleDescs(). Now the answer is clear: By their own definitions, equalRowTypes() does not care, and equalTupleDescs() just compares whatever is in the tuple descriptor but does not care why it is in there.

  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/f656d6d9-6660-4518-a006-2f65cafbebd1%40eisentraut.org
* Add destroyStringInfo function for cleaning up StringInfos (Daniel Gustafsson, 2024-03-16)

  destroyStringInfo() is a counterpart to makeStringInfo(), freeing a palloc'd StringInfo and its data. This is a convenience function to align the StringInfo API with the PQExpBuffer API. Originally added in the OAuth patchset, it was extracted and committed separately in order to aid upcoming JSON work.

  Author: Daniel Gustafsson <daniel@yesql.se>
  Author: Jacob Champion <jacob.champion@enterprisedb.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
* Fix race condition in transaction timeout TAP tests (Alexander Korotkov, 2024-03-15)

  The interruption handler within the injection point can get stuck in an infinite loop while handling transaction timeout. To avoid this situation we reset the timeout flag before invoking the injection point.

  Author: Alexander Korotkov
  Reviewed-by: Andrey Borodin
  Discussion: https://postgr.es/m/ZfPchPC6oNN71X2J%40paquier.xyz
* Improve log messages referring to background worker processes (Heikki Linnakangas, 2024-03-15)

  "Worker" could also mean autovacuum worker or slot sync worker, so let's be more explicit. Per Tristan Partin's suggestion.

  Discussion: https://www.postgresql.org/message-id/CZM6WDX5H4QI.NZG1YUCKWLA@neon.tech
* Refactor initial hash lookup in dynahash.c (Michael Paquier, 2024-03-15)

  The same pattern is used three times in dynahash.c to retrieve a bucket number and a hash bucket from a hash value. This has popped up while discussing improvements for the type cache, where this piece of refactoring would become useful.

  Note that hash_search_with_hash_value() does not need the bucket number, just the hash bucket.

  Author: Teodor Sigaev
  Reviewed-by: Aleksander Alekseev, Michael Paquier
  Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1@sigaev.ru
* Trim ORDER BY/DISTINCT aggregate pathkeys in gather_grouping_paths (David Rowley, 2024-03-15)

  Similar to d8a295389, trim off any PathKeys which are for ORDER BY / DISTINCT aggregate functions from the PathKey List for the Gather Merge paths created by gather_grouping_paths(). These additional PathKeys are not valid to use after grouping has taken place as these PathKeys belong to columns which are inputs to an aggregate function and, therefore are unavailable after aggregation.

  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f74941a@gmail.com
  Backpatch-through: 16, where 1349d2790 was added
* Login event trigger documentation wordsmithing (Daniel Gustafsson, 2024-03-14)

  Minor wordsmithing on the login trigger documentation and code comments to improve readability, as well as fixing a few small incorrect statements in the comments.

  Author: Robert Treat <rob@xzilla.net>
  Discussion: https://postgr.es/m/CAJSLCQ0aMWUh1m6E9YdjeqV61baQ=EhteJX8XOxXg8H_2Lcr0Q@mail.gmail.com
* Make INSERT-from-multiple-VALUES-rows handle domain target columns. (Tom Lane, 2024-03-14)

  Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

  While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

  Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org
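  The kind of statement affected, sketched with hypothetical names (a multi-row VALUES list inserting into a column whose type is a domain over an array):

      CREATE DOMAIN nonempty_ints AS int[] CHECK (cardinality(VALUE) > 0);
      CREATE TABLE d_tab (vals nonempty_ints);
      INSERT INTO d_tab VALUES (ARRAY[1,2]), (ARRAY[3]);  -- multi-row VALUES, domain target column
      INSERT INTO d_tab VALUES ('{}');                     -- rejected by the domain's CHECK constraint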
* Add pg_column_toast_chunk_id(). (Nathan Bossart, 2024-03-14)

  This function returns the chunk_id of an on-disk TOASTed value. If the value is un-TOASTed or not on-disk, it returns NULL. This is useful for identifying which values are actually TOASTed and for investigating "unexpected chunk number" errors.

  Bumps catversion.

  Author: Yugo Nagata
  Reviewed-by: Jian He
  Discussion: https://postgr.es/m/20230329105507.d764497456eeac1ca491b5bd%40sraoss.co.jp
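  Possible usage, with a hypothetical table and column; per the description above, a NULL result means the value is not stored out of line in the TOAST table:

      SELECT id, pg_column_toast_chunk_id(payload) AS chunk_id
      FROM docs
      ORDER BY id;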
* Remove redundant snapshot copying from parallel leader to workers (Heikki Linnakangas, 2024-03-14)

  The parallel query infrastructure copies the leader backend's active snapshot to the worker processes. But BitmapHeapScan node also had bespoke code to pass the snapshot from leader to the worker. That was redundant, so remove it.

  The removed code was analogous to the snapshot serialization in table_parallelscan_initialize(), but that was the wrong role model. A parallel bitmap heap scan is more like an independent non-parallel bitmap heap scan in each parallel worker as far as the table AM is concerned, because the coordination is done in nodeBitmapHeapscan.c, and the table AM doesn't need to know anything about it.

  This relies on the assumption that es_snapshot == GetActiveSnapshot(). That's not a new assumption, things would get weird if you used the QueryDesc's snapshot for visibility checks in the scans, but the active snapshot for evaluating quals, for example. This could use some refactoring and cleanup, but for now, just add some assertions.

  Reviewed-by: Dilip Kumar, Robert Haas
  Discussion: https://www.postgresql.org/message-id/5f3b9d59-0f43-419d-80ca-6d04c07cf61a@iki.fi
* Allow a no-wait lock acquisition to succeed in more cases. (Robert Haas, 2024-03-14)

  We don't determine the position at which a process waiting for a lock should insert itself into the wait queue until we reach ProcSleep(), and we may at that point discover that we must insert ourselves ahead of everyone who wants a conflicting lock, in which case we obtain the lock immediately. Up until now, a no-wait lock acquisition would fail in such cases, erroneously claiming that the lock couldn't be obtained immediately. Fix that by trying ProcSleep even in the no-wait case.

  No back-patch for now, because I'm treating this as an improvement to the existing no-wait feature. It could instead be argued that it's a bug fix, on the theory that there should never be any case whatsoever where no-wait fails to obtain a lock that would have been obtained immediately without no-wait, but I'm reluctant to interpret the semantics of no-wait that strictly.

  Robert Haas and Jingxian Li

  Discussion: http://postgr.es/m/CA+TgmobCH-kMXGVpb0BB-iNMdtcNkTvcZ4JBxDJows3kYM+GDg@mail.gmail.com
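  One user-visible path that exercises no-wait lock acquisition is the NOWAIT option of LOCK TABLE (table name below is hypothetical); with this change it no longer fails in cases where the lock would have been granted immediately anyway:

      BEGIN;
      LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE NOWAIT;  -- errors out instead of queueing if the lock is busy
      -- ... work while holding the lock ...
      COMMIT;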
* Add TAP tests for timeouts (Alexander Korotkov, 2024-03-14)

  This commit adds new tests to verify that transaction_timeout, idle_session_timeout, and idle_in_transaction_session_timeout work as expected. We introduce new injection points before throwing a timeout FATAL error and check that these injection points are reached.

  Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
  Author: Andrey Borodin
  Reviewed-by: Alexander Korotkov
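  For reference, the three settings exercised by the new tests (values below are purely illustrative):

      SET transaction_timeout = '5s';
      SET idle_in_transaction_session_timeout = '10s';
      SET idle_session_timeout = '1min';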
* Fix false reports in pg_visibility (Alexander Korotkov, 2024-03-14)

  Currently, pg_visibility computes its xid horizon using GetOldestNonRemovableTransactionId(). The problem is that this horizon can sometimes go backward. That can lead to reporting false errors.

  In order to fix that, this commit implements a new function GetStrictOldestNonRemovableTransactionId(). This function computes the xid horizon, which would be guaranteed to be newer or equal to any xid horizon computed before. We have to do the following to achieve this.

  1. Ignore processes' xmins, because they consider connections to other databases that were ignored before.
  2. Ignore KnownAssignedXids, because they are not database-aware. At the same time, the primary could compute its horizons database-aware.
  3. Ignore walsender xmin, because it could go backward if some replication connections don't use replication slots.

  As a result, we're using only currently running xids to compute the horizon. This surely sacrifices accuracy, but we have to do so to avoid reporting false errors.

  Inspired by an earlier patch by Daniel Shelepanov and the following discussion with Robert Haas and Tom Lane.

  Discussion: https://postgr.es/m/1649062270.289865713%40f403.i.mail.ru
  Reviewed-by: Alexander Lakhin, Dmitry Koval
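  The false reports in question come from pg_visibility's verification functions; a sketch of invoking them (relation name is hypothetical):

      CREATE EXTENSION pg_visibility;
      SELECT * FROM pg_check_visible('some_table');  -- TIDs reported as wrongly marked all-visible
      SELECT * FROM pg_check_frozen('some_table');   -- TIDs reported as wrongly marked frozen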
* Fix typos in reorderbuffer.c. (Amit Kapila, 2024-03-14)

  Author: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20240314.132817.1496502692848380820.horikyota.ntt@gmail.com
* Introduce "builtin" collation provider. (Jeff Davis, 2024-03-13)

  New provider for collations, like "libc" or "icu", but without any external dependency.

  Initially, the only locale supported by the builtin provider is "C", which is identical to the libc provider's "C" locale. The libc provider's "C" locale has always been treated as a special case that uses an internal implementation, without using libc at all -- so the new builtin provider uses the same implementation.

  The builtin provider's locale is independent of the server environment variables LC_COLLATE and LC_CTYPE. Using the builtin provider, the database collation locale can be "C" while LC_COLLATE and LC_CTYPE are set to "en_US", which is impossible with the libc provider.

  By offering a new builtin provider, it clarifies that the semantics of a collation using this provider will never depend on libc, and makes it easier to document the behavior.

  Discussion: https://postgr.es/m/ab925f69-5f9d-f85e-b87c-bd2a44798659@joeconway.com
  Discussion: https://postgr.es/m/dd9261f4-7a98-4565-93ec-336c1c110d90@manitou-mail.org
  Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
  Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
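  A sketch of selecting the provider at the SQL level (the collation name is made up, and the exact option spelling is assumed from the provider/locale options already used for libc and icu):

      CREATE COLLATION c_builtin (provider = builtin, locale = 'C');
      CREATE TABLE names (n text COLLATE c_builtin);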
* Put genbki.pl output into src/include/catalog/ directly (Peter Eisentraut, 2024-03-14)

  With the makefile rules, the output of genbki.pl was written to src/backend/catalog/, and then the header files were linked to src/include/catalog/.

  This changes it so that the output files are written directly to src/include/catalog/. This makes the logic simpler, and it also makes the behavior consistent with the meson build system. Also, the list of catalog files is now kept in parallel in src/include/catalog/{meson.build,Makefile}, while before the makefiles had it in src/backend/catalog/Makefile.

  Reviewed-by: Andreas Karlsson <andreas@proxel.se>
  Discussion: https://www.postgresql.org/message-id/flat/21b74bdc-183d-4dd5-9c27-9378d178f459@eisentraut.org
* Reintroduce MAINTAIN privilege and pg_maintain predefined role. (Nathan Bossart, 2024-03-13)

  Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation. Roles with privileges of pg_maintain may run those same commands on all relations.

  This was previously committed for v16, but it was reverted in commit 151c22deee due to concerns about search_path tricks that could be used to escalate privileges to the table owner. Commits 2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by restricting search_path when running maintenance commands.

  Bumps catversion.

  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13
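  Granting the new privilege, per the description above (role and table names are hypothetical):

      GRANT MAINTAIN ON TABLE measurements TO maintenance_role;  -- per-relation maintenance rights
      GRANT pg_maintain TO maintenance_role;                     -- maintenance rights on all relations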
* Add the system identifier to backup manifests. (Robert Haas, 2024-03-13)

  Before this patch, if you took a full backup on server A and then tried to use the backup manifest to take an incremental backup on server B, it wouldn't know that the manifest was from a different server and so the incremental backup operation could potentially complete without error. When you later tried to run pg_combinebackup, you'd find out that your incremental backup was and always had been invalid. That's poor timing, because nobody likes finding out about backup problems only at restore time.

  With this patch, you'll get an error when trying to take the (invalid) incremental backup, which seems a lot nicer.

  Amul Sul, revised by me. Review by Michael Paquier.

  Discussion: http://postgr.es/m/CA+TgmoYLZzbSAMM3cAjV4Y+iCRZn-bR9H2+Mdz7NdaJFU1Zb5w@mail.gmail.com
* Make the order of the header file includes consistent (Peter Eisentraut, 2024-03-13)

  Similar to commit 7e735035f20.

  Author: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-WhpCFMbXCjtJ%2BFzmjfPrp7Hw1pk4p%2BZpU95Kh3ofZ1A%40mail.gmail.com
* Add some asserts based on LWLockHeldByMe() for replication slot statistics (Michael Paquier, 2024-03-13)

  Two assertions checking that ReplicationSlotAllocationLock is acquired are added to pgstat_create_replslot() and pgstat_drop_replslot(), corresponding to the routines in charge of the creation and the drop of replication slot statistics. The code previously relied on this assumption and documented it in comments, but did not enforce this policy at runtime.

  Reviewed-by: Bertrand Drouvot
  Discussion: https://postgr.es/m/Ze_p-hmD_yFeVYXg@paquier.xyz
* Fix confusion about the return rowtype of SQL-language procedures. (Tom Lane, 2024-03-12)

  There is a very ancient hack in check_sql_fn_retval that allows a single SELECT targetlist entry of composite type to be taken as supplying all the output columns of a function returning composite. (This is grotty and fundamentally ambiguous, but it's really hard to do nested composite-returning functions without it.)

  As far as I know, that doesn't cause any problems in ordinary functions. It's disastrous for procedures however. All procedures that have any output parameters are labeled with prorettype RECORD, and the CALL code expects it will get back a record with one column per output parameter, regardless of whether any of those parameters is composite. Doing something else leads to an assertion failure or core dump.

  This is simple enough to fix: we just need to not apply that rule when considering procedures. However, that requires adding another argument to check_sql_fn_retval, which at least in principle might be getting called by external callers. Therefore, in the back branches convert check_sql_fn_retval into an ABI-preserving wrapper around a new function check_sql_fn_retval_ext.

  Per report from Yahor Yuzefovich. This has been broken since we implemented procedures, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CABz5gWHSjj2df6uG0NRiDhZ_Uz=Y8t0FJP-_SVSsRsnrQT76Gg@mail.gmail.com
* Fix copying SockAddr struct (Heikki Linnakangas, 2024-03-12)

  Valgrind alerted about accessing uninitialized bytes after commit 4945e4ed4a:

    ==700242== VALGRINDERROR-BEGIN
    ==700242== Conditional jump or move depends on uninitialised value(s)
    ==700242==    at 0x6D8A2A: getnameinfo_unix (ip.c:253)
    ==700242==    by 0x6D8BD1: pg_getnameinfo_all (ip.c:122)
    ==700242==    by 0x4B3EB6: BackendInitialize (postmaster.c:4266)
    ==700242==    by 0x4B684E: BackendStartup (postmaster.c:4114)
    ==700242==    by 0x4B6986: ServerLoop (postmaster.c:1780)
    ==700242==    by 0x4B80CA: PostmasterMain (postmaster.c:1478)
    ==700242==    by 0x3F7424: main (main.c:197)
    ==700242==  Uninitialised value was created by a stack allocation
    ==700242==    at 0x4B6934: ServerLoop (postmaster.c:1737)
    ==700242==
    ==700242== VALGRINDERROR-END

  That was because the SockAddr struct was not copied correctly.

  Per buildfarm animal "skink".
* Move initialization of the Port struct to the child process (Heikki Linnakangas, 2024-03-12)

  In postmaster, use a more lightweight ClientSocket struct that encapsulates just the socket itself and the remote endpoint's address that you get from accept() call. ClientSocket is passed to the child process, which initializes the bigger Port struct. This makes it more clear what information postmaster initializes, and what is left to the child process.

  Rename the StreamServerPort and StreamConnection functions to make it more clear what they do. Remove StreamClose, replacing it with plain closesocket() calls.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Pass CAC as an argument to the backend process (Heikki Linnakangas, 2024-03-12)

  We used to smuggle it to the child process in the Port struct, but it seems better to pass it down as a separate argument. This paves the way for the next commit, which moves the initialization of the Port struct to the backend process, after forking.

  Reviewed-by: Tristan Partin, Andres Freund
  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Set socket options in child process after forking (Heikki Linnakangas, 2024-03-12)

  Try to minimize the work done in the postmaster process for each accepted connection, so that postmaster can quickly proceed with its duties. These function calls are very fast so this doesn't make any measurable performance difference in practice, but it's nice to have all the socket options initialization code in one place for sake of readability too. This also paves the way for an upcoming commit that will move the initialization of the Port struct to the child process.

  Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
* Disconnect if socket cannot be put into non-blocking mode (Heikki Linnakangas, 2024-03-12)

  Commit 387da18874 moved the code to put the socket into non-blocking mode from socket_set_nonblocking() into the one-time initialization function, pq_init(). In socket_set_nonblocking(), there indeed was a risk of recursion on failure like the comment said, but in pq_init(), ERROR or FATAL is fine. There's even another elog(FATAL) just after this, if setting FD_CLOEXEC fails.

  Note that COMMERROR merely logged the error, it did not close the connection, so if putting the socket into non-blocking mode failed we would use the connection anyway. You might not immediately notice, because most socket operations in a regular backend wait for the socket to become readable/writable anyway. But e.g. replication will be quite broken.

  Backpatch to all supported versions.

  Discussion: https://www.postgresql.org/message-id/d40a5cd0-2722-40c5-8755-12e9e811fa3c@iki.fi
* Keep replication slot statistics on invalidation (Michael Paquier, 2024-03-12)

  The code path in charge of invalidating a replication slot includes a call to pgstat_drop_replslot(), which would result in removing the statistics of the slot once invalidated. However, there is no need to remove the statistics of an invalidated slot as one could still be interested in looking at them to understand the activity of the slot until its actual removal.

  The initial design of the feature committed in be87200efd used the approach to drop the slots, which is likely why the statistics were still removed during the invalidation.

  Another problem with this operation is that it was done without holding ReplicationSlotAllocationLock, leaving it unprotected on concurrent activity. This part is arguably a bug, but that's a limited problem in practice so no backpatch is done.

  In passing, this commit adds a test to check this behavior. The only remaining code path where slot statistics are dropped now related to the slot getting dropped.

  Author: Bertrand Drouvot
  Discussion: https://postgr.es/m/ZermH08Eq6YydHpO@ip-10-97-1-34.eu-west-3.compute.internal
* Remove redundant fetch of the recent flush pointer in WalSndWaitForWal. (Amit Kapila, 2024-03-12)

  In WalSndWaitForWal(), we fetch a recent flush pointer both outside the loop and inside the loop. But we start using RecentFlushPtr only after we fetch it inside the loop. So we can remove one outside the loop.

  Author: Shveta Malik
  Reviewed-by: Bertrand Drouvot, Matthias van de Meent, Amit Kapila
  Discussion: https://postgr.es/m/CAJpy0uBSCQz1yMD-WiEthzEe23dti2-Kr_pitVb7vAPFbFKm=A@mail.gmail.com
* Use printf's %m format instead of strerror(errno) in more places (Michael Paquier, 2024-03-12)

  Most callers of strerror() are removed from the backend code. The remaining callers require special handling with a saved errno from a previous system call. The frontend code still needs strerror() where error states need to be handled outside of fprintf.

  Note that pg_regress is not changed to use %m as the TAP output may clobber errno, since those functions call fprintf() and friends before evaluating the format string.

  Support for %m in src/port/snprintf.c has been added in d6c55de1f99a, hence all the stable branches currently supported include it.

  Author: Dagfinn Ilmari Mannsåker
  Discussion: https://postgr.es/m/87sf13jhuw.fsf@wibble.ilmari.org
* Update obsolete index scan TID comments. (Peter Geoghegan, 2024-03-11)

  Oversight in commit c2fe139c20.
* Remove unneeded vacuum_delay_point from heap_vac_scan_get_next_block (Heikki Linnakangas, 2024-03-11)

  heap_vac_scan_get_next_block() does relatively little work, so there is no need to call vacuum_delay_point(). A future commit will call heap_vac_scan_get_next_block() from a callback, and we would like to avoid calling vacuum_delay_point() in that callback.

  Author: Melanie Plageman
  Discussion: https://postgr.es/m/CAAKRu_Yf3gvXGcCnqqfoq0Q8LX8UM-e-qbm_B1LeZh60f8WhWA%40mail.gmail.com
* Confine vacuum skip logic to lazy_scan_skip() (Heikki Linnakangas, 2024-03-11)

  Rename lazy_scan_skip() to heap_vac_scan_next_block() and move more code into the function, so that the caller doesn't need to know about ranges or skipping anymore. heap_vac_scan_next_block() returns the next block to process, and the logic for determining that block is all within the function. This makes the skipping logic easier to understand, as it's all in the same function, and makes the calling code easier to understand as it's less cluttered. The state variables needed to manage the skipping logic are moved to LVRelState.

  heap_vac_scan_next_block() now manages its own VM buffer separately from the caller's vmbuffer variable. The caller's vmbuffer holds the VM page for the current block it's processing, while heap_vac_scan_next_block() keeps a pin on the VM page for the next unskippable block. Most of the time they are the same, so we hold two pins on the same buffer, but it's more convenient to manage them separately.

  For readability inside heap_vac_scan_next_block(), move the logic of finding the next unskippable block to a separate function, and add some comments.

  This refactoring will also help future patches to switch to using a streaming read interface, and eventually AIO (https://postgr.es/m/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com)

  Author: Melanie Plageman, Heikki Linnakangas
  Reviewed-by: Andres Freund (older version)
  Discussion: https://postgr.es/m/CAAKRu_Yf3gvXGcCnqqfoq0Q8LX8UM-e-qbm_B1LeZh60f8WhWA%40mail.gmail.com
* Set all_visible_according_to_vm correctly with DISABLE_PAGE_SKIPPING (Heikki Linnakangas, 2024-03-11)

  It's important for 'all_visible_according_to_vm' to correctly reflect whether the VM bit is set or not, even when we are not trusting the VM to skip pages, because contrary to what the comment said, lazy_scan_prune() relies on it.

  If it's incorrectly set to 'false', when the VM bit is in fact set, lazy_scan_prune() will try to set the VM bit again and dirty the page unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all heap pages were dirtied, even if there were no changes. We would also fail to clear any VM bits that were set incorrectly.

  This was broken in commit 980ae17310, so backpatch to v16.

  Backpatch-through: 16
  Reviewed-by: Melanie Plageman, Peter Geoghegan
  Discussion: https://www.postgresql.org/message-id/3df2b582-dc1c-46b6-99b6-38eddd1b2784@iki.fi
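  The option in question, shown on a hypothetical table; per the description above, before this fix such a run dirtied every heap page even when nothing needed to change:

      VACUUM (DISABLE_PAGE_SKIPPING, VERBOSE) some_table;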
* Don't destroy SMgrRelations at relcache invalidation (Heikki Linnakangas, 2024-03-11)

  With commit 21d9c3ee4e, SMgrRelations remain valid until end of transaction (or longer if they're "pinned"). Relcache invalidation can happen in the middle of a transaction, so we must not destroy them at relcache invalidation anymore.

  This was revealed by failures in the 'constraints' test in buildfarm animals using -DCLOBBER_CACHE_ALWAYS. That started failing with commit 8af2565248, which was the first commit that started to rely on an SMgrRelation living until end of transaction.

  Diagnosed-by: Tomas Vondra, Thomas Munro
  Reviewed-by: Thomas Munro
  Discussion: https://www.postgresql.org/message-id/CA%2BhUKGK%2B5DOmLaBp3Z7C4S-Yv6yoROvr1UncjH2S1ZbPT8D%2BZg%40mail.gmail.com
* Fix incorrect accessing of pfree'd memory in Memoize (David Rowley, 2024-03-11)

  For pass-by-reference types, the code added in 0b053e78b, which aimed to resolve a memory leak, was overly aggressive in resetting the per-tuple memory context. This could result in pfree'd memory being accessed, which in turn could cause previously cached results in the hash table not to be found.

  What was happening was prepare_probe_slot() was switching to the per-tuple memory context and calling ExecEvalExpr(). ExecEvalExpr() may have required a memory allocation. Both MemoizeHash_hash() and MemoizeHash_equal() were aggressively resetting the per-tuple context and after determining the hash value, the context would have gotten reset before MemoizeHash_equal() was called. This could have resulted in MemoizeHash_equal() looking at pfree'd memory.

  This is less likely to have caused issues on a production build as some other allocation would have had to have reused the pfree'd memory to overwrite it. Otherwise, the original contents would have been intact. However, this clearly caused issues on MEMORY_CONTEXT_CHECKING builds.

  Author: Tender Wang, Andrei Lepikhov
  Reported-by: Tender Wang (using SQLancer)
  Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
  Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
  Backpatch-through: 14, where Memoize was added
* Improve consistency of replication slot statistics (Michael Paquier, 2024-03-11)

  The replication slot stats stored in shared memory rely on an internal index number. Both pgstat_reset_replslot() and pgstat_fetch_replslot() lacked some LWLock protections with ReplicationSlotControlLock while operating on these index numbers. This issue could cause these two functions to potentially operate on incorrect slots when taken in isolation in the event of slots dropped and/or re-created concurrently.

  Note that pg_stat_get_replication_slot() is called once per slot when querying pg_stat_replication_slots, meaning that the stats are retrieved across multiple ReplicationSlotControlLock acquisitions. So, while this commit improves consistency, it may still be possible that statistics are not completely consistent for a single scan of pg_stat_replication_slots under concurrent replication slot drop or creation activity.

  The issue is unlikely to be a problem in practice, causing at worst the report of inconsistent stats or the reset of an incorrect slot's stats, so no backpatch is done.

  Author: Bertrand Drouvot
  Reviewed-by: Heikki Linnakangas, Shveta Malik, Michael Paquier
  Discussion: https://postgr.es/m/ZeGq1HDWFfLkjh4o@ip-10-97-1-34.eu-west-3.compute.internal
* Add some checkpoint and redo LSNs to a couple of recovery errors (Michael Paquier, 2024-03-11)

  Two FATALs and one PANIC gain details about the LSNs they fail at:

  - When restoring from a backup_label, the FATAL log generated when not finding the checkpoint record now reports its LSN.
  - When restoring from a backup_label, the FATAL log generated when not finding the redo record referenced by a checkpoint record now shows both the redo and checkpoint record LSNs.
  - When not restoring from a backup_label, the PANIC error generated when not finding the checkpoint record now reports its LSN.

  This information is useful when debugging corruption issues, and these LSNs may not show up in the logs depending on the level of logging configured in the backend.

  Author: David Steele
  Discussion: https://postgr.es/m/0e90da89-77ca-4ccf-872c-9626d755e288@pgmasters.net
* Improve support for ExplainOneQuery() hook (Michael Paquier, 2024-03-11)

  There is a hook called ExplainOneQuery_hook that gives modules the possibility to plug into this code path, but, like utility.c for utility statement execution, there is no corresponding "standard" routine in the case of EXPLAIN executed for one Query.

  This commit adds a new standard_ExplainOneQuery() in explain.c, which is able to run explain on a non-utility Query without calling its hook.

  Per the feedback received from a couple of hackers, this change gives the possibility to cut a few hundred lines of code in some of the popular out-of-core modules as these maintained a copy of ExplainOneQuery(), adding custom extra information at the beginning or the end of the EXPLAIN output.

  Author: Mats Kindahl
  Reviewed-by: Aleksander Alekseev, Jelte Fennema-Nio, Andrei Lepikhov
  Discussion: https://postgr.es/m/CA+14427V_B4EAoC_o-iYYucRdMSOTfpuH9k-QbexffY1HYJBiA@mail.gmail.com