path: root/src/backend/executor
* Fix assertion on dereferenced object (Daniel Gustafsson, 2025-02-14)

Commit 27cc7cd2bc8a accidentally placed the assertion ensuring that the
pointer isn't NULL after it had already been accessed. Fix by moving the
pointer dereferencing to after the assertion. Backpatch to all supported
branches.

Author: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/1618848d-cdc7-414b-9c03-08cf4bef4408@postgrespro.ru
Backpatch-through: 13
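A minimal sketch of the pattern involved (illustrative names, not the
actual code from the commit):

    /* Before: the dereference precedes the check, so a NULL pointer
     * crashes before the Assert can report anything useful. */
    value = node->field;
    Assert(node != NULL);

    /* After: assert first, then dereference. */
    Assert(node != NULL);
    value = node->field;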
* Fix Assert failure in WITH RECURSIVE UNION queries (David Rowley, 2024-12-19)

If the non-recursive part of a recursive CTE ended up using
TTSOpsBufferHeapTuple as the table slot type, then a duplicate value
could cause an Assert failure in CheckOpSlotCompatibility() when
checking the hash table for the duplicate value. The expected slot type
for the deform step was TTSOpsMinimalTuple, so the Assert failed when
the TTSOpsBufferHeapTuple slot was used.

This is a long-standing bug which we likely didn't notice because it
seems much more likely that the non-recursive term would have required
projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility
is ok with.

There doesn't seem to be any harm done here other than the Assert
failure. Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types
require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would
have properly existed in the ExprState.

The solution is to pass NULL for ExecBuildGroupingEqual's 'lops'
parameter. This means the ExprState's EEOP_*_FETCHSOME step won't expect
a fixed slot type. This makes CheckOpSlotCompatibility() happy as no
checking is performed when the ExprEvalStep is not expecting a fixed
slot type.

Reported-by: Richard Guo
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com
Backpatch-through: 13, all supported versions
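A hedged sketch of the relevant call, assuming the post-v12 signature of
ExecBuildGroupingEqual (the surrounding variables are illustrative):

    eqexpr = ExecBuildGroupingEqual(inputDesc, inputDesc,
                                    NULL,   /* no fixed input slot type */
                                    &TTSOpsMinimalTuple,
                                    numCols, keyColIdx,
                                    eqfuncoids, collations,
                                    parent);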
* Simplify executor's determination of whether to use parallelism. (Tom Lane, 2024-12-09)

Our parallel-mode code only works when we are executing a query in full,
so ExecutePlan must disable parallel mode when it is asked to do partial
execution. The previous logic for this involved passing down a flag
(variously named execute_once or run_once) from callers of ExecutorRun
or PortalRun.

This is overcomplicated, and unsurprisingly some of the callers didn't
get it right, since it requires keeping state that not all of them have
handy; not to mention that the requirements for it were undocumented.
That led to assertion failures in some corner cases. The only state we
really need for this is the existing QueryDesc.already_executed flag, so
let's just put all the responsibility in ExecutePlan. (It could have
been done in ExecutorRun too, leading to a slightly shorter patch -- but
if there's ever more than one caller of ExecutePlan, it seems better to
have this logic in the subroutine than the callers.)

This makes those ExecutorRun/PortalRun parameters unnecessary. In master
it seems okay to just remove them, returning the API for those functions
to what it was before parallelism. Such an API break is clearly not okay
in stable branches, but for them we can just leave the parameters in
place after documenting that they do nothing.

Per report from Yugo Nagata, who also reviewed and tested this patch.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/20241206062549.710dc01cf91224809dd6c0e1@sraoss.co.jp
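A simplified sketch of the idea (hedged; the real test in ExecutePlan
may account for more than is shown here):

    /*
     * Parallel mode is only safe for a one-shot, run-to-completion
     * execution; rely on QueryDesc.already_executed rather than a flag
     * passed down from callers.  A nonzero tuple count implies partial
     * execution.
     */
    if (queryDesc->already_executed || numberTuples != 0)
        use_parallel_mode = false;
    queryDesc->already_executed = true;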
* Fix possible crash during WindowAgg evaluation (David Rowley, 2024-12-09)

When short-circuiting evaluation of the top-level WindowAgg node using
quals on monotonic window functions, the run condition can mean there's
no need to evaluate subsequent window function results in the same
partition once the condition becomes false. In some cases, however, it
was possible for the executor to use stale results from the previous
invocation of the window function.

A fix for this was partially done by a5832722, but that commit only
fixed the issue for non-top-level WindowAgg nodes. I mistakenly thought
that the top-level WindowAgg didn't have this issue, but Jayesh's
example case clearly shows that's incorrect. At the time, I also thought
that this only affected 32-bit systems, as all window functions which
then supported run conditions returned BIGINT; however, that's wrong, as
ExecProject is still called and that could cause evaluation of any other
window function belonging to the same WindowAgg node, one of which may
return a byref type.

The only queries affected by this are WindowAggs with a "Run Condition"
which contains at least one window function with a byref result type,
such as lead() or lag() on a byref column. The window clause must also
contain a PARTITION BY clause (without a PARTITION BY, execution of the
WindowAgg stops immediately when the run condition becomes false and
there's no risk of using the stale results).

Reported-by: Jayesh Dehankar
Discussion: https://postgr.es/m/193261e2c4d.3dd3cd7c1842.871636075166132237@zohocorp.com
Backpatch-through: 15, where WindowAgg run conditions were added
* Fix NULLIF()'s handling of read-write expanded objects. (Tom Lane, 2024-11-25)

If passed a read-write expanded object pointer, the EEOP_NULLIF code
would hand that same pointer to the equality function and then (unless
equality was reported) also return the same pointer as its value. This
is no good, because a function that receives a read-write expanded
object pointer is fully entitled to scribble on or even delete the
object, thus corrupting the NULLIF output. (This problem is likely
unobservable with the equality functions provided in core Postgres, but
it's easy to demonstrate with one coded in plpgsql.)

To fix, make sure the pointer passed to the equality function is
read-only. We can still return the original read-write pointer as the
NULLIF result, allowing optimization of later operations.

Per bug #18722 from Alexander Lakhin. This has been wrong since we
invented expanded objects, so back-patch to all supported branches.

Discussion: https://postgr.es/m/18722-fd9e645448cc78b4@postgresql.org
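A hedged sketch of the fix's shape (variable names illustrative;
MakeExpandedObjectReadOnly() is the existing macro from
utils/expandeddatum.h):

    /* Pass a read-only view of arg1 to the type's equality function... */
    Datum   safe_arg1 = MakeExpandedObjectReadOnly(arg1, arg1_isnull,
                                                   typlen);

    if (!DatumGetBool(FunctionCall2Coll(&eqfunc, collation,
                                        safe_arg1, arg2)))
        result = arg1;      /* ...but return the original r/w pointer */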
* Ensure cached plans are correctly marked as dependent on role. (Nathan Bossart, 2024-11-11)

If a CTE, subquery, sublink, security invoker view, or coercion
projection references a table with row-level security policies, we
neglected to mark the plan as potentially dependent on which role is
executing it. This could lead to later executions in the same session
returning or hiding rows that should have been hidden or returned
instead.

Reported-by: Wolfgang Walther
Reviewed-by: Noah Misch
Security: CVE-2024-10976
Backpatch-through: 12
* Improve fix for not entering parallel mode when holding interrupts. (Tom Lane, 2024-11-08)

Commit ac04aa84a put the shutoff for this into the planner, which is not
ideal because it doesn't prevent us from re-using a previously made
parallel plan. Revert the planner change and instead put the shutoff
into InitializeParallelDSM, modeling it on the existing code there for
recovering from failure to allocate a DSM segment.

However, that code path is mostly untested, and testing a bit harder
showed there's at least one bug: ExecHashJoinReInitializeDSM is not
prepared for us to have skipped doing parallel DSM setup. I also thought
the Assert in ReinitializeParallelWorkers is pretty ill-advised, and
replaced it with a silent Min() operation.

The existing test case added by ac04aa84a serves fine to test this
version of the fix, so no change needed there.

Patch by me, but thanks to Noah Misch for the core idea that we could
shut off worker creation when !INTERRUPTS_CAN_BE_PROCESSED. Back-patch
to v12, as ac04aa84a was.

Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
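A hedged sketch of the shutoff (simplified from the description; the
actual InitializeParallelDSM code also handles DSM allocation failure
this way):

    /* If we can't service interrupts, behave as if no workers could be
     * launched; the leader will run the plan by itself. */
    if (!INTERRUPTS_CAN_BE_PROCESSED())
        pcxt->nworkers = 0;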
* Remove unnecessary word in a comment (Amit Langote, 2024-10-23)

Relations opened by the executor are only closed once in
ExecCloseRangeTableRelations(), so the word "again" in the comment for
ExecGetRangeTableRelation() is misleading and unnecessary.

Discussion: https://postgr.es/m/CA+HiwqHnw-zR+u060i3jp4ky5UR0CjByRFQz50oZ05de7wUg=Q@mail.gmail.com
Backpatch-through: 12
* SQL/JSON: Fix some oversights in commit b6e1157e7 (Amit Langote, 2024-10-20)

The decision in b6e1157e7 to ignore raw_expr when evaluating a
JsonValueExpr was incorrect. While its value is not ultimately used
(since formatted_expr's value is), failing to initialize it can lead to
problems, for instance, when the expression tree in raw_expr contains
Aggref nodes, which must be initialized to ensure the parent Agg node
works correctly.

Also, optimize eval_const_expressions_mutator()'s handling of
JsonValueExpr a bit. Currently, when formatted_expr cannot be folded
into a constant, we end up processing it twice -- once directly in
eval_const_expressions_mutator() and again recursively via
ece_generic_processing(). This recursive processing is required to
handle raw_expr. To avoid the redundant processing of formatted_expr, we
now process raw_expr directly in eval_const_expressions_mutator().

Finally, update the comment of JsonValueExpr to describe the roles of
raw_expr and formatted_expr more clearly.

Bug: #18657
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: Fabio R. Sluzala <fabio3rs@gmail.com>
Diagnosed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18657-1b90ccce2b16bdb8@postgresql.org
Backpatch-through: 16
* Fix extreme skew detection in Parallel Hash Join. (Thomas Munro, 2024-10-17)

After repartitioning the inner side of a hash join that would have
exceeded the allowed size, we check if all the tuples from a parent
partition moved to one child partition. That is evidence that it
contains duplicate keys and later attempts to repartition will also
fail, so we should give up trying to limit memory (for lack of a better
fallback strategy).

A thinko prevented the check from working correctly in partition 0 (the
one that is partially loaded into memory already). After repartitioning,
we should check for extreme skew if the *parent* partition's
space_exhausted flag was set, not the child partition's. The consequence
was repeated futile repartitioning until per-partition data exceeded
various limits including "ERROR: invalid DSA memory alloc request size
1811939328", OS allocation failure, or temporary disk space errors. (We
could also do something about some of those symptoms, but that's
material for separate patches.)

This problem only became likely when PostgreSQL 16 introduced support
for Parallel Hash Right/Full Join, allowing NULL keys into the hash
table. Repartitioning always leaves NULL in partition 0, no matter how
many times you do it, because the hash value is all zero bits. That's
unlikely for other hashed values, but they might still have caused
wasted extra effort before giving up.

Back-patch to all supported releases.

Reported-by: Craig Milhiser <craig@milhiser.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA%2BwnhO1OfgXbmXgC4fv_uu%3DOxcDQuHvfoQ4k0DFeB0Qqd-X-rQ%40mail.gmail.com
* Further refine _SPI_execute_plan's rule for atomic execution. (Tom Lane, 2024-10-16)

Commit 2dc1deaea turns out to have been still a brick shy of a load,
because CALL statements executing within a plpgsql exception block could
still pass the wrong snapshot to stable functions within the CALL's
argument list. That happened because standard_ProcessUtility forces
isAtomicContext to true if IsTransactionBlock is true, which it always
will be inside a subtransaction. Then ExecuteCallStmt would think it
does not need to push a new snapshot --- but _SPI_execute_plan didn't do
so either, since it thought it was in nonatomic mode.

The best fix for this seems to be for _SPI_execute_plan to operate in
atomic execution mode if IsSubTransaction() is true, even when the SPI
context as a whole is non-atomic. This makes _SPI_execute_plan have the
same rules about when non-atomic execution is allowed as
_SPI_commit/_SPI_rollback have about when COMMIT/ROLLBACK are allowed,
which seems appropriately symmetric. (If anyone ever tries to allow
COMMIT/ROLLBACK inside a subtransaction, this would all need to be
rethought ... but I'm unconvinced that such a thing could be logically
consistent at all.)

For further consistency, also check IsSubTransaction() in
SPI_inside_nonatomic_context. That does not matter for its one
present-day caller StartTransaction, which can't be reached inside a
subtransaction. But if any other callers ever arise, they'd presumably
want this definition.

Per bug #18656 from Alexander Alehin. Back-patch to all supported
branches, like previous fixes in this area.

Discussion: https://postgr.es/m/18656-cade1780866ef66c@postgresql.org
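A hedged sketch of the rule (condensed; _SPI_current is spi.c's pointer
to the active SPI stack entry):

    /* Non-atomic execution is allowed only when the caller asked for
     * it, the SPI context is non-atomic, and we are not inside a
     * subtransaction. */
    allow_nonatomic = options->allow_nonatomic &&
                      !_SPI_current->atomic &&
                      !IsSubTransaction();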
* Fix use of uninitialized value in previous commit. (Noah Misch, 2024-09-24)

Per buildfarm member akepa and others. Back-patch to v16 and v15.

Discussion: https://postgr.es/m/20240924224352.93.nmisch@google.com
* For inplace update durability, make heap_update() callers wait. (Noah Misch, 2024-09-24)

The previous commit fixed some ways of losing an inplace update. It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple. In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update(). This includes MERGE commands. To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.

Back-patch to v12 (all supported versions). In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update. The
v14+ UPDATE fix needs commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35,
and it wasn't worth reimplementing that fix without such infrastructure.

Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.

Discussion: https://postgr.es/m/20231027214946.79.nmisch@google.com
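A hedged sketch of the admission protocol (lock level and variable names
illustrative, not the actual code):

    /* Serialize with inplace updaters before copying the old tuple. */
    LockTuple(relation, &oldtup_tid, ExclusiveLock);
    /* ... copy the tuple, modify the copy, call heap_update() ... */
    UnlockTuple(relation, &oldtup_tid, ExclusiveLock);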
* Add missing query ID reporting in extended query protocol (Michael Paquier, 2024-09-18)

This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
  from the cached plan source.
- When receiving an execute message, setting it to the first PlannedStmt
  stored in a portal.

An advantage of this method is that this is able to cover all the types
of portals handled in the extended query protocol, particularly these
two when the report done in ExecutorStart() is not enough (neither is an
addition in ExecutorRun(), actually, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
  clause and a fetch size that stores the tuples in a first execute
  message going through ExecutorStart() and ExecutorRun(), followed by
  one or more execute messages doing only fetches from the tuplestore
  created in the first message. This corresponds to the case where
  execute_is_fetch is set, for example.

Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it. Query ID reporting is optimistic
and more calls to pgstat_report_query_id() don't matter as the first
report takes priority except if the report is forced. The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.

The test added in pg_stat_statements is a courtesy of Robert Haas. This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.

Reported-by: Kaido Vaikla, Erik Wienhold
Author: Sami Imseih
Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/1391613709.939460.1684777418070@office.mailbox.org
Backpatch-through: 14
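A hedged sketch of the bind-message path (variable names illustrative;
simplified from the description):

    /* Report the query ID of the first Query in the cached plan source. */
    if (psrc->query_list != NIL)
        pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId,
                               false);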
* Fix edge case in plpgsql's make_callstmt_target(). (Tom Lane, 2024-08-07)

If the plancache entry for the CALL statement is already stale, it's
possible for us to fetch an old procedure OID out of it, and then fail
with "cache lookup failed for function NNN". In ordinary usage this
never happens because make_callstmt_target is called just once
immediately after building the plancache entry. It can be forced however
by setting up an erroneous CALL (that causes make_callstmt_target itself
to report an error), then dropping/recreating the target procedure, then
repeating the erroneous CALL.

To fix, use SPI_plan_get_cached_plan() to fetch the plancache's plan,
rather than assuming we can use SPI_plan_get_plan_sources(). This
shouldn't add any noticeable overhead in the normal case, and in the
stale-plan case we'd have had to replan anyway a little further down.

The other callers of SPI_plan_get_plan_sources() seem OK, because either
they don't need up-to-date plans or they know that the query was just
(re)planned. But add some commentary in hopes of not falling into this
trap again.

Per bug #18574 from Song Hongyu. Back-patch to v14 where this coding was
introduced. (Older branches have comparable code, but it's run after any
required replanning, so there's no issue.)

Discussion: https://postgr.es/m/18574-2ce7ba3249221389@postgresql.org
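A hedged sketch of the safer fetch (simplified; error handling omitted):

    /* Get a valid plan, replanning if the cache entry went stale. */
    cplan = SPI_plan_get_cached_plan(plan);
    stmt = linitial_node(PlannedStmt, cplan->stmt_list);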
* Propagate query IDs of utility statements in functions (Michael Paquier, 2024-07-19)

For utility statements defined within a function, the query tree is
copied to a PlannedStmt as utility commands do not require planning.
However, the query ID was missing from the information passed down. This
prevented plugins that rely on the query ID, like pg_stat_statements,
from tracking utility statements within function calls.

Tests are added to check this behavior, depending on
pg_stat_statements.track.

This is an old bug. Now, query IDs for utilities are compiled using
their parsed trees rather than the query string since v16
(3db72ebcbe20), leading to less bloat with utilities, so backpatch down
only to this version.

Author: Anthonin Bonnefoy
Discussion: https://postgr.es/m/CAO6_XqrGp-uwBqi3vBPLuRULKkddjC7R5QZCgsFren=8E+m2Sg@mail.gmail.com
Backpatch-through: 16
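A hedged sketch of the missing propagation (fields are from PlannedStmt;
surrounding context simplified):

    stmt = makeNode(PlannedStmt);
    stmt->commandType = CMD_UTILITY;
    stmt->queryId = query->queryId;     /* previously left at 0 */
    stmt->utilityStmt = query->utilityStmt;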
* Fix right-anti-joins when the inner relation is proven unique (Richard Guo, 2024-07-08)

For an inner_unique join, we always assume that the executor will stop
scanning for matches after the first match. Therefore, for a mergejoin
that is inner_unique and whose mergeclauses are sufficient to identify a
match, we set the skip_mark_restore flag to true, indicating that the
executor need not do mark/restore calls. However, merge-right-anti-join
did not get this memo and continues scanning the inner side for matches
after the first match. If there are duplicates in the outer scan, we may
incorrectly skip matching some inner tuples, which can lead to wrong
results.

Here we fix this issue by ensuring that merge-right-anti-join also
advances to the next outer tuple after the first match in inner_unique
cases. This also saves cycles by avoiding unnecessary scanning of inner
tuples after the first match.

Although hash-right-anti-join does not suffer from this wrong results
issue, we apply the same change to it as well, to help save cycles for
the same reason.

Per bug #18522 from Antti Lampinen, and bug #18526 from Feliphe Pozzer.
Back-patch to v16 where right-anti-join was introduced.

Author: Richard Guo
Discussion: https://postgr.es/m/18522-c7a8956126afdfd0@postgresql.org
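A hedged sketch of the state change (names follow nodeMergejoin.c
conventions; surrounding logic omitted):

    /* After recording the first inner match for this outer tuple... */
    if (node->js.single_match)
        node->mj_JoinState = EXEC_MJ_NEXTOUTER;  /* stop scanning inner */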
* Expand comments and add an assertion in nodeModifyTable.c. (Noah Misch, 2024-06-27)

Most comments concern RELKIND_VIEW. One addresses the ExecUpdate()
"tupleid" parameter. A later commit will rely on these facts, but they
hold already. Back-patch to v12 (all supported versions), the plan for
that commit.

Reviewed (in an earlier version) by Robert Haas.

Discussion: https://postgr.es/m/20240512232923.aa.nmisch@google.com
* Fix thinkos in comments (Alvaro Herrera, 2024-06-27)

The first one was noticed by Tender Wang and introduced with
8aba9322511f; the other one was newly introduced with dbca3469ebf8.
* Fix partition pruning setup during DETACH CONCURRENTLY (Alvaro Herrera, 2024-06-24)

When detaching a partition in concurrent mode, it's possible for
partition descriptors to not match the set that was recently seen when
the plan was made, causing an assertion failure or (in production
builds) failure to construct a working plan. The case that was reported
involves prepared statements, but I think it may be possible to hit this
bug without that too.

The problem is that CreatePartitionPruneState is constructing a
PartitionPruneState under the assumption that new partitions can be
added, but never removed, but it turns out that this isn't true: a
prepared statement gets replanned when the DETACH CONCURRENTLY session
sends out its invalidation message, but if the invalidation message
arrives after ExecInitAppend started, we would build a partition
descriptor without the partition, and then CreatePartitionPruneState
would refuse to work with it.

CreatePartitionPruneState already contains code to deal with the new
descriptor having more partitions than before (and behaving for the
extra partitions as if they had been pruned), but doesn't have code to
deal with fewer partitions than before, and it is naïve about the case
where the number of partitions is the same. We could simply add a new
stanza for fewer partitions than before, and in simple testing it works
to do that; but it's possible to press the test scripts even further and
hit the case where one partition is added and a partition is removed
quickly enough that we see the same number of partitions, but they don't
actually match, causing hangs during execution.

To cope with both these problems, we now memcmp() the arrays of
partition OIDs, and do a more elaborate mapping (relying on the fact
that both OID arrays are in partition-bounds order) if they're not
identical.

Backpatch to 14, where DETACH CONCURRENTLY appeared.

Reported-by: yajun Hu <1026592243@qq.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/18377-e0324601cfebdfe5@postgresql.org
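A hedged sketch of the cheap identity test (the fallback mapping is
elided; array names are illustrative):

    if (partdesc->nparts == pinfo->nparts &&
        memcmp(partdesc->oids, pinfo->relid_map,
               sizeof(Oid) * partdesc->nparts) == 0)
    {
        /* Descriptors match exactly; keep the one-to-one mapping. */
    }
    else
    {
        /* Map old partition indexes to new ones, exploiting the fact
         * that both OID arrays are in partition-bounds order. */
    }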
* Fix behavior of stable functions called from a CALL's argument list. (Tom Lane, 2024-06-07)

If the CALL is within an atomic context (e.g. there's an outer
transaction block), _SPI_execute_plan should acquire a fresh snapshot to
execute any such functions with. We failed to do that and instead passed
them the Portal snapshot, which had been acquired at the start of the
current SQL command. This'd lead to seeing stale values of rows modified
since the start of the command.

This is arguably a bug in 84f5c2908: I failed to see that "are we in
non-atomic mode" needs to be defined the same way as it is further down
in _SPI_execute_plan, i.e. check !_SPI_current->atomic not just
options->allow_nonatomic.

Alternatively the blame could be laid on plpgsql, which is
unconditionally passing allow_nonatomic = true for CALL/DO even when it
knows it's in an atomic context. However, fixing it in spi.c seems like
a better idea since that will also fix the problem for any extensions
that may have copied plpgsql's coding pattern.

While here, update an obsolete comment about _SPI_execute_plan's
snapshot management.

Per report from Victor Yegorov. Back-patch to all supported versions.

Discussion: https://postgr.es/m/CAGnEboiRe+fG2QxuBO2390F7P8e2MQ6UyBjZSL_w1Cej+E4=Vw@mail.gmail.com
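A hedged sketch of the snapshot handling (condensed; variable names
illustrative):

    if (!allow_nonatomic)
    {
        /* Atomic context: stable functions must see current data. */
        PushActiveSnapshot(GetTransactionSnapshot());
        pushed_active_snap = true;
    }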
* Ensure we allocate NAMEDATALEN bytes for names in Index Only Scans (David Rowley, 2024-05-01)

As an optimization, we store "name" columns as cstrings in btree
indexes. Here we modify it so that Index Only Scans convert these
cstrings back to names with NAMEDATALEN bytes rather than storing the
cstring in the tuple slot, as was happening previously.

Bug: #17855
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin, Tom Lane
Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org
Backpatch-through: 12, all supported versions
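A hedged sketch of the conversion (simplified; attribute bookkeeping
omitted):

    /* Re-pad the indexed cstring into a full NAMEDATALEN-byte Name. */
    Name    name = (Name) palloc0(NAMEDATALEN);

    namestrcpy(name, DatumGetCString(indexDatum));
    slot->tts_values[attno] = NameGetDatum(name);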
* Fix bogus coding in ExecAppendAsyncEventWait(). (Etsuro Fujita, 2024-04-04)

No configured-by-FDW events would result in "return" directly out of a
PG_TRY block, making the exception stack dangling. Repair.

Oversight in commit 501cfd07d; back-patch to v14, like that commit, but
as we do not have this issue in HEAD (cf. commit 50c67c201), no need to
apply this patch to it.

In passing, improve a comment about the handling of in-process requests
in a postgres_fdw.c function called from this function.

Alexander Pyhalov, with comment adjustment/improvement by me.

Discussion: https://postgr.es/m/425fa29a429b21b0332737c42a4fdc70%40postgrespro.ru
* Fix unnecessary use of moving-aggregate mode with non-moving frame. (Tom Lane, 2024-03-27)

When a plain aggregate is used as a window function, and the window
frame start is specified as UNBOUNDED PRECEDING, the frame's head cannot
move so we do not need to use moving-aggregate mode. The check for that
was put into initialize_peragg(), failing to notice that
ExecInitWindowAgg() calls that function before it's filled in
winstate->frameOptions. Since makeNode() would have zeroed the field,
this didn't provoke uninitialized-value complaints, nor would the
erroneous decision have resulted in more than a little inefficiency.
Still, it's wrong, so move the initialization of winstate->frameOptions
earlier to make it work properly.

While here, also fix a thinko in a comment. Both errors crept in in
commit a9d9acbf2 which introduced the moving-aggregate mode.

Spotted by Vallimaharajan G. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18e7f2a5167.fe36253866818.977923893562469143@zohocorp.com
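A hedged sketch of the ordering fix (condensed from the description):

    winstate->frameOptions = frameOptions;  /* now set before ... */
    /* ... initialize_peragg() runs and can trust the field: */
    if (!(winstate->frameOptions & FRAMEOPTION_START_UNBOUNDED_PRECEDING))
        /* frame head can move: consider moving-aggregate mode */ ;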
* Fix EXPLAIN Bitmap heap scan to count pages with no visible tuples (Heikki Linnakangas, 2024-03-18)

Previously, bitmap heap scans only counted lossy and exact pages for
explain when there was at least one visible tuple on the page.

heapam_scan_bitmap_next_block() returned true only if there was a
"valid" page with tuples to be processed. However, the lossy and exact
page counters in EXPLAIN should count the number of pages represented in
a lossy or non-lossy way in the constructed bitmap, regardless of
whether or not the pages ultimately contained visible tuples.

Backpatch to all supported versions.

Author: Melanie Plageman
Discussion: https://www.postgresql.org/message-id/CAAKRu_ZwCwWFeL_H3ia26bP2e7HiKLWt0ZmGXPVwPO6uXq0vaA@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAAKRu_bxrXeZ2rCnY8LyeC2Ls88KpjWrQ%2BopUrXDRXdcfwFZGA@mail.gmail.com
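A hedged sketch of the counting rule (TBMIterateResult marks a lossy
page with ntuples = -1; field names illustrative):

    /* Count the page as soon as it is fetched from the bitmap... */
    if (tbmres->ntuples >= 0)
        node->exact_pages++;
    else
        node->lossy_pages++;
    /* ...not only after finding visible tuples on it. */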
* Fix confusion about the return rowtype of SQL-language procedures. (Tom Lane, 2024-03-12)

There is a very ancient hack in check_sql_fn_retval that allows a single
SELECT targetlist entry of composite type to be taken as supplying all
the output columns of a function returning composite. (This is grotty
and fundamentally ambiguous, but it's really hard to do nested
composite-returning functions without it.)

As far as I know, that doesn't cause any problems in ordinary functions.
It's disastrous for procedures however. All procedures that have any
output parameters are labeled with prorettype RECORD, and the CALL code
expects it will get back a record with one column per output parameter,
regardless of whether any of those parameters is composite. Doing
something else leads to an assertion failure or core dump.

This is simple enough to fix: we just need to not apply that rule when
considering procedures. However, that requires adding another argument
to check_sql_fn_retval, which at least in principle might be getting
called by external callers. Therefore, in the back branches convert
check_sql_fn_retval into an ABI-preserving wrapper around a new function
check_sql_fn_retval_ext.

Per report from Yahor Yuzefovich. This has been broken since we
implemented procedures, so back-patch to all supported branches.

Discussion: https://postgr.es/m/CABz5gWHSjj2df6uG0NRiDhZ_Uz=Y8t0FJP-_SVSsRsnrQT76Gg@mail.gmail.com
* Fix incorrect accessing of pfree'd memory in Memoize (David Rowley, 2024-03-11)

For pass-by-reference types, the code added in 0b053e78b, which aimed to
resolve a memory leak, was overly aggressive in resetting the per-tuple
memory context. This could result in pfree'd memory being accessed and,
in turn, failure to find previously cached results in the hash table.

What was happening was that prepare_probe_slot() was switching to the
per-tuple memory context and calling ExecEvalExpr(). ExecEvalExpr() may
have required a memory allocation. Both MemoizeHash_hash() and
MemoizeHash_equal() were aggressively resetting the per-tuple context,
and after determining the hash value, the context would have gotten
reset before MemoizeHash_equal() was called. This could have resulted in
MemoizeHash_equal() looking at pfree'd memory.

This is less likely to have caused issues on a production build as some
other allocation would have had to have reused the pfree'd memory to
overwrite it. Otherwise, the original contents would have been intact.
However, this clearly caused issues on MEMORY_CONTEXT_CHECKING builds.

Author: Tender Wang, Andrei Lepikhov
Reported-by: Tender Wang (using SQLancer)
Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
Backpatch-through: 14, where Memoize was added
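A hedged sketch of the discipline (condensed): evaluate the probe
expressions in the per-tuple context, but reset that context at one
well-defined point per probe, not inside each hash/equality callback:

    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    /* ... ExecEvalExpr() fills the probe slot ... */
    MemoryContextSwitchTo(oldcontext);
    /* no MemoryContextReset() here; the caller resets once per probe */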
* Fix handling of self-modified tuples in MERGE. (Dean Rasheed, 2024-03-07)

When an UPDATE or DELETE action in MERGE returns TM_SelfModified, there
are 2 possible causes:

1). The target tuple was already updated or deleted by the current
    command. This can happen if the target row joins to more than one
    source row, and the SQL standard explicitly says that this must be
    an error.

2). The target tuple was already updated or deleted by a later command
    in the current transaction. This can happen if the tuple is modified
    by a BEFORE trigger or a volatile function used in the query, and
    should be an error for the same reason that it is in a plain UPDATE
    or DELETE command.

In MERGE's primary error handling block, it failed to check for (2),
causing it to return a misleading error message in such cases.

In the secondary error handling block, following a concurrent update
from another session, it failed to check for (1), causing it to silently
ignore target rows joined to more than one source row, instead of
reporting an error.

Fix this, and add tests for both of these cases.

Per report from Wenjiang Zhang. Back-patch to v15, where MERGE was
introduced.

Discussion: https://postgr.es/m/tencent_41DE0FF443FE14B94A5898D373792109E408%40qq.com
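A hedged sketch of the disambiguation (names follow nodeModifyTable.c
conventions; context abbreviated):

    case TM_SelfModified:
        if (context->tmfd.cmax == estate->es_output_cid)
            ereport(ERROR,
                    (errcode(ERRCODE_CARDINALITY_VIOLATION),
                     errmsg("MERGE command cannot affect row a second time")));
        /* else: modified by a later command (trigger or volatile function) */
        ereport(ERROR,
                (errcode(ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION),
                 errmsg("tuple to be updated or deleted was already modified "
                        "by an operation triggered by the current command")));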
* Fix type-checking of RECORD-returning functions in FROM. (Tom Lane, 2024-03-06)

In the corner case where a function returning RECORD has been simplified
to a RECORD constant or an inlined ROW() expression,
ExecInitFunctionScan failed to cross-check the function's result rowtype
against the coldeflist provided by the calling query. That happened
because get_expr_result_type is able to extract a tupdesc from such
expressions, which led ExecInitFunctionScan to ignore the coldeflist.
(Instead, it used the extracted tupdesc to check the function's output,
which of course always succeeds.)

I have not been able to demonstrate any really serious consequences from
this, because if some column of the result is of the wrong type and is
directly referenced by a Var of the calling query,
CheckVarSlotCompatibility will catch it. However, we definitely do fail
to report the case where the function returns more columns than the
coldeflist expects, and in the converse case where it returns fewer
columns, we get an assert failure (but, seemingly, no worse results in
non-assert builds).

To fix, always build the expected tupdesc from the coldeflist if there
is one, and consult get_expr_result_type only when there isn't one. Also
remove the failing Assert, even though it is no longer reached after
this fix. It doesn't seem to be adding anything useful, since later
checking will deal with cases with the wrong number of columns.

The only other place I could find that is doing something similar is
inline_set_returning_function. There's no live bug there because we
cannot be looking at a Const or RowExpr, but for consistency change that
code to agree with ExecInitFunctionScan.

Per report from PetSerAl. After some debate I've concluded that this
should be back-patched. There is a small risk that somebody has been
relying on such a case not throwing an error, but I judge this
outweighed by the risk that I've missed some way in which the failure to
cross-check has worse consequences than sketched above.

Discussion: https://postgr.es/m/CAKygsHSerA1eXsJHR9wft3Gn3wfHQ5RfP8XHBzF70_qcrrRvEg@mail.gmail.com
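A hedged sketch of the priority change (RangeTblFunction fields; error
paths omitted):

    if (rtfunc->funccolnames != NIL)
        /* Coldeflist wins: build the expected tupdesc from it. */
        tupdesc = BuildDescFromLists(rtfunc->funccolnames,
                                     rtfunc->funccoltypes,
                                     rtfunc->funccoltypmods,
                                     rtfunc->funccolcollations);
    else
        functypclass = get_expr_result_type(funcexpr, NULL, &tupdesc);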
* Fix indentation in ExecParallelHashIncreaseNumBatches() (Alexander Korotkov, 2024-01-08)

Backpatch-through: 12
* Fix oversized memory allocation in Parallel Hash Join (Alexander Korotkov, 2024-01-07)

During the calculations of the maximum for the number of buckets, take
into account that later we round that to the next power of 2.

Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
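A hedged sketch of the clamp (types and the exact limit are
illustrative, not the committed code):

    /* nbuckets is rounded up to a power of 2 later, so the cap must be
     * a power of 2 too, or the rounding can overshoot MaxAllocSize. */
    max_buckets = pg_prevpower2_32(MaxAllocSize / sizeof(HashJoinTuple));
    nbuckets = Min(nbuckets, max_buckets);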
* Fix use of incorrect TupleTableSlot in DISTINCT aggregates (David Rowley, 2024-01-04)

1349d2790 added code to allow DISTINCT and ORDER BY aggregates to work
more efficiently by using presorted input. That commit added some code
that made use of the AggState's tmpcontext and adjusted the
ecxt_outertuple and ecxt_innertuple slots before checking if the current
row is distinct from the previously seen row. That code forgot to set
the TupleTableSlots back to what they were originally, which could
result in errors such as:

    ERROR: attribute 1 of type record has wrong type

This only affects aggregate functions which have multiple arguments when
DISTINCT is used. For example: string_agg(DISTINCT col, ', ')

Thanks to Tom Lane for identifying the breaking commit.

Bug: #18264
Reported-by: Vojtěch Beneš
Discussion: https://postgr.es/m/18264-e363593d7e9feb7d@postgresql.org
Backpatch-through: 16, where 1349d2790 was added
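A hedged sketch of the save/restore pattern (condensed; slot variables
illustrative):

    TupleTableSlot *save_outer = tmpcontext->ecxt_outertuple;
    TupleTableSlot *save_inner = tmpcontext->ecxt_innertuple;

    tmpcontext->ecxt_outertuple = newslot;
    tmpcontext->ecxt_innertuple = oldslot;
    if (!ExecQual(pertrans->equalfnMulti, tmpcontext))
        /* row is distinct: feed it to the transition function */ ;
    tmpcontext->ecxt_outertuple = save_outer;
    tmpcontext->ecxt_innertuple = save_inner;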
* Fix BEFORE ROW trigger handling in cross-partition MERGE update. (Dean Rasheed, 2023-12-21)

Fix a bug during MERGE if a cross-partition update is attempted on a
partitioned table with a BEFORE DELETE ROW trigger that returns NULL, to
prevent the update. This would cause an error to be thrown, or an assert
failure in an assert-enabled build.

This was an oversight in 9321c79c86, which failed to properly
distinguish a DELETE prevented by a trigger from one prevented by a
concurrent update. Fix by having ExecDelete() return the TM_Result
status to ExecCrossPartitionUpdate(), so that it can distinguish the two
cases, and make ExecCrossPartitionUpdate() return the TM_Result status
to ExecUpdateAct(), so that it can return the correct status from a
concurrent update.

In addition, ensure that the command tag is correctly updated by having
ExecMergeMatched() pass canSetTag to ExecUpdateAct(), rather than
passing false, so that it updates the command tag if it does a
cross-partition update, making this code path in ExecMergeMatched()
consistent with ExecUpdate().

Per bug #18238 from Alexander Lakhin. Back-patch to v15, where MERGE was
introduced.

Dean Rasheed, reviewed by Richard Guo and Jian He.

Discussion: https://postgr.es/m/18238-2f2bdc7f720180b9%40postgresql.org
* Fix resource leak when a FDW's ForeignAsyncRequest function fails (Heikki Linnakangas, 2023-11-23)

If an error is thrown after calling CreateWaitEventSet(), the memory of
a WaitEventSet is free'd as it's allocated in the short-lived memory
context, but the file descriptor (on epoll- or kqueue-based systems) or
handles (on Windows) that it contains are leaked.

Use PG_TRY-FINALLY to ensure it gets freed. (On master, I will apply a
better fix, using ResourceOwners to track the WaitEventSet, but that's
not backpatchable.)

The added test doesn't check for leaking resources, so it passed even
before this commit. But at least it covers the code path.

In passing, fix a misleading comment on what the 'nevents' argument to
WaitEventSetWait means.

Report by Alexander Lakhin, analysis and suggestion for the fix by Tom
Lane. Fixes bug #17828.

Backpatch to v14 where async execution was introduced, but master gets a
different fix.

Discussion: https://www.postgresql.org/message-id/17828-122da8cba23236be@postgresql.org
Discussion: https://www.postgresql.org/message-id/472235.1678387869@sss.pgh.pa.us
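A hedged sketch of the pattern, assuming the pre-v17 CreateWaitEventSet
signature (event registration elided):

    WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, nevents);

    PG_TRY();
    {
        /* ... AddWaitEventToSet() / WaitEventSetWait() work here ... */
    }
    PG_FINALLY();
    {
        /* Runs on both normal exit and error: releases the kernel
         * object (epoll/kqueue fd or Windows handles), not just the
         * palloc'd memory. */
        FreeWaitEventSet(set);
    }
    PG_END_TRY();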
* Fix AFTER ROW trigger execution in MERGE cross-partition update. (Dean Rasheed, 2023-11-09)

When executing a MERGE UPDATE action, if the UPDATE is turned into a
cross-partition DELETE then INSERT, do not attempt to invoke AFTER
UPDATE ROW triggers, or any of the other post-update actions in
ExecUpdateEpilogue().

For consistency with a plain UPDATE command, such triggers should not be
fired (and typically fail anyway), and similarly, other post-update
actions, such as WCO/RLS checks, should not be executed, and might also
lead to unexpected failures. Therefore, as with ExecUpdate(), make
ExecMergeMatched() return immediately if ExecUpdateAct() reports that a
cross-partition update was done, to be sure that no further processing
is done for that tuple.

Back-patch to v15, where MERGE was introduced.

Discussion: https://postgr.es/m/CAEZATCWjBgagyNZs02vgDF0DvASYj-iHTFtXG2-nP3orZhmtcw%40mail.gmail.com
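A hedged sketch of the early exit (condensed; surrounding MERGE logic
omitted):

    result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple,
                           newslot, canSetTag, &updateCxt);
    if (result == TM_Ok && updateCxt.crossPartUpdate)
        return true;  /* DELETE+INSERT done; skip ExecUpdateEpilogue() */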
* Fix intra-query memory leak when a SRF returns zero rows. (Tom Lane, 2023-10-28)

When looping around after finding that the set-returning function
returned zero rows for the current input tuple, ExecProjectSet neglected
to reset either of the two memory contexts it's responsible for cleaning
out. Typically this wouldn't cause much problem, because once the SRF
does return at least one row, the contexts would get reset on the next
call. However, if the SRF returns no rows for many input tuples in
succession, quite a lot of memory could be transiently consumed.

To fix, make sure we reset both contexts while looping around.

Per bug #18172 from Sergei Kornilov. Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18172-9b8c5fc1d676ded3@postgresql.org
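A hedged sketch of the loop (condensed from the shape of
nodeProjectSet.c):

    for (;;)
    {
        /* Reset both contexts on every iteration, including the
         * zero-row case that loops straight back here. */
        MemoryContextReset(node->argcontext);
        ResetExprContext(econtext);

        /* ... fetch the next outer tuple, evaluate the SRF ... */
    }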
* Fix problems when a plain-inheritance parent table is excluded. (Tom Lane, 2023-10-24)

When an UPDATE/DELETE/MERGE's target table is an old-style inheritance
tree, it's possible for the parent to get excluded from the plan while
some children are not. (I believe this is only possible if we can prove
that a CHECK ... NO INHERIT constraint on the parent contradicts the
query WHERE clause, so it's a very unusual case.) In such a case,
ExecInitModifyTable mistakenly concluded that the first surviving child
is the target table, leading to at least two bugs:

1. The wrong table's statement-level triggers would get fired.

2. In v16 and up, it was possible to fail with "invalid perminfoindex 0
   in RTE with relid nnnn" due to the child RTE not having permissions
   data included in the query plan. This was hard to reproduce reliably
   because it did not occur unless the update triggered some non-HOT
   index updates.

In v14 and up, this is easy to fix by defining ModifyTable.rootRelation
to be the parent RTE in plain inheritance as well as partitioned cases.

While the wrong-triggers bug also appears in older branches, the
relevant code in both the planner and executor is quite a bit different,
so it would take a good deal of effort to develop and test a suitable
patch. Given the lack of field complaints about the trigger issue, I'll
desist for now. (Patching v11 for this seems unwise anyway, given that
it will have no more releases after next month.)

Per bug #18147 from Hans Buschmann.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/18147-6fc796538913ee88@postgresql.org
* Fix runtime partition pruning for HASH partitioned tables (David Rowley, 2023-10-13)

This could only affect HASH partitioned tables with at least 2 partition
key columns.

If partition pruning was delayed until execution and the query contained
an IS NULL qual on one of the partitioned keys, and some subsequent
partitioned key was being compared to a non-Const, then this could
result in a crash due to the incorrect keyno being used to calculate the
stateidx for the expression evaluation code.

Here we fix this by properly skipping partitioned keys which have a
nullkey set. Effectively, this must be the same as what's going on
inside perform_pruning_base_step().

Sergei Glukhov also provided a patch, but that's not what's being used
here.

Reported-by: Sergei Glukhov
Reviewed-by: tender wang, Sergei Glukhov
Discussion: https://postgr.es/m/d05b26fa-af54-27e1-f693-6c31590802fa@postgrespro.ru
Backpatch-through: 11, where runtime partition pruning was added.
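A hedged sketch of the skip (condensed; the index bookkeeping is
simplified):

    for (keyno = 0; keyno < partnatts; keyno++)
    {
        /* IS NULL keys have no expression to evaluate; don't let them
         * shift the state index used for the remaining keys. */
        if (bms_is_member(keyno, step->nullkeys))
            continue;
        /* ... compute stateidx from the non-null key position ... */
    }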
* Fix memory leak in Memoize code (David Rowley, 2023-10-05)

Ensure we switch to the per-tuple memory context to prevent any memory
leaks of detoasted Datums in MemoizeHash_hash() and MemoizeHash_equal().

Reported-by: Orlov Aleksej
Author: Orlov Aleksej, David Rowley
Discussion: https://postgr.es/m/83281eed63c74e4f940317186372abfd%40cft.ru
Backpatch-through: 14, where Memoize was added
* Fix EvalPlanQual rechecking during MERGE. (Dean Rasheed, 2023-09-30)

Under some circumstances, concurrent MERGE operations could lead to
inconsistent results that varied according to the plan chosen. This was
caused by a lack of rowmarks on the source relation, which meant that
EvalPlanQual rechecking was not guaranteed to return the same source
tuples when re-running the join query.

Fix by ensuring that preprocess_rowmarks() sets up PlanRowMarks for all
non-target relations used in MERGE, in the same way that it does for
UPDATE and DELETE.

Per bug #18103. Back-patch to v15, where MERGE was introduced.

Dean Rasheed, reviewed by Richard Guo.

Discussion: https://postgr.es/m/18103-c4386baab8e355e3%40postgresql.org
* Fix RLS policy usage in MERGE. (Dean Rasheed, 2023-08-07)

If MERGE executes an UPDATE action on a table with row-level security,
the code incorrectly applied the WITH CHECK clauses from the target
table's INSERT policies to new rows, instead of the clauses from the
table's UPDATE policies. In addition, it failed to check new rows
against the target table's SELECT policies, if SELECT permissions were
required (likely to always be the case).

In addition, if MERGE executes a DO NOTHING action for matched rows, the
code incorrectly applied the USING clauses from the target table's
DELETE policies to existing target tuples. These policies were applied
as checks that would throw an error, if they did not pass.

Fix this, so that a MERGE UPDATE action applies the same RLS policies as
a plain UPDATE query with a WHERE clause, and a DO NOTHING action does
not apply any RLS checks (other than adding clauses from SELECT policies
to the join).

Back-patch to v15, where MERGE was introduced.

Dean Rasheed, reviewed by Stephen Frost.

Security: CVE-2023-39418
* Remove unnecessary checks for indexes for REPLICA IDENTITY FULL tables. (Masahiko Sawada, 2023-07-25)

Previously, when selecting a usable index for update/delete for the
REPLICA IDENTITY FULL table, in IsIndexOnlyExpression(), we used to
check if all index fields are not expressions. However, this was not
necessary, because it is enough to check if only the leftmost index
field is not an expression (and references the remote table column), and
this check has already been done by
RemoteRelContainsLeftMostColumnOnIdx().

This commit removes IsIndexOnlyExpression() and
RemoteRelContainsLeftMostColumnOnIdx(); all checks for usable indexes
for REPLICA IDENTITY FULL tables are now performed by
IsIndexUsableForReplicaIdentityFull().

Backpatch this to keep the code consistent.

Reported-by: Peter Smith
Reviewed-by: Amit Kapila, Önder Kalacı
Discussion: https://postgr.es/m/CAHut%2BPsGRE5WSsY0jcLHJEoA17MrbP9yy8FxdjC_ZOAACxbt%2BQ%40mail.gmail.com
Backpatch-through: 16
* Don't include CaseTestExpr in JsonValueExpr.formatted_expr (Amit Langote, 2023-07-21)

A CaseTestExpr is currently being put into JsonValueExpr.formatted_expr
as a placeholder for the result of evaluating JsonValueExpr.raw_expr,
which in turn is evaluated separately. However, there's no need for this
indirection if raw_expr itself can be embedded into formatted_expr and
evaluated as part of evaluating the latter, especially as there is no
special reason to evaluate it separately. So this commit makes it so.

As a result, JsonValueExpr.raw_expr no longer needs to be evaluated in
ExecInterpExpr(), eval_const_exprs_mutator() etc. and is now only used
for displaying the original "unformatted" expression in ruleutils.c.
Comments about, and the code manipulating, formatted_expr are updated to
mention that it is now always set and is the expression that gives a
JsonValueExpr its runtime value.

While at it, this also removes the function makeCaseTestExpr(), because
the code in makeJsonConstructorExpr() looks more readable without it IMO
and it isn't used by anyone else either. Finally, a note is added in the
comment above CaseTestExpr's definition that JsonConstructorExpr is also
using it.

Backpatched to 16 from the development branch to keep the code in sync
across branches.

Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
* Doc: clarify the conditions of usable indexes for REPLICA IDENTITY FULL tables. (Masahiko Sawada, 2023-07-13)

Commit 89e46da5e allowed REPLICA IDENTITY FULL tables to use an index on
the subscriber during apply of update/delete. This commit clarifies in
the documentation that the leftmost field of candidate indexes must be a
column (not an expression) that references the published relation
column. The source code comments are also updated accordingly.

Reviewed-by: Peter Smith, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDJjffEvUFKXT27Q5U8-UU9JHv4rrJ9Ke8Zkc5UPWHLvA@mail.gmail.com
Backpatch-through: 16
* Fix memory leak in Incremental Sort rescans (Tomas Vondra, 2023-07-02)

The Incremental Sort node had a couple of issues, resulting in leaking
memory during rescans, possibly triggering OOM. The code had two related
flaws:

1. During rescans, the sort states were reset but then also set to NULL
   (despite the comment saying otherwise). ExecIncrementalSort then sees
   NULL and initializes a new sort state, leaking the memory used by the
   old one.

2. Initializing the sort state also automatically rebuilt the info about
   presorted keys, leaking the already initialized info. presorted_keys
   was also unnecessarily reset to NULL.

Patch by James Coleman, based on patches by Laurenz Albe and Tom Lane.
Backpatch to 13, where Incremental Sort was introduced.

Author: James Coleman, Laurenz Albe, Tom Lane
Reported-by: Laurenz Albe, Zu-Ming Jiang
Backpatch-through: 13
Discussion: https://postgr.es/m/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel%40cybertec.at
Discussion: https://postgr.es/m/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch
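A hedged sketch of the rescan path (condensed from the description):

    /* Reset the sort states for reuse, but keep the pointers (and the
     * presorted-keys info) so nothing is rebuilt or leaked. */
    if (node->fullsort_state != NULL)
        tuplesort_reset(node->fullsort_state);
    if (node->prefixsort_state != NULL)
        tuplesort_reset(node->prefixsort_state);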
* Fix order of operations in ExecEvalFieldStoreDeForm(). (Tom Lane, 2023-06-29)

If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads to
strange errors or crashes when we try to use the TupleDesc to read the
tuple. In short then, trying to update a field of a composite column
could fail intermittently if the overall column value is wide enough to
require toasting.

We can fix the bug at no cost by just changing the order of operations,
since we don't need the TupleDesc until after detoasting. (Other callers
of get_cached_rowtype appear to get this right already, so there's only
one bug.)

Note that the added regression test case reveals this bug reliably only
with debug_discard_caches/CLOBBER_CACHE_ALWAYS.

Per bug #17994 from Alexander Lakhin. Sadly, this patch does not fix the
missing-values issue revealed in the bug discussion; we'll need some
more work to cover that.

Discussion: https://postgr.es/m/17994-5c7100b51b4790e9@postgresql.org
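A hedged sketch of the reordering (condensed; variable names
illustrative):

    /* Detoast first: this may perform database accesses... */
    tuphdr = DatumGetHeapTupleHeader(composite_datum);

    /* ...so only now is it safe to fetch and rely on the cached
     * rowtype. */
    tupDesc = get_cached_rowtype(tupType, tupTypmod, rowcache, NULL);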
* Pre-beta2 mechanical code beautification. (Tom Lane, 2023-06-20)

Run pgindent and pgperltidy. It seems we're still some ways away from
all committers doing this automatically. Now that we have a buildfarm
animal that will whine about poorly-indented code, we'll try to keep the
tree more tidy.

Discussion: https://postgr.es/m/3156045.1687208823@sss.pgh.pa.us
* Retain relkind too in RTE_SUBQUERY entries for views. (Amit Langote, 2023-06-14)

47bb9db75 modified ApplyRetrieveRule()'s conversion of a view's original
RTE_RELATION entry into an RTE_SUBQUERY one to retain relid,
rellockmode, and perminfoindex so that the executor can lock the view
and check its permissions. It seems better to also retain relkind for
cross-checking that the exception of an RTE_SUBQUERY entry being allowed
to carry relation details only applies to views, so do so.

Bump catversion because this changes the output format of RTE_SUBQUERY
RTEs.

Suggested-by: David Steele <david@pgmasters.net>
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3953179e-9540-e5d1-a743-4bef368785b0%40pgmasters.net
* Use per-tuple context in ExecGetAllUpdatedCols (Tomas Vondra, 2023-06-07)

Commit fc22b6623b (generated columns) replaced ExecGetUpdatedCols() with
ExecGetAllUpdatedCols() in a couple places handling UPDATE (triggers and
lock mode). However, ExecGetUpdatedCols() did exec_rt_fetch() while
ExecGetAllUpdatedCols() also allocates memory through bms_union()
without paying attention to the memory context, and happened to use the
long-lived ExecutorState, leaking the memory until the end of the query.

The amount of leaked memory is proportional to the number of (updated)
attributes, types of UPDATE triggers, and the number of processed rows
(which for UPDATE ... FROM ... may be much higher than updated rows).

Fixed by switching to the per-tuple context in GetAllUpdatedColumns().
This is fine for all in-core callers, but external callers may need to
copy the result. But we're not aware of any such callers.

Note the issue was introduced by fc22b6623b, but the macros were later
renamed by f50e888990.

Backpatch to 12, where the issue was introduced.

Reported-by: Tomas Vondra
Reviewed-by: Andres Freund, Tom Lane, Jakub Wartak
Backpatch-through: 12
Discussion: https://postgr.es/m/222a3442-7f7d-246c-ed9b-a76209d19239@enterprisedb.com
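A hedged sketch of the fix (condensed):

    oldcontext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
    cols = bms_union(ExecGetUpdatedCols(relinfo, estate),
                     ExecGetExtraUpdatedCols(relinfo, estate));
    MemoryContextSwitchTo(oldcontext);
    /* callers must not expect 'cols' to outlive the current tuple cycle */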
* Pre-beta mechanical code beautification. (Tom Lane, 2023-05-19)

Run pgindent, pgperltidy, and reformat-dat-files.

This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going forward,
that should make for fewer random-seeming changes to existing code.

Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql