path: root/src
* Fix memory leak in Memoize code (David Rowley, 2023-10-05)

    Ensure we switch to the per-tuple memory context to prevent any memory
    leaks of detoasted Datums in MemoizeHash_hash() and MemoizeHash_equal().

    Reported-by: Orlov Aleksej
    Author: Orlov Aleksej, David Rowley
    Discussion: https://postgr.es/m/83281eed63c74e4f940317186372abfd%40cft.ru
    Backpatch-through: 14, where Memoize was added
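    A minimal sketch of the pattern the fix applies, with names as in
    nodeMemoize.c but simplified; hash_cache_key() is a hypothetical
    stand-in for the key-hashing logic, and only the context switch is
    the point:

        static uint32
        MemoizeHash_hash(struct memoize_hash *tb, const MemoizeKey *key)
        {
            MemoizeState *mstate = (MemoizeState *) tb->private_data;
            ExprContext  *econtext = mstate->ss.ps.ps_ExprContext;
            MemoryContext oldcontext;
            uint32        hashkey;

            /* hash in per-tuple memory so any detoasted copies are
             * released at the next tuple-context reset, not leaked */
            oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
            hashkey = hash_cache_key(mstate, key);  /* may detoast Datums */
            MemoryContextSwitchTo(oldcontext);

            return hashkey;
        }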
* Avoid memory size overflow when allocating backend activity buffer (Michael Paquier, 2023-10-03)

    The code in charge of copying the contents of PgBackendStatus to local
    memory could fail on memory allocation because of an overflow on the
    amount of memory to use.  The overflow can happen when combining a
    high value of track_activity_query_size (max at 1MB) with a large
    max_connections: the product of the two can exceed INT32_MAX, as both
    parameters are treated as signed integers.  This could for example
    trigger with the following functions, all calling
    pgstat_read_current_status():
    - pg_stat_get_backend_subxact()
    - pg_stat_get_backend_idset()
    - pg_stat_get_progress_info()
    - pg_stat_get_activity()
    - pg_stat_get_db_numbackends()

    The fix is to use MemoryContextAllocHuge().  The problematic code was
    introduced in 8d0ddccec636, so backpatch down to 12.

    Author: Jakub Wartak
    Discussion: https://postgr.es/m/CAKZiRmw8QSNVw2qNK-dznsatQqz+9DkCquxP0GHbbv1jMkGHMA@mail.gmail.com
    Backpatch-through: 12
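    The failure mode and the fix, sketched under the assumption that the
    copy size is the product of those two GUCs (variable names are
    illustrative, not the actual code):

        /* buggy shape: the multiply is done in 32-bit signed arithmetic */
        char *localbuf = palloc(track_activity_query_size * max_connections);

        /* fixed shape: widen before multiplying (mul_size errors out on
         * overflow) and allow allocations above the 1GB palloc limit */
        Size  sz = mul_size((Size) track_activity_query_size,
                            (Size) max_connections);
        char *localbuf2 = MemoryContextAllocHuge(CurrentMemoryContext, sz);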
* Fail hard on out-of-memory failures in xlogreader.c (Michael Paquier, 2023-10-03)

    This commit changes the WAL reader routines so that a FATAL for the
    backend or exit(FAILURE) for the frontend is triggered if an
    allocation for a WAL record decode fails in xlogreader.c, rather than
    treating this case as bogus data, which would be equivalent to the end
    of WAL.  The key is to avoid palloc_extended(MCXT_ALLOC_NO_OOM) in
    xlogreader.c, relying on plain palloc() calls instead.

    The previous behavior could make WAL replay finish earlier than it
    should.  For example, crash recovery finishing early may corrupt
    clusters because not all the WAL available locally was replayed to
    ensure a consistent state.  Out-of-memory failures would show up
    randomly depending on the memory pressure on the host, but one simple
    case would be to generate a large record, then replay this record
    after downsizing a host, as Ethan Mertz originally reported.

    This relies on bae868caf222, as the WAL reader routines now do the
    memory allocation required for a record only once its header has been
    fully read and validated, making xl_tot_len trustable.  Making the WAL
    reader react differently on out-of-memory or bogus record data would
    require ABI changes, so this is the safest choice for stable branches.
    Also, it is worth noting that 3f1ce973467a has been using a plain
    palloc() in this code for some time now.

    Thanks to Noah Misch and Thomas Munro for the discussion.  Like the
    other commit, backpatch down to 12, leaving out v11 that will be EOL'd
    soon.  The behavior of considering a failed allocation as bogus data
    comes originally from 0ffe11abd3a0, where the record length retrieved
    from its header was not entirely trustable.

    Reported-by: Ethan Mertz
    Discussion: https://postgr.es/m/ZRKKdI5-RRlta3aF@paquier.xyz
    Backpatch-through: 12
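    The core of the change, sketched (palloc_extended() and
    MCXT_ALLOC_NO_OOM are the real APIs; the surrounding buffer-sizing
    logic is elided):

        /* before: returns NULL on OOM, which was misread as end-of-WAL */
        state->readRecordBuf = palloc_extended(newSize, MCXT_ALLOC_NO_OOM);

        /* after: plain palloc() raises FATAL in the backend, and the
         * frontend exits, instead of silently ending replay early */
        state->readRecordBuf = palloc(newSize);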
* Fix omission of column-level privileges in selective pg_restore. (Tom Lane, 2023-10-02)

    In a selective restore, ACLs for a table should be dumped if the table
    is selected to be dumped.  However, if the table has both table-level
    and column-level ACLs, only the table-level ACL was restored.  This
    happened because _tocEntryRequired assumed that an ACL could have only
    one dependency (the one on its table), and punted if there was more
    than one.  But since commit ea9125304, column-level ACLs also depend
    on the table-level ACL if any, to ensure correct ordering in parallel
    restores.  To fix, adjust the logic in _tocEntryRequired to ignore
    dependencies on ACLs.

    I extended a test case in 002_pg_dump.pl so that it purports to test
    for this; but in fact the test passes even without the fix.  That's
    because this bug only manifests during a selective restore, while the
    scenarios 002_pg_dump.pl tests include only selective dumps.  Perhaps
    somebody would like to extend the script so that it can test scenarios
    including selective restore, but I'm not touching that.

    Euler Taveira and Tom Lane, per report from Kong Man.  Back-patch to
    all supported branches.

    Discussion: https://postgr.es/m/DM4PR11MB73976902DBBA10B1D652F9498B06A@DM4PR11MB7397.namprd11.prod.outlook.com
* Flush WAL stats in bgwriter (Heikki Linnakangas, 2023-10-02)

    bgwriter can write out WAL, but did not flush the WAL pgstat counters,
    so the writes were not seen in pg_stat_wal.

    Back-patch to v14, where pg_stat_wal was introduced.

    Author: Nazir Bilal Yavuz
    Reviewed-by: Matthias van de Meent, Kyotaro Horiguchi
    Discussion: https://www.postgresql.org/message-id/CAN55FZ2FPYngovZstr%3D3w1KSEHe6toiZwrurbhspfkXe5UDocg%40mail.gmail.com
* Fix datalen calculation in tsvectorrecv(). (Tom Lane, 2023-10-01)

    After receiving position data for a lexeme, tsvectorrecv() advanced
    its "datalen" value by (npos+1)*sizeof(WordEntry) where the correct
    calculation is (npos+1)*sizeof(WordEntryPos).  This accidentally
    failed to render the constructed tsvector invalid, but it did result
    in leaving some wasted space approximately equal to the space consumed
    by the position data.  That could have several bad effects:

    * Disk space is wasted if the received tsvector is stored into a table
      as-is.

    * A legal tsvector could get rejected with "maximum total lexeme
      length exceeded" if the extra space pushes it over the MAXSTRPOS
      limit.

    * In edge cases, the finished tsvector could be assigned a length
      larger than the allocated size of its palloc chunk, conceivably
      leading to SIGSEGV when the tsvector gets copied somewhere else.

    The odds of a field failure of this sort seem low, though valgrind
    testing could probably have found this.

    While we're here, let's express the calculation as "sizeof(uint16) +
    npos * sizeof(WordEntryPos)" to avoid the type pun implicit in the
    "npos + 1" formulation.  It's not wrong given that WordEntryPos had
    better be 2 bytes to avoid padding problems, but it seems clearer this
    way.

    Report and patch by Denis Erokhin.  Back-patch to all supported
    versions.

    Discussion: https://postgr.es/m/009801d9f2d9$f29730c0$d7c59240$@datagile.ru
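    The computation in tsvectorrecv(), sketched with the surrounding code
    elided:

        /* buggy: WordEntry is 4 bytes, overcounting the position data */
        datalen += (npos + 1) * sizeof(WordEntry);

        /* fixed, written so the on-disk layout is explicit: a uint16
         * position count followed by npos 2-byte WordEntryPos items */
        datalen += sizeof(uint16) + npos * sizeof(WordEntryPos);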
* In COPY FROM, fail cleanly when unsupported encoding conversion is needed. (Tom Lane, 2023-10-01)

    In recent releases, such cases fail with "cache lookup failed for
    function 0" rather than complaining that the conversion function
    doesn't exist as prior versions did.  Seems to be a consequence of
    sloppy refactoring in commit f82de5c46.  Add the missing error check.

    Per report from Pierre Fortin.  Back-patch to v14 where the oversight
    crept in.

    Discussion: https://postgr.es/m/20230929163739.3bea46e5.pfortin@pfortin.com
* Fix briefly showing old progress stats for ANALYZE on inherited tables. (Heikki Linnakangas, 2023-09-30)

    ANALYZE on a table with inheritance children analyzes all the child
    tables in a loop.  When stepping to the next child table, it updated
    the child rel ID value in the command progress stats, but did not
    reset the 'sample_blks_total' and 'sample_blks_scanned' counters.

    acquire_sample_rows() updates 'sample_blks_total' as soon as the scan
    starts and 'sample_blks_scanned' after processing the first block, but
    until then, pg_stat_progress_analyze would display a bogus combination
    of the new child table relid with old counter values from the
    previously processed child table.  Fix by resetting
    'sample_blks_total' and 'sample_blks_scanned' to zero at the same time
    that 'current_child_table_relid' is updated.

    Backpatch to v13, where the pg_stat_progress_analyze view was
    introduced.

    Reported-by: Justin Pryzby
    Discussion: https://www.postgresql.org/message-id/20230122162345.GP13860%40telsasoft.com
* Fix EvalPlanQual rechecking during MERGE. (Dean Rasheed, 2023-09-30)

    Under some circumstances, concurrent MERGE operations could lead to
    inconsistent results that varied according to the plan chosen.  This
    was caused by a lack of rowmarks on the source relation, which meant
    that EvalPlanQual rechecking was not guaranteed to return the same
    source tuples when re-running the join query.

    Fix by ensuring that preprocess_rowmarks() sets up PlanRowMarks for
    all non-target relations used in MERGE, in the same way that it does
    for UPDATE and DELETE.

    Per bug #18103.  Back-patch to v15, where MERGE was introduced.

    Dean Rasheed, reviewed by Richard Guo.

    Discussion: https://postgr.es/m/18103-c4386baab8e355e3%40postgresql.org
* Remove environment sensitivity in pl/tcl regression test. (Tom Lane, 2023-09-29)

    Add "-gmt 1" to our test invocations of the Tcl "clock" command, so
    that they do not consult the timezone environment.  While it doesn't
    really matter which timezone is used here, it does matter that the
    command not fall over entirely.  We've now discovered that at least on
    FreeBSD, "clock scan" will fail if /etc/localtime is missing.  It
    seems worth making the test insensitive to that.

    Per Tomas Vondra's buildfarm animal dikkop.  Thanks to Thomas Munro
    for the diagnosis.

    Discussion: https://postgr.es/m/316d304a-1dcd-cea1-3d6c-27f794727a06@enterprisedb.com
* Suppress macOS warnings about duplicate libraries in link commands. (Tom Lane, 2023-09-29)

    As of Xcode 15 (macOS Sonoma), the linker complains about duplicate
    references to the same library.  We see warnings about libpgport and
    libpgcommon being duplicated in many client executables.  This is a
    consequence of the hack introduced in commit 6b7ef076b to list
    libpgport before libpq while not removing it from $(LIBS).  (Commit
    8396447cd later applied the same rule to libpgcommon.)

    The concern in 6b7ef076b was to ensure that the client executable
    wouldn't unintentionally depend on pgport functions from libpq.  That
    concern is obsolete on any platform for which we can do symbol export
    control, because if we can then the pgport functions in libpq won't be
    exposed anyway.  Hence, we can fix this problem by just removing
    libpgport and libpgcommon from $(libpq_pgport), and letting clients
    depend on the occurrences in $(LIBS).

    In the back branches, do that only on macOS (which we know has symbol
    export control).  In HEAD, let's be more aggressive and remove the
    extra libraries everywhere.  The only still-supported platforms that
    lack export control are MinGW/Cygwin, and it doesn't seem worth
    sweating over ABI stability details for those (or if somebody does
    care, it'd probably be possible to perform symbol export control for
    those too).  As well as being simpler, this might give some
    microscopic improvement in build time.

    The meson build system is not changed here, as it doesn't have this
    particular disease, though it does have some related issues that
    we'll fix separately.

    Discussion: https://postgr.es/m/467042.1695766998@sss.pgh.pa.us
* Fix btmarkpos/btrestrpos array key wraparound bug. (Peter Geoghegan, 2023-09-28)

    nbtree's mark/restore processing failed to correctly handle an edge
    case involving array key advancement and related search-type scan key
    state.  Scans with ScalarArrayOpExpr quals requiring mark/restore
    processing (for a merge join) could incorrectly conclude that an
    affected array/scan key must not have advanced during the time between
    marking and restoring the scan's position.

    As a result of all this, array key handling within btrestrpos could
    skip a required call to _bt_preprocess_keys().  This confusion allowed
    later primitive index scans to overlook tuples matching the true
    current array keys.  The scan's search-type scan keys would still have
    spurious values corresponding to the final array element(s) -- not
    values matching the first/now-current array element(s).

    To fix, remember that "array key wraparound" has taken place during
    the ongoing btrescan in a flag variable stored in the scan's state,
    and use that information at the point where btrestrpos decides if
    another call to _bt_preprocess_keys is required.

    Oversight in commit 70bc5833, which taught nbtree to handle array keys
    during mark/restore processing, but missed this subtlety.  That commit
    was itself a bug fix for an issue in commit 9e8da0f7, which taught
    nbtree to handle ScalarArrayOpExpr quals natively.

    Author: Peter Geoghegan <pg@bowt.ie>
    Discussion: https://postgr.es/m/CAH2-WzkgP3DDRJxw6DgjCxo-cu-DKrvjEv_ArkP2ctBJatDCYg@mail.gmail.com
    Backpatch: 11- (all supported branches)
* Fix checking of index expressions in CompareIndexInfo(). (Tom Lane, 2023-09-28)

    This code was sloppy about comparison of index columns that are
    expressions.  It didn't reliably reject cases where one index has an
    expression where the other has a plain column, and it could index off
    the start of the attmap array, leading to a Valgrind complaint (though
    an actual crash seems unlikely).

    I'm not sure that the expression-vs-column sloppiness leads to any
    visible problem in practice, because the subsequent comparison of the
    two expression lists would reject cases where the indexes have
    different numbers of expressions overall.  Maybe we could falsely
    match indexes having the same expressions in different column
    positions, but it'd require unlucky contents of the word before the
    attmap array.  It's not too surprising that no problem has been
    reported from the field.  Nonetheless, this code is clearly wrong.

    Per bug #18135 from Alexander Lakhin.  Back-patch to all supported
    branches.

    Discussion: https://postgr.es/m/18135-532f4a755e71e4d2@postgresql.org
* Add missing TidRangePath handling in print_path() (David Rowley, 2023-09-29)

    Tid Range scans were added back in bb437f995.  That commit forgot to
    add handling for TidRangePaths in print_path().  Only people building
    with OPTIMIZER_DEBUG might have noticed this, which likely is the
    reason it's taken 4 years for anyone to notice.

    Author: Andrey Lepikhov
    Reported-by: Andrey Lepikhov
    Discussion: https://postgr.es/m/379082d6-1b6a-4cd6-9ecf-7157d8c08635@postgrespro.ru
    Backpatch-through: 14, where bb437f995 was introduced
* Fix typo in src/backend/access/transam/README. (Etsuro Fujita, 2023-09-28)
* Stop using "-multiply_defined suppress" on macOS.Tom Lane2023-09-26
| | | | | | | | | We started to use this linker switch in commit 9df308697 of 2004-07-13, which was in the OS X 10.3 era. Apparently it's been a no-op since around OS X 10.9. Apple's most recent toolchain version actively complains about it, so it's time to get rid of it. Discussion: https://postgr.es/m/467042.1695766998@sss.pgh.pa.us
* Fix another bug in parent page splitting during GiST index build. (Heikki Linnakangas, 2023-09-26)

    Yet another bug in the ilk of commits a7ee7c851 and 741b88435.  In
    741b88435, we took care to clear the memorized location of the
    downlink when we split the parent page, because splitting the parent
    page can move the downlink.  But we missed that even *updating* a
    tuple on the parent can move it, because updating a tuple on a gist
    page is implemented as a delete+insert, so the updated tuple gets
    moved to the end of the page.

    This commit fixes the bug in two different ways (belt and suspenders):

    1. Clear the downlink when we update a tuple on the parent page, even
       if it's not split.  This is the same approach as in commits
       a7ee7c851 and 741b88435.

       I also noticed that gistFindCorrectParent did not clear the
       'downlinkoffnum' when it stepped to the right sibling.  Fix that
       too, as it seems like a clear bug even though I haven't been able
       to find a test case to hit it.

    2. Change gistFindCorrectParent so that it treats 'downlinkoffnum'
       merely as a hint.  It now always first checks if the downlink is
       still at that location, and if not, it scans the page like before.
       That's more robust if there are still more cases where we fail to
       clear 'downlinkoffnum' that we haven't yet uncovered.  With this,
       it's no longer necessary to meticulously clear 'downlinkoffnum',
       so this makes the previous fixes unnecessary, but I didn't revert
       them because it still seems nice to clear it when we know that the
       downlink has moved.

    Also add the test case using the same test data that Alexander
    posted.  I tried to reduce it to a smaller test, and I also tried to
    reproduce this with different test data, but I was not able to, so
    let's just include what we have.

    Backpatch to v12, like the previous fixes.

    Reported-by: Alexander Lakhin
    Discussion: https://www.postgresql.org/message-id/18129-caca016eaf0c3702@postgresql.org
* Fix behavior of "force" in pgstat_report_wal()Michael Paquier2023-09-26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | As implemented in 5891c7a8ed8f, setting "force" to true in pgstat_report_wal() causes the routine to not wait for the pgstat shmem lock if it cannot be acquired, in which case the WAL and I/O statistics finish by not being flushed. The origin of the confusion comes from pgstat_flush_wal() and pgstat_flush_io(), that use "nowait" as sole argument. The I/O stats are new in v16. This is the opposite behavior of what has been used in pgstat_report_stat(), where "force" is the opposite of "nowait". In this case, when "force" is true, the routine sets "nowait" to false, which would cause the routine to wait for the pgstat shmem lock, ensuring that the stats are always flushed. When "force" is false, "nowait" is set to true, and the stats would only not be flushed if the pgstat shmem lock can be acquired, returning immediately without flushing the stats if the lock cannot be acquired. This commit changes pgstat_report_wal() so as "force" has the same behavior as in pgstat_report_stat(). There are currently three callers of pgstat_report_wal(): - Two in the checkpointer where force=true during a shutdown and the main checkpointer loop. Now the code behaves so as the stats are always flushed. - One in the main loop of the bgwriter, where force=false. Now the code behaves so as the stats would not be flushed if the pgstat shmem lock could not be acquired. Before this commit, some stats on WAL and I/O could have been lost after a shutdown, for example. Reported-by: Ryoga Yoshida Author: Ryoga Yoshida, Michael Paquier Discussion: https://postgr.es/m/f87a4d7be70530606b864fd1df91718c@oss.nttdata.com Backpatch-through: 15
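    The essence of the fix, sketched (the flush helpers are the real
    functions named above; error handling and return values are elided):

        void
        pgstat_report_wal(bool force)
        {
            /* "force" now means: block on the pgstat shmem lock if
             * needed, exactly as pgstat_report_stat() interprets it */
            bool nowait = !force;

            pgstat_flush_wal(nowait);
            pgstat_flush_io(nowait);
        }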
* Fix edge-case for xl_tot_len broken by bae868ca. (Thomas Munro, 2023-09-26)

    bae868ca removed a check that was still needed.  If you had an
    xl_tot_len at the end of a page that was too small for a record
    header, but not big enough to span onto the next page, we'd
    immediately perform the CRC check using a bogus large length.  Because
    of arbitrary coding differences between the CRC implementations on
    different platforms, nothing very bad happened on common modern
    systems.  On systems using the _sb8.c fallback we could segfault.

    Restore that check, add a new assertion and supply a test for that
    case.  Back-patch to 12, like bae868ca.

    Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
    Tested-by: Alexander Lakhin <exclusion@gmail.com>
    Discussion: https://postgr.es/m/CA%2BhUKGLCkTT7zYjzOxuLGahBdQ%3DMcF%3Dz5ZvrjSOnW4EDhVjT-g%40mail.gmail.com
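    The restored guard, sketched in simplified form (the real code in
    xlogreader.c also handles records that continue onto the next page):

        /* a record that claims to end on this page must at least be
         * long enough to contain a full record header; otherwise fail
         * before any CRC check uses the bogus length */
        if (total_len < SizeOfXLogRecord)
            goto err;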
* pg_dump: tests: Correct test condition for invalid databases (Andres Freund, 2023-09-25)

    For some reason I used
        not_like = { pg_dumpall_dbprivs => 1, }
    in the test condition of one of the tests added in c66a7d75e65.  That
    doesn't make sense for two reasons:
    1) not_like isn't a valid test condition
    2) the database should not be dumped in any of the tests
    Due to 1), the test achieved its goal, but clearly the formulation is
    confusing.  Instead use like => {}, with a comment explaining why.

    Reported-by: Peter Eisentraut <peter@eisentraut.org>
    Discussion: https://postgr.es/m/3ddf79f2-8b7b-a093-11d2-5c739bc64f86@eisentraut.org
    Backpatch: 11-, like c66a7d75e65
* Collect dependency information for parsed CallStmts. (Tom Lane, 2023-09-25)

    Parse analysis of a CallStmt will inject mutable information, for
    instance the OID of the called procedure, so that subsequent DDL may
    create a need to re-parse the CALL.  We failed to detect this for
    CALLs in plpgsql routines, because no dependency information was
    collected when putting a CallStmt into the plan cache.  That could
    lead to misbehavior or strange errors such as "cache lookup failed".

    Before commit ee895a655, the issue would only manifest for CALLs
    appearing in atomic contexts, because we re-planned non-atomic CALLs
    every time through anyway.

    It is now apparent that extract_query_dependencies() probably needs a
    special case for every utility statement type for which
    stmt_requires_parse_analysis() returns true.  I wanted to add
    something like Assert(!stmt_requires_parse_analysis(...)) when falling
    out of extract_query_dependencies_walker without doing anything, but
    there are API issues as well as a more fundamental point:
    stmt_requires_parse_analysis is supposed to be applied to raw parser
    output, so it'd be cheating to assume it will give the correct answer
    for post-parse-analysis trees.  I contented myself with adding a
    comment.

    Per bug #18131 from Christian Stork.  Back-patch to all supported
    branches.

    Discussion: https://postgr.es/m/18131-576854e79c5cd264@postgresql.org
* Limit to_tsvector_byid's initial array allocation to something sane. (Tom Lane, 2023-09-25)

    The initial estimate of the number of distinct ParsedWords is just
    that: an estimate.  Don't let it exceed what palloc is willing to
    allocate.  If in fact we need more entries, we'll eventually fail
    trying to enlarge the array.  But if we don't, this allows success on
    inputs that currently draw "invalid memory alloc request size".

    Per bug #18080 from Uwe Binder.  Back-patch to all supported branches.

    Discussion: https://postgr.es/m/18080-d5c5e58fef8c99b7@postgresql.org
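    A sketch of the clamp, close to the shape of the code in
    to_tsvector_byid (the divide-by-6 per-word estimate is a rough
    heuristic):

        prs.lenwords = VARSIZE_ANY_EXHDR(in) / 6;  /* initial estimate */
        if (prs.lenwords < 2)
            prs.lenwords = 2;
        /* don't let the estimate exceed what palloc can satisfy */
        else if (prs.lenwords > MaxAllocSize / sizeof(ParsedWord))
            prs.lenwords = MaxAllocSize / sizeof(ParsedWord);
        prs.words = (ParsedWord *) palloc(sizeof(ParsedWord) * prs.lenwords);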
* pg_upgrade: check for types removed in pg12 (Alvaro Herrera, 2023-09-25)

    Commit cda6a8d01d39 removed a few datatypes, but didn't update
    pg_upgrade --check to throw an error if these types are used.  So
    users find that pg_upgrade --check tells them that everything is fine,
    only to fail when the real upgrade is attempted.

    Reviewed-by: Tristan Partin <tristan@neon.tech>
    Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
    Discussion: https://postgr.es/m/202309201654.ng4ksea25mti@alvherre.pgsql
* Don't use Perl pack('Q') in 039_end_of_wal.pl. (Thomas Munro, 2023-09-23)

    'Q' for 64 bit integers turns out not to work on 32 bit Perl, as
    revealed by the build farm.  Use 'II' instead, and deal with
    endianness.

    Back-patch to 12, like bae868ca.

    Discussion: https://postgr.es/m/ZQ4r1vHcryBsSi_V%40paquier.xyz
* Don't trust unvalidated xl_tot_len. (Thomas Munro, 2023-09-23)

    xl_tot_len comes first in a WAL record.  Usually we don't trust it to
    be the true length until we've validated the record header.  If the
    record header was split across two pages, previously we wouldn't do
    the validation until after we'd already tried to allocate enough
    memory to hold the record, which was bad because it might actually be
    garbage bytes from a recycled WAL file, so we could try to allocate a
    lot of memory.  Release 15 made it worse.

    Since 70b4f82a4b5, we'd at least generate an end-of-WAL condition if
    the garbage 4 byte value happened to be > 1GB, but we'd still try to
    allocate up to 1GB of memory bogusly otherwise.  That was an
    improvement, but unfortunately release 15 tries to allocate another
    object before that, so you could get a FATAL error and recovery could
    fail.

    We can fix both variants of the problem more fundamentally using
    pre-existing page-level validation, if we just re-order some logic.

    The new order of operations in the split-header case defers all memory
    allocation based on xl_tot_len until we've read the following page.
    At that point we know that its first few bytes are not recycled data,
    by checking its xlp_pageaddr, and that its xlp_rem_len agrees with
    xl_tot_len on the preceding page.  That is strong evidence that
    xl_tot_len was truly the start of a record that was logged.

    This problem was most likely to occur on a standby, because
    walreceiver.c recycles WAL files without zeroing out trailing regions
    of each page.  We could fix that too, but that wouldn't protect us
    from rare crash scenarios where the trailing zeroes don't make it to
    disk.

    With reliable xl_tot_len validation in place, the ancient policy of
    considering malloc failure to indicate corruption at end-of-WAL seems
    quite surprising, but changing that is left for later work.

    Also included is a new TAP test to exercise various cases of
    end-of-WAL detection by writing contrived data into the WAL from Perl.

    Back-patch to 12.  We decided not to put this change into the final
    release of 11.

    Author: Thomas Munro <thomas.munro@gmail.com>
    Author: Michael Paquier <michael@paquier.xyz>
    Reported-by: Alexander Lakhin <exclusion@gmail.com>
    Reviewed-by: Noah Misch <noah@leadboat.com> (the idea, not the code)
    Reviewed-by: Michael Paquier <michael@paquier.xyz>
    Reviewed-by: Sergei Kornilov <sk@zsrv.org>
    Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
    Discussion: https://postgr.es/m/17928-aa92416a70ff44a2%40postgresql.org
* Avoid potential pfree on NULL on OpenSSL errors (Daniel Gustafsson, 2023-09-22)

    Guard against the pointer being NULL before pfreeing upon an error
    returned from OpenSSL.  Also handle errors from X509_NAME_print_ex,
    which can also return -1 on memory allocation errors.

    Backpatch down to v15 where the code was added.

    Author: Sergey Shinderuk <s.shinderuk@postgrespro.ru>
    Discussion: https://postgr.es/m/8db5374d-32e0-6abb-d402-40762511eff2@postgrespro.ru
    Backpatch-through: v15
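    A hedged sketch of the pattern (variable names illustrative; per the
    OpenSSL docs, X509_NAME_print_ex returns the byte count or -1 on
    error with RFC 2253 flags, so <= 0 catches both failure modes):

        char *fieldname = NULL;     /* may not be allocated yet */

        if (X509_NAME_print_ex(membio, x509name, 0, XN_FLAG_RFC2253) <= 0)
        {
            /* guard the pfree: the buffer may never have been set */
            if (fieldname != NULL)
                pfree(fieldname);
            BIO_free(membio);
            return -1;
        }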
* Fix COMMIT/ROLLBACK AND CHAIN in the presence of subtransactions. (Tom Lane, 2023-09-21)

    In older branches, COMMIT/ROLLBACK AND CHAIN failed to propagate the
    current transaction's properties to the new transaction if there was
    any open subtransaction (unreleased savepoint).  Instead, some
    previous transaction's properties would be restored.  This is because
    the "if (s->chain)" check in CommitTransactionCommand examined the
    wrong instance of the "chain" flag and falsely concluded that it
    didn't need to save transaction properties.

    Our regression tests would have noticed this, except they used
    identical transaction properties for multiple tests in a row, so that
    the faulty behavior was not distinguishable from correct behavior.

    Commit 12d768e70 fixed the problem in v15 and later, but only rather
    accidentally, because I removed the "if (s->chain)" test to avoid a
    compiler warning, while not realizing that the warning was flagging a
    real bug.

    In v14 and before, remove the if-test and save transaction properties
    unconditionally; just as in the newer branches, that's not expensive
    enough to justify thinking harder.

    Add the comment and extra regression test to v15 and later to
    forestall any future recurrence, but there's no live bug in those
    branches.

    Patch by me, per bug #18118 from Liu Xiang.  Back-patch to v12 where
    the AND CHAIN feature was added.

    Discussion: https://postgr.es/m/18118-4b72fcbb903aace6@postgresql.org
* Update comment about set_join_pathlist_hook(). (Etsuro Fujita, 2023-09-21)

    The comment introduced by commit e7cb7ee14 was a bit too terse, which
    could lead to extensions doing different things within the hook
    function than we intend to allow.  Extend the comment to explain what
    they can do within the hook function.  Back-patch to all supported
    branches.

    In passing, I rephrased a nearby comment that I recently added to the
    back branches.

    Reviewed by David Rowley and Andrei Lepikhov.

    Discussion: https://postgr.es/m/CAPmGK15SBPA1nr3Aqsdm%2BYyS-ay0Ayo2BRYQ8_A2To9eLqwopQ%40mail.gmail.com
* Fix GiST README's explanation of the NSN cross-check. (Heikki Linnakangas, 2023-09-19)

    The text got the condition backwards: it's "NSN > LSN", not
    "NSN < LSN".  While we're at it, expand it a little for clarity.

    Reviewed-by: Daniel Gustafsson
    Discussion: https://www.postgresql.org/message-id/4cb46e18-e688-524a-0f73-b1f03ed5d6ee@iki.fi
* Fix assertion failure with PL/Python exceptions (Michael Paquier, 2023-09-19)

    PLy_elog() did not correctly handle cases where a SPI call failed,
    which would fill in a DETAIL string able to trigger an assertion.  We
    may want to improve this infrastructure so that it can provide any
    extra detail information provided by an error stack, but this is left
    as a future improvement as it could impact existing error stacks and
    any applications that depend on them.  For now, the assertion is
    removed and a regression test is added to cover the case of a failure
    with a detail string.

    This problem exists since 2bd78eb8d51c, so backpatch all the way down
    with tweaks to the regression tests output added where required.

    Author: Alexander Lakhin
    Discussion: https://postgr.es/m/18070-ab9c171cbf4ebb0f@postgresql.org
    Backpatch-through: 11
* Don't crash if cursor_to_xmlschema is used on a non-data-returning Portal. (Tom Lane, 2023-09-18)

    cursor_to_xmlschema() assumed that any Portal must have a tupDesc,
    which is not so.  Add a defensive check.

    It's plausible that this mistake occurred because of the rather poorly
    chosen name of the lookup function SPI_cursor_find(), which in such
    cases is returning something that isn't very much like a cursor.  Add
    some documentation to try to forestall future errors of the same ilk.

    Report and patch by Boyu Yang (docs changes by me).  Back-patch to all
    supported branches.

    Discussion: https://postgr.es/m/dd343010-c637-434c-a8cb-418f53bda3b8.yangboyu.yby@alibaba-inc.com
* Track nesting depth correctly when drilling down into RECORD Vars. (Tom Lane, 2023-09-15)

    expandRecordVariable() failed to adjust the parse nesting structure
    correctly when recursing to inspect an outer-level Var.  This could
    result in assertion failures or core dumps in corner cases.

    Likewise, get_name_for_var_field() failed to adjust the deparse
    namespace stack correctly when recursing to inspect an outer-level
    Var.  In this case the likely result was a "bogus varno" error while
    deparsing a view.

    Per bug #18077 from Jingzhou Fu.  Back-patch to all supported
    branches.

    Richard Guo, with some adjustments by me.

    Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org
* Revert "Improve error message on snapshot import in snapmgr.c"Michael Paquier2023-09-14
| | | | | | | | | | This reverts commit a0d87bcd9b57, following a remark from Andres Frend that the new error can be triggered with an incorrect SET TRANSACTION SNAPSHOT command without being really helpful for the user as it uses the internal file name. Discussion: https://postgr.es/m/20230914020724.hlks7vunitvtbbz4@awork3.anarazel.de Backpatch-through: 11
* Improve error message on snapshot import in snapmgr.c (Michael Paquier, 2023-09-14)

    When a snapshot file fails to be read in ImportSnapshot(), it would
    issue an ERROR as "invalid snapshot identifier" when opening a stream
    for it in read-only mode.  This error message is reworded to be the
    same as all the other messages used in this case on failure, which is
    useful when debugging this area.

    Thinko introduced by bb446b689b66 where snapshot imports have been
    added.  A backpatch down to 11 is done as this can improve any work
    related to snapshot imports in older branches.

    Author: Bharath Rupireddy
    Reviewed-by: Daniel Gustafsson
    Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
    Backpatch-through: 11
* Fix incorrect logic in plan dependency recording (David Rowley, 2023-09-14)

    Both 50e17ad28 and 29f45e299 tried to record a plan dependency on a
    function but mistakenly inverted the OidIsValid test.  This meant that
    we'd record a dependency only when the function's Oid was InvalidOid.
    Clearly this was meant to *not* record the dependency in that case.

    50e17ad28 made this mistake first, then in v15 29f45e299 copied the
    same mistake.

    Reported-by: Tom Lane
    Backpatch-through: 14, where 50e17ad28 first made this mistake
    Discussion: https://postgr.es/m/2277537.1694301772@sss.pgh.pa.us
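    The shape of the bug, sketched (record_plan_function_dependency() is
    the real setrefs.c function; the context is simplified):

        /* buggy: records the dependency only when there is no function */
        if (!OidIsValid(funcid))
            record_plan_function_dependency(root, funcid);

        /* fixed */
        if (OidIsValid(funcid))
            record_plan_function_dependency(root, funcid);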
* Fix exception safety bug in typcache.c. (Thomas Munro, 2023-09-13)

    If an out-of-memory error was thrown at an unfortunate time,
    ensure_record_cache_typmod_slot_exists() could leak memory and leave
    behind a global state that produced an infinite loop on the next call.

    Fix by merging RecordCacheArray and RecordIdentifierArray into a
    single array.  With only one allocation or re-allocation, there is no
    intermediate state.

    Back-patch to all supported releases.

    Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
    Reviewed-by: Michael Paquier <michael@paquier.xyz>
    Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
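    A sketch of the merged-array idea (the struct name is assumed; the
    point is that a single repalloc either fully succeeds or leaves the
    old array intact):

        typedef struct RecordCacheArrayEntry
        {
            TupleDesc tupdesc;      /* was RecordCacheArray[i] */
            uint64    id;           /* was RecordIdentifierArray[i] */
        } RecordCacheArrayEntry;

        /* one array, one (re)allocation: an OOM thrown here cannot
         * leave the two halves of the cache out of sync */
        static RecordCacheArrayEntry *RecordCacheArray = NULL;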
* Skip psql's TAP test for query cancellation entirely on Windows (Michael Paquier, 2023-09-13)

    This changes 020_cancel.pl so that the test is entirely skipped on
    Windows.  This test was already doing nothing under WIN32, except
    initializing and starting a node without using it, so this shaves a
    few test cycles.

    Author: Yugo NAGATA
    Reviewed-by: Fabien Coelho
    Discussion: https://postgr.es/m/20230810125935.22c2922ea5250ba79358965b@sraoss.co.jp
    Backpatch-through: 15
* Fix uninitialized access to InitialRunningXacts during decoding after ERROR. (Amit Kapila, 2023-09-12)

    The transactions and subtransactions array that was allocated under
    the snapshot builder memory context and recorded during decoding was
    not cleared in case of errors.  This can result in an assertion
    failure if we attempt to retry logical decoding within the same
    session.  To address this issue, we register a callback function under
    the snapshot builder memory context to clear the recorded transactions
    and subtransactions array along with the context.

    This problem doesn't exist in PG16 and HEAD because, instead of using
    InitialRunningXacts, we added the list of transaction IDs and
    sub-transaction IDs, that have modified catalogs and are running
    during snapshot serialization, to the serialized snapshot (see commit
    7f13ac8123).

    Author: Hou Zhijie
    Reviewed-by: Amit Kapila
    Backpatch-through: 11
    Discussion: http://postgr.es/m/18055-ab3beed9f4b7b7d6@postgresql.org
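    A sketch of the callback mechanism used (the callback body and
    variable names are illustrative; MemoryContextRegisterResetCallback
    is the real API):

        static void
        SnapBuildResetRunningXactsCallback(void *arg)
        {
            /* the arrays live in the context being reset, so just
             * forget the pointers along with it */
            InitialRunningXacts = NULL;
            NInitialRunningXacts = 0;
        }

        MemoryContextCallback *cb;

        cb = MemoryContextAlloc(builder->context,
                                sizeof(MemoryContextCallback));
        cb->func = SnapBuildResetRunningXactsCallback;
        cb->arg = NULL;
        MemoryContextRegisterResetCallback(builder->context, cb);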
* Stabilize subscription stats test. (Masahiko Sawada, 2023-09-08)

    The new test added by commit 68a59f9e9 disables the subscription and
    manually drops the associated replication slot.  However, since
    disabling the subscription doesn't wait for a walsender to release the
    replication slot and exit, pg_drop_replication_slot() could fail.
    Avoid failure by adding a wait for the replication slot to become
    inactive.

    Reported-by: Hou Zhijie, as per buildfarm
    Reviewed-by: Hou Zhijie
    Discussion: https://postgr.es/m/OS0PR01MB571682316378379AA34854F694E9A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
    Backpatch-through: 15
* pg_basebackup: Generate valid temporary slot names under PQbackendPID() (Michael Paquier, 2023-09-07)

    pgbouncer can cause PQbackendPID() to return negative values due to it
    filling be_pid with random bytes (even these days pid_max can only be
    set up to 2^22 on 64-bit machines on Linux, for example, so this
    cannot happen with normal PID numbers).  When this happens,
    pg_basebackup may generate a temporary slot name that may not be
    accepted by the parser, leading to spurious failures, like:
        pg_basebackup: error: could not send replication command
        ERROR: replication slot name "pg_basebackup_-1201966863" contains
        invalid character

    This commit fixes that problem by formatting the result from
    PQbackendPID() as an unsigned integer when creating the temporary
    replication slot name, so that the invalid character is gone and the
    command can be parsed.

    Author: Jelte Fennema
    Reviewed-by: Daniel Gustafsson, Nishant Sharma
    Discussion: https://postgr.es/m/CAGECzQQOGvYfp8ziF4fWQ_o8s2K7ppaoWBQnTmdakn3s-4Z=5g@mail.gmail.com
    Backpatch-through: 11
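    The formatting change, sketched (buffer and connection names are
    illustrative):

        /* buggy: a negative be_pid yields "pg_basebackup_-1201966863" */
        snprintf(slotname, sizeof(slotname), "pg_basebackup_%d",
                 PQbackendPID(conn));

        /* fixed: printed as unsigned, always a parsable slot name */
        snprintf(slotname, sizeof(slotname), "pg_basebackup_%u",
                 (unsigned int) PQbackendPID(conn));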
* Disable 031_recovery_conflict.pl in 15 and 16. (Thomas Munro, 2023-09-07)

    This test fails due to known bugs in the test and the server.  Those
    will be fixed in master shortly and possibly back-patched a bit later,
    but in the meantime it is unhelpful for package maintainers if the
    tests randomly fail, and it's not a good time to make complex changes
    in 16.  This had already been done for older branches prior to 15's
    release.  Now we're about to release 16, and Debian's test builds are
    regularly failing on one architecture, so let's do the same for 15
    and 16.

    Reported-by: Christoph Berg <myon@debian.org>
    Reported-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
    Discussion: https://postgr.es/m/CALj2ACVr8au2J_9D88UfRCi0JdWhyQDDxAcSVav0B0irx9nXEg%40mail.gmail.com
* Unify gratuitously different error messages (Peter Eisentraut, 2023-09-05)

    Fixup for commit 37188cea0c.
* Fix out-of-bound read in gtsvector_picksplit() (Michael Paquier, 2023-09-04)

    This could lead to an imprecise choice when splitting an index page of
    a GiST index on a tsvector, deciding which entries should remain on
    the old page and which entries should move to a new page.

    This has been wrong since tsearch2 was moved into core with commit
    140d4ebcb46e, so backpatch all the way down.  This error has been
    spotted by valgrind.

    Author: Alexander Lakhin
    Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
    Backpatch-through: 11
* Fix handling of shared statistics with dropped databases (Michael Paquier, 2023-09-04)

    Dropping a database while a connection is attempted on it could lead
    to the presence of valid database entries in shared statistics.  The
    issue is that MyDatabaseId was getting set earlier than it should: if
    the connection attempt fails because the database has been renamed or
    dropped in the meantime, the shutdown callback of the shared
    statistics would end by re-inserting a correct entry related to the
    database already dropped.

    As analyzed by the bug reporters, this issue could lead to phantom
    entries in the database list maintained by the autovacuum launcher (in
    rebuild_database_list()) if the database dropped was part of the
    database list when it was still valid.  After the database was
    dropped, it would remain the highest on the list of databases to be
    considered by the autovacuum worker as things to process.  This would
    prevent autovacuum jobs from happening on all the other databases
    still present.

    The commit fixes this issue by delaying the setting of MyDatabaseId
    until the database existence has been re-checked with the second scan
    on pg_database after getting a shared lock on it, and by changing
    pgstat_update_dbstats() so that nothing happens if MyDatabaseId is not
    valid.

    Issue introduced by 5891c7a8ed8f, so backpatch down to 15.

    Reported-by: Will Mortensen, Jacob Speidel
    Analyzed-by: Will Mortensen, Jacob Speidel
    Author: Andres Freund
    Discussion: https://postgr.es/m/17973-bca1f7d5c14f601e@postgresql.org
    Backpatch-through: 15
* Avoid possible overflow with ltsGetFreeBlock() in logtape.c (Michael Paquier, 2023-08-30)

    nFreeBlocks, defined as a long, stores the number of free blocks in a
    logical tape.  ltsGetFreeBlock() has been using an int to store the
    value of nFreeBlocks, which could lead to overflows on platforms where
    long and int are not the same size (in short, everything except
    Windows, where long is 4 bytes).

    The problematic intermediate variable is switched to be a long instead
    of an int.

    Issue introduced by c02fdc9223015, so backpatch down to 13.

    Author: Ranier Vilela
    Reviewed-by: Peter Geoghegan, David Rowley
    Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
    Backpatch-through: 13
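    The truncation, sketched (nFreeBlocks really is a long in
    LogicalTapeSet; the local variable is the bug):

        /* buggy: silently truncates where long is 8 bytes and int is 4 */
        int  heapsize = lts->nFreeBlocks;

        /* fixed: matches the width of the field it reads */
        long heapsize2 = lts->nFreeBlocks;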
* Initialize ListenSocket array earlier. (Heikki Linnakangas, 2023-08-29)

    After commit b0bea38705, syslogger prints 63 warnings about failing to
    close a listen socket at postmaster startup.  That's because the
    syslogger process forks before the ListenSockets array is initialized,
    so ClosePostmasterPorts() calls "close(0)" 64 times.  The first call
    succeeds, because fd 0 is stdin.

    This has been like this since commit 9a86f03b4e in version 13, which
    moved the SysLogger_Start() call to before initializing
    ListenSockets.  We just didn't notice until commit b0bea38705 added
    the LOG message.

    Reported by Michael Paquier and Jeff Janes.

    Author: Michael Paquier
    Discussion: https://www.postgresql.org/message-id/ZOvvuQe0rdj2slA9%40paquier.xyz
    Discussion: https://www.postgresql.org/message-id/ZO0fgDwVw2SUJiZx@paquier.xyz#482670177eb4eaf4c9f03c1eed963e5f
    Backpatch-through: 13
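    The ordering fix, sketched (MAXLISTEN and PGINVALID_SOCKET are real
    postmaster symbols; the surrounding startup sequence is elided):

        /* initialize before any child process (including syslogger) can
         * fork, so ClosePostmasterPorts() never close()es stray fds such
         * as fd 0 */
        for (int i = 0; i < MAXLISTEN; i++)
            ListenSocket[i] = PGINVALID_SOCKET;

        SysLogger_Start();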
* Avoid unnecessary plancache revalidation of utility statements. (Tom Lane, 2023-08-24)

    Revalidation of a plancache entry (after a cache invalidation event)
    requires acquiring a snapshot.  Normally that is harmless, but not if
    the cached statement is one that needs to run without acquiring a
    snapshot.  We were already aware of that for TransactionStmts, but for
    some reason hadn't extrapolated to the other statements that
    PlannedStmtRequiresSnapshot() knows mustn't set a snapshot.  This can
    lead to unexpected failures of commands such as SET TRANSACTION
    ISOLATION LEVEL.  We can fix it in the same way, by excluding those
    command types from revalidation.

    However, we can do even better than that: there is no need to
    revalidate for any statement type for which parse analysis, rewrite,
    and plan steps do nothing interesting, which is nearly all utility
    commands.  To mechanize this, invent a parser function
    stmt_requires_parse_analysis() that tells whether parse analysis does
    anything beyond wrapping a CMD_UTILITY Query around the raw parse
    tree.  If that's what it does, then rewrite and plan will just skip
    the Query, so that it is not possible for the same raw parse tree to
    produce a different plan tree after cache invalidation.

    stmt_requires_parse_analysis() is basically equivalent to the existing
    function analyze_requires_snapshot(), except that for obscure reasons
    that function omits ReturnStmt and CallStmt.  It is unclear whether
    those were oversights or intentional.  I have not been able to
    demonstrate a bug from not acquiring a snapshot while analyzing these
    commands, but at best it seems mighty fragile.  It seems safer to
    acquire a snapshot for parse analysis of these commands too, which
    allows making stmt_requires_parse_analysis and
    analyze_requires_snapshot equivalent.

    In passing this fixes a second bug, which is that ResetPlanCache would
    exclude ReturnStmts and CallStmts from revalidation.  That's surely
    *not* safe, since they contain parsable expressions.

    Per bug #18059 from Pavel Kulakov.  Back-patch to all supported
    branches.

    Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
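    A sketch of the new test's shape (condensed; the real case list in
    analyze.c is longer and includes a few more utility wrappers):

        bool
        stmt_requires_parse_analysis(RawStmt *parseTree)
        {
            switch (nodeTag(parseTree->stmt))
            {
                    /* optimizable statements */
                case T_InsertStmt:
                case T_DeleteStmt:
                case T_UpdateStmt:
                case T_SelectStmt:
                    /* special cases that also do real parse analysis */
                case T_ReturnStmt:
                case T_CallStmt:
                    return true;

                default:
                    /* everything else just gets wrapped in a CMD_UTILITY
                     * Query, so re-parsing cannot change the plan */
                    return false;
            }
        }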
* Update DECLARE_INDEX documentation (Peter Eisentraut, 2023-08-24)

    Update source code comments to correspond to the changes in
    6a6389a08b.

    Discussion: https://www.postgresql.org/message-id/flat/75ae5875-3abc-dafc-8aec-73247ed41cde@eisentraut.org
* ci: Make compute resources for CI configurable (Andres Freund, 2023-08-23)

    See the prior commit for an explanation of the goal of the change and
    why it had to be split into two commits.

    Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
    Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
    Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de
    Backpatch: 15-, where CI support was added
* Fix pg_dump assertion failure when dumping pg_catalog. (Jeff Davis, 2023-08-22)

    Commit 396d348b04 did not account for the default collation.

    Also, use pg_log_warning() instead of Assert().

    Discussion: https://postgr.es/m/ce071503fee88334aa70f360e6e4ea14d48305ee.camel%40j-davis.com
    Reviewed-by: Michael Paquier
    Backpatch-through: 15