path: root/src
...
* Fix missed cases in libpq's error handling. (Tom Lane, 2022-04-21)

  Commit 618c16707 invented an "error_result" flag in PGconn, which intends to represent the state that we have an error condition and need to build a PGRES_FATAL_ERROR PGresult from the message text in conn->errorMessage, but have not yet done so. (Postponing construction of the error object simplifies dealing with out-of-memory conditions and with concatenation of messages for multiple errors.) For nearly all purposes, this "virtual" PGresult object should act the same as if it were already materialized. But a couple of places in fe-protocol3.c didn't get that memo, and were only testing conn->result as they used to, without also checking conn->error_result.

  In hopes of reducing the probability of similar mistakes in future, I invented a pgHavePendingResult() macro that includes both tests.

  Per report from Peter Eisentraut.

  Discussion: https://postgr.es/m/b52277b9-fa66-b027-4a37-fb8989c73ff8@enterprisedb.com
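  A minimal sketch of the combined test such a macro performs, assuming the PGconn fields described above (the authoritative definition lives in libpq's internal headers):

      #define pgHavePendingResult(conn) \
          ((conn)->result != NULL || (conn)->error_result)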
* Rethink method for assigning OIDs to the template0 and postgres DBs. (Tom Lane, 2022-04-21)

  Commit aa0105141 assigned fixed OIDs to template0 and postgres in a very ad-hoc way. Notably, instead of teaching Catalog.pm about these OIDs, the unused_oids script was just hacked to not show them as unused. That's problematic since, for example, duplicate_oids wouldn't report any future conflict. Hence, invent a macro DECLARE_OID_DEFINING_MACRO() that can be used to define an OID that is known to Catalog.pm and will participate in duplicate-detection as well as renumbering by renumber_oids.pl. (We don't anticipate renumbering these particular OIDs, but we might as well build out all the Catalog.pm infrastructure while we're here.)

  Another issue is that aa0105141 neglected to touch IsPinnedObject, with the result that it now claimed template0 and postgres are pinned. The right thing to do there seems to be to teach it that no database is pinned, since in fact DROP DATABASE doesn't check for pinned-ness (and at least for these cases, that is an intentional choice). It's not clear whether this wrong answer had any visible effect, but perhaps it could have resulted in erroneous management of dependency entries.

  In passing, rename the TemplateDbOid macro to Template1DbOid to reduce confusion (likely we should have done that way back when we invented template0, but we didn't), and rename the OID macros for template0 and postgres to have a similar style.

  There are no changes to postgres.bki here, so no need for a catversion bump.

  Discussion: https://postgr.es/m/2935358.1650479692@sss.pgh.pa.us
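  Like the other DECLARE_* markers in PostgreSQL's catalog headers, a macro of this kind compiles to nothing and exists only to be parsed by Catalog.pm. A hedged sketch of that pattern (the usage line and OID value are illustrative, not copied from the tree):

      /* No-op for the C compiler; Catalog.pm records the name/OID pair
       * for duplicate detection and for renumber_oids.pl. */
      #define DECLARE_OID_DEFINING_MACRO(macroname,oid) extern int no_such_variable

      /* illustrative usage in a catalog header: */
      DECLARE_OID_DEFINING_MACRO(Template0DbOid, 4);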
* Use DECLARE_TOAST_WITH_MACRO() to simplify toast-table declarations. (Tom Lane, 2022-04-21)

  This is needed so that renumber_oids.pl can handle renumbering shared catalog declarations, which need to provide C macros for the OIDs of the shared toast table and index. The previous method of writing a C macro separately was error-prone anyway.

  Also teach renumber_oids.pl about DECLARE_UNIQUE_INDEX_PKEY, as we missed doing when inventing that macro.

  There are no changes to postgres.bki here, so no need for a catversion bump.

  Discussion: https://postgr.es/m/2995325.1650487527@sss.pgh.pa.us
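  A sketch of how a shared catalog header might use the combined form instead of a bare toast declaration plus hand-written #defines (the catalog name and OID values here are invented for illustration):

      /* old style: DECLARE_TOAST(pg_example_shared, 9001, 9002) plus a
       * separate "#define PgExampleSharedToastTable 9001", etc. */
      DECLARE_TOAST_WITH_MACRO(pg_example_shared, 9001, 9002,
                               PgExampleSharedToastTable,
                               PgExampleSharedToastIndex);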
* vacuumlazy.c: MultiXactIds are MXIDs, not XMIDs. (Peter Geoghegan, 2022-04-20)

  Oversights in commits 0b018fab and f3c15cbe.
* Fix CLUSTER tuplesorts on abbreviated expressions. (Peter Geoghegan, 2022-04-20)

  CLUSTER sort won't use the datum1 SortTuple field when clustering against an index whose leading key is an expression. This makes it unsafe to use the abbreviated keys optimization, which was missed by the logic that sets up SortSupport state. Affected tuplesorts output tuples in a completely bogus order as a result (the wrong SortSupport based comparator was used for the leading attribute).

  This issue is similar to the bug fixed on the master branch by recent commit cc58eecc5d. But it's a far older issue, that dates back to the introduction of the abbreviated keys optimization by commit 4ea51cdfe8.

  Backpatch to all supported versions.

  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/CA+hUKG+bA+bmwD36_oDxAoLrCwZjVtST2fqe=b4=qZcmU7u89A@mail.gmail.com
  Backpatch: 10-
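  A conceptual sketch of the guard the SortSupport setup needs (names are hypothetical; the committed fix differs in detail):

      /* Abbreviation caches the leading key in SortTuple.datum1, so it
       * is only safe when CLUSTER actually populates datum1, i.e. when
       * the index's leading key is a plain column, not an expression. */
      sortKey->abbreviate = (i == 0 && can_use_datum1);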
* Disallow infinite endpoints in generate_series() for timestamps. (Tom Lane, 2022-04-20)

  Such cases will lead to infinite loops, so they're of no practical value. The numeric variant of generate_series() already threw error for this, so borrow its message wording.

  Per report from Richard Wesley. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/91B44E7B-68D5-448F-95C8-B4B3B0F5DEAF@duckdblabs.com
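  A sketch of the kind of up-front check involved, using the existing TIMESTAMP_NOT_FINITE() macro (argument names assumed, error wording paraphrased from the numeric variant):

      /* reject 'infinity'/'-infinity' bounds before entering the loop */
      if (TIMESTAMP_NOT_FINITE(start))
          ereport(ERROR,
                  (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                   errmsg("start argument cannot be infinity")));
      if (TIMESTAMP_NOT_FINITE(finish))
          ereport(ERROR,
                  (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                   errmsg("stop argument cannot be infinity")));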
* Allow db.schema.table patterns, but complain about random garbage. (Robert Haas, 2022-04-20)

  psql, pg_dump, and pg_amcheck share code to process object name patterns like 'foo*.bar*' to match all tables with names starting in 'bar' that are in schemas starting with 'foo'. Before v14, any number of extra name parts were silently ignored, so a command line '\d foo.bar.baz.bletch.quux' was interpreted as '\d bletch.quux'. In v14, as a result of commit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868, we instead treated this as a request for table quux in a schema named 'foo.bar.baz.bletch'. That caused problems for people like Justin Pryzby who were accustomed to copying strings of the form db.schema.table from messages generated by PostgreSQL itself and using them as arguments to \d.

  Accordingly, revise things so that if an object name pattern contains more parts than we're expecting, we throw an error, unless there's exactly one extra part and it matches the current database name. That way, thisdb.myschema.mytable is accepted as meaning just myschema.mytable, but otherdb.myschema.mytable is an error, and so is some.random.garbage.myschema.mytable.

  Mark Dilger, per report from Justin Pryzby and discussion among various people.

  Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com
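  The accept/reject rule above, as a hedged C-style sketch (all names hypothetical):

      if (numparts > maxparts)
      {
          if (numparts == maxparts + 1 &&
              strcmp(firstpart, current_database_name) == 0)
          {
              /* thisdb.myschema.mytable: silently drop the db part */
          }
          else
          {
              /* otherdb.myschema.mytable, or deeper garbage: error out */
          }
      }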
* Fix incorrect format placeholders (Peter Eisentraut, 2022-04-20)
* set_deparse_plan: Reuse variable to appease Coverity (Alvaro Herrera, 2022-04-20)

  Coverity complains that dpns->outer_plan is dereferenced (to obtain ->targetlist) when possibly NULL. We can avoid this by using dpns->outer_tlist instead, which was already obtained a few lines up.

  The fact that we end up with dpns->inner_tlist = dpns->outer_tlist is a bit suspicious-looking and maybe worthy of more investigation, but I'll leave that for another day.

  Reviewed-by: Michaël Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
* Move ModifyTableContext->lockmode to UpdateContext (Alvaro Herrera, 2022-04-20)

  Should have been done this way to start with, but I failed to notice. This way we avoid some pointless initialization, and the variable is better confined to the scope where it is really used.

  Reviewed-by: Michaël Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
* ExecModifyTable: use context.planSlot instead of planSlot (Alvaro Herrera, 2022-04-20)

  There's no reason to keep a separate local variable when we have a place for it elsewhere. This allows some code to be simplified.

  Reviewed-by: Michaël Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/202204191345.qerjy3kxi3eb@alvherre.pgsql
* Fix breakage in AlterFunction(). (Tom Lane, 2022-04-19)

  An ALTER FUNCTION command that tried to update both the function's proparallel property and its proconfig list failed to do the former, because it stored the new proparallel value into a tuple that was no longer the interesting one. Carelessness in 7aea8e4f2.

  (I did not bother with a regression test, because the only likely future breakage would be for someone to ignore the comment I added and add some other field update after the heap_modify_tuple step. A test using existing function properties could not catch that.)

  Per report from Bryn Llewellyn. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/8AC9A37F-99BD-446F-A2F7-B89AD0022774@yugabyte.com
* Remove duplicated word in comment of basebackup.c (Michael Paquier, 2022-04-20)

  Oversight in 39969e2.

  Author: Martín Marqués
  Discussion: https://postgr.es/m/CABeG9LviA01oHC5h=ksLUuhMyXxmZR_tftRq6q3341CMT=j=4g@mail.gmail.com
* Fix extract epoch from interval calculation (Peter Eisentraut, 2022-04-19)

  The new numeric code for extract epoch from interval accidentally truncated the DAYS_PER_YEAR value to an integer, leading to results that mismatched the floating-point interval_part calculations.

  The commit a2da77cdb4661826482ebf2ddba1f953bc74afe4 that introduced this actually contains the regression test change that this reverts. I suppose this was missed at the time.

  Reported-by: Joseph Koshakow <koshy44@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/flat/CAAvxfHd5n%3D13NYA2q_tUq%3D3%3DSuWU-CufmTf-Ozj%3DfrEgt7pXwQ%40mail.gmail.com
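  The bug class, sketched (illustrative, not the committed diff): DAYS_PER_YEAR is 365.25, so any implicit conversion to an integer type quietly turns it into 365 and skews the epoch result:

      double  days_per_year = DAYS_PER_YEAR;  /* 365.25, correct */
      int64   truncated     = DAYS_PER_YEAR;  /* 365: loses the leap-day fraction */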
* Fix aggregate logging of pgbench. (Tatsuo Ishii, 2022-04-19)

  Remove meaningless "failures" column from the aggregate logging. It was just a sum of "serialization failures" and "deadlock failures". Pointed out by Tom Lane. Patch reviewed by Fabien COELHO.

  Discussion: https://postgr.es/m/4183048.1649536705%40sss.pgh.pa.us
* Fix the check to limit sync workers. (Amit Kapila, 2022-04-19)

  We don't allow invoking more sync workers once we have reached the sync worker limit per subscription. But the check to enforce this also didn't allow launching an apply worker if it gets restarted.

  This code was introduced by commit de43897122, but we caught the problem only with the test added by recent commit c91f71b9dc, which started failing occasionally in the buildfarm.

  As per buildfarm.

  Diagnosed-by: Amit Kapila, Masahiko Sawada, Tomas Vondra
  Author: Amit Kapila
  Backpatch-through: 10
  Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
  Discussion: https://postgr.es/m/f90d2b03-4462-ce95-a524-d91464e797c8@enterprisedb.com
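  A sketch of the corrected condition in the worker-launch path (assumed shape, not the committed diff): the sync-worker limit should apply only when the worker being launched is a tablesync worker, i.e. when a relation OID is involved, so a restarted apply worker is not blocked.

      /* enforce the per-subscription limit only for tablesync workers */
      if (OidIsValid(relid) &&
          nsyncworkers >= max_sync_workers_per_subscription)
          return;               /* retry later */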
* Add missing error handling in pg_md5_hash(). (Tom Lane, 2022-04-18)

  It failed to provide an error string as expected for the admittedly-unlikely case of OOM in pg_cryptohash_create(). Also, make it initialize *errstr to NULL for success, as pg_md5_binary() does.

  Also add missing comments. Readers should not have to reverse-engineer the API spec for a publicly visible routine.
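  A sketch of the missing path, assuming the usual shape of these routines (error wording illustrative):

      *errstr = NULL;                     /* initialize for success */
      ctx = pg_cryptohash_create(PG_MD5);
      if (ctx == NULL)
      {
          *errstr = "out of memory";      /* report the OOM instead of silence */
          return false;
      }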
* Avoid invalid array reference in transformAlterTableStmt(). (Tom Lane, 2022-04-18)

  Don't try to look at the attidentity field of system attributes, because they're not there in the TupleDescAttr array. Sometimes this is harmless because we accidentally pick up a zero, but otherwise we'll report "no owned sequence found" from an attempt to alter a system attribute. (It seems possible that a SIGSEGV could occur, too, though I've not seen it in testing.) It's not in this function's charter to complain that you can't alter a system column, so instead just hard-wire an assumption that system attributes aren't identities.

  I didn't bother with a regression test because the appearance of the bug is very erratic.

  Per bug #17465 from Roman Zharkov. Back-patch to all supported branches. (There's not actually a live bug before v12, because before that get_attidentity() did the right thing anyway. But for consistency I changed the test in the older branches too.)

  Discussion: https://postgr.es/m/17465-f2a554a6cb5740d3@postgresql.org
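  The hard-wired assumption, as a minimal sketch (variable names assumed): system attributes have negative attnums and no entry in the TupleDescAttr array, and can never be identity columns.

      char    attidentity;

      if (attnum > 0)
          attidentity = TupleDescAttr(tupleDesc, attnum - 1)->attidentity;
      else
          attidentity = '\0';     /* system attributes are never identities */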
* Fix second race condition in 002_archiving.pl with archive_cleanup_command (Michael Paquier, 2022-04-18)

  Checking the execution of archive_cleanup_command on a standby requires a valid checkpoint coming from its primary, but the logic did not check that the standby replayed up to the point of the checkpoint, causing the test checking for the execution of archive_cleanup_command to fail. This race was more visible in slow environments.

  Issue introduced in 46dea24, so no backpatch is needed.

  Author: Tom Lane
  Discussion: https://postgr.es/m/4015413.1649454951@sss.pgh.pa.us
* Fix race in TAP test 002_archiving.pl when restoring history file (Michael Paquier, 2022-04-18)

  This test, introduced in df86e52, uses a second standby to check that it is able to correctly remove RECOVERYHISTORY and RECOVERYXLOG at the end of recovery. This standby uses the archives of the primary to restore its contents, with some of the archive's contents coming from the first standby previously promoted. In slow environments, it was possible that the test did not check what it should, as the history file generated by the promotion of the first standby may not yet be stored on the archives the second standby feeds on. So, it could be possible that the second standby selects an incorrect timeline, without restoring a history file at all.

  This commit adds a wait phase to make sure that the history file required by the second standby is archived before this cluster is created. This relies on poll_query_until() with pg_stat_file() and an absolute path, something not supported in REL_10_STABLE.

  While on it, this adds a new test to check that the history file has been restored by looking at the logs of the second standby. This ensures that a RECOVERYHISTORY, whose removal needs to be checked, is created in the first place. This should make the test more robust.

  This test has been introduced by df86e52, but it came to light as a side effect of the bug fixed by acf1dd42, where the extra restore_command calls made the test much slower.

  Reported-by: Andres Freund
  Discussion: https://postgr.es/m/YlT23IvsXkGuLzFi@paquier.xyz
  Backpatch-through: 11
* Handle compression level in pg_receivewal for LZ4 (Michael Paquier, 2022-04-18)

  The new option set of pg_receivewal introduced in 042a923 to control the compression method makes it easy to pass down various options, including the compression level. The change needed for that is simple: feed one LZ4F_preferences_t to LZ4F_compressBegin().

  Note that LZ4F_INIT_PREFERENCES could be used to initialize the contents of LZ4F_preferences_t as required by LZ4, but this is only available since v1.8.3; memset()'ing the structure to 0 is enough.

  Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
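  A minimal sketch of wiring a compression level into LZ4's frame API, per the note above (variable names assumed; memset() to zero stands in for LZ4F_INIT_PREFERENCES, which only exists in LZ4 >= 1.8.3):

      LZ4F_preferences_t prefs;

      memset(&prefs, 0, sizeof(prefs));
      prefs.compressionLevel = compression_level;   /* from --compress */

      /* returns bytes written to outbuf, or an LZ4F error code */
      header_size = LZ4F_compressBegin(ctx, outbuf, outbuf_capacity, &prefs);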
* Add a temp-install prerequisite to src/interfaces/ecpg "checktcp". (Noah Misch, 2022-04-16)

  The target failed, tested $PATH binaries, or tested a stale temporary installation. Commit c66b438db62748000700c9b90b585e756dd54141 missed this. Back-patch to v10 (all supported versions).
* Don't retry restore_command while reading ahead. (Thomas Munro, 2022-04-17)

  Suppress further attempts to read ahead in the WAL if we run out of data, until the records already decoded have been replayed. This restores the traditional behavior for continuous archive recovery, which is to retry the failing restore_command only every 5 seconds. With the coding in 5dc0418f, we would start retrying every time through the recovery loop when our WAL decoding window hit the end of the current segment and we tried to look ahead into a not-yet-available next file. That was very slow.

  Also change the no_readahead_until mechanism to use <= rather than <, which seems more useful. Otherwise we'd either get one extra unwanted retry of restore_command, or we'd need to add 1 to an LSN.

  No change in behavior for regular streaming. That was already limited by the flushedUpto variable, which won't be updated until we replay what we have already.

  Reported by Andres Freund while analyzing the failure of a TAP test on build farm animal skink (investigation ongoing but probably due to otherwise unrelated timing bugs triggered by this slowness magnified by valgrind).

  Discussion: https://postgr.es/m/20220409005910.alw46xqmmgny2sgr%40alap3.anarazel.de
* pgstat: Use correct lock level in pgstat_drop_all_entries(). (Andres Freund, 2022-04-16)

  Previously we didn't, which led to an assertion failure when resetting partially loaded statistics. This was encountered on the buildfarm, for as-of-yet unknown reasons.

  Also tighten up a validity check when reading the stats file, verifying that 'E' signals the end of the file (rather than just stopping reading). That check is then exercised by a test that appends to the stats file, which crashed before the fix in pgstat_drop_all_entries().

  Reported by buildfarm animals mylodon and kestrel, via Tom Lane.

  Discussion: https://postgr.es/m/1656446.1650043715@sss.pgh.pa.us
* Fix incorrect logic in HaveRegisteredOrActiveSnapshot(). (Tom Lane, 2022-04-16)

  This function gave the wrong answer when there's more than one RegisteredSnapshots entry, whether or not any of them is the CatalogSnapshot. This leads to assertion failure in some scenarios involving fetching toasted data using a cursor. (As per discussion, I'm dubious that this is the right contract to be enforcing at all; but it surely doesn't help to be enforcing it incorrectly.)

  Fetching toasted data using a cursor is evidently under-tested, so add a test case too.

  Per report from Erik Rijkers. This is new code, so no need for back-patch.

  Discussion: https://postgr.es/m/dc9dd229-ed30-6c62-4c41-d733ffff776b@xs4all.nl
* Build libpq test programs under MSVC (Andrew Dunstan, 2022-04-16)

  This allows the newly added TAP tests to run.
* Use standard timeout, in 010_pg_basebackup.pl. (Noah Misch, 2022-04-15)

  Per buildfarm member mandrill. The test is new in v15, so no back-patch.
* Fix multi-table VACUUM VERBOSE accounting. (Peter Geoghegan, 2022-04-15)

  Per-backend global variables like VacuumPageHit are initialized once per VACUUM command. This was missed by commit 49c9d9fc, which unified VACUUM VERBOSE and autovacuum logging. As a result of that oversight, incorrect values were shown when multiple relations were processed by a single VACUUM VERBOSE command. Relations that happened to be processed later on would show "buffer usage:" values that incorrectly included buffer accesses made while processing earlier unrelated relations. The same accesses were counted multiple times.

  To fix, take initial values for the tracker variables at the start of heap_vacuum_rel(), and report delta values later on.
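  A sketch of the delta-based accounting described above (local names and message format assumed):

      /* at the start of heap_vacuum_rel(): snapshot the counters */
      int64   startpageshit = VacuumPageHit;
      int64   startpagesmiss = VacuumPageMiss;
      int64   startpagesdirty = VacuumPageDirty;

      /* ... vacuum one relation ... */

      /* report per-relation deltas, not cumulative totals */
      appendStringInfo(&buf, "buffer usage: %lld hits, %lld misses, %lld dirtied\n",
                       (long long) (VacuumPageHit - startpageshit),
                       (long long) (VacuumPageMiss - startpagesmiss),
                       (long long) (VacuumPageDirty - startpagesdirty));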
* psql: fix \l display for pre-v15 databases. (Tom Lane, 2022-04-15)

  With a pre-v15 server, show NULL for the "ICU Locale" column, matching what you see in v15 when the database locale isn't ICU. The previous coding incorrectly repeated datcollate here.

  (There's an unfinished discussion about whether to consolidate these columns in \l output, but in any case we'd want this fix for \l+ output.)

  Euler Taveira, per report from Christoph Berg

  Discussion: https://postgr.es/m/YlmIFCqu+TZSW4rB@msg.df7cb.de
* Tighten ComputeXidHorizons' handling of walsenders. (Tom Lane, 2022-04-15)

  ComputeXidHorizons (nee GetOldestXmin) thought that it could identify walsenders by checking for proc->databaseId == 0. Perhaps that was safe when the code was written, but it's been wrong at least since autovacuum was invented. Background processes that aren't connected to any particular database, such as the autovacuum launcher and logical replication launcher, look like that too.

  This imprecision is harmful because when such a process advertises an xmin, the result is to hold back dead-tuple cleanup in all databases, though it'd be sufficient to hold it back in shared catalogs (which are the only relations such a process can access). Aside from being generally inefficient, this has recently been seen to cause regression test failures in the buildfarm, as a consequence of the logical replication launcher's startup transaction preventing VACUUM from marking pages of a user table as all-visible.

  We only want that global hold-back effect for the case where a walsender is advertising a hot standby feedback xmin. Therefore, invent a new PGPROC flag that says that a process' xmin should be considered globally, and check that instead of using the incorrect databaseId == 0 test. Currently only a walsender sets that flag, and only if it is not connected to any particular database. (This is for bug-compatibility with the undocumented behavior of the existing code, namely that feedback sent by a client who has connected to a particular database would not be applied globally. I'm not sure this is a great definition; however, such a client is capable of issuing plain SQL commands, and I don't think we want xmins advertised for such commands to be applied globally. Perhaps this could do with refinement later.)

  While at it, I rewrote the comment in ComputeXidHorizons, and re-ordered the commented-upon if-tests, to make them match up for intelligibility's sake. This is arguably a back-patchable bug fix, but given the lack of complaints I think it prudent to let it age awhile in HEAD first.

  Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
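  A paraphrased sketch of the corrected decision (the flag and helper names here are illustrative, not the committed identifiers):

      if (statusFlags & PROC_AFFECTS_ALL_HORIZONS)
      {
          /* e.g. walsender applying hot standby feedback: hold back
           * dead-tuple cleanup in every database */
          apply_xmin_globally(xmin);
      }
      else if (proc->databaseId == MyDatabaseId)
      {
          /* ordinary backend in our own database */
          apply_xmin_locally(xmin);
      }
      /* databaseId == 0 alone now only holds back shared catalogs */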
* VACUUM VERBOSE: Show dead items for an empty table. (Peter Geoghegan, 2022-04-15)

  Be consistent about the lines that VACUUM VERBOSE outputs by including an "index scan not needed: " line for completely empty tables. This makes the output more readable, especially with multiple distinct VACUUM operations processed by the same VACUUM command. It's also more consistent; even empty tables can use the failsafe, which wasn't reported in the standard way until now.

  Follow-up to commit 6e20f460, which taught VACUUM VERBOSE to be more consistent about reporting on scanned pages with empty tables.
* Adjust VACUUM's removable cutoff log message. (Peter Geoghegan, 2022-04-15)

  The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often indicates the approximate number of XIDs consumed while VACUUM ran. However, there is at least one important exception: the cutoff could be held back by a snapshot that was acquired before our VACUUM even began. Successive VACUUM operations may even use exactly the same old cutoff in extreme cases involving long held snapshots.

  The log messages that described how removable cutoff aged (which were added by commit 872770fd) created the impression that we were reporting on how VACUUM's usable cutoff advanced while VACUUM ran, which was misleading in these extreme cases. Fix by using a more general wording.

  Per gripe from Tom Lane.

  In passing, relocate related instrumentation code for clarity.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
* Revert "Temporarily add some probes of tenk1's relallvisible in ↵Tom Lane2022-04-15
| | | | | | | create_index.sql." This reverts commit 5bb2b6abc8d6cf120a814317816e4384bcbb9c1e. Not needed anymore.
* Small cleanups in SQL/JSON code (Andrew Dunstan, 2022-04-15)

  These are to keep Coverity happy. In one case remove a redundant NULL check, and in another explicitly ignore a function result that is already known.
* pgstat: set timestamps of fixed-numbered stats after a crash. (Andres Freund, 2022-04-14)

  When not loading stats at startup (i.e. pgstat_discard_stats() getting called), the reset timestamps of fixed-numbered stats would be left at 0. Oversight in 5891c7a8ed8.

  Instead use pgstat_reset_after_failure() and add tests verifying that fixed-numbered reset timestamps are set appropriately.

  Reported-By: "David G. Johnston" <david.g.johnston@gmail.com>
  Discussion: https://postgr.es/m/CAKFQuwamFuaQHKdhcMt4Gbw5+Hca2UE741B8gOOXoA=TtAd2Yw@mail.gmail.com
* Have CLUSTER ignore partitions not owned by caller (Alvaro Herrera, 2022-04-14)

  If a partitioned table has partitions owned by roles other than the owner of the partitioned table, don't include them in the to-be-clustered list. This is similar to what VACUUM FULL does (except we do it sooner, because there is no reason to postpone it). Add a simple test to verify that only owned partitions are clustered.

  While at it, change memory context switch-and-back to occur once per partition instead of outside of the loop.

  Author: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
* Temporarily add some probes of tenk1's relallvisible in create_index.sql. (Tom Lane, 2022-04-14)

  This is to gather some more evidence about why buildfarm member wrasse is failing. We should revert it (or at least scale it way back) once that's resolved.

  Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
* Improve a couple of sql/json error messages (Andrew Dunstan, 2022-04-14)

  Fix the grammar in two, and add a hint to one.
* Fix transformJsonBehavior (Andrew Dunstan, 2022-04-14)

  Commit 1a36bc9dba8 contained some logic that was a little opaque and could have involved a NULL dereference, as complained about by Coverity. Make the logic more transparent and in doing so avoid the NULL dereference.
* Add missing spaces after single-line comments (David Rowley, 2022-04-14)

  Only 1 of 3 of these changes appears to be handled by pgindent. That change is new to v15. The remaining two appear to be left alone by pgindent. The exact reason for that is not 100% clear to me. It seems related to the fact that it's a line that contains *only* a single-line comment and no actual code. It does not seem worth investigating this in too much detail. In any case, these do not conform to our usual practices, so fix them.

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
* Fix case sensitivity in psql's tab completion for GUC names. (Tom Lane, 2022-04-13)

  Input for these should be case-insensitive, but was not completely so. Comparing to the similar queries for timezone names, I realized that we'd missed forcing the comparison pattern to lower-case. With that, it behaves as I expect.

  While here, flatten the sub-selects in these queries; I don't find that those add any readability.

  Discussion: https://postgr.es/m/3369130.1649348542@sss.pgh.pa.us
* Further tweak the default behavior of psql's \dconfig. (Tom Lane, 2022-04-13)

  Define "parameters with non-default settings" as being those that not only have pg_settings.source different from 'default', but also have a current value different from the hard-wired boot_val. Adding the latter restriction removes a number of not-very-interesting cases where the active setting is chosen by initdb but in practice tends to be the same all the time.

  Per discussion with Jonathan Katz.

  Discussion: https://postgr.es/m/YlFQLzlPi4QD0wSi@msg.df7cb.de
* Prevent access to no-longer-pinned buffer in heapam_tuple_lock(). (Tom Lane, 2022-04-13)

  heap_fetch() used to have a "keep_buf" parameter that told it to return ownership of the buffer pin to the caller after finding that the requested tuple TID exists but is invisible to the specified snapshot. This was thoughtlessly removed in commit 5db6df0c0, which broke heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function needs to do more accesses to the tuple even if it's invisible. The net effect is that we would continue to touch the page for a microsecond or two after releasing pin on the buffer. Usually no harm would result; but if a different session decided to defragment the page concurrently, we could see garbage data and mistakenly conclude that there's no newer tuple version to chain up to. (It's hard to say whether this has happened in the field. The bug was actually found thanks to a later change that allowed valgrind to detect accesses to non-pinned buffers.)

  The most reasonable way to fix this is to reintroduce keep_buf, although I made it behave slightly differently: buffer ownership is passed back only if there is a valid tuple at the requested TID. In HEAD, we can just add the parameter back to heap_fetch(). To avoid an API break in the back branches, introduce an additional function heap_fetch_extended() in those branches.

  In HEAD there is an additional, less obvious API change: tuple->t_data will be set to NULL in all cases where buffer ownership is not returned, in particular when the tuple exists but fails the time qual (and !keep_buf). This is to defend against any other callers attempting to access non-pinned buffers. We concluded that making that change in back branches would be more likely to introduce problems than cure any.

  In passing, remove a comment about heap_fetch that was obsoleted by 9a8ee1dc6.

  Per bug #17462 from Daniil Anisimov. Back-patch to v12 where the bug was introduced.

  Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
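  Per the description above, the HEAD-style declaration would look roughly like this (a sketch; consult heapam.h for the authoritative signature):

      extern bool heap_fetch(Relation relation, Snapshot snapshot,
                             HeapTuple tuple, Buffer *userbuf,
                             bool keep_buf);

  With keep_buf = true, a caller such as heapam_tuple_lock() keeps the pin in *userbuf whenever a valid tuple exists at the TID, even one that fails the snapshot test, so it can safely continue examining the tuple.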
* Remove extraneous blank lines before block-closing braces (Alvaro Herrera, 2022-04-13)

  These are useless and distracting. We wouldn't have written the code with them to begin with, so there's no reason to keep them.

  Author: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
  Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
* Release cache tuple when no longer needed (Alvaro Herrera, 2022-04-13)

  There was a small buglet in commit 52e4f0cd472d whereby a tuple acquired from cache was not released, giving rise to WARNING messages; fix that.

  While at it, restructure the code a bit on stylistic grounds.

  Author: Hou zj <houzj.fnst@fujitsu.com>
  Reported-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Discussion: https://postgr.es/m/CAHut+PvKTyhTBtYCQsP6Ph7=o-oWRSX+v+PXXLXp81-o2bazig@mail.gmail.com
* Fix finalization for json_objectagg and friends (Andrew Dunstan, 2022-04-13)

  Commit f4fb45d15c misguidedly tried to free some state during aggregate finalization for json_objectagg. This resulted in attempts to access freed memory, especially when the function is used as a window function. Commit 4eb9798879 attempted to ameliorate that, but in fact it should just be ripped out, which is done here. Also add some regression tests for json_objectagg in various flavors as a window function.

  Original report from Jaime Casanova, diagnosis by Andres Freund.

  Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
* Fix incorrect format placeholders (Peter Eisentraut, 2022-04-13)
* Remove "recheck" argument from check_index_is_clusterable()Michael Paquier2022-04-13
| | | | | | | | | | | The last usage of this argument in this routine can be tracked down to 7e2f9062, aka 11 years ago. Getting rid of this argument can also be an advantage for extensions calling check_index_is_clusterable(), as it removes any need to worry about the meaning of what a recheck would be at this level. Author: Justin Pryzby Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
* Rework compression options of pg_receivewal (Michael Paquier, 2022-04-13)

  Since babbbb5 and the introduction of LZ4 in pg_receivewal, the compression of the WAL archived is controlled by two options:
  - --compression-method, with "gzip", "none" or "lz4" as possible values.
  - --compress=N, to specify a compression level. This includes a backward-incompatible change where a value of 0 leads to a failure instead of no compression enforced.

  This commit takes advantage of a4b5754 and 3603f7c to rework the compression options of pg_receivewal, as of:
  - The removal of --compression-method.
  - The extension of --compress to use the same grammar as pg_basebackup, with a METHOD:DETAIL format, where METHOD is "gzip", "none" or "lz4" and DETAIL is a comma-separated list of options; the only keyword supported now is "level", to control the compression level. If only an integer is specified as value of this option, "none" is implied for 0 and "gzip" is implied otherwise. This brings back --compress to be backward-compatible with ~14, while still supporting LZ4.

  This has also the advantage of centralizing the set of checks used by pg_receivewal to validate its compression options.

  Author: Michael Paquier
  Reviewed-by: Robert Haas, Georgios Kokolatos
  Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
* Revert the addition of GetMaxBackends() and related stuff. (Robert Haas, 2022-04-12)

  This reverts commits 0147fc7, 4567596, aa64f23, and 5ecd018. There is no longer agreement that introducing this function was the right way to address the problem. The consensus now seems to favor trying to make a correct value for MaxBackends available to modules executing their _PG_init() functions.

  Nathan Bossart

  Discussion: http://postgr.es/m/20220323045229.i23skfscdbvrsuxa@jrouhaud