path: root/src
...
* Don't set ThisTimeLineID when there's no reason to do so. (Robert Haas, 2021-11-05)

  In slotfuncs.c, pg_replication_slot_advance() needs to determine the LSN up to which the slot should be advanced, but that doesn't require us to update ThisTimeLineID, because none of the code called from here depends on it. If the replication slot is logical, pg_logical_replication_slot_advance will call read_local_xlog_page, which does use ThisTimeLineID, but also takes care of making sure it's up to date. If the replication slot is physical, the timeline isn't used for anything at all.

  In logicalfuncs.c, pg_logical_slot_get_changes_guts() has the same issue: the only code we're going to run that cares about timelines is in or downstream of read_local_xlog_page, which already makes sure that the correct value gets set. Hence, don't do it here.

  Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and Álvaro Herrera.
  Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com

* Avoid crash in rare case of concurrent DROP (Alvaro Herrera, 2021-11-05)

  When a role being dropped is referenced by catalog objects that are concurrently also being dropped, a crash can result while trying to construct the string that describes the objects. Suppress that by ignoring objects whose descriptions are returned as NULL. The majority of relevant code sites were already cautious about this; we had just missed a couple.

  This is an old bug, so backpatch all the way back.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/17126-21887f04508cb5c8@postgresql.org

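  An illustrative backend-style sketch (not part of the commit message, and not the committed code) of the skip-NULL-description pattern the fix describes; the helper name and surrounding variables are hypothetical:

      /*
       * Editor's sketch only: when assembling the list of dependent objects,
       * skip any whose description comes back NULL because the object was
       * dropped concurrently.  get_object_description_or_null() is a
       * hypothetical stand-in for the real catalog lookup.
       */
      for (int i = 0; i < nobjects; i++)
      {
          char   *desc = get_object_description_or_null(&objects[i]);

          if (desc == NULL)
              continue;           /* object vanished under us; don't crash */

          appendStringInfoString(&buf, desc);
          pfree(desc);
      }
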
* Introduce 'bbstreamer' abstraction to modularize pg_basebackup. (Robert Haas, 2021-11-05)

  pg_basebackup knows how to do quite a few things with a backup that it gets from the server, like just write out the files, or compress them first, or even parse the tar format and inject a modified postgresql.auto.conf file into the archive generated by the server. Unfortunately, this makes pg_basebackup.c a very large source file, and also somewhat difficult to enhance, because for example the knowledge that the server is sending us a 'tar' file rather than some other sort of archive is spread all over the place rather than centralized.

  In an effort to improve this situation, this commit invents a new 'bbstreamer' abstraction. Each archive received from the server is fed to a bbstreamer which may choose to dispose of it or pass it along to some other bbstreamer. Chunks may also be "labelled" according to whether they are part of the payload data of a file in the archive or part of the archive metadata.

  So, for example, if we want to take a tar file, modify the postgresql.auto.conf file it contains, and then gzip the result and write it out, we can use a bbstreamer_tar_parser to parse the tar file received from the server, a bbstreamer_recovery_injector to modify the contents of postgresql.auto.conf, a bbstreamer_tar_archiver to replace the tar headers for the file modified in the previous step with newly-built ones that are correct for the modified file, and a bbstreamer_gzip_writer to gzip and write the resulting data. Only the objects with "tar" in the name know anything about the tar archive format, and in theory we could re-archive using some other format rather than "tar" if somebody wanted to write the code.

  These changes do add a substantial amount of code, but I think the result is a lot more maintainable and extensible. pg_basebackup.c itself shrinks by roughly a third, with a lot of the complexity previously contained there moving into the newly-added files.

  Patch by me. The larger patch series of which this is a part has been reviewed and tested at various times by Andres Freund, Sumanta Mukherjee, Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger, Sergei Kornilov, and Jeevan Ladhe.
  Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
  Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com

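  To illustrate the chain-of-streamers design described above, here is a small self-contained C sketch (an editor's illustration under invented names, not the actual bbstreamer API): each stage transforms a chunk and forwards it to its successor, and only the final stage writes anything out.

      #include <stdio.h>
      #include <string.h>
      #include <ctype.h>

      typedef struct demo_streamer demo_streamer;
      struct demo_streamer
      {
          void        (*content) (demo_streamer *self, const char *data, int len);
          demo_streamer *next;    /* successor in the chain, if any */
      };

      /* Terminal streamer: consumes whatever it receives. */
      static void
      writer_content(demo_streamer *self, const char *data, int len)
      {
          (void) self;
          fwrite(data, 1, (size_t) len, stdout);
      }

      /* Filtering streamer: transforms each chunk, then forwards it. */
      static void
      upper_content(demo_streamer *self, const char *data, int len)
      {
          char        buf[1024];  /* assumes chunks fit, for brevity */

          for (int i = 0; i < len; i++)
              buf[i] = (char) toupper((unsigned char) data[i]);
          self->next->content(self->next, buf, len);
      }

      int
      main(void)
      {
          demo_streamer writer = {writer_content, NULL};
          demo_streamer upper = {upper_content, &writer};
          const char *chunk = "archive member payload\n";

          /* Feed one chunk into the head of the chain. */
          upper.content(&upper, chunk, (int) strlen(chunk));
          return 0;
      }
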
* Introduce 'bbsink' abstraction to modularize base backup code. (Robert Haas, 2021-11-05)

  The base backup code has accumulated a healthy number of new features over the years, but it's becoming increasingly difficult to maintain and further enhance that code because there's no real separation of concerns. For example, the code that knows the details of how we send data to the client using the libpq protocol is scattered throughout basebackup.c, rather than being centralized in one place.

  To try to improve this situation, introduce a new 'bbsink' object which acts as a recipient for archives generated during the base backup process and also for the backup manifest. This commit introduces three types of bbsink: a 'copytblspc' bbsink forwards the backup to the client using one COPY OUT operation per tablespace and another for the manifest, a 'progress' bbsink performs command progress reporting, and a 'throttle' bbsink performs rate-limiting. The 'progress' and 'throttle' bbsink types also forward the data to a successor bbsink; at present, the last bbsink in the chain will always be of type 'copytblspc'. There are plans to add more types of 'bbsink' in future commits.

  This abstraction is a bit leaky in the case of progress reporting, but this still seems cleaner than what we had before.

  Patch by me, reviewed and tested by Andres Freund, Sumanta Mukherjee, Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger, and Jeevan Ladhe.
  Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
  Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com

* amcheck: Add additional TOAST pointer checks. (Robert Haas, 2021-11-05)

  Expand the checks of toasted attributes to complain if the rawsize is overlarge. For compressed attributes, also complain if compression appears to have expanded the attribute or if the compression method is invalid.

  Mark Dilger, reviewed by Justin Pryzby, Alexander Alekseev, Heikki Linnakangas, Greg Stark, and me.
  Discussion: http://postgr.es/m/8E42250D-586A-4A27-B317-8B062C3816A8@enterprisedb.com

* pgcrypto: Remove non-OpenSSL support (Peter Eisentraut, 2021-11-05)

  pgcrypto had internal implementations of some encryption algorithms, as an alternative to calling out to OpenSSL. These were rarely used, since most production installations are built with OpenSSL. Moreover, maintaining parallel code paths makes the code more complex and difficult to maintain.

  This patch removes these internal implementations. Now, pgcrypto is only built if OpenSSL support is configured.

  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://www.postgresql.org/message-id/flat/0b42f1df-8cba-6a30-77d7-acc241cc88c1%40enterprisedb.com

* Improve psql tab completion for COMMENT (Michael Paquier, 2021-11-05)

  Completion is added for more object types, like domain constraints, text search-ish objects or policies. Moreover, the area is reorganized, changing the list of objects supported by COMMENT to be in the same order as the documentation to ease future additions.

  Author: Ken Kato
  Reviewed-by: Fujii Masao, Shinya Kato, Suraj Khamkar, Michael Paquier
  Discussion: https://postgr.es/m/6e0c2f3f657b229bea32d098d118f307@oss.nttdata.com

* Add hardening to catch invalid TIDs in indexes. (Peter Geoghegan, 2021-11-04)

  Add hardening to the heapam index tuple deletion path to catch TIDs in index pages that point to a heap item that index tuples should never point to. The corruption we're trying to catch here is particularly tricky to detect, since it typically involves "extra" (corrupt) index tuples, as opposed to the absence of required index tuples in the index.

  For example, a heap TID from an index page that turns out to point to an LP_UNUSED item in the heap page has a good chance of being caught by one of the new checks. There is a decent chance that the recently fixed parallel VACUUM bug (see commit 9bacec15) would have been caught had that particular check been in place for Postgres 14. No backpatch of this extra hardening for now, though.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wzk-4_raTzawWGaiqNvkpwDXxv3y1AQhQyUeHfkU=tFCeA@mail.gmail.com

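  As a rough illustration of the LP_UNUSED case mentioned above, a hedged backend-style fragment follows; the variable names and error wording are assumptions, not the committed check:

      /*
       * Editor's sketch of the kind of check described above, not the committed
       * code; "page" is the heap page and "htid" the heap TID taken from the
       * index, both names illustrative.
       */
      OffsetNumber offnum = ItemPointerGetOffsetNumber(htid);
      ItemId       itemid = PageGetItemId(page, offnum);

      if (!ItemIdIsUsed(itemid))
          ereport(ERROR,
                  (errcode(ERRCODE_DATA_CORRUPTED),
                   errmsg_internal("index TID points to unused item at offset %u of block %u",
                                   (unsigned) offnum,
                                   (unsigned) ItemPointerGetBlockNumber(htid))));
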
* Add support for LZ4 compression in pg_receivewal (Michael Paquier, 2021-11-05)

  pg_receivewal gains a new option, --compression-method=lz4, available when the code is compiled with --with-lz4. Similarly to gzip, this gives the possibility to compress archived WAL segments with LZ4. This option is not compatible with --compress.

  The implementation uses LZ4 frames, and is compatible with simple lz4 commands. Like gzip, using --synchronous ensures that any data will be flushed to disk within the current .partial segment, so that it is possible to retrieve as much WAL data as possible even from a non-completed segment (this requires completing the partial file with zeros up to the WAL segment size supported by the backend after decompression, but this is the same as gzip).

  The calculation of the streaming start LSN is able to transparently find and check LZ4-compressed segments. Contrary to gzip, where the uncompressed size is directly stored in the object read, the LZ4 chunk protocol does not store the uncompressed size by default. There is contentSize that can be used with LZ4 frames, but that would not help if using an archive that includes segments compressed with the defaults of a "lz4" command, where this is not stored. So, this commit has taken the most extensible approach by decompressing the already-archived segment to check its uncompressed size, through a blank output buffer in chunks of 64kB (no actual performance difference noticed with 8kB, 16kB or 32kB, and the operation in itself is actually fast).

  Tests have been added to verify the creation and correctness of the generated LZ4 files. The latter is achieved by the use of command "lz4", if found in the environment.

  The tar-based WAL method in walmethods.c, used now only by pg_basebackup, does not know yet about LZ4. Its code could be extended for this purpose.

  Author: Georgios Kokolatos
  Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar
  Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me

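  The "decompress into a blank buffer to learn the uncompressed size" approach described above can be sketched with liblz4's frame API; the following standalone C example (an editor's illustration with an assumed 64kB chunk size and a discard buffer, not pg_receivewal's code) prints the uncompressed size of an .lz4 file without keeping any of the data:

      #include <stdio.h>
      #include <lz4frame.h>

      #define CHUNK 65536             /* 64kB, matching the commit's choice */

      int
      main(int argc, char **argv)
      {
          static char in[CHUNK];
          static char scratch[CHUNK]; /* blank output buffer, contents discarded */
          unsigned long long total = 0;
          LZ4F_dctx  *dctx;
          size_t      status;
          FILE       *fp;

          if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL)
          {
              fprintf(stderr, "usage: %s file.lz4\n", argv[0]);
              return 1;
          }
          status = LZ4F_createDecompressionContext(&dctx, LZ4F_VERSION);
          if (LZ4F_isError(status))
          {
              fprintf(stderr, "lz4: %s\n", LZ4F_getErrorName(status));
              return 1;
          }

          for (;;)
          {
              size_t      nread = fread(in, 1, sizeof(in), fp);
              size_t      consumed = 0;

              if (nread == 0)
                  break;

              /* Feed this input chunk until it has been fully consumed. */
              while (consumed < nread)
              {
                  size_t      dst_size = sizeof(scratch);
                  size_t      src_size = nread - consumed;

                  status = LZ4F_decompress(dctx, scratch, &dst_size,
                                           in + consumed, &src_size, NULL);
                  if (LZ4F_isError(status))
                  {
                      fprintf(stderr, "lz4: %s\n", LZ4F_getErrorName(status));
                      return 1;
                  }
                  total += dst_size;  /* count the bytes, discard the data */
                  consumed += src_size;
              }
          }

          printf("uncompressed size: %llu bytes\n", total);
          LZ4F_freeDecompressionContext(dctx);
          fclose(fp);
          return 0;
      }
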
* Add various assertions to heap pruning code. (Peter Geoghegan, 2021-11-04)

  These assertions document (and verify) our high level assumptions about how pruning can and cannot affect existing items from target heap pages. For example, one of the new assertions verifies that pruning does not set a heap-only tuple to LP_DEAD.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/CAH2-Wz=vhvBx1GjF+oueHh8YQcHoQYrMi0F0zFMHEr8yc4sCoA@mail.gmail.com

* Fix some thinkos with pg_receivewal --compression-method (Michael Paquier, 2021-11-04)

  The option name was incorrect in one of the error messages, and the short option 'I' was used in the code but we did not intend things to be this way. While on it, fix the documentation to refer to a "method", and not a "level".

  Oversights in commit d62bcc8, that I have detected after more review of the LZ4 patch for pg_receivewal.

* Rework compression options of pg_receivewal (Michael Paquier, 2021-11-04)

  pg_receivewal includes since cada1af the option --compress, to allow the compression of WAL segments using gzip, with a value of 0 (the default) meaning that no compression is used. This commit introduces a new option, called --compression-method, able to use as values "none", the default, and "gzip", to make things more extensible. The case of --compress=0 becomes fuzzy with this option layer, so we have made the choice to make pg_receivewal return an error when using "none" and a non-zero compression level, meaning that the authorized values of --compress are now [1,9] instead of [0,9]. Not specifying --compress with "gzip" as compression method makes pg_receivewal use the default of zlib instead (Z_DEFAULT_COMPRESSION).

  The code in charge of finding the streaming start LSN when scanning the existing archives is refactored and made more extensible. While on it, rename "compression" to "compression_level" in walmethods.c, to reduce the confusion with the introduction of the compression method, even if the tar method used by pg_basebackup does not rely on the compression method (yet, at least), but just on the compression level (this area could be improved more, actually).

  This is in preparation for an upcoming patch that adds LZ4 support to pg_receivewal.

  Author: Georgios Kokolatos
  Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar, Robert Haas
  Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me

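  A standalone C sketch of how the new --compress and --compression-method values described above combine; the function name and return conventions are invented for this example, not taken from pg_receivewal:

      #include <stdio.h>
      #include <string.h>
      #include <zlib.h>               /* for Z_DEFAULT_COMPRESSION */

      /*
       * Editor's sketch of the rules described above.  Returns the zlib level
       * to use, 0 for no compression, or -2 on an invalid combination.
       */
      static int
      resolve_compression(const char *method, int compress_level)
      {
          if (strcmp(method, "none") == 0)
          {
              if (compress_level != 0)
              {
                  fprintf(stderr, "cannot use --compress with --compression-method=none\n");
                  return -2;
              }
              return 0;
          }
          if (strcmp(method, "gzip") == 0)
          {
              if (compress_level == 0)
                  return Z_DEFAULT_COMPRESSION;   /* zlib's default, i.e. -1 */
              if (compress_level < 1 || compress_level > 9)
              {
                  fprintf(stderr, "--compress must be in the range 1..9\n");
                  return -2;
              }
              return compress_level;
          }
          fprintf(stderr, "unknown compression method \"%s\"\n", method);
          return -2;
      }

      int
      main(void)
      {
          printf("%d\n", resolve_compression("gzip", 0));  /* -1: zlib default */
          printf("%d\n", resolve_compression("none", 5));  /* -2: rejected */
          return 0;
      }
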
* Update alternative expected output file. (Heikki Linnakangas, 2021-11-03)

  Previous commit added a test to 'largeobject', but neglected the alternative expected output file 'largeobject_1.source'. Per failure on buildfarm animal 'hamerkop'.

  Discussion: https://www.postgresql.org/message-id/DBA08346-9962-4706-92D1-230EE5201C10@yesql.se

* Fix snapshot reference leak if lo_export fails. (Heikki Linnakangas, 2021-11-03)

  If lo_export() fails to open the target file or to write to it, it leaks the created LargeObjectDesc and its snapshot in the top-transaction context and resource owner. That's pretty harmless, it's a small leak after all, but it gives the user a "Snapshot reference leak" warning.

  Fix by using a short-lived memory context and no resource owner for transient LargeObjectDescs that are opened and closed within one function call. The leak is easiest to reproduce with lo_export() on a directory that doesn't exist, but in principle the other lo_* functions could also fail.

  Backpatch to all supported versions.

  Reported-by: Andrew B
  Reviewed-by: Alvaro Herrera
  Discussion: https://www.postgresql.org/message-id/32bf767a-2d65-71c4-f170-122f416bab7e@iki.fi

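  The short-lived-context part of the fix follows a common backend pattern; a hedged sketch of that pattern only (context name illustrative, resource-owner handling omitted):

      /*
       * Editor's sketch of the short-lived-context half of the fix (backend
       * context): keep transient per-call state in its own memory context
       * instead of the top-transaction context, so nothing leaks if the
       * function errors out partway through.
       */
      MemoryContext lo_ctx;
      MemoryContext oldctx;

      lo_ctx = AllocSetContextCreate(CurrentMemoryContext,
                                     "lo_export workspace",
                                     ALLOCSET_DEFAULT_SIZES);
      oldctx = MemoryContextSwitchTo(lo_ctx);

      /* ... open the large object and copy its contents to the target file ... */

      MemoryContextSwitchTo(oldctx);
      MemoryContextDelete(lo_ctx);    /* releases everything allocated above */
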
* Fix incorrect format placeholder (Peter Eisentraut, 2021-11-03)

* Fix parallel amvacuumcleanup safety bug. (Peter Geoghegan, 2021-11-02)

  Commit b4af70cb inverted the return value of the function parallel_processing_is_safe(), but missed the amvacuumcleanup test. Index AMs that don't support parallel cleanup at all were affected.

  The practical consequences of this bug were not very serious. Hash indexes are affected, but since they just return the number of blocks during hashvacuumcleanup anyway, it can't have had much impact.

  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoA-Em+aeVPmBbL_s1V-ghsJQSxYL-i3JP8nTfPiD1wjKw@mail.gmail.com
  Backpatch: 14-, where commit b4af70cb appears.

* Blind attempt to silence SSL compile failures on hamerkop. (Tom Lane, 2021-11-02)

  Buildfarm member hamerkop has been failing for the last few days with errors that look like OpenSSL's X509-related symbols have not been imported into be-secure-openssl.c. It's unclear why this should be, but let's try adding an explicit #include of <openssl/x509v3.h>, as there has long been in fe-secure-openssl.c.

  Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us

* Don't overlook indexes during parallel VACUUM. (Peter Geoghegan, 2021-11-02)

  Commit b4af70cb, which simplified state managed by VACUUM, performed refactoring of parallel VACUUM in passing. Confusion about the exact details of the tasks that the leader process is responsible for led to code that made it possible for parallel VACUUM to miss a subset of the table's indexes entirely. Specifically, indexes that fell under the min_parallel_index_scan_size size cutoff were missed. These indexes are supposed to be vacuumed by the leader (alongside any parallel unsafe indexes), but weren't vacuumed at all.

  Affected indexes could easily end up with duplicate heap TIDs, once heap TIDs were recycled for new heap tuples. This had generic symptoms that might be seen with almost any index corruption involving structural inconsistencies between an index and its table.

  To fix, make sure that the parallel VACUUM leader process performs any required index vacuuming for indexes that happen to be below the size cutoff. Also document the design of parallel VACUUM with these below-size-cutoff indexes.

  It's unclear how many users might be affected by this bug. There had to be at least three indexes on the table to hit the bug: a smaller index, plus at least two additional indexes that themselves exceed the size cutoff. Cases with just one additional index would not run into trouble, since the parallel VACUUM cost model requires two larger-than-cutoff indexes on the table to apply any parallel processing. Note also that autovacuum was not affected, since it never uses parallel processing.

  Test case based on tests from a larger patch to test parallel VACUUM by Masahiko Sawada.

  Many thanks to Kamigishi Rei for her invaluable help with tracking this problem down.

  Author: Peter Geoghegan <pg@bowt.ie>
  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-By: Kamigishi Rei <iijima.yun@koumakan.jp>
  Reported-By: Andrew Gierth <andrew@tao11.riddles.org.uk>
  Diagnosed-By: Andres Freund <andres@anarazel.de>
  Bug: #17245
  Discussion: https://postgr.es/m/17245-ddf06aaf85735f36@postgresql.org
  Discussion: https://postgr.es/m/20211030023740.qbnsl2xaoh2grq3d@alap3.anarazel.de
  Backpatch: 14-, where the refactoring commit appears.

* Ensure consistent logical replication of datetime and float8 values. (Tom Lane, 2021-11-02)

  In walreceiver, set the publisher's relevant GUCs (datestyle, intervalstyle, extra_float_digits) to the same values that pg_dump uses, and for the same reason: we need the output to be read the same way regardless of the receiver's settings. Without this, it's possible for subscribers to misinterpret transmitted values.

  Although this is clearly a bug fix, it's not without downsides: subscribers that are storing values into some other datatype, such as text, could get different results than before, and perhaps be unhappy about that. Given the lack of previous complaints, it seems best to change this only in HEAD, and to call it out as an incompatible change in v15.

  Japin Li, per report from Sadhuprasad Patro
  Discussion: https://postgr.es/m/CAFF0-CF=D7pc6st-3A9f1JnOt0qmc+BcBPVzD6fLYisKyAjkGA@mail.gmail.com

* Fix variable lifespan in ExecInitCoerceToDomain(). (Tom Lane, 2021-11-02)

  This undoes a mistake in 1ec7679f1: domainval and domainnull were meant to live across loop iterations, but they were incorrectly moved inside the loop. The effect was only to emit useless extra EEOP_MAKE_READONLY steps, so it's not a big deal; nonetheless, back-patch to v13 where the mistake was introduced.

  Ranier Vilela
  Discussion: https://postgr.es/m/CAEudQAqXuhbkaAp-sGH6dR6Nsq7v28_0TPexHOm6FiDYqwQD-w@mail.gmail.com

* Avoid O(N^2) behavior in SyncPostCheckpoint(). (Tom Lane, 2021-11-02)

  As in commits 6301c3ada and e9d9ba2a4, avoid doing repetitive list_delete_first() operations, since that would be expensive when there are many files waiting to be unlinked.

  This is a slightly larger change than in those cases. We have to keep the list state valid for calls to AbsorbSyncRequests(), so it's necessary to invent a "canceled" field instead of immediately deleting PendingUnlinkEntry entries. Also, because we might not be able to process all the entries, we need a new list primitive list_delete_first_n().

  list_delete_first_n() is almost list_copy_tail(), but it modifies the input List instead of making a new copy. I found a couple of existing uses of the latter that could profitably use the new function. (There might be more, but the other callers look like they probably shouldn't overwrite the input List.)

  As before, back-patch to v13.

  Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com

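  A hedged backend-style sketch of why the batch primitive helps: trimming N consumed entries with one list_delete_first_n() call shifts the surviving elements once, instead of once per deleted entry as a list_delete_first() loop would. Only the two function names come from the commit; the wrappers below are illustrative and assume nprocessed does not exceed the list length:

      #include "postgres.h"
      #include "nodes/pg_list.h"

      /* Old pattern: every call shifts all remaining elements, O(N^2) overall. */
      static List *
      trim_processed_slow(List *pending, int nprocessed)
      {
          for (int i = 0; i < nprocessed; i++)
              pending = list_delete_first(pending);
          return pending;
      }

      /* New pattern: one call, one shift of the surviving tail, O(N) overall. */
      static List *
      trim_processed_fast(List *pending, int nprocessed)
      {
          return list_delete_first_n(pending, nprocessed);
      }
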
* pgbench: Fix typo in comment. (Fujii Masao, 2021-11-02)

  Discussion: https://postgr.es/m/f9041ec2-46b6-1b41-0e84-9c8a1e2d6bda@oss.nttdata.com

* pgbench: Improve error-handling in pgbench. (Fujii Masao, 2021-11-02)

  Previously, failures of the initial connection or of the logfile open caused pgbench to proceed with the benchmarking, report the incomplete results, and exit with status 2. It didn't make sense to proceed with the benchmarking when pgbench could not even start as prescribed.

  This commit improves pgbench so that such early errors that occur when starting the benchmark make pgbench exit immediately with status 1.

  Author: Yugo Nagata
  Reviewed-by: Fabien COELHO, Kyotaro Horiguchi, Fujii Masao
  Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com

* Move MarkCurrentTransactionIdLoggedIfAny() out of the critical section. (Amit Kapila, 2021-11-02)

  We don't modify any shared state in this function which could cause problems for any concurrent session. This will make it look similar to the other updates for the same structure (TransactionState) which avoids confusion for future readers of code.

  Author: Dilip Kumar
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/E1mSoYz-0007Fh-D9@gemulon.postgresql.org

* Replace XLOG_INCLUDE_XID flag with a more localized flag. (Amit Kapila, 2021-11-02)

  Commit 0bead9af484c introduced XLOG_INCLUDE_XID flag to indicate that the WAL record contains subXID-to-topXID association. It uses that flag later to mark in CurrentTransactionState that top-xid is logged so that we should not try to log it again with the next WAL record in the current subtransaction. However, we can use a localized variable to pass that information.

  In passing, change the related function and variable names to make them consistent with what the code is actually doing.

  Author: Dilip Kumar
  Reviewed-by: Alvaro Herrera, Amit Kapila
  Discussion: https://postgr.es/m/E1mSoYz-0007Fh-D9@gemulon.postgresql.org

* Replace unicode characters in comments with ascii (Daniel Gustafsson, 2021-11-01)

  The unicode characters, while in comments and not code, caused MSVC to emit compiler warning C4819:

      The file contains a character that cannot be represented in the current
      code page (number). Save the file in Unicode format to prevent data loss.

  Fix by replacing the characters in print.c with descriptive comments containing the codepoints and symbol names, and remove the character in brin_bloom.c which was a footnote reference copied from the paper citation.

  Per report from hamerkop in the buildfarm.

  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/340E4118-0D0C-4E85-8141-8C40EB22DA3A@yesql.se

* Avoid some other O(N^2) hazards in list manipulation. (Tom Lane, 2021-11-01)

  In the same spirit as 6301c3ada, fix some more places where we were using list_delete_first() in a loop and thereby risking O(N^2) behavior. It's not clear that the lists manipulated in these spots can get long enough to be really problematic ... but it's not clear that they can't, either, and the fixes are simple enough.

  As before, back-patch to v13.

  Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com

* Handle XLOG_OVERWRITE_CONTRECORD in DecodeXLogOp (Alvaro Herrera, 2021-11-01)

  Failing to do so results in inability of logical decoding to process the WAL stream. Handle it by doing nothing.

  Backpatch all the way back.

  Reported-by: Petr Jelínek <petr.jelinek@enterprisedb.com>

* Add TAP test for pg_receivewal with timeline switch (Michael Paquier, 2021-11-01)

  pg_receivewal is able to follow a timeline switch, but this was not tested. This test uses an empty archive location with a restart done from a slot, making its implementation a tad simpler than if we would reuse an existing archive directory.

  Author: Ronan Dunklau
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan

* Preserve opclass parameters across REINDEX CONCURRENTLY (Michael Paquier, 2021-11-01)

  The opclass parameter Datums from the old index are fetched in the same way as for predicates and expressions, by grabbing them directly from the system catalogs. They are then copied into the new IndexInfo that will be used for the creation of the new copy.

  This caused the new index to be rebuilt with default parameters rather than the ones pre-defined by a user. The only way to get back a new index with correct opclass parameters would be to recreate a new index from scratch.

  The issue has been introduced by 911e702.

  Author: Michael Paquier
  Reviewed-by: Zhihong Yu
  Discussion: https://postgr.es/m/YX0CG/QpLXcPr8HJ@paquier.xyz
  Backpatch-through: 13

* Doc: improve README files associated with TAP tests. (Tom Lane, 2021-10-31)

  Rearrange src/test/perl/README so that the first section is more clearly "how to run these tests", and the rest "how to write new tests". Add some basic info there about debugging test failures. Then, add cross-refs to that README from other READMEs that describe how to run TAP tests.

  Per suggestion from Kevin Burke, though this is not his original patch.

  Discussion: https://postgr.es/m/CAKcy5eiSbwiQnmCfnOnDCVC7B8fYyev3E=6pvvECP9pLE-Fcuw@mail.gmail.com

* Avoid O(N^2) behavior when the standby process releases many locks. (Tom Lane, 2021-10-31)

  When replaying a transaction that held many exclusive locks on the primary, a standby server's startup process would expend O(N^2) effort on manipulating the list of locks. This code was fine when written, but commit 1cff1b95a made repetitive list_delete_first() calls inefficient, as explained in its commit message.

  Fix by just iterating the list normally, and releasing storage only when done. (This'd be inadequate if we needed to recover from an error occurring partway through; but we don't.)

  Back-patch to v13 where 1cff1b95a came in.

  Nathan Bossart
  Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com

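  A standalone C demonstration of the complexity argument above, under the behavior the commit attributes to 1cff1b95a (an array-backed list, so deleting the head shifts every remaining element); the IntList type here is a stand-in for illustration, not PostgreSQL's List:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* A tiny array-backed list, standing in for PostgreSQL's array-based List. */
      typedef struct
      {
          int        *items;
          int         len;
      } IntList;

      static long moves;          /* counts element copies, a proxy for work */

      /* Deleting the head shifts every remaining element, like list_delete_first(). */
      static void
      delete_first(IntList *l)
      {
          memmove(l->items, l->items + 1, (size_t) (l->len - 1) * sizeof(int));
          moves += l->len - 1;
          l->len--;
      }

      int
      main(void)
      {
          int         n = 10000;
          IntList     a = {malloc(n * sizeof(int)), n};
          IntList     b = {malloc(n * sizeof(int)), n};

          /* Pattern the commit removes: pop the head once per lock released. */
          moves = 0;
          while (a.len > 0)
          {
              /* ... release the lock described by a.items[0] ... */
              delete_first(&a);
          }
          printf("delete-first loop: %ld element moves\n", moves);

          /* Pattern the commit adopts: walk the list, then release it once. */
          moves = 0;
          for (int i = 0; i < b.len; i++)
          {
              /* ... release the lock described by b.items[i] ... */
          }
          b.len = 0;              /* free the whole list only when done */
          printf("single pass:       %ld element moves\n", moves);

          free(a.items);
          free(b.items);
          return 0;
      }
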
* plpgsql: report proper line number for errors in variable initialization. (Tom Lane, 2021-10-31)

  Previously, we pointed at the surrounding block's BEGIN keyword. If there are multiple variables being initialized in a DECLARE section, this isn't good enough: it can be quite confusing and unhelpful. We do know where the variable's declaration started, so it just takes a tiny bit more error-reporting infrastructure to use that.

  Discussion: https://postgr.es/m/713975.1635530414@sss.pgh.pa.us

* pg_dump: Refactor messages (Peter Eisentraut, 2021-10-30)

  This reduces the number of separate messages for translation.

* Fix race condition in startup progress reporting. (Robert Haas, 2021-10-29)

  Commit 9ce346eabf350a130bba46be3f8c50ba28506969 added startup progress reporting, but begin_startup_progress_phase has a race condition: the timeout for the previous phase might fire just before we reschedule the interrupt for the next phase. To avoid the race, disable the timeout, clear the flag, and then re-enable the timeout.

  Patch by me, reviewed by Nitin Jadhav.
  Discussion: https://postgr.es/m/CA+TgmoYq38i6iAzfRLVxA6Cm+wMCf4WM8wC3o_a+X_JvWC8bJg@mail.gmail.com

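  A hedged backend-style sketch of the "disable, clear, re-enable" ordering; the timeout id, flag name, and interval variable are assumptions made for illustration, with only disable_timeout() and enable_timeout_after() taken from the backend's timeout API:

      /*
       * Editor's sketch, not the committed code.  The ordering matters: disarm
       * first, then clear, then re-arm, so the old timer cannot fire after the
       * flag has been cleared and get attributed to the new phase.
       */
      static volatile sig_atomic_t startup_progress_timer_expired = false; /* assumed name */

      static void
      begin_next_progress_phase(void)
      {
          /* 1. Disarm the previous phase's timer. */
          disable_timeout(STARTUP_PROGRESS_TIMEOUT, false);

          /* 2. Only now is it safe to clear the flag the handler sets. */
          startup_progress_timer_expired = false;

          /* 3. Arm the timer for the new phase. */
          enable_timeout_after(STARTUP_PROGRESS_TIMEOUT, log_startup_progress_interval);
      }
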
* When fetching WAL for a basebackup, report errors with a sensible TLI. (Robert Haas, 2021-10-29)

  The previous code used ThisTimeLineID, which need not even be initialized here, although it usually was in practice, because pg_basebackup issues IDENTIFY_SYSTEM before calling BASE_BACKUP, and that initializes ThisTimeLineID as a side effect. That's not really good enough, though, not only because we shouldn't be counting on side effects like that, but also because the TLI could change meanwhile. Fortunately, we have convenient access to more meaningful TLI values, so use those instead.

  Because of the way this logic is coded, the consequences of using a possibly-incorrect TLI here are no worse than a slightly confusing error message, but I don't want to take any risk here, so no back-patch, at least for now.

  Patch by me, reviewed by Kyotaro Horiguchi and Michael Paquier
  Discussion: http://postgr.es/m/CA+TgmoZRNWGWYDX9RgTXMG6_nwSdB=PB-PPRUbvMUTGfmL2sHQ@mail.gmail.com

* Demote pg_unreachable() in heapam to an assertion. (Peter Geoghegan, 2021-10-29)

  Commit d168b66682, which overhauled index deletion, added a pg_unreachable() to the end of a sort comparator used when sorting heap TIDs from an index page. This allows the compiler to apply optimizations that assume that the heap TIDs from the index AM must always be unique.

  That doesn't seem like a good idea now, given recent reports of corruption involving duplicate TIDs in indexes on Postgres 14. Demote to an assertion, just in case.

  Backpatch: 14-, where index deletion was overhauled.

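  To make the trade-off concrete, a hedged comparator sketch (an editor's illustration, not the code the commit touched): Assert(false) plus a plain return keeps production builds well-defined, while assert-enabled builds still detect duplicate TIDs:

      #include "postgres.h"
      #include "storage/itemptr.h"

      /*
       * Editor's sketch of the idea: order heap TIDs by (block, offset).
       * Equal TIDs should be impossible, but handle them defensively rather
       * than telling the compiler they cannot happen.
       */
      static int
      heap_tid_cmp(ItemPointer tid1, ItemPointer tid2)
      {
          BlockNumber blk1 = ItemPointerGetBlockNumber(tid1);
          BlockNumber blk2 = ItemPointerGetBlockNumber(tid2);
          OffsetNumber off1 = ItemPointerGetOffsetNumber(tid1);
          OffsetNumber off2 = ItemPointerGetOffsetNumber(tid2);

          if (blk1 != blk2)
              return (blk1 < blk2) ? -1 : 1;
          if (off1 != off2)
              return (off1 < off2) ? -1 : 1;

          Assert(false);          /* duplicate TIDs indicate corruption */
          return 0;               /* keep production builds well-defined */
      }
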
* Test and document the behavior of initialization cross-refs in plpgsql. (Tom Lane, 2021-10-29)

  We had a test showing that a variable isn't referenceable in its own initialization expression, nor in prior ones in the same block. It *is* referenceable in later expressions in the same block, but AFAICS there is no test case exercising that. Add one, and also add some error cases.

  Also, document that this is possible, since the docs failed to cover the point.

  Per question from tomás at tuxteam. I don't feel any need to back-patch this, but we should ensure we don't break it in future.

  Discussion: https://postgr.es/m/20211029121435.GA5414@tuxteam.de

* Update time zone data files to tzdata release 2021e. (Tom Lane, 2021-10-29)

  DST law changes in Fiji, Jordan, Palestine, and Samoa. Historical corrections for Barbados, Cook Islands, Guyana, Niue, Portugal, and Tonga.

  Also, the Pacific/Enderbury zone has been renamed to Pacific/Kanton. The following zones have been merged into nearby, more-populous zones whose clocks have agreed since 1970: Africa/Accra, America/Atikokan, America/Blanc-Sablon, America/Creston, America/Curacao, America/Nassau, America/Port_of_Spain, Antarctica/DumontDUrville, and Antarctica/Syowa.

* Add tap tests for the schema publications. (Amit Kapila, 2021-10-29)

  This adds additional tests for commit 5a2832465f ("Allow publishing the tables of schema."). This allows testing streaming of data in tables that are published via schema publications.

  Author: Vignesh C, Haiying Tang
  Reviewed-by: Greg Nancarrow, Hou Zhijie, Amit Kapila
  Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com

* Speed up TAP tests of pg_receivewal (Michael Paquier, 2021-10-29)

  This commit improves the speed of those tests by 25~30%, using some simple ideas to reduce the amount of data written by pg_receivewal:
  - Use a segment size of 1MB. While reducing the amount of data zeroed by pg_receivewal for the new segments, this improves the code coverage with a non-default segment size.
  - In the last test involving a slot's restart_lsn, generate a checkpoint to advance the redo LSN and the WAL retained by the slot created, reducing the number of segments that need to be archived. This counts for most of the gain.
  - Minimize the amount of data inserted into the dummy table.

  Reviewed-by: Ronan Dunklau
  Discussion: https://postgr.es/m/YXqYKAdVEqmyTltK@paquier.xyz

* Speed up printing of integers in snprintf.c. (Tom Lane, 2021-10-28)

  Since the only possible divisors are 8, 10, and 16, it doesn't cost much code space to replace the division loop with three copies using constant divisors. On most machines, division by a constant can be done a lot more cheaply than division by an arbitrary value.

  A microbenchmark testing just snprintf("foo %d") with a 9-digit value showed about a 2X speedup for me (tgl). Most of Postgres isn't too dependent on the speed of snprintf, so that the effect in real-world cases is barely measurable. Still, a cycle saved is a cycle earned.

  Arjan van de Ven
  Discussion: https://postgr.es/m/40a4b32a-b841-4667-11b2-a0baedb12714@linux.intel.com
  Discussion: https://postgr.es/m/6e51c644-1b6d-956e-ac24-2d1b0541d532@linux.intel.com

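  A standalone C sketch of the constant-divisor idea (an editor's illustration, not snprintf.c's code): with the divisor fixed at 10 the compiler can use multiply-and-shift instead of a hardware divide, and snprintf.c simply keeps one such copy per supported base (8, 10, 16):

      #include <stdio.h>

      /* Convert an unsigned value to decimal using only a constant divisor. */
      static int
      utoa_base10(unsigned int value, char *buf)
      {
          char        tmp[16];
          int         len = 0;

          do
          {
              tmp[len++] = (char) ('0' + value % 10);   /* constant divisor */
              value /= 10;                              /* constant divisor */
          } while (value > 0);

          /* digits were produced least-significant-first; reverse them */
          for (int i = 0; i < len; i++)
              buf[i] = tmp[len - 1 - i];
          buf[len] = '\0';
          return len;
      }

      int
      main(void)
      {
          char        buf[16];

          utoa_base10(123456789u, buf);
          printf("%s\n", buf);
          return 0;
      }
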
* Improve contrib/amcheck's tests for CREATE INDEX CONCURRENTLY. (Tom Lane, 2021-10-28)

  Commits fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by launching two separate pgbench runs concurrently. This was needed so that only a single client thread would run CREATE INDEX CONCURRENTLY, avoiding deadlock between two CICs. However, there's a better way, which is to use an advisory lock to prevent concurrent CICs. That's better in part because the test code is shorter and more readable, but mostly because it automatically scales things to launch an appropriate number of CICs relative to the number of INSERT transactions. As committed, typically half to three-quarters of the CIC transactions were pointless because the INSERT transactions had already stopped.

  In passing, remove background_pgbench, which was added to support these tests and isn't needed anymore. We can always put it back if we find a use for it later.

  Back-patch to v12; older pgbench versions lack the conditional-execution features needed for this method.

  Tom Lane and Andrey Borodin
  Discussion: https://postgr.es/m/139687.1635277318@sss.pgh.pa.us

* Add TAP test for archive_cleanup_command and recovery_end_command (Michael Paquier, 2021-10-28)

  This adds tests checking for the execution of both commands. The recovery test 002_archiving.pl is nicely adapted to that, as promotion is triggered already twice there, and even if any of those commands fail they don't affect recovery or promotion.

  A command success is checked using a file generated by an "echo" command, that should be able to work in all the buildfarm environments, even Msys (but we'll know soon about that). Command failure is tested with an "echo" command that points to a path that does not exist, scanning the backend logs to make sure that the failure happens. Both rely on the backend triggering the commands from the root of the data folder, making its logic more robust.

  Thanks to Neha Sharma for the extra tests on Windows.

  Author: Amul Sul, Michael Paquier
  Reviewed-by: Andres Freund, Euler Taveira
  Discussion: https://postgr.es/m/CAAJ_b95R_c4T5moq30qsybSU=eDzDHm=4SPiAWaiMWc2OW7=1Q@mail.gmail.com

* Remove obsolete nbtree LP_DEAD item comments. (Peter Geoghegan, 2021-10-27)

  Comments above _bt_findinsertloc() that talk about LP_DEAD items are now out of place. We already discuss index tuple deletion at an earlier point in the same comment block.

  Oversight in commit d168b666.

* Grant memory views to pg_read_all_stats. (Jeff Davis, 2021-10-27)

  Grant privileges on views pg_backend_memory_contexts and pg_shmem_allocations to the role pg_read_all_stats. Also grant on the underlying functions that those views depend on.

  Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Reviewed-by: Nathan Bossart <bossartn@amazon.com>
  Discussion: https://postgr.es/m/CALj2ACWAZo3Ar_EVsn2Zf9irG+hYK3cmh1KWhZS_Od45nd01RA@mail.gmail.com

* Fix typos in comments (Daniel Gustafsson, 2021-10-27)

  Author: Peter Smith <smithpb2250@gmail.com>
  Discussion: https://postgr.es/m/CAHut+PsN_gmKu-KfeEb9NDARoTPbs4AN4PPu=6LZXFZRJ13SEw@mail.gmail.com

* Fix ordering of items in nbtree error message. (Peter Geoghegan, 2021-10-27)

  Oversight in commit a5213adf.

  Backpatch: 13-, just like commit a5213adf.

* Fix VPATH builds for src/test/ssl targets (Daniel Gustafsson, 2021-10-27)

  Commit b4c4a00ea refactored the gist of the sslfiles target into a separate makefile in order to override settings in Makefile.global. The invocation of this file didn't however include the absolute path for VPATH builds, resulting in "make clean" failing. Fix by providing the path to the new makefile.

  Reported-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/20211026174152.jjcagswnbhxu7uqz@alap3.anarazel.de

* Further harden nbtree posting split code. (Peter Geoghegan, 2021-10-27)

  Add more defensive checks around posting list split code. These should detect corruption involving duplicate table TIDs earlier and more reliably than any existing check.

  Follow up to commit 8f72bbac.

  Discussion: https://postgr.es/m/CAH2-WzkrSY_kjyd1_M5xJK1uM0govJXMxPn8JUSvwcUOiHuWVw@mail.gmail.com
  Backpatch: 13-, where nbtree deduplication was introduced.