path: root/src
...
* Clarify that --system reindexes system catalogs *only*  (Magnus Hagander, 2021-10-27)
  Make this more clear both in the help message and docs.
  Reviewed-By: Michael Paquier
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/CABUevEw6Je0WUFTLhPKOk4+BoBuDrE-fKw3N4ckqgDBMFu4paA@mail.gmail.com
* Add test for copy of shared dependencies from template database  (Michael Paquier, 2021-10-27)
  As 98ec35b has proved, there has never been any coverage in this area of the code. This commit adds a new TAP test with a template database that includes a small set of shared dependencies copied to a new database. The test is added in createdb, where we have never tested that -T generates a query with TEMPLATE, either.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/YXDTl+PfSnqmbbkE@paquier.xyz
* Allow publishing the tables of a schema.  (Amit Kapila, 2021-10-27)
  A new option "FOR ALL TABLES IN SCHEMA" in CREATE/ALTER PUBLICATION allows one or more schemas to be specified, whose tables are selected by the publisher for sending the data to the subscriber.
  The new syntax allows specifying both tables and schemas. For example:
  CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
  OR
  ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2;
  A new system table "pg_publication_namespace" has been added to maintain the schemas that the user wants to publish through the publication.
  The output plugin (pgoutput) is modified to publish the changes if the relation is part of a schema publication. pg_dump is updated to identify and dump schema publications, and the \d family of commands now displays schema publications; the \dRp+ variant displays the associated schemas, if any.
  Author: Vignesh C, Hou Zhijie, Amit Kapila
  Syntax-Suggested-by: Tom Lane, Alvaro Herrera
  Reviewed-by: Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger
  Tested-by: Haiying Tang
  Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
* Allow GRANT on pg_log_backend_memory_contexts().  (Jeff Davis, 2021-10-26)
  Remove superuser check, allowing any user granted permissions on pg_log_backend_memory_contexts() to log the memory contexts of any backend.
  Note that this could allow a privileged non-superuser to log the memory contexts of a superuser backend, but as discussed, that does not seem to be a problem.
  Reviewed-by: Nathan Bossart, Bharath Rupireddy, Michael Paquier, Kyotaro Horiguchi, Andres Freund
  Discussion: https://postgr.es/m/e5cf6684d17c8d1ef4904ae248605ccd6da03e72.camel@j-davis.com
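  A minimal sketch of what this enables; the role name and backend PID are hypothetical, the function signature (a single integer PID) matches the existing pg_log_backend_memory_contexts() function:
    -- Grant execution to a non-superuser monitoring role.
    GRANT EXECUTE ON FUNCTION pg_log_backend_memory_contexts(integer) TO monitor_role;
    -- Executed as monitor_role: ask backend 12345 to log its memory contexts.
    SELECT pg_log_backend_memory_contexts(12345);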
* Improve HINT message that FDW reports when there are no valid options.  (Fujii Masao, 2021-10-27)
  The foreign data wrapper's validator function provides a HINT message with the list of valid options for the object specified in a CREATE or ALTER command, when the option given in the command is invalid. Previously postgresql_fdw_validator() and the validator functions for postgres_fdw and dblink_fdw worked that way even when the object had no valid options at all, which could lead to a HINT message with an empty list. For example, ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv') reported the following ERROR and HINT messages. This behavior was confusing.
  ERROR:  invalid option "format"
  HINT:  Valid options in this context are:
  There is no such issue in file_fdw. The validator function for file_fdw reports the HINT message "There are no valid options in this context." instead in that case. This commit improves postgresql_fdw_validator() and the validator functions for postgres_fdw and dblink_fdw so that they do likewise. For example, this change causes the above ALTER FOREIGN DATA WRAPPER command to report the following messages.
  ERROR:  invalid option "format"
  HINT:  There are no valid options in this context.
  Author: Kosei Masumura
  Reviewed-by: Bharath Rupireddy, Fujii Masao
  Discussion: https://postgr.es/m/557d06cebe19081bfcc83ee2affc98d3@oss.nttdata.com
* Ensure that slots are zeroed before use  (Daniel Gustafsson, 2021-10-26)
  The previous coding relied on the memory for the slots being zeroed elsewhere, which while true in this case is not a contract which is guaranteed to hold. Explicitly clear the tts_isnull array to ensure that the slots are filled from a known state.
  Backpatch to v14 where the catalog multi-inserts were introduced.
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/CAJ7c6TP0AowkUgNL6zcAK-s5HYsVHVBRWfu69FRubPpfwZGM9A@mail.gmail.com
  Backpatch-through: 14
* Fix overly-lax regex pattern in TAP test of READ_REPLICATION_SLOT  (Michael Paquier, 2021-10-26)
  The case checking for a NULL output when a slot does not exist was too lax, as it was passing for any output generated by the query. This fixes the matching pattern to be what it should be, matching only on "||".
  Oversight in b4ada4e.
* Allow pg_receivewal to stream from a slot's restart LSN  (Michael Paquier, 2021-10-26)
  Prior to this patch, when running pg_receivewal, the streaming start point would be the current location of the archives if anything is found in the local directory where WAL segments are written, and pg_receivewal would fall back to the current WAL flush location, as reported by an IDENTIFY_SYSTEM command, if there are no archives. If for some reason the WAL files from pg_receivewal were moved, it is better to try a restart from where we left off, which is the replication slot's restart_lsn, instead of skipping right to the current flush location, to avoid holes in the WAL backed up.
  This commit changes pg_receivewal to use the following sequence of methods to determine the starting streaming LSN:
  - Scan the local archives.
  - Use the slot's restart_lsn, if supported by the backend and if a slot is defined.
  - Fall back to the current flush LSN as reported by IDENTIFY_SYSTEM.
  To keep compatibility with older server versions, we only attempt to use READ_REPLICATION_SLOT if the backend version is at least 15, and fall back to the older behavior of streaming from the current flush LSN if the command is not supported. Some TAP tests are added to cover this feature.
  Author: Ronan Dunklau
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy
  Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
* Reject huge_pages=on if shared_memory_type=sysv.  (Thomas Munro, 2021-10-26)
  It doesn't work (it could, but hasn't been implemented). Back-patch to 12, where shared_memory_type arrived.
  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/163271880203.22789.1125998876173795966@wrigleys.postgresql.org
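  For reference, both GUCs are postmaster-start settings; a rough sketch of the now-rejected combination, using ALTER SYSTEM purely as one way of editing the configuration (the scenario is illustrative, not taken from the commit):
    -- Both settings require a server restart to take effect.
    ALTER SYSTEM SET shared_memory_type = 'sysv';
    ALTER SYSTEM SET huge_pages = 'on';
    -- After a restart, the server now refuses to start with this combination.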
* Initialize variable to placate compiler.  (Robert Haas, 2021-10-25)
  Per Nathan Bossart.
  Discussion: http://postgr.es/m/FECEE7FC-CB74-45A9-BB24-89FEE52A9585@amazon.com
* Report progress of startup operations that take a long time.  (Robert Haas, 2021-10-25)
  Users sometimes get concerned when they start the server and it emits a few messages and then doesn't emit any more messages for a long time. Generally, what's happening is either that the system is taking a long time to apply WAL, or it's taking a long time to reset unlogged relations, or it's taking a long time to fsync the data directory, but it's not easy to tell which is the case.
  To fix that, add a new 'log_startup_progress_interval' setting, by default 10s. When an operation that is known to be potentially long-running takes more than this amount of time, we'll log a status update each time this interval elapses.
  To avoid undesirable log chatter, don't log anything about WAL replay when in standby mode.
  Nitin Jadhav and Robert Haas, reviewed by Amul Sul, Bharath Rupireddy, Justin Pryzby, Michael Paquier, and Álvaro Herrera.
  Discussion: https://postgr.es/m/CA+TgmoaHQrgDFOBwgY16XCoMtXxsrVGFB2jNCvb7-ubuEe1MGg@mail.gmail.com
  Discussion: https://postgr.es/m/CAMm1aWaHF7VE69572_OLQ+MgpT5RUiUDgF1x5RrtkJBLdpRj3Q@mail.gmail.com
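  As a rough sketch, the interval can be tuned like other logging settings; the '10s' value is simply the default named in the commit message, and the assumption here is that the setting can be reloaded without a restart:
    -- Adjust the reporting interval; a value of 0 is expected to disable the messages.
    ALTER SYSTEM SET log_startup_progress_interval = '10s';
    SELECT pg_reload_conf();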
* Add enable_timeout_every() to fire the same timeout repeatedly.  (Robert Haas, 2021-10-25)
  enable_timeout_at() and enable_timeout_after() can still be used when you want to fire a timeout just once.
  Patch by me, per a suggestion from Tom Lane.
  Discussion: http://postgr.es/m/2992585.1632938816@sss.pgh.pa.us
  Discussion: http://postgr.es/m/CA+TgmoYqSF5sCNrgTom9r3Nh=at4WmYFD=gsV-omStZ60S0ZUQ@mail.gmail.com
* Remove useless code from CreateReplicationSlot.  (Robert Haas, 2021-10-25)
  According to the comments, we initialize sendTimeLineIsHistoric and sendTimeLine here for the benefit of WalSndSegmentOpen. However, the only way that can happen is if logical_read_xlog_page calls WALRead. And since logical_read_xlog_page initializes the same global variables internally, we don't need to also do it here.
  These initializations have been here since replication slots were introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc. They were certainly useless at that time, too, because logical decoding didn't yet exist then, and physical replication doesn't examine any WAL at the time of slot creation. I haven't checked all the intermediate versions, but I suspect there's no point at which this code ever did anything useful.
  To reduce future confusion, remove the code. Since there's no functional defect, no back-patch.
  Discussion: http://postgr.es/m/CA+TgmobSWzacEs+r6C-7DrOPDHoDar4i9gzxB3SCBr5qjnLmVQ@mail.gmail.com
* StartupXLOG: Don't repeatedly disable/enable local xlog insertion.  (Robert Haas, 2021-10-25)
  All the code that runs in the startup process to write WAL records, before that's generally allowed, is now consecutive, so there's no reason to shut the facility to write WAL locally off and then turn it on again three times in a row.
  Unfortunately, this requires a slight kludge in the checkpointer, which needs to separately enable writing WAL in order to write the checkpoint record. Because that code might run in the same process as StartupXLOG() if we are in single-user mode, we must save/restore the state of the LocalXLogInsertAllowed flag. Hopefully, we'll be able to eliminate this wart in further refactoring, but it's not too bad anyway.
  Amul Sul, with modifications by me.
  Discussion: http://postgr.es/m/CAAJ_b97fysj6sRSQEfOHj-y8Jfd5uPqOgO74qast89B4WfD+TA@mail.gmail.com
* StartupXLOG: Call CleanupAfterArchiveRecovery after XLogReportParameters.  (Robert Haas, 2021-10-25)
  This does a better job grouping related operations together, since all of the WAL records that we need to write prior to allowing WAL writes generally are written by a single uninterrupted stretch of code.
  Since CleanupAfterArchiveRecovery() just (1) runs recovery_end_command, (2) removes non-parent xlog files, and (3) archives any final partial segment, this should be safe, because all of those things are pretty much unrelated to the WAL record written by XLogReportParameters().
  Amul Sul, per a suggestion from me
  Discussion: http://postgr.es/m/CAAJ_b97fysj6sRSQEfOHj-y8Jfd5uPqOgO74qast89B4WfD+TA@mail.gmail.com
* Clarify the logic in a few places in the new balanced merge code.  (Heikki Linnakangas, 2021-10-25)
  In selectnewtape(), use 'nOutputTapes' rather than 'nOutputRuns' in the check for whether to start a new tape or to append a new run to an existing tape. Until 'maxTapes' is reached, nOutputTapes is always equal to nOutputRuns, so it doesn't change the logic, but it seems more logical to compare # of tapes with # of tapes. Also, currently maxTapes is never modified after the merging begins, but written this way, the code would still work if it was. (Although the nOutputRuns == nOutputTapes assertion would need to be removed, and using nOutputRuns % nOutputTapes to distribute the runs evenly across the tapes wouldn't do a good job anymore.)
  Similarly in mergeruns(), change to USEMEM(state->tape_buffer_mem) to account for the memory used for tape buffers. It's equal to availMem currently, but tape_buffer_mem is more direct and future-proof. For example, if we changed the logic to only allocate half of the remaining memory to tape buffers, USEMEM(state->tape_buffer_mem) would still be correct.
  Coverity complained about these. Hopefully this patch helps it to understand the logic better. Thanks to Tom Lane for initial analysis.
* Add replication command READ_REPLICATION_SLOT  (Michael Paquier, 2021-10-25)
  The command is supported for physical slots for now, and returns the type of slot, its restart_lsn and its restart_tli. This will be useful for an upcoming patch related to pg_receivewal, to allow the tool to stream from the position of a slot, rather than the last WAL position flushed by the backend (as reported by IDENTIFY_SYSTEM) if the archive directory is found to be empty, which would be an advantage in the case of switching to a different archive location with the same slot, used to avoid holes in WAL segment archives.
  Author: Ronan Dunklau
  Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy
  Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
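  A rough sketch of issuing the new command; the slot name is hypothetical, and the described output fields are simply those named in the commit message:
    -- Issued on a physical replication connection (e.g. psql "replication=true"):
    READ_REPLICATION_SLOT my_physical_slot;
    -- Returns the slot's type, restart_lsn and restart_tli for an existing physical slot.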
* Fix minor memory leaks in pg_dump.  (Tom Lane, 2021-10-24)
  I found these by running pg_dump under "valgrind --leak-check=full".
  The changes in flagInhIndexes() and getIndexes() replace allocation of an array, of which we use only some elements, with individual allocations of just the actually-needed objects. The previous coding wasted some memory, but more importantly it confused valgrind's leak tracking.
  collectComments() and collectSecLabels() remain major blots on the valgrind report, because they don't PQclear their query results, in order to avoid a lot of strdup's. That's a dubious tradeoff, but I'll leave it alone here; an upcoming patch will modify those functions enough to justify changing the tradeoff.
* Move Perl test modules to a better namespace  (Andrew Dunstan, 2021-10-24)
  The five modules in our TAP test framework all had names in the top level namespace. This is unwise because, even though we're not exporting them to CPAN, the names can leak, for example if they are exported by the RPM build process. We therefore move the modules to the PostgreSQL::Test namespace. In the process PostgresNode is renamed to Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply PostgreSQL::Version, to avoid possible confusion about what it's the version of.
  Discussion: https://postgr.es/m/aede93a4-7d92-ef26-398f-5094944c2504@dunslane.net
  Reviewed by Erik Rijkers and Michael Paquier
* Fix CREATE INDEX CONCURRENTLY for the newest prepared transactions.  (Noah Misch, 2021-10-23)
  The purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to fix this, and it sufficed when the PREPARE TRANSACTION completed before the CIC looked for lock conflicts. Otherwise, things still broke. As before, in a cluster having used CIC while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. It may be necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
  Fix this for future index builds by making CIC wait for arbitrarily-recent prepared transactions and for ordinary transactions that may yet PREPARE TRANSACTION. As part of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all supported versions).
  Andrey Borodin, reviewed (in earlier versions) by Andres Freund.
  Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
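  For clusters that may have been affected (by this issue or the related RelationBuildDesc race fixed just below), the remediation named in both commit messages is an online rebuild of any suspect index; a minimal sketch with hypothetical object names:
    -- Rebuilds one index without blocking concurrent reads and writes.
    REINDEX INDEX CONCURRENTLY my_suspect_index;
    -- Or rebuild all indexes on a table at once.
    REINDEX TABLE CONCURRENTLY my_table;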
* Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.  (Noah Misch, 2021-10-23)
  CIC and REINDEX CONCURRENTLY assume backends see their catalog changes no later than each backend's next transaction start. That failed to hold when a backend absorbed a relevant invalidation in the middle of running RelationBuildDesc() on the CIC index. Queries that use the resulting index can silently fail to find rows. Fix this for future index builds by making RelationBuildDesc() loop until it finishes without accepting a relevant invalidation. It may be necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices. Back-patch to 9.6 (all supported versions).
  Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres Freund.
  Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
* In pg_dump, use simplehash.h to look up dumpable objects by OID.  (Tom Lane, 2021-10-22)
  Create a hash table that indexes dumpable objects by CatalogId (that is, catalog OID + object OID). Use this to replace the former catalogIdMap array, as well as various other single-catalog index arrays, and also the extension membership map.
  In principle this should be faster for databases with many objects, since lookups are now O(1) not O(log N). However, it seems that these lookups are pretty much negligible in context, so that no overall performance change can be measured. But having only one lookup data structure to maintain makes the code simpler and more flexible, so let's do it anyway.
  Discussion: https://postgr.es/m/2595220.1634855245@sss.pgh.pa.us
* Fix frontend version of sh_error() in simplehash.h.  (Tom Lane, 2021-10-22)
  The code does not expect sh_error() to return, but the patch that made this header usable in frontend didn't get that memo. While here, plaster unlikely() on the tests that decide whether to invoke sh_error(), and add our standard copyright notice.
  Noted by Andres Freund. Back-patch to v13 where this frontend support came in.
  Discussion: https://postgr.es/m/0D54435C-1199-4361-9D74-2FBDCF8EA164@anarazel.de
* pg_dump: fix mis-dumping of non-global default privileges.  (Tom Lane, 2021-10-22)
  Non-global default privilege entries should be dumped as-is, not made relative to the default ACL for their object type. This would typically only matter if one had revoked some on-by-default privileges in a global entry, and then wanted to grant them again in a non-global entry.
  Per report from Boris Korzun. This is an old bug, so back-patch to all supported branches.
  Neil Chen, test case by Masahiko Sawada
  Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
  Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
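  The kind of setup the fix is about could look like the following (the schema name is made up for illustration): revoke an on-by-default privilege globally, grant it back for one schema, and expect pg_dump to reproduce both entries verbatim.
    -- Global entry: newly created functions are no longer executable by PUBLIC.
    ALTER DEFAULT PRIVILEGES REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
    -- Non-global entry: functions created in schema app stay executable by PUBLIC.
    ALTER DEFAULT PRIVILEGES IN SCHEMA app GRANT EXECUTE ON FUNCTIONS TO PUBLIC;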
* Add module build directory to the PATH for TAP tests  (Andrew Dunstan, 2021-10-22)
  For non-MSVC builds this is make's $(CURDIR), while for MSVC builds it is $topdir/$Config/$module. The directory is added as the second element in the PATH, so that the install location takes precedence, but the added PATH element takes precedence over the rest of the PATH.
  The reason for this is to allow tests to find built products that are not installed, such as the libpq_pipeline test driver. The libpq_pipeline test is adjusted to take advantage of this.
  Based on a suggestion from Andres Freund. Backpatch to release 14.
  Discussion: https://postgr.es/m/4941f5a5-2d50-1a0e-6701-14c5fefe92d6@dunslane.net
* Doc: clarify a critical and undocumented aspect of simplehash.h.  (Tom Lane, 2021-10-21)
  I just got burnt by trying to use pg_malloc instead of pg_malloc0 with this. Save the next hacker some time by not leaving this API detail undocumented.
* Fix SSL tests on 32-bit Perl  (Daniel Gustafsson, 2021-10-21)
  The certificate serial number generation was changed in b4c4a00ea to use the current timestamp. The test harness must thus interrogate the cert for the serial number using "openssl x509", which emits the serial in hex format. Converting the serial to integer format, to match what's in pg_stat_ssl, requires a 64-bit capable Perl. This adds a fallback to checking for an integer when the tests run with a 32-bit Perl.
  Per failure on buildfarm member prairiedog.
  Discussion: https://postgr.es/m/0D295F43-806D-4B3F-AB98-F941A19E0271@yesql.se
* Remove unused wait events.  (Amit Kapila, 2021-10-21)
  Commit 464824323e introduced the wait events which were neither used by that commit nor by follow-up commits for that work.
  Author: Masahiro Ikeda
  Backpatch-through: 14, where it was introduced
  Discussion: https://postgr.es/m/ff077840-3ab2-04dd-bbe4-4f5dfd2ad481@oss.nttdata.com
* Fix corruption of pg_shdepend when copying deps from template database  (Michael Paquier, 2021-10-21)
  Creating a new database from a template database with shared dependencies that need to be copied over was corrupting pg_shdepend, because of an off-by-one error in the computation of the index number used for the values inserted with a slot.
  Issue introduced by e3931d0. Monitoring the rest of the code, there are no similar mistakes.
  Reported-by: Sven Klemm
  Author: Aleksander Alekseev
  Reviewed-by: Daniel Gustafsson, Michael Paquier
  Discussion: https://postgr.es/m/CAJ7c6TP0AowkUgNL6zcAK-s5HYsVHVBRWfu69FRubPpfwZGM9A@mail.gmail.com
  Backpatch-through: 14
* Improve pg_regress.c's infrastructure for issuing psql commands.  (Tom Lane, 2021-10-20)
  Support issuing more than one "-c command" switch to a single psql invocation. This allows combining some things that formerly required two or more backend launches into a single session. In particular, we can issue DROP DATABASE as one of the -c commands without getting "DROP DATABASE cannot run inside a transaction block".
  In addition to reducing the number of sessions needed, this patch also suppresses "NOTICE: database "foo" does not exist, skipping" chatter that was formerly generated during pg_regress's DROP DATABASE (or ROLE) IF EXISTS calls. That moves us another step closer to the ideal of not seeing any messages during a successful build/test.
  This also eliminates some hard-coded restrictions on the length of the commands issued. I don't think we were anywhere near hitting those, but getting rid of the limit is comforting.
  Patch by me, but thanks to Nathan Bossart for starting the discussion.
  Discussion: https://postgr.es/m/DCBAE0E4-BD56-482F-8A70-7FD0DC0860BE@amazon.com
* Protect against collation variations in test  (Alvaro Herrera, 2021-10-20)
  Discussion: https://postgr.es/m/YW/MYdSRQZtPFBWR@paquier.xyz
* Fix build of MSVC with OpenSSL 3.0.0  (Michael Paquier, 2021-10-20)
  The build scripts of Visual Studio would fail to detect properly a 3.0.0 build as the check on the second digit was failing. This is adjusted where needed, allowing the builds to complete. Note that the MSIs of OpenSSL mentioned in the documentation have not changed any library names for Win32 and Win64, making this change straight-forward.
  Reported-by: htalaco, via github
  Reviewed-by: Daniel Gustafsson
  Discussion: https://postgr.es/m/YW5XKYkq6k7OtrFq@paquier.xyz
  Backpatch-through: 9.6
* Ensure correct lock level is used in ALTER ... RENAME  (Alvaro Herrera, 2021-10-19)
  Commit 1b5d797cd4f7 intended to relax the lock level used to rename indexes, but inadvertently allowed *any* relation to be renamed with a lowered lock level, as long as the command is spelled ALTER INDEX. That's undesirable for other relation types, so retry the operation with the higher lock if the relation turns out not to be an index.
  After this fix, ALTER INDEX <sometable> RENAME will require access exclusive lock, which it didn't before.
  Author: Nathan Bossart <bossartn@amazon.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Onder Kalaci <onderk@microsoft.com>
  Discussion: https://postgr.es/m/PH0PR21MB1328189E2821CDEC646F8178D8AE9@PH0PR21MB1328.namprd21.prod.outlook.com
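  In other words (object names here are illustrative), the relaxed lock from 1b5d797cd4f7 now applies only when the target really is an index:
    -- Renaming a real index keeps the relaxed locking introduced by 1b5d797cd4f7.
    ALTER INDEX my_index RENAME TO my_index_v2;
    -- Spelling ALTER INDEX against a plain table still works, but after this fix
    -- the rename is retried with the stronger lock, i.e. ACCESS EXCLUSIVE.
    ALTER INDEX my_table RENAME TO my_table_v2;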
* pg_dump: Reorganize getTables()  (Tom Lane, 2021-10-19)
  Along the same lines as 047329624, ed2c7f65b and daa9fe8a5, reduce code duplication by having just one copy of the parts of the query that are the same across all server versions; and make the conditionals control the smallest possible amount of code. This also gets rid of the confusing assortment of different ways to accomplish the same result that we had here before.
  While at it, make sure all three relevant parts of the function list the fields in the same order. This is just neatnik-ism, of course.
  Discussion: https://postgr.es/m/1240992.1634419055@sss.pgh.pa.us
* Adapt src/test/ldap/t/001_auth.pl to work with openldap 2.5.  (Andres Freund, 2021-10-19)
  ldapsearch's deprecated -h/-p arguments were removed; we need to use -H now, which has been around for over 20 years.
  As perltidy insists on reflowing the parameters anyway, change order and "phrasing" to yield a less confusing layout (per suggestion from Tom Lane).
  Discussion: https://postgr.es/m/20211009233850.wvr6apcrw2ai6cnj@alap3.anarazel.de
  Backpatch: 11-, where the tests were added.
* Refactor the sslfiles Makefile target for ease of use  (Daniel Gustafsson, 2021-10-19)
  The Makefile handling of certificates and keypairs used for TLS testing had become quite difficult to work with. Adding a new cert without the need to regenerate everything was too complicated. This patch refactors the sslfiles make target such that adding a new certificate requires only adding a .config file, adding it to the top of the Makefile, and running make sslfiles.
  Improvements:
  - Interfile dependencies should be fixed, with the exception of the CRL dirs.
  - New certificates have serial numbers based on the current time, reducing the chance of collision.
  - The CA index state is created on demand and cleaned up automatically at the end of the Make run.
  - *.config files are now self-contained; one certificate needs one config file instead of two.
  - Duplication is reduced, and along with it some unneeded code (and possible copy-paste errors).
  - All configuration files now live underneath the conf/ directory.
  The target is moved to its own makefile in order to avoid colliding with global make settings.
  Author: Jacob Champion <pchampion@vmware.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/d15a9838344ba090e09fd866abf913584ea19fb7.camel@vmware.com
* Fix assignment to array of domain over composite.  (Tom Lane, 2021-10-19)
  An update such as "UPDATE ... SET fld[n].subfld = whatever" failed if the array elements were domains rather than plain composites. That's because isAssignmentIndirectionExpr() failed to cope with the CoerceToDomain node that would appear in the expression tree in this case. The result would typically be a crash, and even if we accidentally didn't crash, we'd not correctly preserve other fields of the same array element.
  Per report from Onder Kalaci. Back-patch to v11 where arrays of domains came in.
  Discussion: https://postgr.es/m/PH0PR21MB132823A46AA36F0685B7A29AD8BD9@PH0PR21MB1328.namprd21.prod.outlook.com
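  A minimal sketch of the affected statement shape, with made-up type and table names; the final UPDATE is the kind of assignment the commit describes:
    CREATE TYPE comp AS (x int, y int);
    CREATE DOMAIN dcomp AS comp;                         -- domain over a composite type
    CREATE TABLE t (arr dcomp[]);
    INSERT INTO t VALUES (ARRAY[ROW(1, 2)::comp::dcomp]);
    -- Field assignment into one element of an array of domain-over-composite;
    -- before the fix this could crash or fail to preserve arr[1].y.
    UPDATE t SET arr[1].x = 42;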
* Remove bogus assertion in transformExpressionList().  (Tom Lane, 2021-10-19)
  I think when I added this assertion (in commit 8f889b108), I was only thinking of the use of transformExpressionList at top level of INSERT and VALUES. But it's also called by transformRowExpr(), which can certainly occur in an UPDATE targetlist, so it's inappropriate to suppose that p_multiassign_exprs must be empty. Besides, since the input is not expected to contain ResTargets, there's no reason it should contain MultiAssignRefs either. Hence this code need not be concerned about the state of p_multiassign_exprs, and we should just drop the assertion.
  Per bug #17236 from ocean_li_996. It's been wrong for years, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/17236-3210de9bcba1d7ca@postgresql.org
* Fix bug in TOC file error message printing  (Daniel Gustafsson, 2021-10-19)
  If the blob TOC file cannot be parsed, the error message was failing to print the filename, as the variable holding it was shadowed by the destination buffer for parsing. When the file fails to parse, the error would print an empty string:
  ./pg_restore -d foo -F d dump
  pg_restore: error: invalid line in large object TOC file "": ..
  ..instead of the intended error message:
  ./pg_restore -d foo -F d dump
  pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ..
  Fix by renaming both variables, as the shared name was too generic to store either and still convey what the variable held.
  Backpatch all the way down to 9.6.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/A2B151F5-B32B-4F2C-BA4A-6870856D9BDE@yesql.se
  Backpatch-through: 9.6
* Fix sscanf limits in pg_basebackup and pg_dump  (Daniel Gustafsson, 2021-10-19)
  Make sure that the string parsing is limited by the size of the destination buffer.
  In pg_basebackup the available values sent from the server are limited to two characters, so there was no risk of overflow.
  In pg_dump the buffer is bounded by MAXPGPATH, and thus the limit must be inserted via preprocessor expansion and the buffer increased by one to account for the terminator. There is no risk of overflow here, since in this case the buffer scanned is smaller than the destination buffer.
  Backpatch the pg_basebackup fix to 11 where it was introduced, and the pg_dump fix all the way down to 9.6.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
  Backpatch-through: 11 and 9.6
* Block ALTER INDEX/TABLE index_name ALTER COLUMN colname SET (options)  (Michael Paquier, 2021-10-19)
  The grammar of this command run on indexes with column names has always been authorized by the parser, but it has never been documented.
  Since 911e702, it is possible to define opclass parameters as of CREATE INDEX, which actually broke the old case of ALTER INDEX/TABLE where the relation-level parameters n_distinct and n_distinct_inherited could be defined for an index (see 76a47c0 and its thread where this point was touched on, though it remained unused). Attempting to do that in v13~ would cause the index to become unusable, as there is a new dedicated code path to load opclass parameters instead of the relation-level ones previously available. Note that it is possible to fix things with a manual catalog update to bring the relation back online.
  This commit disables this command for now, as the use of column names for indexes does not make sense anyway, particularly when it comes to index expressions where names are automatically computed. One way to properly support this case in the future would be to use column numbers when it comes to indexes, in the same way as ALTER INDEX .. ALTER COLUMN .. SET STATISTICS.
  Partitioned indexes were already blocked, but plain indexes were not. Some tests are added for both cases. There was some code in ANALYZE to enforce n_distinct to be used for an index expression if the parameter was defined, but it is just removed for now until/if there is support for this (note that index-level parameters never had support in pg_dump either, previously), so this was just dead code.
  Reported-by: Matthijs van der Vleuten
  Author: Nathan Bossart, Michael Paquier
  Reviewed-by: Vik Fearing, Dilip Kumar
  Discussion: https://postgr.es/m/17220-15d684c6c2171a83@postgresql.org
  Backpatch-through: 13
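  To illustrate (the index name, column name and values are made up), the named-column form is now rejected while the column-number form for statistics targets, mentioned above, remains available:
    -- Now rejected: per-column reloptions addressed by column name on an index.
    ALTER INDEX my_expr_idx ALTER COLUMN expr SET (n_distinct = 100);
    -- Still supported: per-column statistics target addressed by column number.
    ALTER INDEX my_expr_idx ALTER COLUMN 1 SET STATISTICS 500;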
* Invalidate partitions of table being attached/detached  (Alvaro Herrera, 2021-10-18)
  Failing to do that, any direct inserts/updates of those partitions would fail to enforce the correct constraint, that is, one that considers the new partition constraint of their parent table.
  Backpatch to 10.
  Reported by: Hou Zhijie <houzj.fnst@fujitsu.com>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
  Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
  Discussion: https://postgr.es/m/OS3PR01MB5718DA1C4609A25186D1FBF194089%40OS3PR01MB5718.jpnprd01.prod.outlook.com
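  A rough sketch of the scenario as read from the commit message (all object names invented): after attaching a sub-partitioned table, a direct insert into one of its own partitions must respect the newly inherited constraint.
    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child (a int) PARTITION BY LIST (a);
    CREATE TABLE child_leaf PARTITION OF child FOR VALUES IN (1, 2);
    ALTER TABLE parent ATTACH PARTITION child FOR VALUES IN (1);
    -- With the fix, child_leaf's cached partition constraint is invalidated, so
    -- this direct insert is rejected (2 falls outside child's new constraint).
    INSERT INTO child_leaf VALUES (2);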
* Fix parallel sort, broken by the balanced merge patch.  (Heikki Linnakangas, 2021-10-18)
  The code for initializing the tapes on each merge iteration was skipped in a parallel worker. I put the !WORKER(state) check in the wrong place while rebasing the patch. That caused failures in the index build in the 'multiple-row-versions' isolation test, in multiple buildfarm members. On my laptop it was easier to reproduce by building an index on a larger table, so that you got a parallel sort more reliably.
* Fix duplicate typedef LogicalTape.  (Heikki Linnakangas, 2021-10-18)
  To make buildfarm member locust happy.
* Fix format modifier used in elog.  (Heikki Linnakangas, 2021-10-18)
  The previous commit 65014000b3 changed the variable passed to elog from an int64 to a size_t variable, but neglected to change the modifier in the format string accordingly. Per failure on buildfarm member lapwing.
* Replace polyphase merge algorithm with a simple balanced k-way merge.  (Heikki Linnakangas, 2021-10-18)
  The advantage of polyphase merge is that it can reuse the input tapes as output tapes efficiently, but that is irrelevant on modern hardware, when we can easily emulate any number of tape drives. The number of input tapes we can/should use during merging is limited by work_mem, but output tapes that we are not currently writing to only cost a little bit of memory, so there is no need to skimp on them. This makes sorts that need multiple merge passes faster.
  Discussion: https://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83%40iki.fi
  Reviewed-by: Peter Geoghegan, Zhihong Yu, John Naylor
* Refactor LogicalTapeSet/LogicalTape interface.  (Heikki Linnakangas, 2021-10-18)
  All the tape functions, like LogicalTapeRead and LogicalTapeWrite, now take a LogicalTape as argument, instead of LogicalTapeSet+tape number. You can create any number of LogicalTapes in a single LogicalTapeSet, and you don't need to decide the number upfront, when you create the tape set.
  This makes the tape management in hash agg spilling in nodeAgg.c simpler.
  Discussion: https://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83%40iki.fi
  Reviewed-by: Peter Geoghegan, Zhihong Yu, John Naylor
* Reset properly snapshot export state during transaction abort  (Michael Paquier, 2021-10-18)
  During a replication slot creation, an ERROR generated in the same transaction as the one creating a to-be-exported snapshot would have left the backend in an inconsistent state, as the associated static export snapshot state was not being reset on transaction abort, but only on the follow-up command received by the WAL sender that created this snapshot on replication slot creation. This would trigger inconsistency failures if this session tried to export a snapshot again, as during the creation of a replication slot.
  Note that a snapshot export cannot happen in a transaction block, so there is no need to worry about resetting this state for subtransaction aborts. Also, this inconsistent state would very unlikely show up to users. For example, one case where this could happen is an out-of-memory error when building the initial snapshot to-be-exported. Dilip found this problem while poking at a different patch, that caused an error in this code path for reasons unrelated to HEAD.
  Author: Dilip Kumar
  Reviewed-by: Michael Paquier, Zhihong Yu
  Discussion: https://postgr.es/m/CAFiTN-s0zA1Kj0ozGHwkYkHwa5U0zUE94RSc_g81WrpcETB5=w@mail.gmail.com
  Backpatch-through: 9.6
* Fix portability issues in new TAP tests of psql  (Michael Paquier, 2021-10-18)
  The tests added by c0280bc and d9ddc50 in 001_basic.pl introduced commands calling psql directly, making them sensitive to the environment. One issue was that those commands forgot -X to skip any local .psqlrc, causing all those tests to fail if psql cannot properly parse this file.
  TAP tests should be designed so that they run in an isolated fashion, without any dependency on the environment where they are run. As PostgresNode::psql already gives all the facilities those new tests need, switch to that instead of calling plain psql commands where interactions with a backend are needed. The test is slightly refactored to be able to check for the expected patterns of stdout and stderr, keeping the same amount of coverage as previously.
  Reported-by: Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-Wzn8ftvcDPwomn+y04JJzbT=TG7TN=QsmSEATUOW-ZuvQQ@mail.gmail.com
* Avoid core dump in pg_dump when dumping from pre-8.3 server.  (Tom Lane, 2021-10-16)
  Commit f0e21f2f6 missed adding a tgisinternal output column to getTriggers' query for pre-8.3 servers. Back-patch to v11, like that commit.