* Suppress unnecessary RelabelType nodes in yet more cases.  (Tom Lane, 2020-08-19)

  Commit a477bfc1d fixed eval_const_expressions() to ensure that it didn't generate unnecessary RelabelType nodes, but I failed to notice that some other places in the planner had the same issue. Really no place in the planner should be using plain makeRelabelType(), for fear of generating expressions that should be equal() to semantically equivalent trees, but aren't.

  An example is that because canonicalize_ec_expression() failed to be careful about this, we could end up with an equivalence class containing both a plain Const, and a Const-with-RelabelType representing exactly the same value. So far as I can tell this led to no visible misbehavior, but we did waste a bunch of cycles generating and evaluating "Const = Const-with-RelabelType" to prove such entries are redundant.

  Hence, move the support function added by a477bfc1d to where it can be more generally useful, and use it in the places where planner code previously used makeRelabelType.

  Back-patch to v12, like the previous patch. While I have no concrete evidence of any real misbehavior here, it's certainly possible that I overlooked a case where equivalent expressions that aren't equal() could cause a user-visible problem. In any case carrying extra RelabelType nodes through planning to execution isn't very desirable.

  Discussion: https://postgr.es/m/1311836.1597781384@sss.pgh.pa.us
* Avoid non-constant format string argument to fprintf().  (Heikki Linnakangas, 2020-08-18)

  As Tom Lane pointed out, it could defeat the compiler's printf() format string verification.

  Backpatch to v12, like the patch that introduced it.

  Discussion: https://www.postgresql.org/message-id/1069283.1597672779%40sss.pgh.pa.us
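  The hazard is the usual one with variable format strings; a minimal standalone sketch of the unsafe and safe patterns (illustrative only, not the pg_rewind/pg_checksums code):

      #include <stdio.h>

      int
      main(void)
      {
          /* A message that happens to contain a '%' (hypothetical). */
          const char *msg = "relation \"t%1\" skipped";

          /*
           * Unsafe: the compiler cannot verify the format string, and the
           * '%' in msg is interpreted as a conversion specifier at run time:
           *
           *     fprintf(stderr, msg);
           */

          /* Safe: constant format string, data passed as an argument. */
          fprintf(stderr, "%s\n", msg);
          return 0;
      }

  With a constant format string, the compiler's -Wformat checking (the verification referred to above) can inspect the call at build time.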
* Disable autovacuum for BRIN test table  (Alvaro Herrera, 2020-08-17)

  This should improve stability in the tests. Per buildfarm member hyrax (CLOBBER_CACHE_ALWAYS) via Tom Lane.

  Discussion: https://postgr.es/m/871534.1597503261@sss.pgh.pa.us
* Doc: fix description of UNION/CASE/etc type unification.  (Tom Lane, 2020-08-17)

  The description of what select_common_type() does was not terribly accurate. Improve it.

  David Johnston and Tom Lane

  Discussion: https://postgr.es/m/1019930.1597613200@sss.pgh.pa.us
* Fix printing last progress report line in client programs.  (Heikki Linnakangas, 2020-08-17)

  A number of client programs have a "--progress" option that, when printing to a TTY, updates the current line by printing a '\r' and overwriting it. After the last line, '\n' needs to be printed to move the cursor to the next line.

  pg_basebackup and pgbench got this right, but pg_rewind and pg_checksums were slightly wrong. pg_rewind printed the newline to stdout instead of stderr, and pg_checksums printed the newline even when not printing to a TTY. Fix them, and also add a 'finished' argument to pg_basebackup's progress_report() function, to keep it consistent with the other programs.

  Backpatch to v12. pg_rewind's newline was broken with the logging changes in commit cc8d415117 in v12, and pg_checksums was introduced in v12.

  Discussion: https://www.postgresql.org/message-id/82b539e5-ae33-34b0-1aee-22b3379fd3eb@iki.fi
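  A schematic sketch of the convention the fix settles on (names and output format are illustrative, not the actual progress_report() code): overwrite the line with '\r' only on a TTY, and print the closing '\n' to stderr exactly once, for the final report:

      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Illustrative progress reporter, not the real client code. */
      static void
      progress_report(long done, long total, bool finished)
      {
          fprintf(stderr, "%ld/%ld done", done, total);

          /*
           * On a TTY, '\r' lets the next report overwrite this line; the
           * terminating '\n' goes to stderr and is printed only for the
           * final report.  When not on a TTY, every report gets its own line.
           */
          if (isatty(fileno(stderr)))
              fputc(finished ? '\n' : '\r', stderr);
          else
              fputc('\n', stderr);
      }

      int
      main(void)
      {
          for (long i = 1; i <= 5; i++)
              progress_report(i, 5, i == 5);
          return 0;
      }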
* doc: Fix description about bgwriter and checkpoint in HA section  (Michael Paquier, 2020-08-17)

  Since 806a2ae, the work of the bgwriter is split with the checkpointer, but a portion of the documentation did not get the message.

  Author: Masahiko Sawada
  Discussion: https://postgr.es/m/CA+fd4k6jXxjAtjMVC=wG3=QGpauZBtcgN3Jhw+oV7zXGKVLKzQ@mail.gmail.com
  Backpatch-through: 9.5
* Move new LOCKTAG_DATABASE_FROZEN_IDS to end of enum LockTagType.  (Noah Misch, 2020-08-15)

  Several PGXN modules reference LockTagType values; renumbering would force a recompile of those modules. Oversight in back-patch of today's commit 566372b3d6435639e4cc4476d79b8505a0297c87. Back-patch to released branches, v12 through 9.5.

  Reported by Tom Lane.

  Discussion: https://postgr.es/m/921383.1597523945@sss.pgh.pa.us
* Prevent concurrent SimpleLruTruncate() for any given SLRU.  (Noah Misch, 2020-08-15)

  The SimpleLruTruncate() header comment states the new coding rule. To achieve this, add locktype "frozenid" and two LWLocks. This closes a rare opportunity for data loss, which manifested as "apparent wraparound" or "could not access status of transaction" errors. Data loss is more likely in pg_multixact, due to released branches' thin margin between multiStopLimit and multiWrapLimit.

  If a user's physical replication primary logged ": apparent wraparound" messages, the user should rebuild standbys of that primary regardless of symptoms. At less risk is a cluster having emitted "not accepting commands" errors or "must be vacuumed" warnings at some point. One can test a cluster for this data loss by running VACUUM FREEZE in every database.

  Back-patch to 9.5 (all supported versions).

  Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
* Be more careful about the shape of hashable subplan clauses.  (Tom Lane, 2020-08-14)

  nodeSubplan.c expects that the testexpr for a hashable ANY SubPlan has the form of one or more OpExprs whose LHS is an expression of the outer query's, while the RHS is an expression over Params representing output columns of the subquery. However, the planner only went as far as verifying that the clauses were all binary OpExprs. This works 99.99% of the time, because the clauses have the right shape when emitted by the parser --- but it's possible for function inlining to break that, as reported by PegoraroF10.

  To fix, teach the planner to check that the LHS and RHS contain the right things, or more accurately don't contain the wrong things. Given that this has been broken for years without anyone noticing, it seems sufficient to just give up hashing when it happens, rather than go to the trouble of commuting the clauses back again (which wouldn't necessarily work anyway).

  While poking at that, I also noticed that nodeSubplan.c had a baked-in assumption that the number of hash clauses is identical to the number of subquery output columns. Again, that's fine as far as parser output goes, but it's not hard to break it via function inlining. There seems little reason for that assumption though --- AFAICS, the only thing it's buying us is not having to store the number of hash clauses explicitly. Adding code to the planner to reject such cases would take more code than getting nodeSubplan.c to cope, so I fixed it that way.

  This has been broken for as long as we've had hashable SubPlans, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/1549209182255-0.post@n3.nabble.com
* pg_dump: fix dependencies on FKs to partitioned tables  (Alvaro Herrera, 2020-08-14)

  Parallel-restoring a foreign key that references a partitioned table with several levels of partitions can fail:

      pg_restore: while PROCESSING TOC:
      pg_restore: from TOC entry 6684; 2606 29166 FK CONSTRAINT fk fk_a_fkey postgres
      pg_restore: error: could not execute query: ERROR: there is no unique constraint matching given keys for referenced table "pk"
      Command was: ALTER TABLE fkpart3.fk ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart3.pk(a);

  This happens in parallel restore mode because some index partitions aren't yet attached to the topmost partitioned index that the FK uses, and so the index is still invalid. The current code marks the FK as dependent on the first level of index-attach dump objects; the bug is fixed by recursively marking the FK on their children.

  Backpatch to 12, where FKs to partitioned tables were introduced.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/3170626.1594842723@sss.pgh.pa.us
  Backpatch: 12-master
* Fix postmaster's behavior during smart shutdown.  (Tom Lane, 2020-08-14)

  Up to now, upon receipt of a SIGTERM ("smart shutdown" command), the postmaster has immediately killed all "optional" background processes, and subsequently refused to launch new ones while it's waiting for foreground client processes to exit. No doubt this seemed like an OK policy at some point; but it's a pretty bad one now, because it makes for a seriously degraded environment for the remaining clients:

  * Parallel queries are killed, and new ones fail to launch. (And our parallel-query infrastructure utterly fails to deal with the case in a reasonable way --- it just hangs waiting for workers that are not going to arrive. There is more work needed in that area IMO.)

  * Autovacuum ceases to function. We can tolerate that for awhile, but if bulk-update queries continue to run in the surviving client sessions, there's eventually going to be a mess. In the worst case the system could reach a forced shutdown to prevent XID wraparound.

  * The bgwriter and walwriter are also stopped immediately, likely resulting in performance degradation.

  Hence, let's rearrange things so that the only immediate change in behavior is refusing to let in new normal connections. Once the last normal connection is gone, shut everything down as though we'd received a "fast" shutdown. To implement this, remove the PM_WAIT_BACKUP and PM_WAIT_READONLY states, instead staying in PM_RUN or PM_HOT_STANDBY while normal connections remain. A subsidiary state variable tracks whether or not we're letting in new connections in those states.

  This also allows having just one copy of the logic for killing child processes in smart and fast shutdown modes. I moved that logic into PostmasterStateMachine() by inventing a new state PM_STOP_BACKENDS.

  Back-patch to 9.6 where parallel query was added. In principle this'd be a good idea in 9.5 as well, but the risk/reward ratio is not as good there, since lack of autovacuum is not a problem during typical uses of smart shutdown.

  Per report from Bharath Rupireddy.

  Patch by me, reviewed by Thomas Munro

  Discussion: https://postgr.es/m/CALj2ACXAZ5vKxT9P7P89D87i3MDO9bfS+_bjMHgnWJs8uwUOOw@mail.gmail.com
* Fix typo in test comment.  (Heikki Linnakangas, 2020-08-14)
* Handle new HOT chains in index-build table scans  (Alvaro Herrera, 2020-08-13)

  When a table is scanned by heapam_index_build_range_scan (née IndexBuildHeapScan) and the table lock being held allows concurrent data changes, it is possible for new HOT chains to sprout in a page that were unknown when the scan of a page happened. This leads to an error such as

      ERROR: failed to find parent tuple for heap-only tuple at (X,Y) in table "tbl"

  because the root tuple was not present when we first obtained the list of the page's root tuples. This can be fixed by re-obtaining the list of root tuples, if we see that a heap-only tuple appears to point to a non-existing root.

  This was reported by Anastasia as occurring for BRIN summarization (which exists since 9.5), but I think it could theoretically also happen with CREATE INDEX CONCURRENTLY (much older) or REINDEX CONCURRENTLY (very recent). It seems a happy coincidence that BRIN forces us to backpatch this all the way to 9.5.

  Reported-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Diagnosed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Co-authored-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/602d8487-f0b2-5486-0088-0f372b2549fa@postgrespro.ru
  Backpatch: 9.5 - master
* BRIN: Handle concurrent desummarization properly  (Alvaro Herrera, 2020-08-12)

  If a page range is desummarized at just the right time concurrently with an index walk, BRIN would raise an error indicating index corruption. This is scary and unhelpful; silently returning that the page range is not summarized is sufficient reaction.

  This bug was introduced by commit 975ad4e602ff as additional protection against a bug whose actual fix was elsewhere. Backpatch equally.

  Reported-By: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
  Diagnosed-By: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/2588667e-d07d-7e10-74e2-7e1e46194491@postgrespro.ru
  Backpatch: 9.5 - master
* Stamp 12.4.  (tag: REL_12_4)  (Tom Lane, 2020-08-10)
* Last-minute updates for release notes.  (Tom Lane, 2020-08-10)

  Security: CVE-2020-14349, CVE-2020-14350
* Document clashes between logical replication and untrusted users.  (Noah Misch, 2020-08-10)

  Back-patch to v10, which introduced logical replication.

  Security: CVE-2020-14349
* Empty search_path in logical replication apply worker and walsender.  (Noah Misch, 2020-08-10)

  This is like CVE-2018-1058 commit 582edc369cdbd348d68441fc50fa26a84afd0c1a. Today, a malicious user of a publisher or subscriber database can invoke arbitrary SQL functions under an identity running replication, often a superuser. This fix may cause "does not exist" or "no schema has been selected to create in" errors in a replication process. After upgrading, consider watching server logs for these errors. Objects accruing schema qualification in the wake of the earlier commit are unlikely to need further correction.

  Back-patch to v10, which introduced logical replication.

  Security: CVE-2020-14349
* Move connect.h from fe_utils to src/include/common.  (Noah Misch, 2020-08-10)

  Any libpq client can use the header. Clients include backend components postgres_fdw, dblink, and logical replication apply worker. Back-patch to v10, because another fix needs this. In released branches, just copy the header and keep the original.
* Make contrib modules' installation scripts more secure.  (Tom Lane, 2020-08-10)

  Hostile objects located within the installation-time search_path could capture references in an extension's installation or upgrade script. If the extension is being installed with superuser privileges, this opens the door to privilege escalation. While such hazards have existed all along, their urgency increases with the v13 "trusted extensions" feature, because that lets a non-superuser control the installation path for a superuser-privileged script. Therefore, make a number of changes to make such situations more secure:

  * Tweak the construction of the installation-time search_path to ensure that references to objects in pg_catalog can't be subverted; and explicitly add pg_temp to the end of the path to prevent attacks using temporary objects.

  * Disable check_function_bodies within installation/upgrade scripts, so that any security gaps in SQL-language or PL-language function bodies cannot create a risk of unwanted installation-time code execution.

  * Adjust lookup of type input/receive functions and join estimator functions to complain if there are multiple candidate functions. This prevents capture of references to functions whose signature is not the first one checked; and it's arguably more user-friendly anyway.

  * Modify various contrib upgrade scripts to ensure that catalog modification queries are executed with secure search paths. (These are in-place modifications with no extension version changes, since it is the update process itself that is at issue, not the end result.)

  Extensions that depend on other extensions cannot be made fully secure by these methods alone; therefore, revert the "trusted" marking that commit eb67623c9 applied to earthdistance and hstore_plperl, pending some better solution to that set of issues.

  Also add documentation around these issues, to help extension authors write secure installation scripts.

  Patch by me, following an observation by Andres Freund; thanks to Noah Misch for review.

  Security: CVE-2020-14350
* Translation updates  (Peter Eisentraut, 2020-08-10)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 444a6779aafc552ac452715caa65cfca0e723073
* Check for fseeko() failure in pg_dump's _tarAddFile().  (Tom Lane, 2020-08-09)

  Coverity pointed out, not unreasonably, that we checked fseeko's result at every other call site but these. Failure to seek in the temp file (note this is NOT pg_dump's output file) seems quite unlikely, and even if it did happen the file length cross-check further down would probably detect the problem. Still, that's a poor excuse for not checking the result of a system call.
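  The pattern being added is simply checking the return value of the seek; a minimal hedged sketch (error handling is illustrative, pg_dump reports failures through its own fatal-error routines):

      #include <stdio.h>
      #include <sys/types.h>

      /* Check the result of the seek instead of assuming it worked. */
      static int
      seek_in_temp_file(FILE *fp, off_t offset)
      {
          if (fseeko(fp, offset, SEEK_SET) != 0)
          {
              perror("could not seek in temporary file");
              return -1;
          }
          return 0;
      }

      int
      main(void)
      {
          FILE *fp = tmpfile();

          if (fp == NULL)
              return 1;
          if (seek_in_temp_file(fp, 0) != 0)
              return 1;
          fclose(fp);
          return 0;
      }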
* Release notes for 12.4, 11.9, 10.14, 9.6.19, 9.5.23.  (Tom Lane, 2020-08-08)
* walsnd: Don't set waiting_for_ping_response spuriously  (Alvaro Herrera, 2020-08-08)

  Ashutosh Bapat noticed that when logical walsender needs to wait for WAL, and it realizes that it must send a keepalive message to walreceiver to update the sent-LSN, which *does not* request a reply from walreceiver, it wrongly sets the flag that it's going to wait for that reply. That means that any future would-be sender of feedback messages ends up not sending a feedback message, because they all believe that a reply is expected.

  With built-in logical replication there's not much harm in this, because WalReceiverMain will send a ping-back every wal_receiver_timeout/2 anyway; but with other logical replication systems (e.g. pglogical) it can cause significant pain.

  This problem was introduced in commit 41d5f8ad734, where the request-reply flag was changed from true to false to WalSndKeepalive, without at the same time removing the line that sets waiting_for_ping_response.

  Just removing that line would be a sufficient fix, but it seems better to shift the responsibility of setting the flag to WalSndKeepalive itself instead of requiring caller to do it; this is clearly less error-prone.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>
  Backpatch: 9.5 and up
  Discussion: https://postgr.es/m/20200806225558.GA22401@alvherre.pgsql
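  A schematic sketch of the shape of the fix described above (stand-in code, not the walsender source; only names mentioned in the message are reused): the keepalive routine records the expectation of a reply itself, so callers can no longer set the flag inconsistently:

      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for walsender state; illustrative only. */
      static bool waiting_for_ping_response = false;

      static void
      WalSndKeepalive(bool requestReply)
      {
          printf("keepalive sent, requestReply=%d\n", requestReply);

          /*
           * Record that a reply is awaited only when one was actually
           * requested.  Before the fix, a caller set the flag itself even
           * though it passed requestReply = false, which silenced later
           * feedback messages.
           */
          if (requestReply)
              waiting_for_ping_response = true;
      }

      int
      main(void)
      {
          /* update sent-LSN without requesting a reply */
          WalSndKeepalive(false);
          printf("waiting_for_ping_response = %d\n", waiting_for_ping_response);
          return 0;
      }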
* Fix yet another issue with step generation in partition pruning.  (Etsuro Fujita, 2020-08-07)

  Commit 13838740f fixed some issues with step generation in partition pruning, but there was yet another one: get_steps_using_prefix() assumes that clauses in the passed-in prefix list are sorted in ascending order of their partition key numbers, but the caller failed to ensure this for range partitioning, which led to an assertion failure in debug builds. Adjust the caller function to arrange the clauses in the prefix list in the required order for range partitioning.

  Back-patch to v11, like the previous commit.

  Patch by me, reviewed by Amit Langote.

  Discussion: https://postgr.es/m/CAPmGK16jkXiFG0YqMbU66wte-oJTfW6D1HaNvQf%3D%2B5o9%3Dm55wQ%40mail.gmail.com
* First-draft release notes for 12.4.  (Tom Lane, 2020-08-06)

  As usual, the release notes for other branches will be made by cutting these down, but put them up for community review first.
* Fix typo.  (Robert Haas, 2020-08-06)

  Per report from Tom Lane. Previously fixed in master by commit f057980149ddccd4b862d2c6b3920ed498b0d7ec.
* Fix minor problems with non-exclusive backup cleanup.  (Robert Haas, 2020-08-06)

  The previous coding imagined that it could call before_shmem_exit() when a non-exclusive backup began and then remove the previously-added handler by calling cancel_before_shmem_exit() when that backup ended. However, this only works provided that nothing else in the system has registered a before_shmem_exit() hook in the interim, because cancel_before_shmem_exit() is documented to remove a callback only if it is the latest callback registered. It also only works if nothing can ERROR out between the time that sessionBackupState is reset and the time that cancel_before_shmem_exit() is called, which doesn't seem to be strictly true.

  To fix, leave the handler installed for the lifetime of the session, arrange to install it just once, and teach it to quietly do nothing if there isn't a non-exclusive backup in process.

  This was originally committed to master as 303640199d0436c5e7acdf50b837a027b5726594, but I did not back-patch at the time because the consequences were minor. However, now there's been a second report of this causing trouble with a slightly different test case than the one I reported originally, so now I'm back-patching as far as v11 where JIT was introduced.

  Patch by me, reviewed by Kyotaro Horiguchi, Michael Paquier (who preferred a different approach, but got outvoted), Fujii Masao, and Tom Lane, and with comments by various others. New problem report from Bharath Rupireddy.

  Discussion: http://postgr.es/m/CA+TgmobMjnyBfNhGTKQEDbqXYE3_rXWpc4CM63fhyerNCes3mA@mail.gmail.com
  Discussion: http://postgr.es/m/CALj2ACWk7j4F2v2fxxYfrroOF=AdFNPr1WsV+AGtHAFQOqm_pw@mail.gmail.com
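  A hedged sketch of the described pattern, with stand-ins for the backend machinery (atexit() stands in for before_shmem_exit(), and the state names are illustrative): the handler is installed once per session and quietly does nothing unless a non-exclusive backup is actually in progress:

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Stand-ins for backend state; names are illustrative. */
      typedef enum
      {
          SESSION_BACKUP_NONE,
          SESSION_BACKUP_NON_EXCLUSIVE
      } SessionBackupState;

      static SessionBackupState sessionBackupState = SESSION_BACKUP_NONE;
      static bool cleanup_registered = false;

      /* The callback stays installed for the whole session ... */
      static void
      nonexclusive_backup_cleanup(void)
      {
          /* ... so it must quietly do nothing when no backup is running. */
          if (sessionBackupState != SESSION_BACKUP_NON_EXCLUSIVE)
              return;
          fprintf(stderr, "cleaning up unfinished non-exclusive backup\n");
          sessionBackupState = SESSION_BACKUP_NONE;
      }

      static void
      start_nonexclusive_backup(void)
      {
          /* Install the handler just once per session. */
          if (!cleanup_registered)
          {
              atexit(nonexclusive_backup_cleanup);
              cleanup_registered = true;
          }
          sessionBackupState = SESSION_BACKUP_NON_EXCLUSIVE;
      }

      int
      main(void)
      {
          start_nonexclusive_backup();
          /* exiting without stopping the backup: the handler cleans up */
          return 0;
      }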
* doc: clarify "state" table reference in tutorial  (Bruce Momjian, 2020-08-05)

  Reported-by: Vyacheslav Shablistyy
  Discussion: https://postgr.es/m/159586122762.680.1361378513036616007@wrigleys.postgresql.org
  Backpatch-through: 9.5
* Fix matching of sub-partitions when a partitioned plan is stale.  (Tom Lane, 2020-08-05)

  Since we no longer require AccessExclusiveLock to add a partition, the executor may see that a partitioned table has more partitions than the planner saw. ExecCreatePartitionPruneState's code for matching up the partition lists in such cases was faulty, and would misbehave if the planner had successfully pruned any partitions from the query. (Thus, trouble would occur only if a partition addition happens concurrently with a query that uses both static and dynamic partition pruning.) This led to an Assert failure in debug builds, and probably to crashes or query misbehavior in production builds.

  To repair the bug, just explicitly skip zeroes in the plan's relid_map[] list. I also made some cosmetic changes to make the code more readable (IMO anyway). Also, convert the cross-checking Assert to a regular test-and-elog, since it's now apparent that this logic is more fragile than one would like.

  Currently, there's no way to repeatably exercise this code, except with manual use of a debugger to stop the backend between planning and execution. Hence, no test case in this patch. We oughta do something about that testability gap, but that's for another day.

  Amit Langote and Tom Lane, per report from Justin Pryzby. Oversight in commit 898e5e329; backpatch to v12 where that appeared.

  Discussion: https://postgr.es/m/20200802181131.GA27754@telsasoft.com
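  A self-contained sketch of the matching problem described above (data and names are illustrative, not the executor code): plan-time relid_map[] entries that are zero mark statically pruned partitions and must simply be skipped while walking the executor's possibly longer partition list:

      #include <stdio.h>

      typedef unsigned int Oid;

      int
      main(void)
      {
          /* Planner's view: one slot per planned partition, 0 = pruned. */
          Oid     relid_map[] = {0, 1002, 0, 1004};
          /* Executor's view: may contain extra, concurrently added partitions. */
          Oid     parts[] = {1001, 1002, 1003, 1004, 1005};
          int     nplan = 4;
          int     nparts = 5;
          int     pp = 0;

          for (int i = 0; i < nplan; i++)
          {
              if (relid_map[i] == 0)
                  continue;       /* statically pruned: nothing to match */

              /* both lists are in partition-bound order, so scan forward */
              while (pp < nparts && parts[pp] != relid_map[i])
                  pp++;
              if (pp >= nparts)
              {
                  /* analogous to the cross-check turned into test-and-elog */
                  fprintf(stderr, "could not match partition %u\n", relid_map[i]);
                  return 1;
              }
              printf("plan partition %d -> executor partition %d\n", i, pp);
              pp++;
          }
          return 0;
      }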
* Increase hard-wired timeout values in ecpg regression tests.  (Tom Lane, 2020-08-04)

  A couple of test cases had connect_timeout=14, a value that seems to have been plucked from a hat. While it's more than sufficient for normal cases, slow/overloaded buildfarm machines can get a timeout failure here, as per recent report from "sungazer". Increase to 180 seconds, which is in line with our typical timeouts elsewhere in the regression tests.

  Back-patch to 9.6; the code looks different in 9.5, and this doesn't seem to be quite worth the effort to adapt to that.

  Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-08-04%2007%3A12%3A22
* Doc: fix obsolete info about allowed range of TZ offsets in timetz.  (Tom Lane, 2020-08-03)

  We've allowed UTC offsets up to +/- 15:59 since commit cd0ff9c0f, but that commit forgot to fix the documentation about timetz.

  Per bug #16571 from osdba.

  Discussion: https://postgr.es/m/16571-eb7501598de78c8a@postgresql.org
* Fix rare failure in LDAP tests.  (Thomas Munro, 2020-08-03)

  Instead of writing a query to psql's stdin, use -c. This avoids a failure where psql exits before we write, seen a few times on the build farm. Thanks to Tom Lane for the suggestion.

  Back-patch to 11, where the LDAP tests arrived.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/CA%2BhUKGLFmW%2BHQYPeKiwSp5sdFFHtFViCpw4Mh6yAgEx74r5-Cw%40mail.gmail.com
* Restore lost amcheck TOAST test coverage.  (Peter Geoghegan, 2020-07-31)

  Commit eba77534 fixed an amcheck false positive bug involving inconsistencies in TOAST input state between table and index. A test case was added that verified that such an inconsistency didn't result in a spurious corruption related error.

  Test coverage from the test was accidentally lost by commit 501e41dd, which propagated ALTER TABLE ... SET STORAGE attstorage state to indexes. This broke the test because the test specifically relied on attstorage not being propagated. This artificially forced there to be index tuples whose datums were equivalent to the datums in the heap without the datums actually being bitwise equal.

  Fix this by updating pg_attribute directly instead. Commit 501e41dd made similar changes to a test_decoding TOAST-related test case which made the same assumption, but overlooked the amcheck test case.

  Backpatch: 11-, just like commit eba77534 (and commit 501e41dd).
* Fix recently-introduced performance problem in ts_headline().  (Tom Lane, 2020-07-31)

  The new hlCover() algorithm that I introduced in commit c9b0c678d turns out to potentially take O(N^2) or worse time on long documents, if there are many occurrences of individual query words but few or no substrings that actually satisfy the query. (One way to hit this behavior is with a "common_word & rare_word" type of query.) This seems unavoidable given the original goal of checking every substring of the document, so we have to back off that idea. Fortunately, it seems unlikely that anyone would really want headlines spanning all of a long document, so we can avoid the worse-than-linear behavior by imposing a maximum length of substring that we'll consider.

  For now, just hard-wire that maximum length as a multiple of max_words times max_fragments. Perhaps at some point somebody will argue for exposing it as a ts_headline parameter, but I'm hesitant to make such a feature addition in a back-patched bug fix.

  I also noted that the hlFirstIndex() function I'd added in that commit was unnecessarily stupid: it really only needs to check whether a HeadlineWordEntry's item pointer is null or not. This wouldn't make all that much difference in typical cases with queries having just a few terms, but a cycle shaved is a cycle earned.

  In addition, add a CHECK_FOR_INTERRUPTS call in TS_execute_recurse. This ensures that hlCover's loop is cancellable if it manages to take a long time, and it may protect some other TS_execute callers as well.

  Back-patch to 9.6 as the previous commit was. I also chose to add the CHECK_FOR_INTERRUPTS call to 9.5. The old hlCover() algorithm seems to avoid the O(N^2) behavior, at least on the test case I tried, but nonetheless it's not very quick on a long document.

  Per report from Stephen Frost.

  Discussion: https://postgr.es/m/20200724160535.GW12375@tamriel.snowman.net
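  A toy illustration, under assumed numbers, of why capping the candidate cover length restores roughly linear behavior and why the scan should stay cancellable (CHECK_FOR_INTERRUPTS() is stubbed out here; in the backend it comes from miscadmin.h, and the cap below is only a stand-in for the max_words * max_fragments based limit):

      #include <stdio.h>

      /* Stand-in for the backend's CHECK_FOR_INTERRUPTS(); illustrative only. */
      #define CHECK_FOR_INTERRUPTS()  ((void) 0)

      int
      main(void)
      {
          long    doc_len = 100000;   /* words in the document (assumed) */
          long    max_cover = 320;    /* illustrative cap on cover length */
          long    candidates = 0;

          for (long start = 0; start < doc_len; start++)
          {
              CHECK_FOR_INTERRUPTS();     /* keep query cancel responsive */

              /* consider covers starting here, but no longer than max_cover */
              for (long len = 1; len <= max_cover && start + len <= doc_len; len++)
                  candidates++;
          }

          /* bounded cover length: work is O(doc_len * max_cover), not O(N^2) */
          printf("covers considered: %ld\n", candidates);
          return 0;
      }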
* Doc: fix high availability solutions comparison.  (Tatsuo Ishii, 2020-07-31)

  In "High Availability, Load Balancing, and Replication" chapter, certain descriptions of Pgpool-II were not correct at this point. It does not need conflict resolution. Also "Multiple-Server Parallel Query Execution" is not supported anymore.

  Discussion: https://postgr.es/m/20200726.230128.53842489850344110.t-ishii%40sraoss.co.jp
  Author: Tatsuo Ishii
  Reviewed-by: Bruce Momjian
  Backpatch-through: 9.5
* doc: Mention index references in pg_inherits  (Michael Paquier, 2020-07-30)

  Partitioned indexes are also registered in pg_inherits, but the description of this catalog did not reflect that.

  Author: Dagfinn Ilmari Mannsåker
  Discussion: https://postgr.es/m/87k0ynj35y.fsf@wibble.ilmari.org
  Backpatch-through: 11
* Doc: Improve documentation for pg_jit_available()  (David Rowley, 2020-07-28)

  Per complaint from Scott Ribe. Based on wording suggestion from Tom Lane.

  Discussion: https://postgr.es/m/1956E806-1468-4417-9A9D-235AE1D5FE1A@elevated-dev.com
  Backpatch-through: 11, where pg_jit_available() was added
* Fix some issues with step generation in partition pruning.  (Etsuro Fujita, 2020-07-28)

  In the case of range partitioning, get_steps_using_prefix() assumes that the passed-in prefix list contains at least one clause for each of the partition keys earlier than one specified in the passed-in step_lastkeyno, but the caller (ie, gen_prune_steps_from_opexps()) didn't take it into account, which led to a server crash or incorrect results when the list contained no clauses for such partition keys, as reported in bug #16500 and #16501 from Kobayashi Hisanori. Update the caller to call that function only when the list created there contains at least one clause for each of the earlier partition keys in the case of range partitioning.

  While at it, fix some other issues:

  * The list to pass to get_steps_using_prefix() is allowed to contain multiple clauses for the same partition key, as described in the comment for that function, but that function actually assumed that the list contained just a single clause for each of middle partition keys, which led to an assertion failure when the list contained multiple clauses for such partition keys. Update that function to match the comment.

  * In the case of hash partitioning, partition keys are allowed to be NULL, in which case the list to pass to get_steps_using_prefix() contains no clauses for NULL partition keys, but that function treats that case as like the case of range partitioning, which led to the assertion failure. Update the assertion test to take into account NULL partition keys in the case of hash partitioning.

  * Fix a typo in a comment in get_steps_using_prefix_recurse().

  * gen_partprune_steps() failed to detect self-contradiction from strict-qual clauses and an IS NULL clause for the same partition key in some cases, producing incorrect partition-pruning steps, which led to incorrect results of partition pruning, but didn't cause any user-visible problems fortunately, as the self-contradiction is detected later in the query planning. Update that function to detect the self-contradiction.

  Per bug #16500 and #16501 from Kobayashi Hisanori. Patch by me, initial diagnosis for the reported issue and review by Dmitry Dolgov. Back-patch to v11, where partition pruning was introduced.

  Discussion: https://postgr.es/m/16500-d1613f2a78e1e090%40postgresql.org
  Discussion: https://postgr.es/m/16501-5234a9a0394f6754%40postgresql.org
* Fix corner case with 16kB-long decompression in pgcrypto, take 2  (Michael Paquier, 2020-07-27)

  A compressed stream may end with an empty packet. In this case decompression finishes before reading the empty packet and the remaining stream packet causes a failure in reading the following data. This commit makes sure to consume such extra data, avoiding a failure when decompressing the data. This corner case was reproducible easily with a data length of 16kB, and existed since e94dd6a. A cheap regression test is added to cover this case based on a random, incompressible string.

  The first attempt of this patch has allowed to find an older failure within the compression logic of pgcrypto, fixed by b9b6105. This involved SLES 15 with z390 where a custom flavor of libz gets used. Bonus thanks to Mark Wong for providing access to the specific environment.

  Reported-by: Frank Gagnepain
  Author: Kyotaro Horiguchi, Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
  Backpatch-through: 9.5
* Fix handling of structure for bytea data type in ECPG  (Michael Paquier, 2020-07-27)

  Some code paths dedicated to bytea used the structure for varchar. This did not lead to any actual bugs, as bytea and varchar have the same definition, but it could become a trap if one of these definitions changes for a new feature or a bug fix.

  Issue introduced by 050710b.

  Author: Shenhao Wang
  Reviewed-by: Vignesh C, Michael Paquier
  Discussion: https://postgr.es/m/07ac7dee1efc44f99d7f53a074420177@G08CNEXMBPEKD06.g08.fujitsu.local
  Backpatch-through: 12
* Fix buffer usage stats for nodes above Gather Merge.  (Amit Kapila, 2020-07-25)

  Commit 85c9d347 addressed a similar problem for Gather and Gather Merge nodes but forgot to account for nodes above parallel nodes. This still works for nodes above Gather node because we shut down the workers for Gather node as soon as there are no more tuples. We can do a similar thing for Gather Merge as well but it seems better to account for stats during nodes shutdown after completing the execution.

  Reported-by: Stéphane Lorek, Jehan-Guillaume de Rorthais
  Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
  Reviewed-by: Amit Kapila
  Backpatch-through: 10, where it was introduced
  Discussion: https://postgr.es/m/20200718160206.584532a2@firost
* Fix ancient violation of zlib's API spec.  (Tom Lane, 2020-07-23)

  contrib/pgcrypto mishandled the case where deflate() does not consume all of the offered input on the first try. It reset the next_in pointer to the start of the input instead of leaving it alone, causing the wrong data to be fed to the next deflate() call. This has been broken since pgcrypto was committed.

  The reason for the lack of complaints seems to be that it's fairly hard to get stock zlib to not consume all the input, so long as the output buffer is big enough (which it normally would be in pgcrypto's usage; AFAICT the input is always going to be packetized into packets no larger than ZIP_OUT_BUF). However, IBM's zlibNX implementation for AIX evidently will do it in some cases.

  I did not add a test case for this, because I couldn't find one that would fail with stock zlib. When we put back the test case for bug #16476, that will cover the zlibNX situation well enough.

  While here, write deflate()'s second argument as Z_NO_FLUSH per its API spec, instead of hard-wiring the value zero.

  Per buildfarm results and subsequent investigation.

  Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
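  For reference, a minimal standalone sketch of the zlib calling convention at issue (not pgcrypto's code): deflate() may leave part of the input unconsumed, so the loop must continue from the stream state deflate() left behind rather than resetting next_in to the start of the buffer:

      #include <stdio.h>
      #include <string.h>
      #include <zlib.h>

      int
      main(void)
      {
          static const char input[] = "example input to compress";
          unsigned char   out[256];
          z_stream        zs;

          memset(&zs, 0, sizeof(zs));         /* default allocators */
          if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
              return 1;

          zs.next_in = (Bytef *) input;
          zs.avail_in = sizeof(input) - 1;

          /*
           * Keep feeding whatever input remains; do not touch next_in, which
           * deflate() advances itself.  Z_NO_FLUSH is the spelled-out form of
           * the second argument, per the API spec.
           */
          while (zs.avail_in > 0)
          {
              zs.next_out = out;
              zs.avail_out = sizeof(out);
              if (deflate(&zs, Z_NO_FLUSH) != Z_OK)
                  break;
              /* (sizeof(out) - zs.avail_out) bytes of compressed output are ready here */
          }

          /* flush whatever remains buffered inside zlib */
          do
          {
              zs.next_out = out;
              zs.avail_out = sizeof(out);
          } while (deflate(&zs, Z_FINISH) == Z_OK);

          deflateEnd(&zs);
          return 0;
      }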
* doc: Document that ssl_ciphers does not affect TLS 1.3  (Peter Eisentraut, 2020-07-23)

  TLS 1.3 uses a different way of specifying ciphers and a different OpenSSL API. PostgreSQL currently does not support setting those ciphers. For now, just document this. In the future, support for this might be added somehow.

  Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Fix error message.  (Thomas Munro, 2020-07-23)

  Remove extra space. Back-patch to all releases, like commit 7897e3bb.

  Author: Lu, Chenyang <lucy.fnst@cn.fujitsu.com>
  Discussion: https://postgr.es/m/795d03c6129844d3803e7eea48f5af0d%40G08CNEXMBPEKD04.g08.fujitsu.local
* Revert "Fix corner case with PGP decompression in pgcrypto"  (Michael Paquier, 2020-07-23)

  This reverts commit 9e10898, after finding out that buildfarm members running SLES 15 on z390 complain on the compression and decompression logic of the new test: pipistrelles, barbthroat and steamerduck.

  Those hosts are visibly using hardware-specific changes to improve zlib performance, requiring more investigation.

  Thanks to Tom Lane for the discussion.

  Discussion: https://postgr.es/m/20200722093749.GA2564@paquier.xyz
  Backpatch-through: 9.5
* Fix corner case with PGP decompression in pgcrypto  (Michael Paquier, 2020-07-22)

  A compressed stream may end with an empty packet, and PGP decompression finished before reading this empty packet in the remaining stream. This caused a failure in pgcrypto, handling this case as corrupted data. This commit makes sure to consume such extra data, avoiding a failure when decompressing the entire stream. This corner case was reproducible with a data length of 16kB, and existed since its introduction in e94dd6a. A cheap regression test is added to cover this case.

  Thanks to Jeff Janes for the extra investigation.

  Reported-by: Frank Gagnepain
  Author: Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
  Backpatch-through: 9.5
* neqjoinsel must now pass through collation to eqjoinsel.  (Tom Lane, 2020-07-21)

  Since commit 044c99bc5, eqjoinsel passes the passed-in collation to any operators it invokes. However, neqjoinsel failed to pass on whatever collation it got, so that if we invoked a collation-dependent operator via that code path, we'd get "could not determine which collation to use for string comparison" or the like.

  Per report from Justin Pryzby. Back-patch to v12, like the previous commit.

  Discussion: https://postgr.es/m/20200721191606.GL5748@telsasoft.com
* Assert that we don't insert nulls into attnotnull catalog columns.  (Tom Lane, 2020-07-21)

  The executor checks for this error, and so does the bootstrap catalog loader, but we never checked for it in retail catalog manipulations. The folly of that has now been exposed, so let's add assertions checking it. Checking in CatalogTupleInsert[WithInfo] and CatalogTupleUpdate[WithInfo] should be enough to cover this.

  Back-patch to v10; the aforesaid functions didn't exist before that, and it didn't seem worth adapting the patch to the oldest branches. But given the risk of JIT crashes, I think we certainly need this as far back as v11.

  Pre-v13, we have to explicitly exclude pg_subscription.subslotname and pg_subscription_rel.srsublsn from the checks, since they are mismarked. (Even if we change our mind about applying BKI_FORCE_NULL in the branch tips, it doesn't seem wise to have assertions that would fire in existing databases.)

  Discussion: https://postgr.es/m/298837.1595196283@sss.pgh.pa.us
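  A hedged, standalone sketch of the kind of assertion being added (stand-in structs; the real check inspects the tuple descriptor's attnotnull flags against the isnull[] array inside the CatalogTuple* helpers):

      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for a pg_attribute row; only the fields the check needs. */
      typedef struct
      {
          const char *attname;
          bool        attnotnull;
      } AttrSketch;

      static void
      assert_no_nulls_in_notnull_columns(const AttrSketch *attrs,
                                         const bool *isnull, int natts)
      {
          for (int i = 0; i < natts; i++)
          {
              /* storing NULL in a column marked NOT NULL is a caller bug */
              assert(!(attrs[i].attnotnull && isnull[i]));
          }
      }

      int
      main(void)
      {
          AttrSketch  attrs[] = {
              {"srrelid", true},
              {"srsublsn", false},    /* legitimately nullable, so excluded */
          };
          bool        isnull[] = {false, true};

          assert_no_nulls_in_notnull_columns(attrs, isnull, 2);
          printf("assertion holds\n");
          return 0;
      }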
* Avoid direct C access to possibly-null pg_subscription_rel.srsublsn.  (Tom Lane, 2020-07-21)

  This coding technique is unsafe, since we'd be accessing off the end of the tuple if the field is null. SIGSEGV is pretty improbable, but perhaps not impossible. Also, returning garbage for the LSN doesn't seem like a great idea, even if callers aren't looking at it today.

  Also update docs to point out explicitly that pg_subscription.subslotname and pg_subscription_rel.srsublsn can be null.

  Perhaps we should mark these two fields BKI_FORCE_NULL, so that they'd be correctly labeled in databases that are initdb'd in the future. But we can't force that for existing databases, and on balance it's not too clear that having a mix of different catalog contents in the field would be wise.

  Apply to v10 (where this code came in) through v12. Already fixed in v13 and HEAD.

  Discussion: https://postgr.es/m/732838.1595278439@sss.pgh.pa.us