path: root/src
Commit message (Author, Date)
...
* Improve comments about USE_VALGRIND in pg_config_manual.h. (Tom Lane, 2021-05-09)

  These comments left the impression that USE_VALGRIND isn't really essential for valgrind testing. But that's wrong, as I learned the hard way today.

  Discussion: https://postgr.es/m/512778.1620588546@sss.pgh.pa.us

* Move memory accounting Asserts for Result Cache code (David Rowley, 2021-05-09)

  In 9eacee2e6, I included some code to verify the cache's memory tracking is correct by counting up the number of entries and the memory they use each time we evict something from the cache. Those values are then compared to the expected values using Assert. The problem is that this requires looping over the entire cache hash table each time we evict an entry from the cache. That can be pretty expensive, as noted by Pavel Stehule.

  Here we move this memory accounting checking code so that we only verify it on cassert builds once when shutting down the Result Cache node.

  Aside from the performance increase, this has two distinct advantages:

  1) We do the memory checks at the last possible moment before destroying the cache. This means we'll now catch accounting problems that might sneak in after a cache eviction.

  2) We now do the memory Assert checks when there were no cache evictions. This increases the coverage.

  One small disadvantage is that we'll now miss any memory tracking issues that somehow managed to resolve themselves by the end of execution. However, it seems to me that such a memory tracking problem would be quite unlikely, and likely somewhat less harmful if one were to exist.

  In passing, adjust the loop over the hash table to use the standard simplehash.h method of iteration.

  Reported-by: Pavel Stehule
  Discussion: https://postgr.es/m/CAFj8pRAzgoSkdEiqrKbT=7yG9FA5fjUAP3jmJywuDqYq6Ki5ug@mail.gmail.com

* Sync guc.c and postgresql.conf.sample with the SGML docs. (Tom Lane, 2021-05-08)

  It seems that various people have moved GUCs around in the config.sgml listing without bothering to make the code agree. Ensure that the config_group codes assigned to GUCs match where they are listed in config.sgml. Likewise ensure that postgresql.conf.sample lists GUCs in the same sub-section and same ordering as they appear in config.sgml. (I've got some doubts about some of these choices, but for the purposes of this patch, we'll treat config.sgml as gospel.)

  Notably, this requires adding a WAL_RECOVERY config_group value, because 1d257577e didn't. As long as we're renumbering that enum anyway, let's take out the values corresponding to major groups that are divided into sub-groups. No GUC should be assigned to the major group itself, so those values just create a temptation to do the wrong thing, while adding work for translators.

  In passing, adjust the short_desc strings for PRESET_OPTIONS GUCs to uniformly use the phrasing "Shows XYZ.", removing the impression some of these strings left that you can set the value.

  While some of these errors are old, no back-patch, as changing the contents of the pg_settings view in stable branches seems more likely to be seen as a compatibility break than anything helpful.

  Bharath Rupireddy, Justin Pryzby, Tom Lane
  Discussion: https://postgr.es/m/16997-ff16127f6e0d1390@postgresql.org
  Discussion: https://postgr.es/m/20210413123139.GE6091@telsasoft.com

* Fix incorrect error code for CREATE/ALTER TABLE COMPRESSION (Michael Paquier, 2021-05-08)

  Specifying an incorrect value for the compression method of an attribute caused ERRCODE_FEATURE_NOT_SUPPORTED to be raised as an error. Use ERRCODE_INVALID_PARAMETER_VALUE instead, to be more consistent.

  Author: Dilip Kumar
  Discussion: https://postgr.es/m/CAFiTN-vH84fE-8C4zGZw4v0Wyh4Y2v=5JWg2fGE5+LPaDvz1GQ@mail.gmail.com

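  For illustration, a minimal SQL sketch of the behavior change (table and method names here are hypothetical, not from the commit):

      -- Requesting a nonexistent compression method for a column:
      CREATE TABLE tbl (a text COMPRESSION no_such_method);
      ALTER TABLE tbl ALTER COLUMN a SET COMPRESSION no_such_method;
      -- Previously failed with SQLSTATE 0A000 (feature_not_supported);
      -- now fails with SQLSTATE 22023 (invalid_parameter_value).
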
* Add a README and Makefile recipe for Gen_dummy_probes.pl (Andrew Dunstan, 2021-05-07)

  Discussion: https://postgr.es/m/20210506035602.3akutfvvojngj3nb@alap3.anarazel.de

* Fix typo (Peter Eisentraut, 2021-05-07)

* AlterSubscription_refresh: avoid stomping on global variable (Alvaro Herrera, 2021-05-07)

  This patch replaces use of the global "wrconn" variable in AlterSubscription_refresh with a local variable of the same name, making it consistent with other functions in subscriptioncmds.c (e.g. DropSubscription).

  The global wrconn is only meant to be used for logical apply/tablesync worker. Abusing it this way is known to cause trouble if an apply worker manages to do a subscription refresh, such as reported by Jeremy Finzel and diagnosed by Andres Freund back in November 2020, at https://www.postgresql.org/message-id/20201111215820.qihhrz7fayu6myfi@alap3.anarazel.de

  Backpatch to 10. In branch master, also move the connection establishment to occur outside the PG_TRY block; this way we can remove a test for NULL in PG_FINALLY, and it also makes the code more consistent with similar code in the same file.

  Author: Peter Smith <peter.b.smith@fujitsu.com>
  Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Reviewed-by: Japin Li <japinli@hotmail.com>
  Discussion: https://postgr.es/m/CAHut+Pu7Jv9L2BOEx_Z0UtJxfDevQSAUW2mJqWU+CtmDrEZVAg@mail.gmail.com

* Remove extraneous newlines added by perl copyright patch (Andrew Dunstan, 2021-05-07)

* Add a copyright notice to perl files lacking one. (Andrew Dunstan, 2021-05-07)

* Fix typos in comments about extended statistics (Tomas Vondra, 2021-05-07)

  Reported-by: Justin Pryzby
  Discussion: https://postgr.es/m/20210505210947.GA27406%40telsasoft.com

* Make pg_get_statisticsobjdef_expressions return NULL (Tomas Vondra, 2021-05-07)

  The usual behavior for functions in ruleutils.c is to return NULL when the object does not exist. pg_get_statisticsobjdef_expressions raised an error instead, so correct that.

  Reported-by: Justin Pryzby
  Discussion: https://postgr.es/m/20210505210947.GA27406%40telsasoft.com

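  A hedged sketch of the new behavior (the OID below is just a placeholder for one that does not exist):

      SELECT pg_get_statisticsobjdef_expressions(0);
      -- previously: ERROR; now: NULL, matching other ruleutils.c functions
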
* Revert per-index collation version tracking feature. (Thomas Munro, 2021-05-07)

  Design problems were discovered in the handling of composite types and record types that would cause some relevant versions not to be recorded. Misgivings were also expressed about the use of the pg_depend catalog for this purpose. We're out of time for this release so we'll revert and try again.

  Commits reverted:
  1bf946bd: Doc: Document known problem with Windows collation versions.
  cf002008: Remove no-longer-relevant test case.
  ef387bed: Fix bogus collation-version-recording logic.
  0fb0a050: Hide internal error for pg_collation_actual_version(<bad OID>).
  ff942057: Suppress "warning: variable 'collcollate' set but not used".
  d50e3b1f: Fix assertion in collation version lookup.
  f24b1569: Rethink extraction of collation dependencies.
  257836a7: Track collation versions for indexes.
  cd6f479e: Add pg_depend.refobjversion.
  7d1297df: Remove pg_collation.collversion.

  Discussion: https://postgr.es/m/CA%2BhUKGLhj5t1fcjqAu8iD9B3ixJtsTNqyCCD4V0aTO9kAKAjjA%40mail.gmail.com

* Remove redundant variable (Alvaro Herrera, 2021-05-06)

  Author: Amul Sul <sulamul@gmail.com>
  Reviewed-by: Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>
  Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/CAAJ_b94HaNcrPVREUuB9-qUn2uB+gfcoX3FG_Vx0S6aFse+yhw@mail.gmail.com

* Remove overzealous VACUUM visibility map assertion. (Peter Geoghegan, 2021-05-06)

  The all_visible_according_to_vm variable's value is inherently prone to becoming invalidated concurrently, since it is set before we even acquire a lock on a related heap page buffer.

  Oversight in commit 7136bf34, which added the assertion in passing.

  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-by: Tang <tanghy.fnst@fujitsu.com>
  Diagnosed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/CAD21AoDzgc8_MYrA5m1fyydomw_eVKtQiYh7sfDK4KEhdMsf_g@mail.gmail.com

* Track detached partitions more accurately in partdescs (Alvaro Herrera, 2021-05-06)

  In d6b8d29419df I (Álvaro) was sloppy about recording whether a partition descriptor does or does not include detached partitions, when the snapshot checking does not see the pg_inherits row marked detached. In that case no partition was omitted, yet in the relcache entry we were saving the partdesc as omitting partitions. Flip that (so we save it as a partdesc not omitting partitions, which indeed it doesn't), which hopefully makes the code easier to reason about.

  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqE7GxGU4VdzwZzfiz+Ont5SsopoFkgtrZGEdPqWRL+biA@mail.gmail.com

* Update replication statistics after every stream/spill. (Amit Kapila, 2021-05-06)

  Currently, replication slot statistics are updated at prepare, commit, and rollback, so if the transaction is interrupted the stats might not get updated. Fix this by updating replication statistics after every stream/spill.

  In passing, update the docs to change the description of some of the slot stats.

  Author: Vignesh C, Sawada Masahiko
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* jit: Fix warning reported by gcc-11 caused by dubious function signature. (Andres Freund, 2021-05-05)

  Reported-By: Erik Rijkers <er@xs4all.nl>
  Discussion: https://postgr.es/m/833107370.1313189.1619647621213@webmailclassic.xs4all.nl
  Backpatch: 13, where b059d2f45685 introduced the issue.

* Doc: update RELEASE_CHANGES checklist. (Tom Lane, 2021-05-05)

  Update checklist to reflect current practice:

  * The platform-specific FAQ files are long gone.
  * We've never routinely updated the libbind code we borrowed, either, and there seems no reason to start now.
  * Explain current practice of running pgindent twice per cycle.

  Discussion: https://postgr.es/m/4038398.1620238684@sss.pgh.pa.us

* Tighten the concurrent abort check during decoding. (Amit Kapila, 2021-05-06)

  During decoding of an in-progress or prepared transaction, we detect a concurrent abort by checking for the error code ERRCODE_TRANSACTION_ROLLBACK. That is not sufficient, because a callback can decide to throw that error code at other times as well.

  Reported-by: Tom Lane
  Author: Amit Kapila
  Reviewed-by: Dilip Kumar
  Discussion: https://postgr.es/m/CAA4eK1KCjPRS4aZHB48QMM4J8XOC1+TD8jo-4Yu84E+MjwqVhA@mail.gmail.com

* Remove unused argument of ATAddForeignConstraint (Alvaro Herrera, 2021-05-05)

  Commit 0325d7a5957b made this unused but forgot to remove it. Do so now.

  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/209c99fe-b9a2-94f4-cd68-a8304186a09e@lab.ntt.co.jp

* Have ALTER CONSTRAINT recurse on partitioned tables (Alvaro Herrera, 2021-05-05)

  When ALTER TABLE .. ALTER CONSTRAINT changes deferrability properties in a partitioned table, we failed to propagate those changes correctly to partitions and to triggers. Repair by adding a recursion mechanism to affect all derived constraints and all derived triggers. (In particular, recurse to partitions even if their respective parents are already in the desired state: it is possible for the partitions to have been altered individually.) Because foreign keys involve tables on two sides, we cannot use the standard ALTER TABLE recursion mechanism, so we invent our own by following pg_constraint.conparentid down.

  When ALTER TABLE .. ALTER CONSTRAINT is invoked on the derived pg_constraint object that's automatically created in a partition as a result of a constraint added to its parent, raise an error instead of pretending to work and then failing to modify all the affected triggers. Before this commit such a command would be allowed but failed to affect all triggers, so it would silently misbehave. (Restoring dumps of existing databases is not affected, because pg_dump does not produce anything for such a derived constraint anyway.)

  Add some tests for the case.

  Backpatch to 11, where foreign key support was added to partitioned tables by commit 3de241dba86f. (A related change is commit f56f8f8da6af in pg12 which added support for FKs *referencing* partitioned tables; this is what forces us to use an ad-hoc recursion mechanism for this.)

  Diagnosed by Tom Lane from bug report from Ron L Johnson. As of this writing, no reviews were offered.

  Discussion: https://postgr.es/m/75fe0761-a291-86a9-c8d8-4906da077469@gmail.com
  Discussion: https://postgr.es/m/3144850.1607369633@sss.pgh.pa.us

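  As a sketch of the kind of command now affected (table and constraint names are hypothetical):

      ALTER TABLE fk_parted ALTER CONSTRAINT fk_parted_ref_fkey
        DEFERRABLE INITIALLY DEFERRED;
      -- now recurses to the cloned constraints and triggers on every partition,
      -- even partitions that had previously been altered individually
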
* GUC description improvements for clarity (Peter Eisentraut, 2021-05-05)

* Fix OID passed to object-alter hook during ALTER CONSTRAINT (Alvaro Herrera, 2021-05-04)

  The OID of the constraint is used instead of the OID of the trigger -- an easy mistake to make. Apparently the object-alter hooks are not very well tested :-(

  Backpatch to 12, where this typo was introduced by 578b229718e8

  Discussion: https://postgr.es/m/20210503231633.GA6994@alvherre.pgsql

* pg_dump: Fix dump of generated columns in partitions (Peter Eisentraut, 2021-05-04)

  The previous fix for dumping of inherited generated columns (0bf83648a52df96f7c8677edbbdf141bfa0cf32b) must not be applied to partitions, since, unlike normal inherited tables, they are always dumped separately and reattached.

  Reported-by: Santosh Udupi <email@hitha.net>
  Discussion: https://www.postgresql.org/message-id/flat/CACLRvHZ4a-%2BSM_159%2BtcrHdEqxFrG%3DW4gwTRnwf7Oj0UNj5R2A%40mail.gmail.com

* Fix ALTER TABLE / INHERIT with generated columns (Peter Eisentraut, 2021-05-04)

  When running ALTER TABLE t2 INHERIT t1, we must check that columns in t2 that correspond to a generated column in t1 are also generated and have the same generation expression. Otherwise, this would allow creating setups that a normal CREATE TABLE sequence would not allow.

  Discussion: https://www.postgresql.org/message-id/22de27f6-7096-8d96-4619-7b882932ca25@2ndquadrant.com

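  A minimal sketch of a now-rejected setup, following the t1/t2 naming above (the column definitions and expression are hypothetical):

      CREATE TABLE t1 (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
      CREATE TABLE t2 (a int, b int);   -- b is not generated
      ALTER TABLE t2 INHERIT t1;        -- now raises an error instead of being accepted
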
* Update query_id computation (Bruce Momjian, 2021-05-03)

  Properly fix:
  - the "ONLY" in FROM [ONLY] isn't hashed
  - the agglevelsup field in GROUPING isn't hashed
  - WITH TIES not being hashed (new in PG 13)
  - "DISTINCT" in "GROUP BY [DISTINCT]" isn't hashed (new in PG 14)

  Reported-by: Julien Rouhaud
  Discussion: https://postgr.es/m/20210425081119.ulyzxqz23ueh3wuj@nol

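  For illustration, one of the cases listed above (some_table is a hypothetical name): with the fix, these two queries hash to different query IDs, e.g. as surfaced by pg_stat_statements, whereas previously the ONLY was ignored.

      SELECT * FROM ONLY some_table;
      SELECT * FROM some_table;
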
* Fix performance issue in new regex match-all detection code. (Tom Lane, 2021-05-03)

  Commit 824bf7190 introduced a new search of the NFAs generated by regex compilation. I failed to think hard about the performance characteristics of that search, with the predictable outcome that it's bad: weird regexes can trigger exponential search time. Worse, there's no check-for-interrupt in that code, so you can't even cancel the query if this happens.

  Fix by introducing memo-ization of the search results, so that any one NFA state need be examined in detail just once. This potentially uses a lot of memory, but we can bound the memory usage by putting a limit on the number of states for which we'll try to prove match-all-ness. That is sane because we already have a limit (DUPINF) on the maximum finite string length that a matchall regex can match; and patterns that involve much more than DUPINF states would probably exceed that limit anyway.

  Also, rearrange the logic so that we check the basic is-the-graph-all-RAINBOW-arcs property before we start the recursive search to determine path lengths. This will ensure that we fall out quickly whenever the NFA couldn't possibly be matchall.

  Also stick in a check-for-interrupt, just in case these measures don't completely eliminate the risk of slowness.

  Discussion: https://postgr.es/m/3483895.1619898362@sss.pgh.pa.us

* Prevent lwlock dtrace probes from unnecessary work (Peter Eisentraut, 2021-05-03)

  If dtrace is compiled in but disabled, the lwlock dtrace probes still evaluate their arguments. Since PostgreSQL 13, T_NAME(lock) does nontrivial work, so it should be avoided if not needed. To fix, make these calls conditional on the *_ENABLED() macro corresponding to each probe.

  Reviewed-by: Craig Ringer <craig.ringer@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/CAGRY4nwxKUS_RvXFW-ugrZBYxPFFM5kjwKT5O+0+Stuga5b4+Q@mail.gmail.com

* Remove unused function argument (Peter Eisentraut, 2021-05-03)

  The argument became unused by 04942bffd0aa9bd0d143d99b473342eb9ecee88b.

* libpq: Refactor some error messages for easier translation (Peter Eisentraut, 2021-05-03)

* Factor out system call names from error messages (Peter Eisentraut, 2021-05-03)

  One more that ought to have been part of 82c3cd974131d7fa1cfcd07cebfb04fffe26ee35.

* Fix the computation of slot stats for 'total_bytes'. (Amit Kapila, 2021-05-03)

  Previously, we were using the size of all the changes present in ReorderBuffer to compute total_bytes after decoding a transaction, and that can lead to counting some of the transactions' changes more than once. Fix it by using the size of the changes decoded for a transaction to compute 'total_bytes'.

  Author: Sawada Masahiko
  Reviewed-by: Vignesh C, Amit Kapila
  Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

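  The statistic in question can be inspected like this (a sketch; column names assume the pg_stat_replication_slots view as it stands in this development cycle):

      SELECT slot_name, total_txns, total_bytes FROM pg_stat_replication_slots;
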
* Make websearch_to_tsquery() parse text in quotes as a single token (Alexander Korotkov, 2021-05-03)

  websearch_to_tsquery() splits text in quotes into tokens and connects them with phrase operators on its own. However, that leads to surprising results when the token contains no words.

  For instance, websearch_to_tsquery('"aaa: bbb"') is 'aaa <2> bbb', because it is equivalent to to_tsquery(E'aaa <-> \':\' <-> bbb'). But websearch_to_tsquery('"aaa: bbb"') has to be 'aaa <-> bbb' in order to match to_tsvector('aaa: bbb').

  Since 0c4f355c6a, we connect lexemes of complex tokens with phrase operators anyway. Thus, let's just make websearch_to_tsquery() parse text in quotes as a single token. Therefore, websearch_to_tsquery() should process the quoted text in the same way phraseto_tsquery() does. This solution is exactly what we need and also simplifies the code.

  This commit is an incompatible change, so we don't backpatch it.

  Reported-by: Valentin Gatien-Baron
  Discussion: https://postgr.es/m/CA%2B0DEqiZs7gdOd4ikmg%3D0UWG%2BSwWOLxPsk_JW-sx9WNOyrb0KQ%40mail.gmail.com
  Author: Alexander Korotkov
  Reviewed-by: Tom Lane, Zhihong Yu

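  Restating the example above as a query (the before/after outputs are taken from the message):

      SELECT websearch_to_tsquery('"aaa: bbb"');
      -- before: 'aaa <2> bbb'
      -- after:  'aaa <-> bbb', which matches to_tsvector('aaa: bbb')
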
* Revert use singular for -1 (commits 9ee7d533da and 5da9868ed9) (Bruce Momjian, 2021-05-01)

  Turns out you can specify negative values using plurals:

    https://english.stackexchange.com/questions/9735/is-1-followed-by-a-singular-or-plural-noun

  so the previous code was correct enough, and consistent with other usage in our code. Also add a comment in the two places where this could be confused.

  Reported-by: Noah Misch
  Diagnosed-by: 20210425115726.GA2353095@rfd.leadboat.com

* Disallow calling anything but plain functions via the fastpath API. (Tom Lane, 2021-04-30)

  Reject aggregates, window functions, and procedures. Aggregates failed anyway, though with a somewhat obscure error message. Window functions would hit an Assert or null-pointer dereference. Procedures seemed to work as long as you didn't try to do transaction control, but (a) transaction control is sort of the point of a procedure, and (b) it's not entirely clear that no bugs lurk in that path. Given the lack of testing of this area, it seems safest to be conservative in what we support.

  Also reject proretset functions, as the fastpath protocol can't support returning a set.

  Also remove an easily-triggered assertion that the given OID isn't 0; the subsequent lookups can handle that case themselves.

  Per report from Theodor-Arsenij Larionov-Trichkin. Back-patch to all supported branches. (The procedure angle only applies in v11+, of course.)

  Discussion: https://postgr.es/m/2039442.1615317309@sss.pgh.pa.us

* Fix the bugs in selecting the transaction for streaming. (Amit Kapila, 2021-04-30)

  There were two problems:
  a. We were always selecting the next available txn instead of selecting it when it is larger than the previous transaction.
  b. We were selecting the transactions which haven't made any changes to the database (base snapshot is not set). Later it was hitting an Assert because we don't decode such transactions and the changes in txn remain as it is. It is better not to choose such transactions for streaming in the first place.

  Reported-by: Haiying Tang
  Author: Dilip Kumar
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/OS0PR01MB61133B94E63177040F7ECDA1FB429@OS0PR01MB6113.jpnprd01.prod.outlook.com

* Adjust EXPLAIN output for parallel Result Cache plans (David Rowley, 2021-04-30)

  Here we adjust the EXPLAIN ANALYZE output for Result Cache so that we don't show any Result Cache stats for parallel workers who don't contribute anything to Result Cache plan nodes.

  I originally had ideas that workers who don't help could still have their Result Cache stats displayed. The idea with that was so that I could write some parallel Result Cache regression tests that show the EXPLAIN ANALYZE output. However, I realized a little too late that such tests would just not be possible to have run in a stable way on the buildfarm.

  With that knowledge, before 9eacee2e6 went in, I had removed all of the tests that were showing the EXPLAIN ANALYZE output of a parallel Result Cache plan, however, I forgot to put back the code that adjusts the EXPLAIN output to hide the Result Cache stats for parallel workers who were not fast enough to help out before query execution was over. All other nodes behave this way and so should Result Cache.

  Additionally, with this change, it now seems safe enough to remove the SET force_parallel_mode = off that I had added to the regression tests.

  Also, perform some cleanup in the partition_prune tests. I had adjusted the explain_parallel_append() function to sanitize the Result Cache EXPLAIN ANALYZE output. However, since I didn't actually include any parallel Result Cache tests that show their EXPLAIN ANALYZE output, that code does nothing and can be removed.

  In passing, move the setting of memPeakKb into the scope where it's used.

  Reported-by: Amit Khandekar
  Author: David Rowley, Amit Khandekar
  Discussion: https://postgr.es/m/CAJ3gD9d8SkfY95GpM1zmsOtX2-Ogx5q-WLsf8f0ykEb0hCRK3w@mail.gmail.com

* Improve wording of some pg_upgrade failure reports. (Tom Lane, 2021-04-29)

  Don't advocate dropping a whole table when dropping a column would serve. While at it, try to make the layout of these messages a bit cleaner and more consistent.

  Per suggestion from Daniel Gustafsson. No back-patch, as we generally don't like to churn translatable messages in released branches.

  Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us

* Fix some more omissions in pg_upgrade's tests for non-upgradable types. (Tom Lane, 2021-04-29)

  Commits 29aeda6e4 et al closed up some oversights involving not checking for non-upgradable types within container types, such as arrays and ranges. However, I only looked at version.c, failing to notice that there were substantially-equivalent tests in check.c. (The division of responsibility between those files is less than clear...)

  In addition, because genbki.pl does not guarantee that auto-generated rowtype OIDs will hold still across versions, we need to consider that the composite type associated with a system catalog or view is non-upgradable. It seems unlikely that someone would have a user column declared that way, but if they did, trying to read it in another PG version would likely draw "no such pg_type OID" failures, thanks to the type OID embedded in composite Datums.

  To support the composite and reg*-type cases, extend the recursive query that does the search to allow any base query that returns a column of pg_type OIDs, rather than limiting it to exactly one starting type.

  As before, back-patch to all supported branches.

  Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us

* psql: Fix line continuation prompts for unbalanced parentheses (Peter Eisentraut, 2021-04-29)

  This was broken by a silly mistake in e717a9a18b2e34c9c40e5259ad4d31cd7e420750.

  Reported-by: Jeff Janes <jeff.janes@gmail.com>
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://www.postgresql.org/message-id/CAMkU=1zKGWEJdBbYKw7Tn7cJmYR_UjgdcXTPDqJj=dNwCETBCQ@mail.gmail.com

* pg_hba.conf.sample: Reword connection type section (Peter Eisentraut, 2021-04-29)

  Improve the wording in the connection type section of pg_hba.conf.sample a bit. After the hostgssenc part was added on, the whole thing became a bit wordy, and it's also a bit inaccurate, for example in that the current wording for "host" appears to say that it does not apply to GSS-encrypted connections.

  Discussion: https://www.postgresql.org/message-id/flat/fc06dcc5-513f-e944-cd07-ba51dd7c6916%40enterprisedb.com

* Add heuristic incoming-message-size limits in the server. (Tom Lane, 2021-04-28)

  We had a report of confusing server behavior caused by a client bug that sent junk to the server: the server thought the junk was a very long message length and waited patiently for data that would never come. We can reduce the risk of that by being less trusting about message lengths.

  For a long time, libpq has had a heuristic rule that it wouldn't believe large message size words, except for a small number of message types that are expected to be (potentially) long. This provides some defense against loss of message-boundary sync and other corrupted-data cases. The server does something similar, except that up to now it only limited the lengths of messages received during the connection authentication phase. Let's do the same as in libpq and put restrictions on the allowed length of all messages, while distinguishing between message types that are expected to be long and those that aren't.

  I used a limit of 10000 bytes for non-long messages. (libpq's corresponding limit is 30000 bytes, but given the asymmetry of the FE/BE protocol, there's no good reason why the numbers should be the same.) Experimentation suggests that this is at least a factor of 10, maybe a factor of 100, more than we really need; but plenty of daylight seems desirable to avoid false positives. In any case we can adjust the limit based on beta-test results.

  For long messages, set a limit of MaxAllocSize - 1, which is the most that we can absorb into the StringInfo buffer that the message is collected in. This just serves to make sure that a bogus message size is reported as such, rather than as a confusing gripe about not being able to enlarge a string buffer.

  While at it, make sure that non-mainline code paths (such as COPY FROM STDIN) are as paranoid as SocketBackend is, and validate the message type code before believing the message length. This provides an additional guard against getting stuck on corrupted input.

  Discussion: https://postgr.es/m/2003757.1619373089@sss.pgh.pa.us

* Allow a partdesc-omitting-partitions to be cached (Alvaro Herrera, 2021-04-28)

  Makes partition descriptor acquisition faster during the transient period in which a partition is in the process of being detached.

  This also adds the restriction that only one partition can be in pending-detach state for a partitioned table.

  While at it, return the find_inheritance_children() API to what it was before 71f4c8c6f74b, and create a separate find_inheritance_children_extended() that returns detailed info about detached partitions.

  (This incidentally fixes a bug in 8aba9322511 whereby a memory context holding a transient partdesc is reparented to a NULL PortalContext, leading to permanent leak of that memory. The fix is to no longer rely on reparenting contexts to PortalContext. Reported by Amit Langote.)

  Per gripe from Amit Langote
  Discussion: https://postgr.es/m/CA+HiwqFgpP1LxJZOBYGt9rpvTjXXkg5qG2+Xch2Z1Q7KrqZR1A@mail.gmail.com

* Fix use-after-release issue with pg_identify_object_as_address() (Michael Paquier, 2021-04-28)

  Spotted by buildfarm member prion, with -DRELCACHE_FORCE_RELEASE. Introduced in f7aab36.

  Discussion: https://postgr.es/m/2759018.1619577848@sss.pgh.pa.us
  Backpatch-through: 9.6

* Fix pg_identify_object_as_address() with event triggers (Michael Paquier, 2021-04-28)

  Attempting to use this function with event triggers failed, as, since its introduction in a676201, this code has never associated an object name with event triggers. This addresses the failure by adding the event trigger name to the set defining its object address.

  Note that regression tests are added within event_trigger and not object_address to avoid issues with concurrent connections in parallel schedules.

  Author: Joel Jacobson
  Discussion: https://postgr.es/m/3c905e77-a026-46ae-8835-c3f6cd1d24c8@www.fastmail.com
  Backpatch-through: 9.6

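  A hedged sketch of exercising the fixed case, assuming at least one event trigger exists:

      SELECT a.type, a.object_names, a.object_args
      FROM pg_event_trigger t,
           LATERAL pg_identify_object_as_address('pg_event_trigger'::regclass, t.oid, 0) AS a;
      -- object_names now includes the event trigger's name instead of being empty
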
* Improve logic in PostgresVersion.pm (Andrew Dunstan, 2021-04-27)

  Handle the situation where perl swaps the order of operands of the comparison operator. See `perldoc overload` for details:

    The third argument is set to TRUE if (and only if) the two operands have been swapped. Perl may do this to ensure that the first argument ($self) is an object implementing the overloaded operation, in line with general object calling conventions.

* Don't pass "ONLY" options specified in TRUNCATE to foreign data wrapper.Fujii Masao2021-04-27
| | | | | | | | | | | | | | | | | | | | | | Commit 8ff1c94649 allowed TRUNCATE command to truncate foreign tables. Previously the information about "ONLY" options specified in TRUNCATE command were passed to the foreign data wrapper. Then postgres_fdw constructed the TRUNCATE command to issue the remote server and included "ONLY" options in it based on the passed information. On the other hand, "ONLY" options specified in SELECT, UPDATE or DELETE have no effect when accessing or modifying the remote table, i.e., are not passed to the foreign data wrapper. So it's inconsistent to make only TRUNCATE command pass the "ONLY" options to the foreign data wrapper. Therefore this commit changes the TRUNCATE command so that it doesn't pass the "ONLY" options to the foreign data wrapper, for the consistency with other statements. Also this commit changes postgres_fdw so that it always doesn't include "ONLY" options in the TRUNCATE command that it constructs. Author: Fujii Masao Reviewed-by: Bharath Rupireddy, Kyotaro Horiguchi, Justin Pryzby, Zhihong Yu Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
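  For illustration (ft is a hypothetical postgres_fdw foreign table, not from the commit):

      TRUNCATE ft;
      -- postgres_fdw now always builds the remote TRUNCATE without "ONLY",
      -- and the "ONLY" information is no longer handed to the FDW at all,
      -- matching how SELECT/UPDATE/DELETE on foreign tables already behave
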
* Use HTAB for replication slot statistics. (Amit Kapila, 2021-04-27)

  Previously, we used an array of size max_replication_slots to store stats for replication slots. But that had two problems in the cases where a message for dropping a slot gets lost: 1) the stats for the new slot are not recorded if the array is full and 2) we could write beyond the end of the array if the user reduces max_replication_slots.

  This commit uses HTAB for replication slot statistics, resolving both problems. Now, pgstat_vacuum_stat() searches for all the dead replication slots in the stats hashtable and tells the collector to remove them. To avoid showing the stats for already-dropped slots, the pg_stat_replication_slots view searches slot stats by the slot name taken from pg_replication_slots.

  Also, we send a message for creating a slot at slot creation, initializing the stats. This reduces the possibility that the stats are accumulated into the old slot stats when a message for dropping a slot gets lost.

  Reported-by: Andres Freund
  Author: Sawada Masahiko, test case by Vignesh C
  Reviewed-by: Amit Kapila, Vignesh C, Dilip Kumar
  Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de

* Fix Logical Replication of Truncate in synchronous commit mode. (Amit Kapila, 2021-04-27)

  The Truncate operation acquires an exclusive lock on the target relation and indexes. It then waits for logical replication of the operation to finish at commit. Now because we are acquiring the shared lock on the target index to get index attributes in pgoutput while sending the changes for the Truncate operation, it leads to a deadlock.

  Actually, we don't need to acquire a lock on the target index as we build the cache entry using a historic snapshot and all the later changes are absorbed while decoding WAL. So, we wrote a special purpose function for logical replication to get a bitmap of replica identity attribute numbers where we get that information without locking the target index.

  We decided not to backpatch this as there doesn't seem to be any field complaint about this issue since it was introduced in commit 5dfd1e5a in v11.

  Reported-by: Haiying Tang
  Author: Takamichi Osumi, test case by Li Japin
  Reviewed-by: Amit Kapila, Ajin Cherian
  Discussion: https://postgr.es/m/OS0PR01MB6113C2499C7DC70EE55ADB82FB759@OS0PR01MB6113.jpnprd01.prod.outlook.com

* psql: tab-complete ALTER ... DETACH CONCURRENTLY / FINALIZE (Alvaro Herrera, 2021-04-26)

  New keywords per 71f4c8c6f74b.

  Discussion: https://postgr.es/m/20210422204035.GA25929@alvherre.pgsql

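  The newly tab-completed syntax looks like this (table and partition names are hypothetical):

      ALTER TABLE parent DETACH PARTITION part1 CONCURRENTLY;
      ALTER TABLE parent DETACH PARTITION part1 FINALIZE;
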