path: root/src/backend
Commit message (Author, Date)
* Fix an oversight in commit 4c70098ff.  (Tom Lane, 2020-01-23)

I had supposed that the from_char_seq_search() call sites were all passing the constant arrays you'd expect them to pass ... but on looking closer, the one for DY format was passing the days[] array not days_short[]. This accidentally worked because the day abbreviations in English are all the same as the first three letters of the full day names. However, once we took out the "maximum comparison length" logic, it stopped working.

As penance for that oversight, add regression test cases covering this, as well as every other switch case in DCH_from_char() that was not reached according to the code coverage report.

Also, fold the DCH_RM and DCH_rm cases into one --- now that seq_search is case independent, there's no need to pass different comparison arrays for those cases.

Back-patch, as the previous commit was.
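
For illustration, a query of the kind exercised by the new regression tests (a minimal sketch; the input values are made up):

    SELECT to_timestamp('TUE 21 01 2020', 'DY DD MM YYYY');
    -- the DY field is now matched against days_short[], the abbreviated
    -- day names, rather than accidentally working via days[]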
* Clean up formatting.c's logic for matching constant strings.  (Tom Lane, 2020-01-23)

seq_search(), which is used to match input substrings to constants such as month and day names, had a lot of bizarre and unnecessary behaviors. It was mostly possible to avert our eyes from that before, but we don't want to duplicate those behaviors in the upcoming patch to allow recognition of non-English month and day names. So it's time to clean this up. In particular:

* seq_search scribbled on the input string, which is a pretty dangerous thing to do, especially in the badly underdocumented way it was done here. Fortunately the input string is a temporary copy, but that was being made three subroutine levels away, making it something easy to break accidentally. The behavior is externally visible nonetheless, in the form of odd case-folding in error reports about unrecognized month/day names. The scribbling is evidently being done to save a few calls to pg_tolower, but that's such a cheap function (at least for ASCII data) that it's pretty pointless to worry about. In HEAD I switched it to be pg_ascii_tolower to ensure it is cheap in all cases; but there are corner cases in Turkish where this'd change behavior, so leave it as pg_tolower in the back branches.

* seq_search insisted on knowing the case form (all-upper, all-lower, or initcap) of the constant strings, so that it didn't have to case-fold them to perform case-insensitive comparisons. This likewise seems like excessive micro-optimization, given that pg_tolower is certainly very cheap for ASCII data. It seems unsafe to assume that we know the case form that will come out of pg_locale.c for localized month/day names, so it's better just to define the comparison rule as "downcase all strings before comparing". (The choice between downcasing and upcasing is arbitrary so far as English is concerned, but it might not be in other locales, so follow citext's lead here.)

* seq_search also had a parameter that'd cause it to report a match after a maximum number of characters, even if the constant string were longer than that. This was not actually used because no caller passed a value small enough to cut off a comparison. Replicating that behavior for localized month/day names seems expensive as well as useless, so let's get rid of that too.

* from_char_seq_search used the maximum-length parameter to truncate the input string in error reports about not finding a matching name. This leads to rather confusing reports in many cases. Worse, it is outright dangerous if the input string isn't all-ASCII, because we risk truncating the string in the middle of a multibyte character. That'd lead either to delivering an illegible error message to the client, or to encoding-conversion failures that obscure the actual data problem. Get rid of that in favor of truncating at whitespace if any (a suggestion due to Alvaro Herrera).

In addition to fixing these things, I const-ified the input string pointers of DCH_from_char and its subroutines, to make sure there aren't any other scribbling-on-input problems.

The risk of generating a badly-encoded error message seems like enough of a bug to justify back-patching, so patch all supported branches.

Discussion: https://postgr.es/m/29432.1579731087@sss.pgh.pa.us
* Clarify some comments in vacuumlazy.c  (Michael Paquier, 2020-01-23)

Author: Justin Pryzby
Discussion: https://postgr.es/m/20200113004542.GA26045@telsasoft.com
* Add GUC ignore_invalid_pages.  (Fujii Masao, 2020-01-22)

Detection of WAL records having references to invalid pages during recovery causes PostgreSQL to raise a PANIC-level error, aborting the recovery. Setting ignore_invalid_pages to on causes the system to ignore those WAL records (but still report a warning), and continue recovery. This behavior may cause crashes or data loss, propagate or hide corruption, or create other serious problems. However, it may allow you to get past the PANIC-level error, finish the recovery, and let the server start up.

Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAHGQGwHCK6f77yeZD4MHOnN+PaTf6XiJfEB+Ce7SksSHjeAWtg@mail.gmail.com
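
A sketch of how one might set this as a last resort, in postgresql.conf before attempting recovery (a guess at typical usage, not taken from the commit):

    ignore_invalid_pages = on    # dangerous: may cause data loss or hide corruption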
* Fix the computation of max dead tuples during vacuum.  (Amit Kapila, 2020-01-22)

In commit 40d964ec99, we changed the way memory is allocated for dead tuples but forgot to update the place where we compute the maximum number of dead tuples. This could lead to invalid memory requests.

Reported-by: Andres Freund
Diagnosed-by: Andres Freund
Author: Masahiko Sawada
Reviewed-by: Amit Kapila and Dilip Kumar
Discussion: https://postgr.es/m/20200121060020.e3cr7s7fj5rw4lok@alap3.anarazel.de
* Fix concurrent indexing operations with temporary tables  (Michael Paquier, 2020-01-22)

Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY on a temporary relation with ON COMMIT actions triggered unexpected errors because those operations use multiple transactions internally to complete their work. For example, here is one confusing error when using ON COMMIT DELETE ROWS:

    ERROR: index "foo" already contains data

Issues related to temporary relations and concurrent indexing are fixed in this commit by enforcing the non-concurrent path for temporary relations even if CONCURRENTLY is specified, transparently to the user. Using a non-concurrent path does not matter in practice, as locks cannot be taken on a temporary relation by a session different from the one owning the relation, and the non-concurrent operation is more efficient.

The problem exists with REINDEX since v12 with the introduction of CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for those commands. In all supported versions, this caused only confusing error messages to be generated. Note that with REINDEX, it was also possible to issue a REINDEX CONCURRENTLY for a temporary relation owned by a different session, leading to a server crash.

The idea to enforce transparently the non-concurrent code path for temporary relations comes originally from Andres Freund.

Reported-by: Manuel Rigger
Author: Michael Paquier, Heikki Linnakangas
Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas
Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com
Backpatch-through: 9.4
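
For illustration, a minimal reproducer of the kind described above (object names hypothetical):

    CREATE TEMP TABLE tmp_tab (a int) ON COMMIT DELETE ROWS;
    CREATE INDEX CONCURRENTLY tmp_idx ON tmp_tab (a);
    -- previously failed in a confusing way; now transparently takes
    -- the non-concurrent path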
* Clarify behavior of adding and altering a column in same ALTER command.  (Tom Lane, 2020-01-21)

The behavior of something like

    ALTER TABLE transactions
      ADD COLUMN status varchar(30) DEFAULT 'old',
      ALTER COLUMN status SET default 'current';

is to fill existing table rows with 'old', not 'current'. That's intentional and desirable for a couple of reasons:

* It makes the behavior the same whether you merge the sub-commands into one ALTER command or give them separately;

* If we applied the new default while filling the table, there would be no way to get the existing behavior in one SQL command.

The same reasoning applies in cases that add a column and then manipulate its GENERATED/IDENTITY status in a second sub-command, since the generation expression is really just a kind of default. However, that wasn't very obvious (at least not to me; earlier in the referenced discussion thread I'd thought it was a bug to be fixed). And it certainly wasn't documented.

Hence, add documentation, code comments, and a test case to clarify that this behavior is all intentional.

In passing, adjust ATExecAddColumn's defaults-related relkind check so that it matches up exactly with ATRewriteTables, instead of being effectively (though not literally) the negated inverse condition. The reasoning can be explained a lot more concisely that way, too (not to mention that the comment now matches the code, which it did not before).

Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
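
For illustration, the resulting behavior in a sketch (assuming the example table above):

    SELECT DISTINCT status FROM transactions;   -- pre-existing rows show 'old'
    INSERT INTO transactions DEFAULT VALUES;    -- new rows get 'current'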
* Fix edge case leading to agg transitions skipping ExecAggTransReparent() calls.  (Andres Freund, 2020-01-20)

The code checking whether an aggregate transition value needs to be reparented into the current context has always only compared the transition return value with the previous transition value by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value. But there's no hard requirement that that's the case. And it turns out, it's possible to hit this case (see discussion or reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that.

Instead of adding another NULLness comparison, have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids having to add an additional branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case), and of the new value being different, but not NULL.

In branches since 69c3936a149, also deduplicate the reparenting code between the expression evaluation based transitions, and the path for ordered aggregates.

Reported-By: Teodor Sigaev, Nikita Glukhov
Author: Andres Freund
Discussion: https://postgr.es/m/bd34e930-cfec-ea9b-3827-a8bc50891393@sigaev.ru
Backpatch: 9.4-, this issue has existed since at least 7.4
* Further tweaking of jsonb_set_lax().  (Tom Lane, 2020-01-20)

Some buildfarm members were still warning about this, because in 9c679a08f I'd missed decorating one of the ereport() code paths with a dummy return.

Also, adjust the error messages to be more in line with the project style guide.
* Fix crash in BRIN inclusion op functions, due to missing datum copy.  (Heikki Linnakangas, 2020-01-20)

The BRIN add_value() and union() functions need to make a longer-lived copy of the argument, if they want to store it in the BrinValues struct also passed as argument. The functions for the "inclusion operator classes" used with box, range and inet types didn't take into account that the union helper function might return its argument as is, without making a copy. Check for that case, and make a copy if necessary. That case arises at least with the range_union() function, when one of the arguments is an 'empty' range:

    CREATE TABLE brintest (n numrange);
    CREATE INDEX brinidx ON brintest USING brin (n);
    INSERT INTO brintest VALUES ('empty');
    INSERT INTO brintest VALUES (numrange(0, 2^1000::numeric));
    INSERT INTO brintest VALUES ('(-1, 0)');

    SELECT brin_desummarize_range('brinidx', 0);
    SELECT brin_summarize_range('brinidx', 0);

Backpatch down to 9.5, where BRIN was introduced.

Discussion: https://www.postgresql.org/message-id/e6e1d6eb-0a67-36aa-e779-bcca59167c14%40iki.fi
Reviewed-by: Emre Hasegeli, Tom Lane, Alvaro Herrera
* Allow vacuum command to process indexes in parallel.  (Amit Kapila, 2020-01-20)

This feature allows the vacuum to leverage multiple CPUs in order to process indexes. This enables us to perform index vacuuming and index cleanup with background workers. This adds a PARALLEL option to the VACUUM command where the user can specify the number of workers that can be used to perform the command, which is limited by the number of indexes on a table. Specifying zero as the number of workers will disable parallelism. This option can't be used with the FULL option.

Each index is processed by at most one vacuum process. Therefore parallel vacuum can be used when the table has at least two indexes.

The parallel degree is either specified by the user or determined based on the number of indexes that the table has, and further limited by max_parallel_maintenance_workers. An index can participate in parallel vacuum iff its size is greater than min_parallel_index_scan_size.

Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Robert Haas, Tomas Vondra, Mahendra Singh and Sergei Kornilov
Tested-by: Mahendra Singh and Prabhat Sahu
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1J-VoR9gzS5E75pcD-OH0mEyCdp8RihcwKrcuw7J-Q0+w@mail.gmail.com
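
For illustration, the new option in use (a sketch; the table name is hypothetical):

    VACUUM (PARALLEL 2) mytab;   -- up to 2 workers process the table's indexes
    VACUUM (PARALLEL 0) mytab;   -- disable parallel index processing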
* Silence minor compiler warnings.  (Tom Lane, 2020-01-19)

Ensure that ClassifyUtilityCommandAsReadOnly() has defined behavior even if TransactionStmt.kind has a value that's not one of the declared values for its enum.

Suppress warnings from compilers that don't know that elog(ERROR) doesn't return, in ClassifyUtilityCommandAsReadOnly() and jsonb_set_lax().

Per Coverity and buildfarm.
* Remove separate files for the initial contents of pg_(sh)description  (Heikki Linnakangas, 2020-01-19)

This data was only in separate files because it was the most convenient way to handle it with a shell script. Now that we use a general-purpose programming language, it's easy to assemble the data into the same format as the rest of the catalogs and output it into postgres.bki. This allows removal of some special-purpose code from initdb.c.

Discussion: https://www.postgresql.org/message-id/CACPNZCtVFtjHre6hg9dput0qRPp39pzuyA2A6BT8wdgrRy%2BQdA%40mail.gmail.com
Author: John Naylor
* Add GUC checks for ssl_min_protocol_version and ssl_max_protocol_version  (Michael Paquier, 2020-01-18)

Mixing incorrect bounds set in the SSL context leads to confusing error messages generated by OpenSSL which are hard to act on. New checks are added within the GUC machinery to improve the user experience, as they apply to any SSL implementation, not only OpenSSL, and doing the checks beforehand avoids the creation of an SSL context during a reload (or startup) which we know will never be used anyway.

Backpatch down to 12, as those parameters were introduced by e73e67c.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200114035420.GE1515@paquier.xyz
Backpatch-through: 12
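
For illustration, a bound combination the new checks reject (a sketch; exactly where the error surfaces during the reload path is a detail not covered by the commit message):

    ALTER SYSTEM SET ssl_min_protocol_version = 'TLSv1.2';
    ALTER SYSTEM SET ssl_max_protocol_version = 'TLSv1.1';
    SELECT pg_reload_conf();
    -- reported as an invalid min/max combination instead of a
    -- confusing OpenSSL error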
* Avoid full scan of GIN indexes when possible  (Alexander Korotkov, 2020-01-18)

The strategy of a GIN index scan is driven by the opclass-specific extract_query method. This method may request the search mode GIN_SEARCH_MODE_ALL, which means that a matching tuple may contain none of the extracted entries. A simple example is the '!term' tsquery, which doesn't need any term to exist in a matching tsvector.

In order to handle such a scan key, GIN calculates a virtual entry containing the TIDs of all entries of the attribute. In effect this is a full scan of the index attribute, which is typically very slow but allows GIN to handle some queries correctly. However, the current algorithm calculates such a virtual entry for every GIN_SEARCH_MODE_ALL scan key, even if there are several of them for the same attribute. This is clearly not optimal.

This commit improves the situation by introducing "exclude only" scan keys. Such scan keys cannot return a set of matching TIDs on their own; they can only filter TIDs produced by normal scan keys. Therefore, each attribute must have at least one normal scan key, while the rest may be "exclude only" if their search mode is GIN_SEARCH_MODE_ALL.

The same optimization might be applied to the whole scan, not per-attribute. But that leads to a NULL-values-elimination problem, and there is a trade-off between multiple possible ways to handle it. We probably want to do this later using some cost-based decision algorithm.

Discussion: https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane
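
For illustration, a query shape that benefits (a sketch; table and index names are hypothetical):

    CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);
    SELECT * FROM docs WHERE tsv @@ to_tsquery('foo & !bar');
    -- 'foo' is a normal scan key; '!bar' requires GIN_SEARCH_MODE_ALL and
    -- can now act as an "exclude only" key that merely filters 'foo' matches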
* Repair more failures with SubPlans in multi-row VALUES lists.  (Tom Lane, 2020-01-17)

Commit 9b63c13f0 turns out to have been fundamentally misguided: the parent node's subPlan list is by no means the only way in which a child SubPlan node can be hooked into the outer execution state. As shown in bug #16213 from Matt Jibson, we can also get short-lived tuple table slots added to the outer es_tupleTable list. At this point I have little faith that there aren't other possible connections as well; the long time it took to notice this problem shows that this isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a NULL parent plan pointer down to the transiently-built subexpressions. That gives us a pretty good guarantee that they won't hook into the outer executor state in any way. But then we need some other solution to make SubPlans work. Adopt the solution speculated about in the previous commit's log message: do expression initialization at plan startup for just those VALUES rows containing SubPlans, abandoning the goal of reclaiming memory intra-query for those rows. In practice it seems unlikely that queries containing a vast number of VALUES rows would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior commit that the issue only arises with use of LATERAL. That was just wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ecf23@postgresql.org
* Set ReorderBufferTXN->final_lsn more eagerly  (Alvaro Herrera, 2020-01-17)

... specifically, set it incrementally as each individual change is spilled down to disk. This way, it is set correctly when the transaction disappears without trace, ie. without leaving an XACT_ABORT wal record. (This happens when the server crashes midway through a transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from working, since it needs the final_lsn in order to know the endpoint of its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set the final_lsn in all cases. Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: https://postgr.es/m/CALDaNm2CLk+K9JDwjYST0sPbGg5AQdvhUt0jbKyX_HdAE0jk3A@mail.gmail.com
* Allocate freechunks bitmap as part of SlabContext  (Tomas Vondra, 2020-01-17)

The bitmap used by SlabCheck to cross-check free chunks in a block used to be allocated for each SlabCheck call, and was never freed. The memory leak could be fixed by simply adding a pfree call, but it's actually a bad idea to do any allocations in SlabCheck at all, as it assumes the state of the memory management as a whole is sane. So instead we allocate the bitmap as part of SlabContext, which means we don't need to do any allocations in SlabCheck and the bitmap goes away together with the SlabContext.

Backpatch to 10, where the Slab context was introduced.

Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de
* Add a non-strict version of jsonb_set  (Andrew Dunstan, 2020-01-17)

jsonb_set_lax() is the same as jsonb_set, except that it takes an extra argument that specifies what to do if the value argument is NULL. The default is 'use_json_null'. Other possibilities are 'raise_exception', 'return_target' and 'delete_key', all these behaviours having been suggested as reasonable by various users.

Discussion: https://postgr.es/m/375873e2-c957-3a8d-64f9-26c43c2b16e7@2ndQuadrant.com
Reviewed by: Pavel Stehule
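
For illustration, a usage sketch (argument order follows the description above):

    SELECT jsonb_set_lax('{"a": 1}', '{a}', NULL);                         -- stores JSON null
    SELECT jsonb_set_lax('{"a": 1}', '{a}', NULL, true, 'delete_key');     -- drops key "a"
    SELECT jsonb_set_lax('{"a": 1}', '{a}', NULL, true, 'return_target');  -- input unchanged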
* Move OpenSSL routines for min/max protocol setting to src/common/  (Michael Paquier, 2020-01-17)

Two routines have been added in OpenSSL 1.1.0 to set the protocol bounds allowed within a given SSL context:
- SSL_CTX_set_min_proto_version
- SSL_CTX_set_max_proto_version

As Postgres supports OpenSSL down to 1.0.1 (as of HEAD), equivalent replacements exist in the tree, which are only available for the backend. A follow-up patch is planned to add control of the SSL protocol bounds for libpq, so move those routines to src/common/ so that libpq can use them.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
* Rationalize code placement between wchar.c, encnames.c, and mbutils.c.  (Tom Lane, 2020-01-16)

Move all the backend-only code that'd crept into wchar.c and encnames.c into mbutils.c.

To remove the last few #ifdef dependencies from wchar.c and encnames.c, also make the following changes:

* Adjust get_encoding_name_for_icu to return NULL, not throw an error, for unsupported encodings. Its sole caller can perfectly well throw an error instead. (While at it, I also made this function and its sibling is_encoding_supported_by_icu proof against out-of-range encoding IDs.)

* Remove the overlength-name error condition from pg_char_to_encoding. It's completely silly not to treat that just like any other the-name-is-not-in-the-table case.

Also, get rid of pg_mic_mblen --- there's no obvious reason why conv.c shouldn't call pg_mule_mblen instead.

Other than that, this is just code movement and comment-polishing with no functional changes. Notably, I reordered declarations in pg_wchar.h to show which functions are frontend-accessible and which are not.

Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
* Move wchar.c and encnames.c to src/common/.  (Tom Lane, 2020-01-16)

Formerly, various frontend directories symlinked these two sources and then built them locally. That's an ancient, ugly hack, and we now have a much better way: put them into libpgcommon. So do that. (The immediate motivation for this is the prospect of having to introduce still more symlinking if we don't.)

This commit moves these two files absolutely verbatim, for ease of reviewing the git history. There's some follow-on work to be done that will modify them a bit.

Robert Haas, Tom Lane

Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
* Fix problems with "read only query" checks, and refactor the code.  (Robert Haas, 2020-01-16)

Previously, check_xact_readonly() was responsible for determining which types of queries could not be run in a read-only transaction, standard_ProcessUtility() was responsible for prohibiting things which were allowed in read only transactions but not in recovery, and utility commands were basically prohibited in bulk in parallel mode by calls to CommandIsReadOnly() in functions.c and spi.c. This situation was confusing and error-prone. Accordingly, move all the checks to a new function ClassifyUtilityCommandAsReadOnly(), which determines the degree to which a given statement is read only.

In the old code, check_xact_readonly() inadvertently failed to handle several statement types that actually should have been prohibited, specifically T_CreatePolicyStmt, T_AlterPolicyStmt, T_CreateAmStmt, T_CreateStatsStmt, T_AlterStatsStmt, and T_AlterCollationStmt. As a result, these statements were erroneously allowed in read only transactions, parallel queries, and standby operation. Generally, they would fail anyway due to some lower-level error check, but we shouldn't rely on that. In the new code structure, future omissions of this type should cause ClassifyUtilityCommandAsReadOnly() to complain about an unrecognized node type.

As a fringe benefit, this means we can allow certain types of utility commands in parallel mode, where it's safe to do so. This allows ALTER SYSTEM, CALL, DO, CHECKPOINT, COPY FROM, EXPLAIN, and SHOW. It might be possible to allow additional commands with more work and thought.

Along the way, document the thinking process behind the current set of checks, as per discussion especially with Peter Eisentraut. There is some interest in revising some of these rules, but that seems like a job for another patch.

Patch by me, reviewed by Tom Lane, Stephen Frost, and Peter Eisentraut.

Discussion: http://postgr.es/m/CA+TgmoZ_rLqJt5sYkvh+JpQnfX0Y+B2R+qfi820xNih6x-FQOQ@mail.gmail.com
* Minor code beautification in regexp.c.  (Tom Lane, 2020-01-16)

Remove duplicated code (apparently introduced by commit c8ea87e4b). Also get rid of some PG_USED_FOR_ASSERTS_ONLY variables we don't really need to have.

Li Japin, Tom Lane

Discussion: https://postgr.es/m/PS1PR0601MB3770A5595B6E5E3FD6F35724B6360@PS1PR0601MB3770.apcprd06.prod.outlook.com
* Restructure ALTER TABLE execution to fix assorted bugs.  (Tom Lane, 2020-01-15)

We've had numerous bug reports about how (1) IF NOT EXISTS clauses in ALTER TABLE don't behave as-expected, and (2) combining certain actions into one ALTER TABLE doesn't work, though executing the same actions as separate statements does. This patch cleans up all of the cases so far reported from the field, though there are still some oddities associated with identity columns.

The core problem behind all of these bugs is that we do parse analysis of ALTER TABLE subcommands too soon, before starting execution of the statement. The root of the bugs in group (1) is that parse analysis schedules derived commands (such as a CREATE SEQUENCE for a serial column) before it's known whether the IF NOT EXISTS clause should cause a subcommand to be skipped. The root of the bugs in group (2) is that earlier subcommands may change the catalog state that later subcommands need to be parsed against.

Hence, postpone parse analysis of ALTER TABLE's subcommands, and do that one subcommand at a time, during "phase 2" of ALTER TABLE which is the phase that does catalog rewrites. Thus the catalog effects of earlier subcommands are already visible when we analyze later ones. (The sole exception is that we do parse analysis for ALTER COLUMN TYPE subcommands during phase 1, so that their USING expressions can be parsed against the table's original state, which is what we need. Arguably, these bugs stem from falsely concluding that because ALTER COLUMN TYPE must do early parse analysis, every other command subtype can too.)

This means that ALTER TABLE itself must deal with execution of any non-ALTER-TABLE derived statements that are generated by parse analysis. Add a suitable entry point to utility.c to accept those recursive calls, and create a struct to pass through the information needed by the recursive call, rather than making the argument lists of AlterTable() and friends even longer.

Getting this to work correctly required a little bit of fiddling with the subcommand pass structure, in particular breaking up AT_PASS_ADD_CONSTR into multiple passes. But otherwise it's mostly a pretty straightforward application of the above ideas.

Fixing the residual issues for identity columns requires refactoring of where the dependency link from an identity column to its sequence gets set up. So that seems like suitable material for a separate patch, especially since this one is pretty big already.

Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
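
For illustration, commands of the two shapes discussed above (hypothetical sketches, not taken from the commit):

    -- group (1): run twice; the second run should skip the column
    -- without trying to create the derived sequence again
    ALTER TABLE t ADD COLUMN IF NOT EXISTS id serial;

    -- group (2): the second sub-command is now parsed against catalog
    -- state that already includes the column added by the first
    ALTER TABLE t ADD COLUMN c int, ADD CONSTRAINT c_pos CHECK (c > 0);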
* Report progress of ANALYZE commands  (Alvaro Herrera, 2020-01-15)

This uses the progress reporting infrastructure added by c16dc1aca5e0, adding support for ANALYZE.

Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>
Reviewed-by: Julien Rouhaud, Robert Haas, Anthony Nowocien, Kyotaro Horiguchi, Vignesh C, Amit Langote
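
For illustration, progress can be watched from another session (a sketch of the view this adds):

    SELECT pid, relid::regclass, phase, sample_blks_scanned, sample_blks_total
    FROM pg_stat_progress_analyze;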
* Fix buggy logic in isTempNamespaceInUse()  (Michael Paquier, 2020-01-15)

The logic introduced in this routine as of 246a6c8 would report an incorrect result when a session calls it to check if the temporary namespace owned by the session is in use or not. It is possible to optimize the routine further in this case to avoid a PGPROC lookup, but let's keep the logic simple. As this routine is used only by autovacuum for now, there were no live bugs; still, let's be correct for any future code involving it.

Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20200113093703.GA41902@paquier.xyz
Backpatch-through: 11
* Introduce IndexAM fields for parallel vacuum.  (Amit Kapila, 2020-01-15)

Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions in IndexAmRoutine for parallel vacuum. The amusemaintenanceworkmem field tells whether a particular IndexAM uses maintenance_work_mem or not. This will help in controlling the memory used by individual workers, as otherwise each worker can consume memory equal to maintenance_work_mem. The amparallelvacuumoptions field tells whether a particular IndexAM participates in a parallel vacuum and, if so, in which phases (bulkdelete, vacuumcleanup) of vacuum.

Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com
* Fix compiler warning about format on Windows  (Peter Eisentraut, 2020-01-14)

On 64-bit Windows, pid_t is long long int, so a %d format isn't enough.
* walreceiver uses a temporary replication slot by default  (Peter Eisentraut, 2020-01-14)

If no permanent replication slot is configured using primary_slot_name, the walreceiver now creates and uses a temporary replication slot. A new setting wal_receiver_create_temp_slot can be used to disable this behavior, for example, if the remote instance is out of replication slots.

Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com
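
For illustration, disabling the new behavior on a standby (a sketch):

    ALTER SYSTEM SET wal_receiver_create_temp_slot = off;
    SELECT pg_reload_conf();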
* Expose PQbackendPID() through walreceiver API  (Peter Eisentraut, 2020-01-14)

This will be used by a subsequent patch.

Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com
* ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION  (Peter Eisentraut, 2020-01-14)

Add an ALTER TABLE subcommand for dropping the generated property from a column, per SQL standard.

Reviewed-by: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/2f7f1d9c-946e-0453-d841-4f38eb9d69b6%402ndquadrant.com
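
For illustration (a sketch; table and column names are hypothetical):

    CREATE TABLE t (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    ALTER TABLE t ALTER COLUMN b DROP EXPRESSION;
    -- b is now an ordinary column; previously computed values are kept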
* Make rewriter prevent auto-updates on views with conditional INSTEAD rules.  (Dean Rasheed, 2020-01-14)

A view with conditional INSTEAD rules and no unconditional INSTEAD rules or INSTEAD OF triggers is not auto-updatable. Previously we relied on a check in the executor to catch this, but that's problematic since the planner may fail to properly handle such a query and thus return a particularly unhelpful error to the user, before reaching the executor check.

Instead, trap this in the rewriter and report the correct error there. Doing so also allows us to include more useful error detail than the executor check can provide. This doesn't change the existing behaviour of updatable views; it merely ensures that useful error messages are reported when a view isn't updatable.

Per report from Pengzhou Tang, though not adopting that suggested fix.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
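
For illustration, a view of the kind now rejected in the rewriter (a hypothetical sketch):

    CREATE VIEW v AS SELECT * FROM t;
    CREATE RULE v_upd AS ON UPDATE TO v
      WHERE old.a > 0 DO INSTEAD NOTHING;
    -- v has only a conditional INSTEAD rule, so it is not auto-updatable;
    -- an UPDATE on v now fails with a useful error from the rewriter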
* Reduce size of backend scanner's tables.  (Tom Lane, 2020-01-13)

Previously, the core scanner's yy_transition[] array had 37045 elements. Since that number is larger than INT16_MAX, Flex generated the array to contain 32-bit integers. By reimplementing some of the bulkier scanner rules, this patch reduces the array to 20495 elements. The much smaller total length, combined with the consequent use of 16-bit integers for the array elements, reduces the binary size by over 200kB.

This was accomplished in two ways:

1. Consolidate handling of quote continuations into a new start condition, rather than duplicating that logic for five different string types.

2. Treat Unicode strings and identifiers followed by a UESCAPE sequence as three separate tokens, rather than one. The logic to de-escape Unicode strings is moved to the filter code in parser.c, which already had the ability to provide special processing for token sequences. While we could have implemented the conversion in the grammar, that approach was rejected for performance and maintainability reasons.

Performance in microbenchmarks of raw parsing seems equal or slightly faster in most cases, and it's reasonable to expect that in real-world usage (with more competition for the CPU cache) there will be a larger win. The exception is UESCAPE sequences; lexing those is about 10% slower, primarily because the scanner now has to be called three times rather than once. This seems acceptable since that feature is very rarely used.

The psql and ecpg lexers are likewise modified, primarily because we want to keep them all in sync. Since those lexers don't use the space-hogging -CF option, the space savings is much less, but it's still good for perhaps 10kB apiece.

While at it, merge the ecpg lexer's handling of C-style comments used in SQL and in C. Those have different rules regarding nested comments, but since we already have the ability to keep track of the previous start condition, we can use that to handle both cases within a single start condition. This matches the core scanner more closely.

John Naylor

Discussion: https://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com
* Fix base backup with database OIDs larger than INT32_MAX  (Peter Eisentraut, 2020-01-13)

The use of pg_atoi() for parsing a string into an Oid fails for values larger than INT32_MAX, since OIDs are unsigned. Instead, use atooid(). While this has less error checking, the contents of the data directory are expected to be trustworthy, so we don't need to go out of our way to do full error checking.

Discussion: https://www.postgresql.org/message-id/flat/dea47fc8-6c89-a2b1-07e3-754ff1ab094b%402ndquadrant.com
* Fix comment in heapam.c  (Michael Paquier, 2020-01-13)

Improvement per suggestion from Tom Lane.

Author: Daniel Gustafsson
Discussion: https://postgr.es/m/FED18699-4270-4778-8DA8-10F119A5ECF3@yesql.se
* Delete empty pages in each pass during GIST VACUUM.  (Amit Kapila, 2020-01-13)

Earlier, we used to postpone deleting empty pages till the second stage of vacuum to amortize the cost of scanning internal pages. However, that can sometimes (say if vacuum is canceled or errors out between the first and second stages) delay the recycling of the pages. Another consideration is that to facilitate deleting empty pages in the second stage, we need to share the information about internal and empty pages between the different stages of vacuum, which would be quite tricky to do via DSM as required for the upcoming parallel vacuum patch. Also, this brings the logic to reclaim deleted pages closer to nbtree, where we delete empty pages in each pass.

Overall, the advantages of deleting empty pages in each pass outweigh the advantages of postponing the same.

Author: Dilip Kumar, with changes by Amit Kapila
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1LGr+MN0xHZpJ2dfS8QNQ1a_aROKowZB+MPNep8FVtwAA@mail.gmail.com
* Apply multiple multivariate MCV lists when possible  (Tomas Vondra, 2020-01-13)

Until now we've only used a single multivariate MCV list per relation, covering the largest number of clauses. So for example given a query

    SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1

and extended statistics on (a,b) and (c,d), we'd only pick and use one of them.

This commit improves this by repeatedly picking and applying the best statistics (matching the largest number of remaining clauses) until no additional statistics object is applicable.

This greedy algorithm is simple, but may not be optimal. A different choice of statistics may leave fewer clauses unestimated and/or give better estimates for some other reason. This can however happen only when there are overlapping statistics, and selecting one makes it impossible to use the other. E.g. with statistics on (a,b), (c,d), (b,c,d), we may pick either (a,b) and (c,d) or (b,c,d). But it's not clear which option is the best one.

We however assume cases like this are rare, and the easiest solution is to define statistics covering the whole group of correlated columns. In the future we might support overlapping stats, using some of the clauses as conditions (in the conditional probability sense).

Author: Tomas Vondra
Reviewed-by: Mark Dilger, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
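
For illustration, the statistics setup assumed by the example above (a sketch; statistics object names are made up):

    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    CREATE STATISTICS s_cd (mcv) ON c, d FROM t;
    ANALYZE t;
    -- the planner can now apply both MCV lists to the four clauses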
* Apply all available functional dependencies  (Tomas Vondra, 2020-01-13)

When considering functional dependencies during selectivity estimation, it's not necessary to bother with selecting the best extended statistics object and then use just the dependencies from it. We can simply consider all applicable functional dependencies at once.

This means we need to deserialize all (applicable) dependencies before applying them to the clauses. This is a bit more expensive than picking the best statistics and deserializing dependencies only for it. To minimize the additional cost, we ignore statistics that are not applicable.

Author: Tomas Vondra
Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
* Fix edge-case crashes and misestimation in range containment selectivity.  (Tom Lane, 2020-01-12)

When estimating the selectivity of "range_var <@ range_constant" or "range_var @> range_constant", if the upper (or respectively lower) bound of the range_constant was above the last bin of the range_var's histogram, the code would access uninitialized memory and potentially crash (though it seems the probability of a crash is quite low). Handle the endpoint cases explicitly to fix that.

While at it, be more paranoid about the possibility of getting NaN or other silly results from the range type's subdiff function. And improve some comments.

Ordinarily we'd probably add a regression test case demonstrating the bug in unpatched code. But it's too hard to get it to crash reliably because of the uninitialized-memory dependence, so skip that.

Per bug #16122 from Adam Scott. It's been broken from the beginning, apparently, so backpatch to all supported branches.

Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane.

Discussion: https://postgr.es/m/16122-eb35bc248c806c15@postgresql.org
* Remove incorrect assertion for INSERT in logical replication's publisher  (Michael Paquier, 2020-01-12)

On the publisher, it was assumed that an INSERT change cannot happen for a relation with no replica identity. However this is true only for a change that needs references to old rows, aka UPDATE or DELETE, so trying to use logical replication with a relation that has no replica identity led to an assertion failure in the publisher when issuing an INSERT. This commit removes the incorrect assertion, and adds more regression tests to provide coverage for relations without replica identity.

Reported-by: Neha Sharma
Author: Dilip Kumar, Michael Paquier
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CANiYTQsL1Hb8_Km08qd32svrqNumXLJeoGo014O7VZymgOhZEA@mail.gmail.com
Backpatch-through: 10
* Extensive code review for GSSAPI encryption mechanism.  (Tom Lane, 2020-01-11)

Fix assorted bugs in handling of non-blocking I/O when using GSSAPI encryption. The encryption layer could return the wrong status information to its caller, resulting in effectively dropping some data (or possibly in aborting a not-broken connection), or in a "livelock" situation where data remains to be sent but the upper layers think transmission is done and just go to sleep. There were multiple small thinkos contributing to that, as well as one big one (failure to think through what to do when a send fails after having already transmitted data). Note that these errors could cause failures whether the client application asked for non-blocking I/O or not, since both libpq and the backend always run things in non-block mode at this level.

Also get rid of use of static variables for GSSAPI inside libpq; that's entirely not okay given that multiple connections could be open at once inside a single client process.

Also adjust a bunch of random small discrepancies between the frontend and backend versions of the send/receive functions --- except for error handling, they should be identical, and now they are.

Also extend the Kerberos TAP tests to exercise cases where nontrivial amounts of data need to be pushed through encryption. Before, those tests didn't provide any useful coverage at all for the cases of interest here. (They still might not, depending on timing, but at least there's a chance.)

Per complaint from pmc@citylink and subsequent investigation. Back-patch to v12 where this code was introduced.

Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org
* Make lsn argument of walrcv_create_slot() optional  (Peter Eisentraut, 2020-01-11)

Some callers are not using it, so it's wasteful to have to specify it.

Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CA+fd4k4BcYrYucNfTnK-CQX3+jsG+PRPEhHAUSo-W4P0Lec57A@mail.gmail.com
* Remove STATUS_FOUND  (Peter Eisentraut, 2020-01-11)

Replace the solitary use with a bool.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/a6f91ead-0ce4-2a34-062b-7ab9813ea308%402ndquadrant.com
* Maintain valid md.c state when FileClose() fails.  (Noah Misch, 2020-01-10)

FileClose() failure ordinarily causes a PANIC. Suppose the user disables that PANIC via data_sync_retry=on. After mdclose() issued a FileClose() that failed, calls into md.c raised SIGSEGV. This fix adds repalloc() calls during mdclose(); update a comment about ignoring repalloc() cost. The rate of relation segment count change is a minor factor; more relevant to overall performance is the rate of mdclose() and subsequent re-opening of segments.

Back-patch to v10, where commit 45e191e3aa62d47a8bc1a33f784286b2051f45cb introduced the bug.

Reviewed by Kyotaro Horiguchi.

Discussion: https://postgr.es/m/20191222091930.GA1280238@rfd.leadboat.com
* Clean up representation of flags in struct ReorderBufferTXN  (Alvaro Herrera, 2020-01-10)

This simplifies addition of further flags.

Author: Nikhil Sontakke
Discussion: https://postgr.es/m/CAMGcDxeViP+R-OL7QhzUV9eKCVjURobuY1Zijik4Ay_Ddwo4Cg@mail.gmail.com
* Reconsider the representation of join alias Vars.  (Tom Lane, 2020-01-09)

The core idea of this patch is to make the parser generate join alias Vars (that is, ones with varno pointing to a JOIN RTE) only when the alias Var is actually different from any raw join input, that is a type coercion and/or COALESCE is necessary to generate the join output value. Otherwise just generate varno/varattno pointing to the relevant join input column.

In effect, this means that the planner's flatten_join_alias_vars() transformation is already done in the parser, for all cases except (a) columns that are merged by JOIN USING and are transformed in the process, and (b) whole-row join Vars. In principle that would allow us to skip doing flatten_join_alias_vars() in many more queries than we do now, but we don't have quite enough infrastructure to know that we can do so --- in particular there's no cheap way to know whether there are any whole-row join Vars. I'm not sure if it's worth the trouble to add a Query-level flag for that, and in any case it seems like fit material for a separate patch. But even without skipping the work entirely, this should make flatten_join_alias_vars() faster, particularly where there are nested joins that it previously had to flatten recursively.

An essential part of this change is to replace Var nodes' varnoold/varoattno fields with varnosyn/varattnosyn, which have considerably more tightly-defined meanings than the old fields: when they differ from varno/varattno, they identify the Var's position in an aliased JOIN RTE, and the join alias is what ruleutils.c should print for the Var. This is necessary because the varno change destroyed ruleutils.c's ability to find the JOIN RTE from the Var's varno.

Another way in which this change broke ruleutils.c is that it's no longer feasible to determine, from a JOIN RTE's joinaliasvars list, which join columns correspond to which columns of the join's immediate input relations. (If those are sub-joins, the joinaliasvars entries may point to columns of their base relations, not the sub-joins.) But that was a horrid mess requiring a lot of fragile assumptions already, so let's just bite the bullet and add some more JOIN RTE fields to make it more straightforward to figure that out. I added two integer-List fields containing the relevant column numbers from the left and right input rels, plus a count of how many merged columns there are.

This patch depends on the ParseNamespaceColumn infrastructure that I added in commit 5815696bc. The biggest bit of code change is restructuring transformFromClauseItem's handling of JOINs so that the ParseNamespaceColumn data is propagated upward correctly.

Other than that and the ruleutils fixes, everything pretty much just works, though some processing is now inessential. I grabbed two pieces of low-hanging fruit in that line:

1. In find_expr_references, we don't need to recurse into join alias Vars anymore. There aren't any except for references to merged USING columns, which are more properly handled when we scan the join's RTE. This change actually fixes an edge-case issue: we will now record a dependency on any type-coercion function present in a USING column's joinaliasvar, even if that join column has no references in the query text. The odds of the missing dependency causing a problem seem quite small: you'd have to posit somebody dropping an implicit cast between two data types, without removing the types themselves, and then having a stored rule containing a whole-row Var for a join whose USING merge depends on that cast. So I don't feel a great need to change this in the back branches. But in theory this way is more correct.

2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse into join alias Vars either, because the cases they care about don't apply to alias Vars for USING columns that are semantically distinct from the underlying columns. This removes the only case in which markVarForSelectPriv could be called with NULL for the RTE, so adjust the comments to describe that hack as being strictly internal to markRTEForSelectPriv.

catversion bump required due to changes in stored rules.

Discussion: https://postgr.es/m/7115.1577986646@sss.pgh.pa.us
* Add pg_shmem_allocations view.  (Robert Haas, 2020-01-09)

This tells you about allocations that have been made from the main shared memory segment. The original patch also tried to show information about dynamic shared memory allocation as well, but I decided to leave that problem for another time.

Andres Freund and Robert Haas, reviewed by Michael Paquier, Marti Raudsepp, Tom Lane, Álvaro Herrera, and Kyotaro Horiguchi.

Discussion: http://postgr.es/m/20140504114417.GM12715@awork2.anarazel.de
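
For illustration, a quick look at the new view (a sketch):

    SELECT name, off, size, allocated_size
    FROM pg_shmem_allocations
    ORDER BY allocated_size DESC LIMIT 5;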
* Add support for automatically updating Unicode derived files  (Peter Eisentraut, 2020-01-09)

We currently have several sets of files generated from data provided by Unicode. These all have ad hoc rules and instructions for updating when new Unicode versions appear, and it's not done consistently.

This patch centralizes and automates the process and makes it part of the release checklist. The Unicode and CLDR versions are specified in Makefile.global.in. There is a new make target "update-unicode" that downloads all the relevant files and runs the generation script.

There is also a new script for generating the table of combining characters for ucs_wcwidth(). That table is now in a separate include file rather than hardcoded into the middle of other code. This is based on the script that was used for generating d8594d123c155aeecd47fc2450f62f5100b2fbf0, but the script itself wasn't committed at that time.

Reviewed-by: John Naylor <john.naylor@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c8d05f42-443e-6c23-819b-05b31759a37c@2ndquadrant.com
* Reimplement nullification of walsender timestamp  (Alvaro Herrera, 2020-01-08)

Make the value null only at pg_stat_activity-output time, as suggested by Tom Lane, instead of messing with the internal state. This should appease buildfarm members with force_parallel_mode=regress, which are running parallel queries on logical replication walsenders.

The fact that walsenders can run parallel queries should perhaps be studied more carefully, but for the moment let's get rid of the red blots in buildfarm.

Backpatch to pg10, like the previous commit.

Discussion: https://postgr.es/m/30804.1578438763@sss.pgh.pa.us