path: root/src/backend
...
* CREATE INDEX ... INCLUDING (column[, ...]) (Teodor Sigaev, 2016-04-08)
  Indexes (only B-tree for now) can now contain "extra" columns which don't
  participate in the index structure; they are just stored in leaf tuples.
  This allows index-only scans to be satisfied by a single index instead of
  two or more.

  Author: Anastasia Lubennikova, with minor editorializing by me
  Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
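  A minimal sketch of the new syntax (hypothetical table and column names);
  the query can be answered from the index alone because tid travels in the
  leaf tuples even though only customer is a key column:

      CREATE TABLE orders (id int, customer int, tid int);
      CREATE INDEX orders_cust_idx ON orders (customer) INCLUDING (tid);
      -- customer is the search key; tid is payload available to
      -- index-only scans
      SELECT customer, tid FROM orders WHERE customer = 42;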
* Replace printf format %i by %d (Peter Eisentraut, 2016-04-08)
  See also ce8d7bb6440710058503d213b2aafcdf56a5b481.
* Fix multiple bugs in tablespace symlink removal. (Tom Lane, 2016-04-08)
  Don't try to examine S_ISLNK(st.st_mode) after a failed lstat(); it's
  undefined. Also, if the lstat() reported ENOENT, we do not wish that to
  be a hard error, but the code might nonetheless treat it as one (giving
  an entirely misleading error message, too) depending on luck-of-the-draw
  as to what S_ISLNK() returned.

  Don't throw error for ENOENT from rmdir(), either. (We're not really
  expecting ENOENT because we just stat'd the file successfully; but if
  we're going to allow ENOENT in the symlink code path, surely the
  directory code path should too.)

  Generate an appropriate errcode for it's-the-wrong-type-of-file
  complaints. (ERRCODE_SYSTEM_ERROR doesn't seem appropriate, failing to
  write errcode() around it certainly doesn't work, and not writing an
  errcode at all is not per project policy.)

  Valgrind noticed the undefined S_ISLNK result; the other problems emerged
  while reading the code in the area.

  All of this appears to have been introduced in 8f15f74a44f68f9c.
  Back-patch to 9.5 where that commit appeared.
* Increase maximum number of clog buffers. (Andres Freund, 2016-04-08)
  Benchmarking has shown that the current number of clog buffers limits
  scalability. We previously increased the number in 33aaa139, but that's
  not sufficient with a large number of clients. We benchmarked the cost of
  increasing the limit by testing worst-case scenarios; 128 buffers don't
  cause a regression, even in contrived scenarios, whereas 256 buffers do.

  There are a number of more complex patches flying around to address
  various clog scalability problems, but this one is simple enough that we
  can get it into 9.6, and it is beneficial even after those patches have
  been applied. It is a bit unsatisfactory to increase this in small steps
  every few releases, but a better solution seems to require a rewrite of
  slru.c, which is not something done quickly.

  Author: Amit Kapila and Andres Freund
  Discussion: CAA4eK1+-=18HOrdqtLXqOMwZDbC_15WTyHiFruz7BvVArZPaAw@mail.gmail.com
* Add a 'parallel_degree' reloption. (Robert Haas, 2016-04-08)
  The code that estimates what parallel degree should be used for the scan
  of a relation is currently rather stupid, so add a parallel_degree
  reloption that can be used to override the planner's rather limited
  judgement.

  Julien Rouhaud, reviewed by David Rowley, James Sewell, Amit Kapila, and
  me. Some further hacking by me.
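  A hedged usage sketch (hypothetical table name):

      -- force up to four workers regardless of the planner's estimate
      ALTER TABLE big_table SET (parallel_degree = 4);
      -- return the decision to the planner
      ALTER TABLE big_table RESET (parallel_degree);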
* Attempt to fix breakage due to declaration following code. (Robert Haas, 2016-04-08)
  Per Tom Lane and the buildfarm.
* Set PAM_RHOST item for PAM authentication (Peter Eisentraut, 2016-04-08)
  The PAM_RHOST item is set to the remote IP address or host name and can
  be used by PAM modules. A pg_hba.conf option is provided to choose
  between IP address and resolved host name.

  From: Grzegorz Sampolski <grzsmp@gmail.com>
  Reviewed-by: Haribabu Kommi <kommi.haribabu@gmail.com>
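  A sketch of the corresponding pg_hba.conf line; the option name
  pam_use_hostname is our reading of the patch and should be checked
  against the documentation:

      # pass the resolved host name (not the IP address) as PAM_RHOST
      host  all  all  0.0.0.0/0  pam  pamservice=postgresql pam_use_hostname=1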
* Rename comparePos() to compareWordEntryPos() (Teodor Sigaev, 2016-04-08)
  Rename comparePos() to compareWordEntryPos() to avoid exporting too
  generic a name. Per gripe from Tom Lane.
* Use quicksort, not replacement selection, for external sorting. (Robert Haas, 2016-04-08)
  We now use replacement selection only for the first run of the sort, and
  only when the number of tuples is relatively small. Otherwise, the first
  run, and subsequent runs in all cases, are produced using quicksort. This
  tends to be faster except perhaps for very small amounts of working
  memory.

  Peter Geoghegan, reviewed by Tomas Vondra, Jeff Janes, Mithun Cy, Greg
  Stark, and me.
* Extend relations multiple blocks at a time to improve scalability. (Robert Haas, 2016-04-08)
  Contention on the relation extension lock can become quite fierce when
  multiple processes are inserting data into the same relation at the same
  time at a high rate. Experimentation shows that extending the relation
  multiple blocks at a time improves scalability.

  Dilip Kumar, reviewed by Petr Jelinek, Amit Kapila, and me.
* Use Foreign Key relationships to infer multi-column join selectivity (Simon Riggs, 2016-04-08)
  In cases where joins use multiple columns we currently assess each join
  clause separately, causing gross misestimates of join cardinality.

  This patch adds use of FK information into the planner for the first
  time. When FKs are present and we have multi-column join information,
  plan estimates will be drastically improved. Cases with multiple FKs are
  handled, though partial matches are currently ignored.

  The net effect is substantial performance improvement for joins in many
  common cases. Additional planning time is isolated to cases that are
  currently performing poorly, measured at 0.08 - 0.15 ms.

  Please watch for planner performance regressions; circumstances seem
  unlikely but the law of unintended consequences may apply somewhen.
  Additional complex tests welcome to prove this before release. Tests can
  be performed using SET enable_fkey_estimates = on | off using scripts
  provided during the Hackers discussion, message id:
  552335D9.3090707@2ndquadrant.com

  Authors: Tomas Vondra and David Rowley
  Reviewed and tested by Simon Riggs, adding comments only
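  A hedged sketch of the kind of join this helps (hypothetical tables); the
  FK guarantees each child row matches exactly one parent row, so the
  planner no longer multiplies the two per-column selectivities together:

      SET enable_fkey_estimates = on;
      CREATE TABLE parent (a int, b int, PRIMARY KEY (a, b));
      CREATE TABLE child  (a int, b int,
                           FOREIGN KEY (a, b) REFERENCES parent (a, b));
      EXPLAIN SELECT * FROM parent p
        JOIN child c ON c.a = p.a AND c.b = p.b;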
* Zeroing unused parts during tsquery construction. (Teodor Sigaev, 2016-04-07)
  Per investigation of a failure on buildfarm member skink, with help from
  RANDOMIZE_ALLOCATED_MEMORY.
* Refactor join_is_removable() to separate out distinctness-proving logic. (Tom Lane, 2016-04-07)
  Extracted from pending unique-join patch, since this is a rather large
  delta but it's simply moving code out into separately-accessible
  subroutines.

  I (tgl) did choose to add a bit more logic to rel_supports_distinctness,
  so that it verifies that there's at least one potentially usable unique
  index rather than just checking indexlist != NIL. Otherwise there's no
  functional change here.

  David Rowley
* Detect SSI conflicts before reporting constraint violations (Kevin Grittner, 2016-04-07)
  While prior to this patch the user-visible effect on the database of any
  set of successfully committed serializable transactions was always
  consistent with some one-at-a-time order of execution of those
  transactions, the presence of declarative constraints could allow errors
  to occur which were not possible in any such ordering, and developers had
  no good workarounds to prevent user-facing errors where they were not
  necessary or desired.

  This patch adds a check for serialization failure ahead of duplicate key
  checking so that if a developer explicitly (redundantly) checks for the
  pre-existing value they will get the desired serialization failure where
  the problem is caused by a concurrent serializable transaction; otherwise
  they will get a duplicate key error.

  While it would be better if the reads performed by the constraints could
  count as part of the work of the transaction for serialization failure
  checking, and we will hopefully get there some day, this patch allows a
  clean and reliable way for developers to work around the issue. In many
  cases existing code will already be doing the right thing for this to
  "just work".

  Author: Thomas Munro, with minor editing of docs by me
  Reviewed-by: Marko Tiikkaja, Kevin Grittner
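  A hedged sketch of the developer-side pattern described above
  (hypothetical table): when two serializable transactions race on the same
  key, the one that performed the redundant existence check now fails with
  a retryable serialization error instead of a duplicate-key error:

      BEGIN ISOLATION LEVEL SERIALIZABLE;
      -- explicit (redundant) check for the pre-existing value
      SELECT 1 FROM invoices WHERE id = 1;
      INSERT INTO invoices (id) VALUES (1);
      COMMIT;  -- the loser gets a serialization failure, not unique_violation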
* Phrase full text search. (Teodor Sigaev, 2016-04-07)
  This patch introduces a new text search operator (<-> or <DISTANCE>) into
  tsquery. On-disk and binary in/out formats of tsquery remain backward
  compatible.

  It has two side effects:
  - the sort order of tsquery changes, so users who have a btree index over
    tsquery should reindex it;
  - fewer parentheses in tsquery output, making tsquery more readable.

  Authors: Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov
  Reviewers: Alexander Korotkov, Artur Zakirov
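  A small illustration of the operator as described (assumed semantics:
  a <-> b matches when b immediately follows a, and <N> fixes the
  distance):

      -- true: 'error' directly follows 'fatal'
      SELECT to_tsvector('english', 'fatal error in logs')
             @@ to_tsquery('fatal <-> error');
      -- matches 'fatal' followed by 'error' two positions later
      SELECT to_tsvector('english', 'fatal disk error')
             @@ to_tsquery('fatal <2> error');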
* Load FK defs into relcache for use by planner (Simon Riggs, 2016-04-07)
  A fastpath skips this when no triggers are defined.

  Author: Tomas Vondra, with fastpath and comments added by me
  Reviewers: David Rowley, Simon Riggs
* Standardize GetTokenInformation() error reporting. (Noah Misch, 2016-04-06)
  Commit c22650cd6450854e1a75064b698d7dcbb4a8821a sparked a discussion
  about diverse interpretations of "token user" in error messages. Expel
  old and new specimens of that phrase by making all GetTokenInformation()
  callers report errors the way GetTokenUser() has been reporting them.
  These error conditions almost can't happen, so users are unlikely to
  observe this change.

  Reviewed by Tom Lane and Stephen Frost.
* Use GRANT system to manage access to sensitive functions (Stephen Frost, 2016-04-06)
  Now that pg_dump will properly dump out any ACL changes made to functions
  which exist in pg_catalog, switch to using the GRANT system to manage
  access to those functions.

  This means removing 'if (!superuser()) ereport()' checks from the
  functions themselves and then REVOKEing the EXECUTE right from 'public'
  for these functions in system_views.sql.

  Reviews by Alexander Korotkov, Jose Luis Tallon
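  A hedged sketch of the resulting administration pattern; the function
  name is a placeholder, since the commit message doesn't enumerate the
  affected functions:

      -- done once in system_views.sql at initdb time
      REVOKE EXECUTE ON FUNCTION pg_catalog.some_sensitive_function()
        FROM PUBLIC;
      -- an administrator can now delegate access without superuser
      GRANT EXECUTE ON FUNCTION pg_catalog.some_sensitive_function()
        TO monitoring_role;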
* In pg_dump, include pg_catalog and extension ACLs, if changed (Stephen Frost, 2016-04-06)
  Now that all of the infrastructure exists, add in the ability to dump out
  the ACLs of the objects inside of pg_catalog or the ACLs for objects
  which are members of extensions, but only if they have been changed from
  their original values. The original values are tracked in pg_init_privs.

  When pg_dump'ing 9.6-and-above databases, we will dump out the ACLs for
  all objects in pg_catalog and the ACLs for all extension members, where
  the ACL has been changed from the original value which was set during
  either initdb or CREATE EXTENSION.

  This should not change dumps against pre-9.6 databases.

  Reviews by Alexander Korotkov, Jose Luis Tallon
* Add new catalog called pg_init_privs (Stephen Frost, 2016-04-06)
  This new catalog holds the privileges which the system was initialized
  with at initdb time, along with any permissions set by extensions at
  CREATE EXTENSION time. This allows pg_dump (and any other similar
  use-cases) to detect when the privileges set on initdb-created or
  extension-created objects have been changed from what they were set to
  at initdb/extension-creation time, and handle those changes
  appropriately.

  Reviews by Alexander Korotkov, Jose Luis Tallon
* Add jsonb_insert (Teodor Sigaev, 2016-04-06)
  It inserts a new value into a jsonb array at an arbitrary position, or a
  new key into a jsonb object.

  Author: Dmitry Dolgov
  Reviewers: Petr Jelinek, Vitaly Burovoy, Andrew Dunstan
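  A brief sketch of the function as described; the optional fourth argument
  (insert after, rather than before, the path position) is our reading of
  the patch:

      SELECT jsonb_insert('[0, 1, 2]', '{1}', '"new"');
      -- [0, "new", 1, 2]
      SELECT jsonb_insert('[0, 1, 2]', '{1}', '"new"', true);
      -- [0, 1, "new", 2]
      SELECT jsonb_insert('{"a": 1}', '{b}', '2');
      -- {"a": 1, "b": 2}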
* Run pgindent on a batch of (mostly-planner-related) source files. (Tom Lane, 2016-04-06)
  Getting annoyed at the amount of unrelated chatter I get from
  pgindent'ing Rowley's unique-joins patch. Re-indent all the files it
  touches.
* Use proper format specifier %X/%X for LSN, again. (Fujii Masao, 2016-04-06)
  Commit cee31f5 fixed this problem, but commit 989be08 accidentally
  reverted the fix.

  Thomas Munro
* Revert bf08f2292ffca14fd133aa0901d1563b6ecd6894 (Simon Riggs, 2016-04-06)
  Remove recent changes to logging XLOG_RUNNING_XACTS by request.
* Generic Messages for Logical Decoding (Simon Riggs, 2016-04-06)
  API and mechanism to allow generic messages to be inserted into WAL that
  are intended to be read by logical decoding plugins. This commit adds an
  optional new callback to the logical decoding API.

  Messages are either text or bytea. Messages can be transactional, or
  not, and are identified by a prefix to allow multiple concurrent
  decoding plugins.

  (Not to be confused with Generic WAL records, which are intended to
  allow crash recovery of extensible objects.)

  Author: Petr Jelinek and Andres Freund
  Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
  Discussion: 5685F999.6010202@2ndquadrant.com
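  A minimal sketch of the SQL-level entry point that accompanies this
  mechanism, assuming the pg_logical_emit_message() signature
  (transactional boolean, prefix text, content text or bytea):

      -- transactional message: decoded at commit
      SELECT pg_logical_emit_message(true, 'my_plugin', 'hello decoders');
      -- non-transactional message with a binary payload
      SELECT pg_logical_emit_message(false, 'my_plugin', '\x0102'::bytea);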
* Support multiple synchronous standby servers. (Fujii Masao, 2016-04-06)
  Previously synchronous replication offered only the ability to confirm
  that all changes made by a transaction had been transferred to at most
  one synchronous standby server.

  This commit extends synchronous replication so that it supports multiple
  synchronous standby servers. It enables users to consider one or more
  standby servers as synchronous, and increase the level of transaction
  durability by ensuring that transaction commits wait for replies from
  all of those synchronous standbys.

  Multiple synchronous standby servers are configured in
  synchronous_standby_names, which is extended to support the new syntax
  'num_sync ( standby_name [ , ... ] )', where num_sync specifies the
  number of synchronous standbys that transaction commits need to wait for
  replies from, and standby_name is the name of a standby server. The
  syntax 'standby_name [ , ... ]' used in 9.5 and before is still
  supported; it's the same as the new syntax with num_sync=1.

  This commit doesn't include the "quorum commit" feature which was
  discussed on pgsql-hackers. Synchronous standbys are chosen based on
  their priorities. synchronous_standby_names determines the priority of
  each standby for being chosen as a synchronous standby. The standbys
  whose names appear earlier in the list are given higher priority and
  will be considered synchronous. Other standby servers appearing later in
  this list represent potential synchronous standbys.

  The regression test for multiple synchronous standbys is not included in
  this commit. It should come later.

  Authors: Sawada Masahiko, Beena Emerson, Michael Paquier, Fujii Masao
  Reviewed-By: Kyotaro Horiguchi, Amit Kapila, Robert Haas, Simon Riggs,
  Amit Langote, Thomas Munro, Sameer Thakur, Suraj Kharage, Abhijit
  Menon-Sen, Rajeev Rastogi

  Many thanks to the various individuals who were involved in discussing
  and developing this feature.
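  A configuration sketch of the new syntax described above:

      -- commits wait for replies from the two highest-priority standbys
      -- among s1, s2, and s3
      ALTER SYSTEM SET synchronous_standby_names = '2 (s1, s2, s3)';
      SELECT pg_reload_conf();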
* Support ALTER THING .. DEPENDS ON EXTENSION (Alvaro Herrera, 2016-04-05)
  This introduces a new dependency type which marks an object as depending
  on an extension, such that if the extension is dropped, the object
  automatically goes away; and also, if the database is dumped, the object
  is included in the dump output. Currently the grammar supports this for
  indexes, triggers, materialized views and functions only, although the
  utility code is generic so adding support for more object types is a
  matter of touching the parser rules only.

  Author: Abhijit Menon-Sen
  Reviewed-by: Alexander Korotkov, Álvaro Herrera
  Discussion: http://www.postgresql.org/message-id/20160115062649.GA5068@toroid.org
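  A usage sketch with hypothetical object names:

      -- an index created to support my_ext; dropping the extension
      -- now drops the index automatically
      ALTER INDEX my_ext_support_idx DEPENDS ON EXTENSION my_ext;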
* Fix parallel-safety code for parallel aggregation. (Robert Haas, 2016-04-05)
  has_parallel_hazard() was ignoring the proparallel markings for
  aggregates, which is no good. Fix that. There was no way to mark an
  aggregate as actually being parallel-safe, either, so add a PARALLEL
  option to CREATE AGGREGATE.

  Patch by me, reviewed by David Rowley.
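  A sketch of the new option on a simple user-defined aggregate:

      CREATE AGGREGATE my_sum (int) (
          sfunc    = int4pl,
          stype    = int,
          initcond = '0',
          PARALLEL = SAFE   -- allow evaluation in parallel workers
      );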
* Align all shared memory allocations to cache line boundaries. (Robert Haas, 2016-04-05)
  Experimentation shows this only costs about 6kB, which seems well worth
  it given the major performance effects that can be caused by
  insufficient alignment, especially on larger systems.

  Discussion: 14166.1458924422@sss.pgh.pa.us
* Add parallel query support functions for assorted aggregates. (Robert Haas, 2016-04-05)
  This lets us use parallel aggregate for a variety of useful cases that
  didn't work before, like sum(int8), sum(numeric), several versions of
  avg(), and various other functions.

  Add some regression tests, as well, testing the general sanity of these
  and future catalog entries.

  David Rowley, reviewed by Tomas Vondra, with a few further changes by me.
* Implement backup API functions for non-exclusive backups (Magnus Hagander, 2016-04-05)
  Previously non-exclusive backups had to be done using the replication
  protocol and pg_basebackup. With this commit it's now possible to make
  them using pg_start_backup/pg_stop_backup as well, as long as the backup
  program can maintain a persistent connection to the database.

  Doing this, backup_label and tablespace_map are returned as results from
  pg_stop_backup() instead of being written to the data directory. This
  makes the server safe from a crash during an ongoing backup, which can
  be a problem with exclusive backups.

  The old syntax of the functions remains and works exactly as before, but
  since the new syntax is safer it should eventually be deprecated and
  removed.

  Only reference documentation is included. The main section on backup
  still needs to be rewritten to cover this, but since that is already
  scheduled for a separate large rewrite, it's not included in this patch.

  Reviewed by David Steele and Amit Kapila
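  A hedged sketch of the non-exclusive flow; the argument and result names
  follow our reading of the new signatures (label, fast, exclusive):

      -- the session must stay connected for the duration of the backup
      SELECT pg_start_backup('nightly', false, false);
      -- ... copy the data directory with an external tool ...
      SELECT lsn, labelfile, spcmapfile FROM pg_stop_backup(false);
      -- write labelfile and spcmapfile into the backup, not the data dir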
* Fix error message from wal_level value renaming (Peter Eisentraut, 2016-04-04)
  Found by Ian Barwick.
* Disallow newlines in parameter values to be set in ALTER SYSTEM. (Tom Lane, 2016-04-04)
  As noted by Julian Schauder in bug #14063, the configuration-file parser
  doesn't support embedded newlines in string literals. While there might
  someday be a good reason to remove that restriction, there doesn't seem
  to be one right now. However, ALTER SYSTEM SET could accept strings
  containing newlines, since many of the variable-specific value-checking
  routines would just see a newline as whitespace. This led to writing a
  postgresql.auto.conf file that was broken and had to be removed
  manually. Pending a reason to work harder, just throw an error if
  someone tries this.

  In passing, fix several places in the ALTER SYSTEM logic that failed to
  provide an errcode() for an ereport(), and thus would falsely log the
  failure as an internal XX000 error.

  Back-patch to 9.4 where ALTER SYSTEM was introduced.
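  For illustration (hypothetical parameter choice):

      -- previously wrote an unparseable postgresql.auto.conf;
      -- now raises an error instead
      ALTER SYSTEM SET application_name = E'two\nlines';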
* Display WAL pointer in rm_redo error callback (Alvaro Herrera, 2016-04-04)
  This makes it easier to identify the source of a recovery problem in
  case of a bug or data corruption.
* Add a few comments about ANALYZE's strategy for collecting MCVs. (Tom Lane, 2016-04-04)
  Alex Shulgin complained that the underlying strategy wasn't all that
  apparent, particularly not the fact that we intentionally have two code
  paths depending on whether we think the column has a limited set of
  possible values or not. Try to make it clearer.
* Partially revert commit 3d3bf62f30200500637b24fdb7b992a99f9704c3. (Tom Lane, 2016-04-04)
  On reflection, the pre-existing logic in ANALYZE is specifically meant
  to compare the frequency of a candidate MCV against the estimated
  frequency of a random distinct value across the whole table. The change
  to compare it against the average frequency of values actually seen in
  the sample doesn't seem very principled, and if anything it would make
  us less likely not more likely to consider a value an MCV. So revert
  that, but keep the aspect of considering only nonnull values, which
  definitely is correct.

  In passing, rename the local variables in these stanzas to
  "ndistinct_table", to avoid confusion with the "ndistinct" that appears
  at an outer scope in compute_scalar_stats.
* Silence compiler warning (Alvaro Herrera, 2016-04-04)
  Reported by Peter Eisentraut to occur on 32-bit systems.
* Introduce a LOG_SERVER_ONLY ereport level, which is never sent to client. (Tom Lane, 2016-04-04)
  This elevel is useful for logging audit messages and similar information
  that should not be passed to the client. It's equivalent to LOG in terms
  of decisions about logging priority in the postmaster log, but messages
  with this elevel will never be sent to the client.

  In the current implementation, it's just an alias for the longstanding
  COMMERROR elevel (or more accurately, we've made COMMERROR an alias for
  this). At some point it might be interesting to allow a LOG_ONLY flag to
  be attached to any elevel, but that would be considerably more
  complicated, and it's not clear there are enough use-cases to justify
  the extra work. For now, let's just take the easy 90% solution.

  David Steele, reviewed by Fabien Coelho, Petr Jelínek, and myself
* Fix latent portability issue in pgwin32_dispatch_queued_signals(). (Tom Lane, 2016-04-04)
  The first iteration of the signal-checking loop would compute sigmask(0)
  which expands to 1<<(-1), which is undefined behavior according to the C
  standard. The lack of field reports of trouble suggests that it
  evaluates to 0 on all existing Windows compilers, but that's hardly
  something to rely on. Since signal 0 isn't a queueable signal anyway, we
  can just make the loop iterate from 1 instead, and save a few cycles as
  well as avoiding the undefined behavior.

  In passing, avoid evaluating the volatile expression
  UNBLOCKED_SIGNAL_QUEUE twice in a row; there's no reason to waste cycles
  like that.

  Noted by Aleksander Alekseev, though this isn't his proposed fix.
  Back-patch to all supported branches.
* Improve estimate of distinct values in estimate_num_groups(). (Dean Rasheed, 2016-04-04)
  When adjusting the estimate for the number of distinct values from a rel
  in a grouped query to take into account the selectivity of the rel's
  restrictions, use a formula that is less likely to produce
  under-estimates.

  The old formula simply multiplied the number of distinct values in the
  rel by the restriction selectivity, which would be correct if the
  restrictions were fully correlated with the grouping expressions, but
  can produce significant under-estimates in cases where they are not
  well correlated.

  The new formula is based on the random selection probability, and so
  assumes that the restrictions are not correlated with the grouping
  expressions. This is guaranteed to produce larger estimates, and of
  course risks over-estimating in cases where the restrictions are
  correlated, but that has less severe consequences than under-estimating,
  which might lead to a HashAgg that consumes an excessive amount of
  memory.

  This could possibly be improved upon in the future by identifying
  correlated restrictions and using a hybrid of the old and new formulae.

  Author: Tomas Vondra, with some hacking by me
  Reviewed-by: Mark Dilger, Alexander Korotkov, Dean Rasheed and Tom Lane
  Discussion: http://www.postgresql.org/message-id/flat/56CD0381.5060502@2ndquadrant.com
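  A sketch of the two formulae as we read them, with N the rel's total
  rows, n the rows surviving the restrictions, and d the rel's distinct
  values (the exact code should be checked in selfuncs.c):

      \text{old: } d' = d \cdot \frac{n}{N}
      \qquad
      \text{new: } d' = d \left( 1 - \left( \frac{N - n}{N} \right)^{N/d} \right)

  The new form treats the restriction as a random sample: each distinct
  value occupies about N/d rows, and ((N-n)/N)^(N/d) approximates the
  probability that none of them survives.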
* Avoid archiving XLOG_RUNNING_XACTS on idle server (Simon Riggs, 2016-04-04)
  If archive_timeout > 0 we should avoid logging XLOG_RUNNING_XACTS if
  idle.

  Bug 13685 reported by Laurence Rowe, investigated in detail by Michael
  Paquier, though this is not his proposed fix.
  20151016203031.3019.72930@wrigleys.postgresql.org

  Simple non-invasive patch to allow later backpatch to 9.4 and 9.5.
* Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases (Simon Riggs, 2016-04-03)
  Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
  require complex interlocking that matched the requirements on the
  master. This required an O(N) operation that became a significant
  problem with large indexes, causing replication delays of seconds or in
  some cases minutes while the XLOG_BTREE_VACUUM was replayed.

  This commit skips the pin scan that was previously required, by
  observing in detail when and how it is safe to do so, with full
  documentation. The pin scan is skipped only in replay; the VACUUM code
  path on the master is not touched here and WAL is identical.

  The current commit applies in all cases, effectively replacing commit
  687f2cd7a0150647794efe432ae0397cb41b60ff.
* Make all the declarations of WaitEventSetWaitBlock be marked "inline". (Tom Lane, 2016-04-02)
  The inconsistency here triggered compiler warnings on some buildfarm
  members, and it's surely pretty pointless.
* Refer to a TOKEN_USER payload as a "token user," not as a "user token". (Noah Misch, 2016-04-01)
  This corrects messages for can't-happen errors. The corresponding "user
  token" appears in the HANDLE argument of GetTokenInformation().
* Copyedit comments and documentation. (Noah Misch, 2016-04-01)
* Omit null rows when setting the threshold for what's a most-common value. (Tom Lane, 2016-04-01)
  As with the previous patch, large numbers of null rows could skew this
  calculation unfavorably, causing us to discard values that have a
  legitimate claim to be MCVs, since our definition of MCV is that it's
  most common among the non-null population of the column. Hence, make
  the numerator of avgcount be the number of non-null sample values, not
  the number of sample rows; likewise for maxmincount in the
  compute_scalar_stats variant.

  Also, make the denominator be the number of distinct values actually
  observed in the sample, rather than reversing it back out of the
  computed stadistinct. This avoids depending on the accuracy of the
  Haas-Stokes approximation, and really it's what we want anyway; the
  threshold should depend only on what we see in the sample, not on what
  we extrapolate about the contents of the whole column.

  Alex Shulgin, reviewed by Tomas Vondra and myself
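  In symbols, with n_nonnull the non-null sample values and d_obs the
  distinct values actually observed in the sample, the per-value frequency
  baseline becomes (our reading; the surrounding multiplier lives in
  analyze.c):

      \text{avgcount} = \frac{n_{\text{nonnull}}}{d_{\text{obs}}}

  A value qualifies as an MCV only if its sample count comfortably
  exceeds this baseline.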
* Omit null rows when applying the Haas-Stokes estimator for ndistinct. (Tom Lane, 2016-04-01)
  Previously, we included null rows in the values of n and N that went
  into the formula, which amounts to considering null as a value in its
  own right; but the d and f1 values do not include nulls. This is
  inconsistent, and it contributes to significant underestimation of
  ndistinct when the column is mostly nulls. In any case stadistinct is
  defined as the number of distinct non-null values, so we should exclude
  nulls when doing this computation.

  This is an aboriginal bug in our application of the Haas-Stokes formula,
  but we'll refrain from back-patching for fear of destabilizing plan
  choices in released branches.

  While at it, make the code a bit more readable by omitting unnecessary
  casts and intermediate variables.

  Observation and original patch by Tomas Vondra, adjusted to fix both
  uses of the formula by Alex Shulgin, cosmetic improvements by me
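  For reference, the Haas-Stokes (Duj1) estimator as we understand its use
  in analyze.c, with n the sample size and N the table size (both now
  counting only non-null rows), d the distinct values in the sample, and
  f1 the values seen exactly once:

      \hat{d} = \frac{n \, d}{\, n - f_1 + f_1 \cdot n / N \,}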
* Type names should not be quoted (Alvaro Herrera, 2016-04-01)
  Our actual convention, contrary to what I said in 59a2111b23f, is not to
  quote type names, as evidenced by unquoted use of format_type_be()
  result value in error messages. Remove quotes from recently tweaked
  messages accordingly.

  Per note from Tom Lane
* Add Generic WAL interface (Teodor Sigaev, 2016-04-01)
  This interface is designed to give access to WAL for extensions which
  implement, for example, new access methods. Previously this was
  impossible because restoring from custom WAL would need to access the
  system catalog to find a redo custom function. This patch provides a
  generic way to describe changes to pages with the standard layout.

  Bump XLOG_PAGE_MAGIC because of the new record type.

  Author: Alexander Korotkov, with help from Petr Jelinek, Markus
  Nullmeier, and minor editorializing by me
  Reviewers: Petr Jelinek, Alvaro Herrera, Teodor Sigaev, Jim Nasby,
  Michael Paquier
* Support using index-only scans with partial indexes in more cases. (Tom Lane, 2016-03-31)
  Previously, the planner would reject an index-only scan if any
  restriction clause for its table used a column not available from the
  index, even if that restriction clause would later be dropped from the
  plan entirely because it's implied by the index's predicate. This is a
  fairly common situation for partial indexes because predicates using
  columns not included in the index are often the most useful kind of
  predicate, and we have to duplicate (or at least imply) the predicate in
  the WHERE clause in order to get the index to be considered at all. So
  index-only scans were essentially unavailable with such partial indexes.

  To fix, we have to do detection of implied-by-predicate clauses much
  earlier in the planner. This patch puts it in check_index_predicates
  (nee check_partial_indexes), meaning it gets done for every partial
  index, whereas we previously only considered this issue at createplan
  time, so that the work was only done for an index actually selected for
  use. That could result in a noticeable planning slowdown for queries
  against tables with many partial indexes. However, testing suggested
  that there isn't really a significant cost, especially not with
  reasonable numbers of partial indexes.

  We do get a small additional benefit, which is that cost_index is more
  accurate since it correctly discounts the evaluation cost of clauses
  that will be removed. We can also avoid considering such clauses as
  potential indexquals, which saves useless matching cycles in the case
  where the predicate columns aren't in the index, and prevents generating
  bogus plans that double-count the clause's selectivity when the columns
  are in the index.

  Tomas Vondra and Kyotaro Horiguchi, reviewed by Kevin Grittner and
  Konstantin Knizhnik, and whacked around a little by me
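  A hedged example of the situation described (hypothetical schema): the
  WHERE clause repeats the index predicate on a column the index doesn't
  store, and with this change the query can still use an index-only scan:

      CREATE INDEX live_orders_idx ON orders (customer) WHERE NOT deleted;
      -- "NOT deleted" is implied by the index predicate and dropped from
      -- the plan, so only "customer" needs to come from the index
      EXPLAIN SELECT customer FROM orders
        WHERE NOT deleted AND customer = 42;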