path: root/src

* Use full 64-bit XID for checking if a deleted GiST page is old enough. (Heikki Linnakangas, 2019-07-24)

  Otherwise, after a deleted page gets even older, it becomes
  unrecyclable again. B-tree has the same problem, and has had since
  time immemorial, but let's at least fix this in GiST, where this is
  new.

  Backpatch to v12, where GiST page deletion was introduced.

  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru

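  A minimal standalone sketch of the wraparound hazard that the full
  64-bit XID avoids (plain C with stand-in types and names; not the
  actual GiST code):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* 32-bit XIDs compare circularly, so a deleted page's XID can wrap
       * around and stop looking old; 64-bit XIDs compare linearly, so a
       * page that is old enough to recycle stays old enough forever. */
      static bool
      xid32_precedes(uint32_t a, uint32_t b)
      {
          return (int32_t) (a - b) < 0;   /* modulo-2^32 comparison */
      }

      static bool
      xid64_precedes(uint64_t a, uint64_t b)
      {
          return a < b;                   /* plain comparison, never wraps */
      }

      int
      main(void)
      {
          uint32_t deletexid = 100;                    /* XID at page deletion */
          uint32_t oldestxmin = 100 + (1u << 31) + 1;  /* ~2^31 XIDs later */

          /* 32 bits: the deleted page no longer looks old enough (prints 0) */
          printf("32-bit recyclable: %d\n", xid32_precedes(deletexid, oldestxmin));
          /* 64 bits: it still does (prints 1) */
          printf("64-bit recyclable: %d\n",
                 xid64_precedes(100, 100ULL + (1ULL << 31) + 1));
          return 0;
      }
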
* Refactor checks for deleted GiST pages. (Heikki Linnakangas, 2019-07-24)

  The explicit check in gistScanPage() isn't currently really necessary,
  as a deleted page is always empty, so the loop would fall through
  without doing anything, anyway. But it's a marginal optimization, and
  it gives a nice place to attach a comment to explain how it works.

  Backpatch to v12, where GiST page deletion was introduced.

  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru

* Don't assume expr is available in pgbench tests (Andrew Dunstan, 2019-07-24)

  Windows hosts do not normally come with expr, so instead of using that
  to test the \setshell command, use echo instead, which is fairly
  universally available.

  Backpatch to release 11, where this came in.

  Problem found by me, patch by Fabien Coelho.

* Improve stability of TAP test for synchronous replication (Michael Paquier, 2019-07-24)

  Slow buildfarm machines have run into issues with this TAP test caused
  by a race condition related to the startup of a set of standbys, where
  it is possible to finish with an unexpected order in the WAL sender
  array of the primary.

  This closes the race condition by making sure that any standby started
  is registered into the WAL sender array of the primary before starting
  the next one based on lookups of pg_stat_replication.

  Backpatch down to 9.6 where the test has been introduced.

  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera, Noah Misch
  Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
  Backpatch-through: 9.6

* Check that partitions are not in use when dropping constraints (Alvaro Herrera, 2019-07-23)

  If the user creates a deferred constraint in a partition, and in a
  transaction they cause the constraint's trigger execution to be
  deferred until commit time *and* drop the constraint, then when commit
  time comes the queued trigger will fail to run because the trigger
  object will have been dropped.

  This is explained because when a constraint gets dropped in a
  partitioned table, the recursion to drop the ones in partitions is
  done by the dependency mechanism, not by ALTER TABLE traversing the
  recursion tree as in all other cases. In the non-partitioned case,
  this problem is avoided by checking that the table is not "in use" by
  alter-table; other alter-table subcommands that recurse to partitions
  do that check for each partition. But the dependency mechanism doesn't
  have a way to do that.

  Fix the problem by applying the same check to all partitions during
  ALTER TABLE's "prep" phase, which correctly raises the necessary
  error.

  Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
  Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com

* Install dependencies to prevent dropping partition key columns. (Tom Lane, 2019-07-22)

  The logic in ATExecDropColumn that rejects dropping partition key
  columns is quite an inadequate defense, because it doesn't execute in
  cases where a column needs to be dropped due to cascade from something
  that only the column, not the whole partitioned table, depends on.
  That leaves us with a badly broken partitioned table; even an attempt
  to load its relcache entry will fail.

  We really need to have explicit pg_depend entries that show that the
  column can't be dropped without dropping the whole table. Hence, add
  those entries. In v12 and HEAD, bump catversion to ensure that
  partitioned tables will have such entries. We can't do that in
  released branches of course, so in v10 and v11 this patch affords
  protection only to partitioned tables created after the patch is
  installed. Given the lack of field complaints (this bug was found by
  fuzz-testing, not by end users), that's probably good enough.

  In passing, fix ATExecDropColumn and ATPrepAlterColumnType messages to
  be more specific about which partition key column they're complaining
  about.

  Per report from Manuel Rigger. Back-patch to v10 where partitioned
  tables were added.

  Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
  Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us

* Use column collation for extended statistics (Tomas Vondra, 2019-07-20)

  The current extended statistics code was a bit confused about which
  collation to use. When building the statistics, the collations defined
  as default for the data types were used (since commit 5e0928005). The
  MCV code was however using the column collations for MCV
  serialization, and then DEFAULT_COLLATION_OID when computing
  estimates. So overall the code was using all three possible options,
  inconsistently.

  This uses the column collation everywhere - this makes it consistent
  with what 5e0928005 did for regular stats. We however do not track the
  collations in a catalog, because we can derive them from column-level
  information. This may need to change in the future, e.g. after
  allowing statistics on expressions.

  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12

* Rework examine_opclause_expression to use varonleft (Tomas Vondra, 2019-07-20)

  The examine_opclause_expression function needs to return information
  on which side of the operator we found the Var, but the variable was
  called "isgt" which is rather misleading (it assumes the operator is
  either less-than or greater-than, but it may be equality or something
  else). Other places in the planner use a variable called "varonleft"
  for this purpose, so just adopt the same convention here.

  The code also assumed we don't care about this flag for equality, as
  (Var = Const) and (Const = Var) should be the same thing. But that
  does not work for cross-type operators, in which case we need to pass
  the parameters to the procedure in the right order. So just use the
  same code for all types of expressions.

  This means we don't need to care about the selectivity estimation
  function anymore, at least not in this code. We should only get the
  supported cases here (thanks to statext_is_compatible_clause).

  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12

* Silence compiler warning, hopefully. (Tom Lane, 2019-07-19)

  Absorb commit e5e04c962a5d12eebbf867ca25905b3ccc34cbe0 from upstream
  IANA code, in hopes of silencing warnings from MSVC about negating a
  bool value.

  Discussion: https://postgr.es/m/20190719035347.GJ1859@paquier.xyz

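  What MSVC objects to, in miniature - an assumed illustration of this
  class of diagnostic (e.g. C4804-style warnings), not the upstream IANA
  change itself:

      #include <stdbool.h>
      #include <stdio.h>

      int
      main(void)
      {
          bool dst = true;

          /* MSVC can warn about arithmetic negation of a bool, as in:
           *     int offset = -dst;
           * An explicit, warning-free equivalent: */
          int offset = dst ? -1 : 0;

          printf("%d\n", offset);
          return 0;
      }
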
* Fix error in commit e6feef57. (Jeff Davis, 2019-07-18)

  I was careless passing a datum directly to DATE_NOT_FINITE without
  calling DatumGetDateADT() first.

  Backpatch-through: 9.4

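  A standalone illustration of why the conversion matters, using
  simplified stand-in definitions (not the real date.h):

      #include <stdint.h>
      #include <stdio.h>

      /* Simplified stand-ins for the real types and macros. */
      typedef uintptr_t Datum;
      typedef int32_t DateADT;
      #define DATEVAL_NOBEGIN    ((DateADT) INT32_MIN)   /* date '-infinity' */
      #define DATEVAL_NOEND      ((DateADT) INT32_MAX)   /* date 'infinity'  */
      #define DATE_NOT_FINITE(j) ((j) == DATEVAL_NOBEGIN || (j) == DATEVAL_NOEND)
      #define DateADTGetDatum(x) ((Datum) (uint32_t) (x))
      #define DatumGetDateADT(x) ((DateADT) (uint32_t) (x))

      int
      main(void)
      {
          Datum d = DateADTGetDatum(DATEVAL_NOBEGIN);

          /* Wrong: tests the raw Datum bits, which need not compare equal
           * to INT32_MIN once widened to the Datum type (prints 0). */
          printf("raw Datum:       not finite = %d\n", DATE_NOT_FINITE(d));
          /* Right: convert back to DateADT before testing (prints 1). */
          printf("DatumGetDateADT: not finite = %d\n",
                 DATE_NOT_FINITE(DatumGetDateADT(d)));
          return 0;
      }
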
* Fix daterange canonicalization for +/- infinity. (Jeff Davis, 2019-07-18)

  The values 'infinity' and '-infinity' are a part of the DATE type
  itself, so a bound of the date 'infinity' is not the same as an
  unbounded/infinite range. However, it is still wrong to try to
  canonicalize such values, because adding or subtracting one has no
  effect. Fix by treating 'infinity' and '-infinity' the same as
  unbounded ranges for the purposes of canonicalization (but not other
  purposes).

  Backpatch to all versions because it is inconsistent with the
  documented behavior. Note that this could be an incompatibility for
  applications relying on the behavior contrary to the documentation.

  Author: Laurenz Albe
  Reviewed-by: Thomas Munro
  Discussion: https://postgr.es/m/77f24ea19ab802bc9bc60ddbb8977ee2d646aec1.camel%40cybertec.at
  Backpatch-through: 9.4

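  A standalone sketch of the rule, with simplified types (the real fix
  lives in the daterange canonicalization code):

      #include <limits.h>
      #include <stdbool.h>
      #include <stdio.h>

      typedef int Date;                  /* stand-in for DateADT */
      #define DATE_NOBEGIN INT_MIN       /* date '-infinity' */
      #define DATE_NOEND   INT_MAX       /* date 'infinity'  */

      /* Adding or subtracting one day from +/- infinity has no effect,
       * so such bounds must be left alone, like truly unbounded ones. */
      static bool
      bound_is_canonicalizable(bool unbounded, Date value)
      {
          return !unbounded && value != DATE_NOBEGIN && value != DATE_NOEND;
      }

      int
      main(void)
      {
          /* canonical daterange form is '[)': bump an exclusive lower bound */
          bool lower_unbounded = false;
          bool lower_inclusive = false;
          Date lower = DATE_NOBEGIN;     /* '(-infinity, ...' */

          if (!lower_inclusive && bound_is_canonicalizable(lower_unbounded, lower))
          {
              lower++;                   /* would be a no-op on infinity */
              lower_inclusive = true;
          }
          printf("lower=%d inclusive=%d\n", lower, lower_inclusive);
          return 0;
      }
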
* Fix nbtree metapage cache upgrade bug. (Peter Geoghegan, 2019-07-18)

  Commit 857f9c36cda, which taught nbtree VACUUM to avoid unnecessary
  index scans, bumped the nbtree version number from 2 to 3, while
  adding the ability for nbtree indexes to be upgraded on-the-fly.
  Various assertions that assumed that an nbtree index was always on
  version 2 had to be changed to accept any supported version (version 2
  or 3 on Postgres 11).

  However, a few assertions were missed in the initial commit, all of
  which were in code paths that cache a local copy of the metapage
  metadata, where the index had been expected to be on the current
  version (no longer version 2) as a generic sanity check. Rather than
  simply update the assertions, follow-up commit 0a64b45152b
  intentionally made the metapage caching code update the per-backend
  cached metadata version without changing the on-disk version at the
  same time. This could even happen when the planner needed to determine
  the height of a B-Tree for costing purposes. The assertions only fail
  on Postgres v12 when upgrading from v10, because they were adjusted to
  use the authoritative shared memory metapage by v12's commit dd299df8.

  To fix, remove the cache-only upgrade mechanism entirely, and update
  the assertions themselves to accept any supported version (go back to
  using the cached version in v12). The fix is almost a full revert of
  commit 0a64b45152b on the v11 branch.

  VACUUM only considers the authoritative metapage, and never bothers
  with a locally cached version, whereas everywhere else isn't
  interested in the metapage fields that were added by commit
  857f9c36cda. It seems unlikely that this bug has affected any user on
  v11.

  Reported-By: Christoph Berg
  Bug: #15896
  Discussion: https://postgr.es/m/15896-5b25e260fdb0b081%40postgresql.org
  Backpatch: 11-, where VACUUM was taught to avoid unnecessary index scans.

* Simplify bitmap updates in multivariate MCV code (Tomas Vondra, 2019-07-18)

  When evaluating clauses on a multivariate MCV list, we build a bitmap
  tracking how the clauses match each item of the MCV list. When
  updating the bitmap we need to consider the current value (tracking
  how the item matches preceding clauses), the match for the current
  clause, and whether the clauses are connected by AND or OR.

  Until now the logic was copied in every place updating the bitmap,
  which was not quite readable. So just move it to a separate function
  and call it where needed.

  Backpatch to 12, where the code was introduced. While not a bugfix,
  this should make maintenance and future backpatches easier.

  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu

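  The helper boils down to something like this (hypothetical name and
  shape, shown as a standalone sketch):

      #include <stdbool.h>
      #include <stdio.h>

      /* Fold one clause's match into the running per-item flag:
       * AND-lists require every clause to match, OR-lists any clause. */
      static bool
      bitmap_update(bool curr, bool match, bool is_or)
      {
          return is_or ? (curr || match) : (curr && match);
      }

      int
      main(void)
      {
          printf("AND: %d\n", bitmap_update(true, false, false));  /* 0 */
          printf("OR:  %d\n", bitmap_update(false, true, true));   /* 1 */
          return 0;
      }
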
* Fix handling of NULLs in MCV items and constants (Tomas Vondra, 2019-07-18)

  There were two issues in how the extended statistics handled NULL
  values in opclauses.

  Firstly, the code was oblivious to the possibility that Const may be
  NULL (constisnull=true), in which case the constvalue is undefined. We
  need to treat this as a mismatch, and not call the proc.

  Secondly, the MCV item itself may contain NULL values too - the code
  already did check that, and updated the match bitmap accordingly, but
  failed to ensure we won't call the operator procedure anyway. It did
  work for AND-clauses, because in that case false in the bitmap stops
  evaluation of further clauses. But for OR-clauses it was possible to
  get incorrect estimates or even trigger a crash.

  This fixes both issues by extending the existing check so that it
  looks at constisnull too, and making sure it skips calling the
  procedure.

  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu

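  A standalone model of the guard, with simplified types (the real code
  invokes the operator's procedure through the function-call machinery;
  the toy op_eq below stands in for that):

      #include <stdbool.h>
      #include <stdio.h>

      typedef struct
      {
          int  value;
          bool isnull;                   /* mirrors Const.constisnull */
      } Cell;

      /* Stand-in for calling the operator's procedure. */
      static bool
      op_eq(int a, int b)
      {
          return a == b;
      }

      static bool
      item_matches(Cell item, Cell cst)
      {
          /* NULL on either side is a mismatch; never call the proc,
           * since the corresponding value is undefined. */
          if (item.isnull || cst.isnull)
              return false;
          return op_eq(item.value, cst.value);
      }

      int
      main(void)
      {
          Cell item = { 0, true };       /* NULL stored in the MCV item */
          Cell cst  = { 42, false };

          printf("match = %d\n", item_matches(item, cst));   /* 0, no call */
          return 0;
      }
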
* Fix handling of opclauses in extended statistics (Tomas Vondra, 2019-07-18)

  We expect opclauses to have exactly one Var and one Const, but the
  code was checking the Const by calling is_pseudo_constant_clause(),
  which is incorrect - we need a proper constant. Fixed by using plain
  IsA(x,Const) to check the type of the node. We need to do these checks
  in two places, so move it into a separate function that can be called
  in both places.

  Reported by Andreas Seltenreich, based on crash reported by sqlsmith.

  Backpatch to v12, where this code was introduced.

  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12

* Remove unnecessary TYPECACHE_GT_OPR lookup (Tomas Vondra, 2019-07-18)

  The TYPECACHE_GT_OPR is not needed (it used to be in an older version
  of the MCV code), but the compiler failed to detect this as the result
  was used in a fmgr_info() call, populating a FmgrInfo entry.

  Backpatch to v12, where this code was introduced.

  Discussion: https://postgr.es/m/8736jdhbhc.fsf%40ansel.ydns.eu
  Backpatch-to: 12

* tableam: comment improvements. (Andres Freund, 2019-07-17)

  Author: Brad DeJong
  Discussion: https://postgr.es/m/CAJnrtnxDYOQFsDfWz2iri0T_fFL2ZbbzgCOE=4yaMcszgcsf4A@mail.gmail.com
  Backpatch: 12-

* Update time zone data files to tzdata release 2019b. (Tom Lane, 2019-07-17)

  Brazil no longer observes DST. Historical corrections for Palestine,
  Hong Kong, and Italy.

* Sync our copy of the timezone library with IANA release tzcode2019b. (Tom Lane, 2019-07-17)

  A large fraction of this diff is just due to upstream's somewhat
  random decision to rename a bunch of internal variables and struct
  fields. However, there is an interesting new feature in zic: it's
  grown a "-b slim" option that emits zone files without 32-bit data and
  other backwards-compatibility hacks. We should consider whether we
  wish to enable that.

* Fix thinko in construction of old_conpfeqop list. (Tom Lane, 2019-07-16)

  This should lappend the OIDs, not lcons them; the existing code
  produced a list in reversed order. This is harmless for single-key FKs
  or FKs where all the key columns are of the same type, which probably
  explains how it went unnoticed. But if those conditions are not met,
  ATAddForeignKeyConstraint would make the wrong decision about whether
  an existing FK needs to be revalidated. I think it would almost always
  err in the safe direction by revalidating a constraint that didn't
  need it. You could imagine scenarios where the pfeqop check was fooled
  by swapping the types of two FK columns in one ALTER TABLE, but that
  case would probably be rejected by other tests, so it might be
  impossible to get to the worst-case scenario where an FK should be
  revalidated and isn't. (And even then, it's likely to be fine, unless
  there are weird inconsistencies in the equality behavior of the
  replacement types.) However, this is a performance bug at least.

  Noted while poking around to see whether lcons calls could be
  converted to lappend.

  This bug is old, dating to commit cb3a7c2b9, so back-patch to all
  supported branches.

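  A standalone illustration of the ordering difference, with plain
  arrays standing in for the list routines (lappend adds at the tail,
  lcons at the head):

      #include <stdio.h>

      int
      main(void)
      {
          int appended[3], prepended[3], n = 0;

          for (int v = 1; v <= 3; v++)
          {
              appended[n] = v;            /* lappend: add at the tail */

              for (int i = n; i > 0; i--) /* lcons: shift, add at the head */
                  prepended[i] = prepended[i - 1];
              prepended[0] = v;

              n++;
          }
          printf("lappend order: %d %d %d\n",
                 appended[0], appended[1], appended[2]);    /* 1 2 3 */
          printf("lcons order:   %d %d %d\n",
                 prepended[0], prepended[1], prepended[2]); /* 3 2 1 */
          return 0;
      }
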
* Correct nbtsplitloc.c comment. (Peter Geoghegan, 2019-07-15)

  The logic just added by commit e3899ffd falls back on a 50:50 page
  split in the event of a new item that's just to the right of our
  provisional "many duplicates" split point. Fix a comment that
  incorrectly claimed that the new item had to be just to the left of
  our provisional split point.

  Backpatch: 12-, just like commit e3899ffd.

* Fix pathological nbtree split point choice issue. (Peter Geoghegan, 2019-07-15)

  Specific ever-decreasing insertion patterns could cause successive
  unbalanced nbtree page splits. Problem cases involve a large group of
  duplicates to the left, and ever-decreasing insertions to the right.

  To fix, detect the situation by considering the newitem offset before
  performing a split using nbtsplitloc.c's "many duplicates" strategy.
  If the new item was inserted just to the right of our provisional
  "many duplicates" split point, infer ever-decreasing insertions and
  fall back on a 50:50 (space delta optimal) split. This seems to barely
  affect cases that already had acceptable space utilization.

  An alternative fix also seems possible. Instead of changing
  nbtsplitloc.c split choice logic, we could instead teach
  _bt_truncate() to generate a new value for new high keys by
  interpolating from the lastleft and firstright key values. That would
  certainly be a more elegant fix, but it isn't suitable for
  backpatching.

  Discussion: https://postgr.es/m/CAH2-WznCNvhZpxa__GqAa1fgQ9uYdVc=_apArkW2nc-K3O7_NA@mail.gmail.com
  Backpatch: 12-, where the nbtree page split enhancements were introduced.

* Revive test of concurrent OID generation. (Noah Misch, 2019-07-13)

  Commit 578b229718e8f15fa779e20f086c4b6bb3776106 replaced it with a
  concurrent "nextval" test. That version does not detect PostgreSQL's
  incompatibility with xlc 13.1.3, so bring back an OID-based test that
  does. Back-patch to v12, where that commit first appeared.

  Discussion: https://postgr.es/m/20190707170035.GA1485546@rfd.leadboat.com

* Fix get_actual_variable_range() to cope with broken HOT chains. (Tom Lane, 2019-07-12)

  Commit 3ca930fc3 modified get_actual_variable_range() to use a new
  "SnapshotNonVacuumable" snapshot type for selecting tuples that it
  would consider valid. However, because that snapshot type can accept
  recently-dead tuples, this caused a bug when using a recently-created
  index: we might accept a recently-dead tuple that is an early member
  of a broken HOT chain and does not actually match the index entry.
  Then, the data extracted from the heap tuple would not necessarily be
  an endpoint value of the column; it could even be NULL, leading to
  get_actual_variable_range() itself reporting "found unexpected null
  value in index". Even without an error, this could lead to poor plan
  choices due to an erroneous notion of the endpoint value.

  We can improve matters by changing the code to use the index-only scan
  technique (which didn't exist when get_actual_variable_range was
  originally written). If any of the tuples in a HOT chain are live
  enough to satisfy SnapshotNonVacuumable, we take the data from the
  index entry, ignoring what is in the heap. This fixes the problem
  without changing the live-vs-dead-tuple behavior from what was
  intended by commit 3ca930fc3.

  A side benefit is that for static tables we might not have to touch
  the heap at all (when the extremal value is in an all-visible page).
  In addition, we can save some overhead by not having to create a
  complete ExecutorState, and we don't need to run FormIndexDatum,
  avoiding more cycles as well as the possibility of failure for indexes
  on expressions.

  (I'm not sure that this code would ever be used to determine the
  extreme value of an expression, in the current state of the planner;
  but it's definitely possible that lower-order columns of the selected
  index could be expressions. So one could construct perhaps-artificial
  examples in which the old code unexpectedly failed due to trying to
  compute an expression's value for a now-dead row.)

  Per report from Manuel Rigger. Back-patch to v11 where commit
  3ca930fc3 came in.

  Discussion: https://postgr.es/m/CA+u7OA7W4NWEhCvftdV6_8bbm2vgypi5nuxfnSEJQqVKFSUoMg@mail.gmail.com

* Fix RANGE partition pruning with multiple boolean partition keys (David Rowley, 2019-07-12)

  match_clause_to_partition_key incorrectly would return
  PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the
  current partition key. This was a problem, as it causes the calling
  function to discard the qual and not try to match it to any other
  partition key. If there was another partition key which did match this
  qual, then the qual would not be checked again and we could fail to
  prune some partitions.

  The worst this could do was to cause partitions not to be pruned when
  they could have been, so there was no danger of incorrect query
  results here.

  Fix this by changing match_boolean_partition_clause to have it return
  a PartClauseMatchStatus rather than a boolean value. This allows it to
  communicate if the qual is unsupported or if it just does not match
  this particular partition key; previously these two cases were treated
  the same. Now, if match_clause_to_partition_key is unable to match the
  qual to any other qual type then we can simply return the value from
  the match_boolean_partition_clause call so that the calling function
  properly treats the qual as either unmatched or unsupported.

  Reported-by: Rares Salcudean
  Reviewed-by: Amit Langote
  Backpatch-through: 11 where partition pruning was introduced
  Discussion: https://postgr.es/m/CAHp_FN2xwEznH6oyS0hNTuUUZKp5PvegcVv=Co6nBXJ+mC7Y5w@mail.gmail.com

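  A standalone model of the control-flow change (simplified enum values;
  the real code uses PartClauseMatchStatus): "doesn't match this key"
  must let the caller try the next key, unlike "unsupported".

      #include <stdio.h>

      typedef enum
      {
          CLAUSE_NOMATCH,                /* try the next partition key */
          CLAUSE_MATCH,                  /* qual usable for pruning */
          CLAUSE_UNSUPPORTED             /* give up on this qual */
      } MatchStatus;

      static MatchStatus
      match_clause_to_key(int qual_attno, int key_attno)
      {
          return qual_attno == key_attno ? CLAUSE_MATCH : CLAUSE_NOMATCH;
      }

      int
      main(void)
      {
          int part_keys[] = { 1, 2 };    /* two boolean partition keys */
          int qual_attno = 2;            /* qual references the second key */

          for (int i = 0; i < 2; i++)
          {
              MatchStatus s = match_clause_to_key(qual_attno, part_keys[i]);

              if (s == CLAUSE_MATCH)
                  printf("qual matched partition key %d\n", part_keys[i]);
              else if (s == CLAUSE_NOMATCH)
                  continue;              /* keep trying the other keys */
              else
                  break;                 /* truly unsupported: stop */
          }
          return 0;
      }
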
* Fix variable initialization when using buffering build with GiST (Michael Paquier, 2019-07-10)

  This can cause valgrind to complain, as the flag marking a buffer as a
  temporary copy was not getting initialized. While on it, fill
  newly-created buffer pages with zeros. This does not matter when
  loading a block from a temporary file, but it makes the push of an
  index tuple into a new buffer page safer.

  This was introduced by 1d27dcf, so backpatch all the way down to 9.4.

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/15899-0d24fb273b3dd90c@postgresql.org
  Backpatch-through: 9.4

* Fix missing calls to table_finish_bulk_insert during COPY, take 2 (David Rowley, 2019-07-10)

  86b85044e abstracted calls to heap functions in COPY FROM to support a
  generic table AM. However, when performing a copy into a partitioned
  table, this commit neglected to call table_finish_bulk_insert for each
  partition. Before 86b85044e, when we always called the heap functions,
  there was no need to call heapam_finish_bulk_insert for partitions
  since it only did any work when performing a copy without WAL. For
  partitioned tables, this was unsupported anyway, so there was no
  issue. With pluggable storage, we can't make any assumptions about
  what the table AM might want to do in its equivalent function, so we'd
  better ensure we always call table_finish_bulk_insert for each
  partition that's received a row.

  For now, we make the table_finish_bulk_insert call whenever we evict a
  CopyMultiInsertBuffer out of the CopyMultiInsertInfo. This does mean
  that it's possible that we call table_finish_bulk_insert multiple
  times per partition, which is not a problem other than being an
  inefficiency. Improving this requires a more invasive patch, so let's
  leave that for another day.

  This also changes things so that we no longer needlessly call
  table_finish_bulk_insert when performing a COPY FROM for a
  non-partitioned table when not using multi-inserts.

  Reported-by: Robert Haas
  Backpatch-through: 12
  Discussion: https://postgr.es/m/CA+TgmoYK=6BpxiJ0tN-p9wtH0BTAfbdxzHhwou0mdud4+BkYuQ@mail.gmail.com

* Fix a few typos and minor word smithing in tableam comments. (Amit Kapila, 2019-07-10)

  Reported-by: Ashwin Agrawal
  Author: Ashwin Agrawal
  Reviewed-by: Amit Kapila
  Backpatch-through: 12, where it was introduced
  Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com

* Pass QueryEnvironment down to EvalPlanQual's EState. (Thomas Munro, 2019-07-10)

  Otherwise the executor can't see trigger transition tables during EPQ
  evaluation. Fixes bug #15900 and almost certainly also #15720.
  Back-patch to 10, where trigger transition tables landed.

  Author: Alex Aktsipetrov
  Reviewed-by: Thomas Munro, Tom Lane
  Discussion: https://postgr.es/m/15900-bc482754fe8d7415%40postgresql.org
  Discussion: https://postgr.es/m/15720-38c2b29e5d720187%40postgresql.org

* Propagate trigger arguments to partitions (Alvaro Herrera, 2019-07-09)

  We were creating the cloned triggers with an empty list of arguments,
  losing the ones that had been specified by the user when creating the
  trigger in the partitioned table. Repair.

  This was forgotten in commit 86f575948c77.

  Author: Patrick McHardy
  Reviewed-by: Tomas Vondra
  Discussion: https://postgr.es/m/20190709130027.amr2cavjvo7rdvac@access1.trash.net
  Discussion: https://postgr.es/m/15752-123bc90287986de4@postgresql.org

* Message style improvements (Peter Eisentraut, 2019-07-09)

* Force hash joins to be enabled in the hash join regression tests. (Thomas Munro, 2019-07-09)

  Otherwise the regressplans.sh tests generate extremely slow nested
  loop joins. Back-patch to 11 where the hash join tests came in.

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/20190708055256.GB2709%40paquier.xyz

* Fix small memory leak in ecpglib when ecpg_update_declare_statement() is called the second time. (Michael Meskes, 2019-07-08)

  Author: "Zhang, Jie" <zhangjie2@cn.fujitsu.com>

* In pg_log_generic(), be more paranoid about preserving errno. (Tom Lane, 2019-07-06)

  This code failed to account for the possibility that malloc() would
  change errno, resulting in wrong output for %m, not to mention the
  possibility of message truncation. Such a change is obviously expected
  when malloc fails, but there's reason to fear that on some platforms
  even a successful malloc call can modify errno.

  Discussion: https://postgr.es/m/2576.1527382833@sss.pgh.pa.us

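  A standalone sketch of the fix's shape (simplified; not the actual
  implementation in src/common/logging.c): capture errno before any call
  that may clobber it, then expand %m-style output from the saved value.

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      static void
      log_with_m(const char *prefix)
      {
          int   save_errno = errno;     /* capture before malloc can clobber it */
          char *buf = malloc(128);

          if (buf == NULL)
              return;
          /* use the saved value, not whatever errno is now */
          snprintf(buf, 128, "%s: %s", prefix, strerror(save_errno));
          fprintf(stderr, "%s\n", buf);
          free(buf);
      }

      int
      main(void)
      {
          errno = ENOENT;
          log_with_m("could not open file \"x\"");
          return 0;
      }
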
* Add missing source files to nls.mk (Peter Eisentraut, 2019-07-06)

* psql: Fix logging output format (Peter Eisentraut, 2019-07-06)

  In normal interactive mode, psql's log messages accidentally got a
  "psql:" prefix that was not supposed to be there. This only happened
  if there was no .psqlrc file being read, so it wasn't discovered for a
  while. Fix this by adding the appropriate logging format configuration
  call in the right code path.

  Discussion: https://www.postgresql.org/message-id/7586.1560540361@sss.pgh.pa.us

* Add missing assertions for required table am callbacks. (Amit Kapila, 2019-07-06)

  Reported-by: Ashwin Agrawal
  Author: Ashwin Agrawal
  Reviewed-by: Amit Kapila
  Backpatch-through: 12, where it was introduced
  Discussion: https://postgr.es/m/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA@mail.gmail.com

* Remove unused variable in statext_mcv_serialize() (Tomas Vondra, 2019-07-05)

  The itemlen variable used to be referenced in multiple places, but
  since reworking the serialization code it's used only in one assert.
  Fixed by removing the variable and calling the macro from the assert
  directly.

  Backpatch to 12, where this code was introduced.

  Reported-by: Jeff Janes
  Discussion: https://postgr.es/m/CAMkU=1zc_ovH9NZd_9ovuiEWkF9yX06URUDdXCmgDydf-bqB5A@mail.gmail.com

* Simplify pg_mcv_list (de)serialization (Tomas Vondra, 2019-07-05)

  The serialization format of multivariate MCV lists included alignment
  in order to allow direct access to part of the serialized data, but
  despite multiple fixes (see for example commits d85e0f366a and
  ea4e1c0e8f) this proved to be problematic.

  This commit abandons alignment in the serialized format, and just
  copies everything during deserialization. We now also track the amount
  of memory needed after deserialization (including alignment), which
  allows us to deserialize the MCV list in a single pass.

  Bump catversion, as this affects contents of pg_statistic_ext_data.

  Backpatch to 12, where multi-column MCV lists were introduced.

  Author: Tomas Vondra
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/2201.1561521148@sss.pgh.pa.us

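  A standalone illustration of why copy-based deserialization sidesteps
  alignment concerns (not the pg_mcv_list code itself):

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      int
      main(void)
      {
          unsigned char serialized[16] = { 0 };
          int32_t value = 42;

          memcpy(serialized + 1, &value, sizeof(value)); /* deliberately misaligned */

          /* Copying out is safe whatever the buffer's alignment: */
          int32_t out;
          memcpy(&out, serialized + 1, sizeof(out));
          printf("%d\n", out);

          /* Direct access like *(int32_t *)(serialized + 1) is what the
           * old aligned format tried to permit; it can trap on strict
           * platforms. */
          return 0;
      }
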
* Fix pg_mcv_list_items() to produce text[] (Tomas Vondra, 2019-07-05)

  The function pg_mcv_list_items() returns values stored in MCV items.
  The items may contain columns with different data types, so the
  function was generating a text array-like representation, but in an
  ad-hoc way without properly escaping various characters etc.

  Fixed by simply building a text[] array, which also makes it easier to
  use from queries etc.

  Requires changes to the pg_proc entry, so bump catversion.

  Backpatch to 12, where multi-column MCV lists were introduced.

  Author: Tomas Vondra
  Reviewed-by: Dean Rasheed
  Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development

* Speed-up build of MCV lists with many distinct values (Tomas Vondra, 2019-07-05)

  When building multi-column MCV lists, we compute base frequency for
  each item, i.e. a product of per-column frequencies for values from
  the item. As a value may be in multiple groups, the code was scanning
  the whole array of groups while adding items to the MCV list. This
  works fine as long as the number of distinct groups is small, but it's
  easy to trigger O(N^2) behavior, especially after increasing
  statistics target.

  This commit precomputes frequencies for values in all columns, so that
  when computing the base frequency it's enough to make a simple bsearch
  lookup in the array.

  Backpatch to 12, where multi-column MCV lists were introduced.

  Discussion: https://postgr.es/m/20190618205920.qtlzcu73whfpfqne@development

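  A standalone sketch of the lookup pattern (toy data and types; the
  real code precomputes per-column value frequencies):

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct
      {
          int    value;
          double freq;                  /* per-column frequency of value */
      } ValueFreq;

      static int
      cmp_valuefreq(const void *a, const void *b)
      {
          int av = ((const ValueFreq *) a)->value;
          int bv = ((const ValueFreq *) b)->value;

          return (av > bv) - (av < bv);
      }

      int
      main(void)
      {
          /* Precomputed once, sorted by value. */
          ValueFreq freqs[] = { {1, 0.5}, {2, 0.3}, {3, 0.2} };
          ValueFreq key = { 2, 0.0 };

          /* O(log N) lookup instead of rescanning every group per item. */
          ValueFreq *hit = bsearch(&key, freqs, 3, sizeof(ValueFreq),
                                   cmp_valuefreq);
          if (hit)
              printf("base frequency contribution of %d: %g\n",
                     hit->value, hit->freq);
          return 0;
      }
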
* Ensure plpgsql result tuples have the right composite type marking. (Tom Lane, 2019-07-03)

  A function that is declared to return a named composite type must
  return tuple datums that are physically marked as having that type.
  The plpgsql code path that allowed directly returning an
  expanded-record datum forgot to check that, so that an expanded record
  marked as type RECORDOID could be returned if it had a
  physically-compatible tupdesc. This'd be harmless, I think, if the
  record value never escaped the current session --- but it's possible
  for it to get stored into a table, and then subsequent sessions can't
  interpret the anonymous record type.

  Fix by flattening the record into a tuple datum and overwriting its
  type/typmod fields, if its declared type doesn't match the function's
  declared type. (In principle it might be possible to just change the
  expanded record's stored type ID info, but there are enough tricky
  consequences that I didn't want to mess with that, especially not in a
  back-patched bug fix.)

  Per bug report from Steve Rogerson. Back-patch to v11 where the bug
  was introduced.

  Discussion: https://postgr.es/m/cbaecae6-7b87-584e-45f6-4d047b92ca2a@yewtc.demon.co.uk

* Don't remove surplus columns from GROUP BY for inheritance parents (David Rowley, 2019-07-03)

  d4c3a156c added code to remove columns that were not part of a table's
  PRIMARY KEY constraint from the GROUP BY clause when all the primary
  key columns were present in the GROUP BY. This is fine to do since we
  know that there will only be one row per group coming from this
  relation. However, the logic failed to consider inheritance parent
  relations. These can have child relations without a primary key, but
  even if they did, they could duplicate one of the parent's rows or one
  from another child relation. In this case, those additional GROUP BY
  columns are required.

  Fix this by disabling the optimization for inheritance parent tables.
  In v11 and beyond, partitioned tables are fine since partitions cannot
  overlap, and before v11 partitioned tables could not have a primary
  key.

  Reported-by: Manuel Rigger
  Discussion: http://postgr.es/m/CA+u7OA7VLKf_vEr6kLF3MnWSA9LToJYncgpNX2tQ-oWzYCBQAw@mail.gmail.com
  Backpatch-through: 9.6

* Add support for Visual Studio 2019 in build scripts (Michael Paquier, 2019-07-03)

  This fixes at the same time a set of inconsistencies in the
  documentation and the scripts related to the versions of Windows SDK
  supported.

  Author: Haribabu Kommi
  Reviewed-by: Andrew Dunstan, Juan José Santamaría Flecha, Michael Paquier
  Discussion: https://postgr.es/m/CAJrrPGcfqXhfPyMrny9apoDU7M1t59dzVAvoJ9AeAh5BJi+UzA@mail.gmail.com
  Backpatch-through: 9.4

* Fix accidentally swapped error message arguments (Peter Eisentraut, 2019-07-02)

  Author: Alexey Kondratov <a.kondratov@postgrespro.ru>

* Remove redundant newlines from error messages (Peter Eisentraut, 2019-07-02)

  These are no longer needed/allowed with the new logging API.

* Don't treat complete_from_const as equivalent to complete_from_list. (Tom Lane, 2019-07-02)

  Commit 4f3b38fe2 supposed that complete_from_const() is equivalent to
  the one-element-list case of complete_from_list(), but that's not
  really true at all. complete_from_const() supposes that the completion
  is certain enough to justify wiping out whatever the user typed, while
  complete_from_list() will only provide completions that match the
  word-so-far.

  In practice, given the lame parsing technology used by tab-complete.c,
  it's fairly hard to believe that we're *ever* certain enough about a
  completion to justify auto-correcting user input that doesn't match.
  Hence, remove the inappropriate unification of the two cases. As
  things now stand, complete_from_const() is used only for the situation
  where we have no matches and we need to keep readline from applying
  its default complete-with-file-names behavior.

  This (mis?) behavior actually exists much further back, but I'm
  hesitant to change it in released branches. It's not too late for v12,
  though, especially seeing that the aforesaid commit is new in v12.

  Per gripe from Ken Tanzer.

  Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com

* Fix tab completion of "SET variable TO|=" to not offer bogus completions. (Tom Lane, 2019-07-02)

  Don't think that the context "UPDATE tab SET var =" is a GUC-setting
  command.

  If we have "SET var =" but the "var" is not a known GUC variable,
  don't offer any completions. The most likely explanation is that we've
  misparsed the context and it's not really a GUC-setting command.

  Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists further
  back, but before 9.6 the code looks very different and it doesn't
  actually know whether the "var" name matches anything, so I desisted
  from trying to fix it.

  Discussion: https://postgr.es/m/CAD3a31XpXzrZA9TT3BqLSHghdTK+=cXjNCE+oL2Zn4+oWoc=qA@mail.gmail.com

* Revert "Insert temporary debugging output in regression tests."Tom Lane2019-07-01
| | | | | | | | | | This reverts commit f03a9ca4366d064d89b7cf7ed75d4e43f2ed0667, in the v12 branch only. We don't want to ship v12 with that, since it causes occasional test failures (as a result of statistics transmission not being entirely reliable). I'll leave it in HEAD though, in hopes that we'll eventually capture an instance of the original problematic behavior.
* pgindent run prior to branching v12. (Tom Lane, 2019-07-01)

  pgperltidy and reformat-dat-files too, though the latter didn't find
  anything to change.
