path: root/src
Commit message (Author, Date)
...
* Allow table AMs to use rd_amcache, too. (Heikki Linnakangas, 2019-07-30)
  The rd_amcache field allows an index AM to cache arbitrary information in a relcache entry. This commit moves the cleanup of rd_amcache so that it can also be used by table AMs. Nothing takes advantage of that yet, but it will surely come in handy for anyone writing new table AMs.
  Backpatch to v12, where the table AM interface was introduced.
  Reviewed-by: Julien Rouhaud
* Print WAL position correctly in pg_rewind error message. (Heikki Linnakangas, 2019-07-30)
  This has been wrong ever since pg_rewind was added. The if-branch just above this, where we print the same error with an extra message supplied by XLogReadRecord(), got this right, but the variable name was wrong in the else-branch. As a consequence, the error printed the WAL position as 0/0 if there was an error reading a WAL file.
  Backpatch to 9.5, where pg_rewind was added.
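  A minimal sketch of the %X/%X idiom PostgreSQL uses to print a WAL position in messages like this; the bug was simply passing the wrong variable to it, which rendered as 0/0. The helper below is illustrative only, not the actual pg_rewind code, and the local XLogRecPtr typedef is an assumption standing in for the real one.

      #include <stdint.h>
      #include <stdio.h>

      typedef uint64_t XLogRecPtr;    /* assumption: mirrors PostgreSQL's 64-bit WAL pointer */

      static void
      report_wal_read_error(XLogRecPtr errptr)
      {
          /* print as high/low 32-bit halves, e.g. 1/4E2B8D40 */
          fprintf(stderr, "could not read WAL record at %X/%X\n",
                  (unsigned int) (errptr >> 32), (unsigned int) errptr);
      }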
* Don't build extended statistics on inheritance trees (Tomas Vondra, 2019-07-30)
  When performing ANALYZE on inheritance trees, we collect two samples for each relation - one for the relation alone, and one for the inheritance subtree (relation and its child relations). And then we build statistics on each sample, so for each relation we get two sets of statistics.
  For regular (per-column) statistics this works fine, because the catalog includes a flag differentiating statistics built from those two samples. But we don't have such a flag in the extended statistics catalogs, and we ended up updating the same row twice, triggering this error:
      ERROR:  tuple already updated by self
  The simplest solution is to disable extended statistics on inheritance trees, which is what this commit is doing. In the future we may need to do something similar to per-column statistics, but that requires adding a flag to the catalog - and that's not backpatchable. Moreover, the current selectivity estimation code only works with individual relations, so building statistics on inheritance trees would be pointless anyway.
  Author: Tomas Vondra
  Backpatch-to: 10-
  Discussion: https://postgr.es/m/20190618231233.GA27470@telsasoft.com
  Reported-by: Justin Pryzby
* Fix memory leak coming from simple lists built in reindexdb (Michael Paquier, 2019-07-30)
  When building a list of relations for parallel processing of a schema or a database (or just a single-entry list for the non-parallel case with the database name), the list is allocated and built on the fly for each database processed, and was then leaked once that database-level reindex was done. The leaks accumulate when processing all databases, and could become a visible issue with thousands of relations.
  This is fixed by introducing a new routine in simple_list.c to free all the elements in a simple list made of strings or OIDs. The header of the list may be a variable declaration or an allocated pointer, so we don't provide a routine to free that part, to keep the interface simple.
  Per report from Coverity for an issue introduced by 5ab892c; valgrind complains about the leak as well. The idea to introduce a new routine in simple_list.c is from Tom Lane.
  Author: Michael Paquier
  Reviewed-by: Tom Lane
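  As a hedged illustration of what such a cleanup routine has to do, here is a minimal sketch of freeing every cell of a singly linked string list while leaving the list header itself untouched (the commit notes the header may be a stack variable). The struct and function names are hypothetical, not necessarily those added to simple_list.c.

      #include <stdlib.h>

      struct cell  { struct cell *next; char *val; };
      struct slist { struct cell *head; struct cell *tail; };

      static void
      slist_destroy_cells(struct slist *list)
      {
          struct cell *cur = list->head;

          while (cur != NULL)
          {
              struct cell *next = cur->next;

              free(cur->val);     /* free the payload string */
              free(cur);          /* then the cell itself */
              cur = next;
          }
          list->head = list->tail = NULL;     /* header is reset, not freed */
      }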
* Fix busted logic for parallel lock grouping in TopoSort(). (Tom Lane, 2019-07-29)
  A "break" statement erroneously left behind by commit a1c1af2a1 caused TopoSort to do the wrong thing if a lock's wait list contained multiple members of the same locking group.
  Because parallel workers don't normally need any locks not already taken by their leader, this is very hard --- maybe impossible --- to hit in production. Still, if it did happen, the queries involved in an otherwise-resolvable deadlock would block until canceled.
  In addition to removing the bogus "break", add an Assert showing that the conflicting uses of the beforeConstraints[] array (for both counts and flags) don't overlap, and add some commentary explaining why not; because it's not obvious without explanation, IMHO.
  Original report and patch from Rui Hai Jiang; additional assert and commentary by me. Back-patch to 9.6 where the bug came in.
  Discussion: https://postgr.es/m/CAEri+mLd3bpHLyW+a9pSe1y=aEkeuJpwBSwvo-+m4n7-ceRmXw@mail.gmail.com
* Handle fsync failures in pg_receivewal and pg_recvlogical (Peter Eisentraut, 2019-07-29)
  It is not safe to simply report an fsync error and continue. We must exit the program instead.
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Sehrope Sarkuni <sehrope@jackdb.com>
  Discussion: https://www.postgresql.org/message-id/flat/9b49fe44-8f3e-eca9-5914-29e9e99030bf@2ndquadrant.com
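  A minimal sketch of that policy, assuming ordinary POSIX fsync(): treat a failure as fatal instead of logging it and carrying on. This illustrates the principle only; it is not the pg_receivewal/pg_recvlogical code.

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      static void
      fsync_or_die(int fd, const char *path)
      {
          if (fsync(fd) != 0)
          {
              fprintf(stderr, "could not fsync file \"%s\": %s\n",
                      path, strerror(errno));
              exit(1);    /* the write-back state is unknown, so retrying is unsafe */
          }
      }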
* Fix inconsistencies and typos in the tree (Michael Paquier, 2019-07-29)
  This is numbered take 8, and addresses again a set of issues with code comments, variable names and unreferenced variables.
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/b137b5eb-9c95-9c2f-586e-38aba7d59788@gmail.com
* Fix handling of expressions and predicates in REINDEX CONCURRENTLY (Michael Paquier, 2019-07-29)
  When copying the definition of an index rebuilt concurrently for the new entry, the index information was taken directly from the old index using the relation cache. In this case, predicates and expressions have some post-processing to prepare things for the planner, which loses some information, including the collations added in any of them.
  This inconsistency can cause issues when attempting, for example, a table rewrite, and makes the new indexes rebuilt concurrently inconsistent with the old entries.
  In order to fix the problem, fetch expressions and predicates directly from the catalog of the old entry, and fill in the IndexInfo for the new index with that. This makes the process more consistent with DefineIndex(), and the code is refactored with the addition of a routine to create an IndexInfo node.
  Reported-by: Manuel Rigger
  Author: Michael Paquier
  Discussion: https://postgr.es/m/CA+u7OA5Hp0ra235F3czPom_FyAd-3+XwSJmX95r1+sRPOJc9VQ@mail.gmail.com
  Backpatch-through: 12
* Avoid macro clash with LLVM 9. (Thomas Munro, 2019-07-29)
  Early previews of LLVM 9 reveal that our Min() macro causes compiler errors in LLVM headers reached by the #include directives in llvmjit_inline.cpp. Let's just undefine it. Per buildfarm animal seawasp. Back-patch to 11.
  Reviewed-by: Fabien Coelho, Tom Lane
  Discussion: https://postgr.es/m/20190606173216.GA6306%40alvherre.pgsql
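  The fix amounts to the usual defensive pattern for macro clashes with third-party headers, sketched below. The Min() definition shown follows PostgreSQL's conventional one, but the surrounding arrangement is only illustrative.

      /* project-wide convenience macro (as conventionally defined in c.h) */
      #define Min(x, y)   ((x) < (y) ? (x) : (y))

      /*
       * Just before including third-party headers that declare their own Min
       * (here, the LLVM headers pulled in by llvmjit_inline.cpp), drop ours
       * so the two definitions cannot collide.
       */
      #ifdef Min
      #undef Min
      #endif
      /* #include <llvm/IR/Module.h> ... */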
* Improve test coverage for LISTEN/NOTIFY. (Tom Lane, 2019-07-28)
  We had no actual end-to-end test of NOTIFY message delivery. In the core async.sql regression test, testing this is problematic because psql traditionally prints the PID of the sending backend, making the output unstable. We also have an isolation test script, but it likewise failed to prove that delivery worked, because isolationtester.c had no provisions for detecting/reporting NOTIFY messages.
  Hence, add such provisions to isolationtester.c, and extend async-notify.spec to include direct tests of basic NOTIFY functionality.
  I also added tests showing that NOTIFY de-duplicates messages normally, but not across subtransaction boundaries. (That's the historical behavior since we introduced subtransactions, though perhaps we ought to change it.)
  Patch by me, with suggestions/review by Andres Freund.
  Discussion: https://postgr.es/m/31304.1564246011@sss.pgh.pa.us
* Fix typo in fd.c (Michael Paquier, 2019-07-28)
  The frontend version of walkdir() is defined in file_utils.c, and not initdb.c.
  Author: Sehrope Sarkuni
  Discussion: https://postgr.es/m/CAH7T-artawnBt4=KODNCD8Mt2ZX4CCjJT8c=_=950xjutcRZ4Q@mail.gmail.com
* Fix isolationtester race condition for notices sent before blocking. (Tom Lane, 2019-07-27)
  If a test sends a notice just before blocking, it's possible on slow machines for isolationtester to detect the blocked state before it's consumed the notice. (For this to happen, the notice would have to arrive after isolationtester has waited for data for 10ms, so on fast/lightly-loaded machines it's hard to reproduce the failure.)
  But, if we have seen the backend as blocked, it's certainly already sent any notices it's going to send. Therefore, one more round of PQconsumeInput and PQisBusy should be enough to collect and process any such notices.
  This appears to explain the instability noted in commit ebd499282, so undo the hack therein to not print notices from insert-conflict-specconflict.
  Patch by me, diagnosis by Andres Freund.
  Discussion: https://postgr.es/m/14616.1564251339@sss.pgh.pa.us
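  A hedged sketch of that extra round using libpq: one PQconsumeInput() call followed by PQisBusy() completes processing of whatever the backend already sent, letting the client's notice processor run. PQconsumeInput() and PQisBusy() are real libpq functions; the wrapper itself is invented for illustration and is not the isolationtester code.

      #include <libpq-fe.h>

      /*
       * Once the backend is known to be blocked, any notice it sent beforehand
       * is already on the wire; read and process it before reporting the block.
       */
      static int
      drain_pending_notices(PGconn *conn)
      {
          if (!PQconsumeInput(conn))
              return 0;               /* connection failure */
          (void) PQisBusy(conn);      /* completes processing of buffered input */
          return 1;
      }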
* Don't drop NOTICE messages in isolation tests. (Tom Lane, 2019-07-27)
  For its entire existence, isolationtester.c has forced client_min_messages to WARNING, but that seems like a very poor choice of test design. It should be up to individual test scripts to manage whether they emit notices and to ensure that the results are stable. (There were no NOTICE messages in the original set of isolation tests, so this was certainly dead code when committed, but perhaps it was needed at some earlier point.)
  It's possible that the original motivation was due to platform-dependent variations in the timing of stdout vs. stderr output. That should be moot since commits 73bcb76b7/6eda3e9c2, but just in case, adjust isotesterNoticeProcessor to print to stdout not stderr. (stderr seems like the wrong thing anyway: it should be for error printouts not expected test output.)
  Testing shows that the notices in insert-conflict-specconflict are indeed a bit timing-unstable on very slow machines, so hide them; maybe we can improve that later. Also, make the notices in plpgsql-toast a bit less verbose than the original code would've had them.
  Discussion: https://postgr.es/m/14616.1564251339@sss.pgh.pa.us
* Add support for --jobs in reindexdb (Michael Paquier, 2019-07-27)
  When doing a schema-level or a database-level operation, a list of relations to reindex is built and then processed in parallel using multiple connections, based on the recent refactoring for parallel slots in src/bin/scripts/. System catalogs are processed first in a serialized fashion to prevent deadlocks, followed by the rest done in parallel.
  This new option is not compatible with --system, as reindexing system catalogs in parallel can lead to deadlocks, nor with --index, as there is no conflict handling for indexes rebuilt in parallel that depend on the same relation.
  Author: Julien Rouhaud
  Reviewed-by: Sergei Kornilov, Michael Paquier
  Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com
* pg_upgrade: Default new bindir to pg_upgrade location (Peter Eisentraut, 2019-07-27)
  Make the directory where the pg_upgrade binary resides the default for the new bindir, as running the pg_upgrade binary from where the new cluster is installed is a very common scenario. Setting this as the default bindir for the new cluster removes the need to provide it explicitly via -B in many cases.
  To support directories being missing from option parsing, extend the directory check with a missingOk mode where the path must be filled in at a later point before being used. Also move the exec_path check to earlier in setup to make sure we know the new cluster bindir when we scan for required executables. This removes exec_path from the OSInfo struct as it is not used anywhere.
  Author: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
* pg_upgrade: Check all used executables (Peter Eisentraut, 2019-07-27)
  Expand the validate_exec() calls to cover all the used binaries.
  Author: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
  Discussion: https://www.postgresql.org/message-id/flat/9328.1552952117@sss.pgh.pa.us
* Fix typo in pg_upgrade file header (Peter Eisentraut, 2019-07-27)
  Author: Daniel Gustafsson <daniel@yesql.se>
* Don't uselessly escape a string that doesn't need escaping (Alvaro Herrera, 2019-07-26)
  Per gripe from Ian Barwick
  Co-authored-by: Ian Barwick <ian@2ndquadrant.com>
  Discussion: https://postgr.es/m/CABvVfJWNnNKb8cHsTLhkTsvL1+G6BVcV+57+w1JZ61p8YGPdWQ@mail.gmail.com
* Tweak our special-case logic for the IANA "Factory" timezone. (Tom Lane, 2019-07-26)
  pg_timezone_names() tries to avoid showing the "Factory" zone in the view, mainly because that has traditionally had a very long "abbreviation" such as "Local time zone must be set--see zic manual page", so that showing it messes up psql's formatting of the whole view.
  Since tzdb version 2016g, IANA instead uses the abbreviation "-00", which is sane enough that there's no reason to discriminate against it. On the other hand, it emerges that FreeBSD and possibly other packagers are so wedded to backwards compatibility that they hack the IANA data to keep the old spelling --- and not just that old spelling, but even older spellings that IANA used back in the stone age. This caused the filter logic to fail to suppress "Factory" at all on such platforms, though the formatting problem is definitely real in that case.
  To solve both problems, get rid of the hard-wired assumption about exactly what Factory's abbreviation is, and instead reject abbreviations exceeding 31 characters. This will allow Factory to appear in the view if and only if it's using the modern abbreviation.
  In passing, simplify the code we add to zic.c to support "zic -P" to remove its now-obsolete hacks to not print the Factory zone's abbreviation. Unlike pg_timezone_names(), there's no reason for that code to support old/nonstandard timezone data.
  Since we generally prefer to keep timezone-related behavior the same in all branches, and since this is arguably a bug fix, back-patch to all supported branches.
  Discussion: https://postgr.es/m/3961.1564086915@sss.pgh.pa.us
* Avoid choosing "localtime" or "posixrules" as TimeZone during initdb.Tom Lane2019-07-26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Some platforms create a file named "localtime" in the system timezone directory, making it a copy or link to the active time zone file. If Postgres is built with --with-system-tzdata, initdb will see that file as an exact match to localtime(3)'s behavior, and it may decide that "localtime" is the most preferred spelling of the active zone. That's a very bad choice though, because it's neither informative, nor portable, nor stable if someone changes the system timezone setting. Extend the preference logic added by commit e3846a00c so that we will prefer any other zone file that matches localtime's behavior over "localtime". On the same logic, also discriminate against "posixrules", which is another not-really-a-zone file that is often present in the timezone directory. (Since we install "posixrules" but not "localtime", this change can affect the behavior of Postgres with or without --with-system-tzdata.) Note that this change doesn't prevent anyone from choosing these pseudo-zones if they really want to (i.e., by setting TZ for initdb, or modifying the timezone GUC later on). It just prevents initdb from preferring these zone names when there are multiple matches to localtime's behavior. Since we generally prefer to keep timezone-related behavior the same in all branches, and since this is arguably a bug fix, back-patch to all supported branches. Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us
* Fix loss of fractional digits for large values in cash_numeric(). (Tom Lane, 2019-07-26)
  Money values exceeding about 18 digits (depending on lc_monetary) could be inaccurately converted to numeric, due to select_div_scale() deciding it didn't need to compute any fractional digits. Force its hand by setting the dscale of one division input to equal the number of fractional digits we need.
  In passing, rearrange the logic to not do useless work in locales where money values are considered integral.
  Per bug #15925 from Slawomir Chodnicki. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/15925-da9953e2674bb5c8@postgresql.org
* Fix LDAP test instability. (Thomas Munro, 2019-07-26)
  After starting slapd, wait until it can accept a connection before beginning the real test work. This avoids occasional test failures. Back-patch to 11, where the LDAP tests arrived.
  Author: Thomas Munro
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/20190719033013.GI1859%40paquier.xyz
* Add missing (COSTS OFF) to EXPLAIN added in previous commit. (Andres Freund, 2019-07-25)
  Backpatch: 12-, like the previous commit
* Fix slot type handling for Agg nodes performing internal sorts. (Andres Freund, 2019-07-25)
  Since 15d8f8312 we assert that - and since 7ef04e4d2cb2, 4da597edf1 rely on - the slot type for an expression's ecxt_{outer,inner,scan}tuple not changing, unless explicitly flagged as such. That makes it possible to either skip deforming (for a virtual tuple slot) or optimize the code for JIT-accelerated deforming appropriately (for other known slot types).
  This assumption was sometimes violated for grouping sets, when nodeAgg.c internally uses tuplesorts, and the child node doesn't return a TTSOpsMinimalTuple type slot. Detect that case, and flag that the outer slot might not be "fixed".
  It's probably worthwhile to optimize this further in the future, and more granularly determine whether the slot is fixed. As we already instantiate per-phase transition and equal expressions, we could cheaply set the slot type appropriately for each phase. But that's a separate change from this bugfix.
  This commit does include a very minor optimization: avoid creating a slot for handling tuplesorts if no such sorts are performed. Previously we created that slot unnecessarily in the common case of computing all grouping sets via hashing. The code looked too confusing without that, as the conditions for needing a sort slot and flagging that the slot type isn't fixed are the same.
  Reported-By: Ashutosh Sharma
  Author: Andres Freund
  Discussion: https://postgr.es/m/CAE9k0PmNaMD2oHTEAhRyxnxpaDaYkuBYkLa1dpOpn=RS0iS2AQ@mail.gmail.com
  Backpatch: 12-, where the bug was introduced in 15d8f8312
* Fix syntax error in commit 20e99cddd. (Tom Lane, 2019-07-25)
  Per buildfarm.
* Fix failures to ignore \r when reading Windows-style newlines. (Tom Lane, 2019-07-25)
  libpq failed to ignore Windows-style newlines in connection service files. This normally wasn't a problem on Windows itself, because fgets() would convert \r\n to just \n. But if libpq were running inside a program that changes the default fopen mode to binary, it would see the \r's and think they were data. In any case, it's project policy to ignore \r in text files unconditionally, because people sometimes try to use files with DOS-style newlines on Unix machines, where the C library won't hide that from us.
  Hence, adjust parseServiceFile() to ignore \r as well as \n at the end of the line. In HEAD, go a little further and make it ignore all trailing whitespace, to match what it's always done with leading whitespace.
  In HEAD, also run around and fix up everyplace where we have newline-chomping code to make all those places look consistent and uniformly drop \r. It is not clear whether any of those changes are fixing live bugs. Most of the non-cosmetic changes are in places that are reading popen output, and the jury is still out as to whether popen on Windows can return \r\n. (The Windows-specific code in pipe_read_line seems to think so, but our lack of support for this elsewhere suggests maybe it's not a problem in practice.) Hence, I desisted from applying those changes to back branches, except in run_ssl_passphrase_command() which is new enough and little-tested enough that we'd probably not have heard about any problems there.
  Tom Lane and Michael Paquier, per bug #15827 from Jorge Gustavo Rocha. Back-patch the parseServiceFile() change to all supported branches, and the run_ssl_passphrase_command() change to v11 where that was added.
  Discussion: https://postgr.es/m/15827-e6ba53a3a7ed543c@postgresql.org
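  A minimal sketch of the chomping rule described above, assuming a NUL-terminated line buffer: strip the trailing \n and any \r in front of it (the HEAD variant additionally strips other trailing whitespace). This shows the general idiom only, not the actual parseServiceFile() code.

      #include <string.h>

      static void
      chomp_line(char *line)
      {
          size_t  len = strlen(line);

          /* drop trailing \n and \r, handling both Unix and DOS newlines */
          while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
              line[--len] = '\0';
      }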
* Honor MSVC WindowsSDKVersion if set (Andrew Dunstan, 2019-07-25)
  Add a line to the project file setting the target SDK. Otherwise, in VS2017 for example, the build will fail if the default but optional 8.1 SDK is not installed.
  Patch from Peifeng Qiu, slightly edited by me. Backpatch to all live branches.
  Discussion: https://postgr.es/m/CABmtVJhw1boP_bd4=b3Qv5YnqEdL696NtHFi2ruiyQ6mFHkeQQ@mail.gmail.com
* Fix system column accesses in ON CONFLICT ... RETURNING. (Andres Freund, 2019-07-24)
  After 277cb789836, ON CONFLICT ... SET ... RETURNING failed with
      ERROR:  virtual tuple table slot does not have system attributes
  when taking the update path, as the slot used to insert into the table (and then process RETURNING) was defined to be a virtual slot in that commit. Virtual slots don't support system columns except for tableoid and ctid, as the other system columns are AM dependent.
  Fix that by using a slot of the table's type. Add tests for system column accesses in ON CONFLICT ... RETURNING.
  Reported-By: Roby, bisected to the relevant commit by Jeff Janes
  Author: Andres Freund
  Discussion: https://postgr.es/m/73436355-6432-49B1-92ED-1FE4F7E7E100@finefun.com.au
  Backpatch: 12-, where the bug was introduced in 277cb789836
* Fix failure with pgperlcritic from the TAP test of synchronous replication (Michael Paquier, 2019-07-25)
  Oversight in 7d81bdc, which introduced a new routine in perl lacking a return clause. Per buildfarm member crake. Backpatch down to 9.6 like its parent.
  Reported-by: Andrew Dunstan
  Discussion: https://postgr.es/m/16da29fa-d504-1380-7095-40de586dc038@2ndQuadrant.com
  Backpatch-through: 9.6
* Fix infelicities in describeOneTableDetails' partitioned-table handling. (Tom Lane, 2019-07-24)
  describeOneTableDetails issued a partition-constraint-fetching query for every table, even ones it knows perfectly well are not partitions. To add insult to injury, it then proceeded to leak the empty PGresult if the table wasn't a partition. Doing that a lot of times might amount to a meaningful leak, so this seems like a back-patchable bug.
  Fix that, and also fix a related PGresult leak in the partition-parent case (though that leak would occur only if we got no row, which is unexpected).
  Minor code beautification too, to make this code look more like the pre-existing code around it.
  Back-patch the whole change into v12. However, the fact that we already know whether the table is a partition dates only to commit 1af25ca0c; back-patching the relevant changes from that is probably more churn than is justified in released branches. Hence, in v11 and v10, just do the minimum to fix the PGresult leaks.
  Noted while messing around with adjacent code for yesterday's \d improvements.
* Use full 64-bit XID for checking if a deleted GiST page is old enough. (Heikki Linnakangas, 2019-07-24)
  Otherwise, after a deleted page gets even older, it becomes unrecyclable again. B-tree has the same problem, and has had since time immemorial, but let's at least fix this in GiST, where this is new.
  Backpatch to v12, where GiST page deletion was introduced.
  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
* Refactor checks for deleted GiST pages. (Heikki Linnakangas, 2019-07-24)
  The explicit check in gistScanPage() isn't currently really necessary, as a deleted page is always empty, so the loop would fall through without doing anything, anyway. But it's a marginal optimization, and it gives a nice place to attach a comment to explain how it works.
  Backpatch to v12, where GiST page deletion was introduced.
  Reviewed-by: Andrey Borodin
  Discussion: https://www.postgresql.org/message-id/835A15A5-F1B4-4446-A711-BF48357EB602%40yandex-team.ru
* Don't assume expr is available in pgbench tests (Andrew Dunstan, 2019-07-24)
  Windows hosts do not normally come with expr, so instead of using that to test the \setshell command, use echo, which is fairly universally available.
  Backpatch to release 11, where this came in.
  Problem found by me, patch by Fabien Coelho.
* Improve stability of TAP test for synchronous replication (Michael Paquier, 2019-07-24)
  Slow buildfarm machines have run into issues with this TAP test caused by a race condition related to the startup of a set of standbys, where it is possible to finish with an unexpected order in the WAL sender array of the primary.
  This closes the race condition by making sure that any standby started is registered into the WAL sender array of the primary, based on lookups of pg_stat_replication, before the next one is started.
  Backpatch down to 9.6 where the test has been introduced.
  Author: Michael Paquier
  Reviewed-by: Álvaro Herrera, Noah Misch
  Discussion: https://postgr.es/m/20190617055145.GB18917@paquier.xyz
  Backpatch-through: 9.6
* Check that partitions are not in use when dropping constraints (Alvaro Herrera, 2019-07-23)
  If the user creates a deferred constraint in a partition, and in a transaction they cause the constraint's trigger execution to be deferred until commit time *and* drop the constraint, then when commit time comes the queued trigger will fail to run because the trigger object will have been dropped.
  This happens because when a constraint gets dropped in a partitioned table, the recursion to drop the ones in partitions is done by the dependency mechanism, not by ALTER TABLE traversing the recursion tree as in all other cases. In the non-partitioned case, this problem is avoided by checking that the table is not "in use" by alter-table; other alter-table subcommands that recurse to partitions do that check for each partition. But the dependency mechanism doesn't have a way to do that.
  Fix the problem by applying the same check to all partitions during ALTER TABLE's "prep" phase, which correctly raises the necessary error.
  Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
  Discussion: https://postgr.es/m/CAKcux6nZiO9-eEpr1ZD84bT1mBoVmeZkfont8iSpcmYrjhGWgA@mail.gmail.com
* Improve psql's \d output for partitioned indexes. (Tom Lane, 2019-07-23)
  Include partitioning information much as we do for partitioned tables. (However, \d+ doesn't show the partition bounds, because those are not stored for indexes.)
  In passing, fix a couple of queries to look less messy in -E output. Also, add some tests for \d on tables with nondefault tablespaces. (Somebody previously added a rather silly number of tests for \d on partitioned indexes, yet completely neglected other cases.)
  Justin Pryzby, reviewed by Fabien Coelho
  Discussion: https://postgr.es/m/20190422154902.GH14223@telsasoft.com
* Improve psql's \d output for TOAST tables. (Tom Lane, 2019-07-23)
  Add the name of the owning table to the footers for a TOAST table. Also, show all the same footers as for a regular table (in practice, this adds the index and perhaps the tablespace and access method).
  Justin Pryzby, reviewed by Fabien Coelho
  Discussion: https://postgr.es/m/20190422154902.GH14223@telsasoft.com
* Add CREATE DATABASE LOCALE option (Peter Eisentraut, 2019-07-23)
  This sets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is already supported in initdb, CREATE COLLATION, and createdb.
  Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
  Discussion: https://www.postgresql.org/message-id/flat/d9d5043a-dc70-da8a-0166-1e218e6e34d4%402ndquadrant.com
* Remove more progname references in vacuumdb.c (Michael Paquier, 2019-07-23)
  Oversight in 5f384037.
  Author: Álvaro Herrera
  Discussion: https://postgr.es/m/20190722151806.GA22634@alvherre.pgsql
* Install dependencies to prevent dropping partition key columns. (Tom Lane, 2019-07-22)
  The logic in ATExecDropColumn that rejects dropping partition key columns is quite an inadequate defense, because it doesn't execute in cases where a column needs to be dropped due to cascade from something that only the column, not the whole partitioned table, depends on. That leaves us with a badly broken partitioned table; even an attempt to load its relcache entry will fail.
  We really need to have explicit pg_depend entries that show that the column can't be dropped without dropping the whole table. Hence, add those entries. In v12 and HEAD, bump catversion to ensure that partitioned tables will have such entries. We can't do that in released branches of course, so in v10 and v11 this patch affords protection only to partitioned tables created after the patch is installed. Given the lack of field complaints (this bug was found by fuzz-testing not by end users), that's probably good enough.
  In passing, fix ATExecDropColumn and ATPrepAlterColumnType messages to be more specific about which partition key column they're complaining about.
  Per report from Manuel Rigger. Back-patch to v10 where partitioned tables were added.
  Discussion: https://postgr.es/m/CA+u7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg@mail.gmail.com
  Discussion: https://postgr.es/m/31920.1562526703@sss.pgh.pa.us
* Revert "initdb: Change authentication defaults"Peter Eisentraut2019-07-22
| | | | | | This reverts commit 09f08930f0f6fd4a7350ac02f29124b919727198. The buildfarm client needs some adjustments first.
* initdb: Change authentication defaults (Peter Eisentraut, 2019-07-22)
  Change the defaults for the pg_hba.conf generated by initdb to "peer" for local (if supported, else "md5") and "md5" for host. (Changing from "md5" to SCRAM is left as a separate exercise.)
  "peer" is currently not supported on AIX, HP-UX, and Windows. Users on those operating systems will now either have to provide a password to initdb or choose a different authentication method when running initdb.
  Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/bec17f0a-ddb1-8b95-5e69-368d9d0a3390%40postgresql.org
* Use appendBinaryStringInfo in more places where the length is known (David Rowley, 2019-07-23)
  When we already know the length that we're going to append, it makes sense to use appendBinaryStringInfo instead of appendStringInfoString, so that the append can be performed with a simple memcpy() using a known length rather than having to first perform a strlen() call to obtain the length.
  Discussion: https://postgr.es/m/CAKJS1f8+FRAM1s5+mAa3isajeEoAaicJ=4e0WzrH3tAusbbiMQ@mail.gmail.com
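  A small sketch of the substitution, using the stringinfo API named in the commit; the surrounding function and its arguments are illustrative only.

      #include "postgres.h"
      #include "lib/stringinfo.h"

      static void
      append_keyword(StringInfo buf, const char *kw, int kwlen)
      {
          /* before: appendStringInfoString(buf, kw);  -- strlen() then copy */
          appendBinaryStringInfo(buf, kw, kwlen);       /* copy the known length directly */
      }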
* Make identity sequence management more robust (Peter Eisentraut, 2019-07-22)
  Some code could get confused when certain catalog state involving both identity and serial sequences was present, perhaps during an attempt to upgrade the latter to the former. Specifically, dropping the default of a serial column maintains the ownership of the sequence by the column, and so it would then be possible to afterwards make the column an identity column that would now own two sequences. This causes the code that looks up the identity sequence to error out, making the new identity column inoperable until the ownership of the previous sequence is released.
  To fix this, make the identity sequence lookup only consider sequences with the appropriate dependency type for an identity sequence, so it only ever finds one (unless something else is broken). In the above example, the old serial sequence would then be ignored. Reorganize the various owned-sequence-lookup functions a bit to make this clearer.
  Reported-by: Laurenz Albe <laurenz.albe@cybertec.at>
  Discussion: https://www.postgresql.org/message-id/flat/470c54fc8590be4de0f41b0d295fd6390d5e8a6c.camel@cybertec.at
* Make better use of the new List implementation in a couple of places (David Rowley, 2019-07-22)
  In nodeAppend.c and nodeMergeAppend.c there were some foreach loops which looped over the list of subplans and only performed any work if the subplan index was found in a Bitmapset. With the old linked list implementation of List, this form made sense as accessing the Nth list element was O(N). However, thanks to 1cff1b95a we now have array-based lists, so accessing the Nth element has become O(1).
  Here we make the most of the O(1) lookups and just loop over the set members of the Bitmapset with bms_next_member(). This performs slightly better when a small number of the list items are in the Bitmapset. Micro benchmarks show that when the Bitmapset contains all or most of the list items then the new code is ever so slightly slower. In practice, the cost is so small that it's drowned out by various other things such as locking the relations belonging to each subplan, etc.
  The primary goal here is to leave better code examples around which benefit better from the new list implementation.
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAKJS1f8ZcsLVgkF4wOfRyMYTcPgLFiUAOedFC+U2vK_aFZk-BA@mail.gmail.com
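  A hedged sketch of the loop shape the commit switches to; the function and variable names are invented, but bms_next_member() and list_nth() are the APIs the commit relies on, with list_nth() being O(1) for array-based Lists.

      #include "postgres.h"
      #include "nodes/bitmapset.h"
      #include "nodes/pg_list.h"

      static void
      process_valid_subplans(List *subplans, Bitmapset *validsubplans)
      {
          int     i = -1;

          /* visit only the subplans whose indexes are set in the Bitmapset */
          while ((i = bms_next_member(validsubplans, i)) >= 0)
          {
              void   *subplan = list_nth(subplans, i);    /* O(1) with array-based Lists */

              (void) subplan;     /* ... do the per-subplan work here ... */
          }
      }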
* Fix inconsistencies and typos in the tree (Michael Paquier, 2019-07-22)
  This is numbered take 7, and addresses a set of issues with code comments, variable names and unreferenced variables.
  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/dff75442-2468-f74f-568c-6006e141062f@gmail.com
* Adjust overly strict Assert (David Rowley, 2019-07-22)
  3373c7155 changed how we determine EquivalenceClasses for relations and added an Assert to ensure that all relations mentioned in each EC's ec_relids were RELOPT_BASEREL. However, the join removal code may remove a LEFT JOIN, and since it does not clean up EC members belonging to the removed relations it can leave RELOPT_DEADREL rels in ec_relids. Fix this by adjusting the Assert to allow RELOPT_DEADREL rels too.
  Reported-by: sqlsmith via Andreas Seltenreich
  Discussion: https://postgr.es/m/87y30r8sls.fsf@ansel.ydns.eu
* Remove no-longer-helpful reliance on fixed-size local array. (Tom Lane, 2019-07-21)
  Coverity complained about this code, apparently because it uses a local array of size FUNC_MAX_ARGS without a guard that the input argument list is no longer than that. (Not sure why it complained today, since this code's been the same for a long time; possibly it re-analyzed everything the List API change touched?)
  Rather than add a guard, though, let's just get rid of the local array altogether. It was only there to avoid list_nth() calls, and those are no longer expensive.
* Fix compilation warning of pg_basebackup with MinGW (Michael Paquier, 2019-07-21)
  Several buildfarm members have been complaining about that with gcc, like jacana. Weirdly enough, Visual Studio's compilers do not find this issue.
  Author: Michael Paquier
  Reviewed-by: Andrew Dunstan
  Discussion: https://postgr.es/m/20190719050830.GK1859@paquier.xyz
* Speed up finding EquivalenceClasses for a given set of rels (David Rowley, 2019-07-21)
  Previously, in order to determine which ECs a relation had members in, we had to loop over all ECs stored in PlannerInfo's eq_classes and check if ec_relids mentioned the relation. For the most part, this was fine, as generally, unless queries were fairly complex, the overhead of performing the lookup would not have been that significant. However, when queries contained large numbers of joins and ECs, the overhead to find the set of classes matching a given set of relations could become a significant portion of the overall planning effort.
  Here we allow a much more efficient method to access the ECs which match a given relation or set of relations. A new Bitmapset field in RelOptInfo now exists to store the indexes into PlannerInfo's eq_classes list which each relation is mentioned in. This allows very fast lookups to find all ECs belonging to a single relation. When we need to look up ECs belonging to a given pair of relations, we can simply bitwise-AND the Bitmapsets from each relation and use the result to perform the lookup.
  We also take the opportunity to write a new implementation of generate_join_implied_equalities which makes use of the new indexes. generate_join_implied_equalities_for_ecs must remain as is, as it can be given a custom list of ECs, whose indexes we can't easily determine.
  This was originally intended to fix the performance penalty of looking up foreign keys matching a join condition, which was introduced by 100340e2d. However, we're speeding up much more than just that here.
  Author: David Rowley, Tom Lane
  Reviewed-by: Tom Lane, Tomas Vondra
  Discussion: https://postgr.es/m/6970.1545327857@sss.pgh.pa.us
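  A hedged sketch of the lookup structure the commit describes: each relation keeps a Bitmapset of indexes into the planner's eq_classes list, so the ECs shared by two relations fall out of one bitwise-AND followed by O(1) list_nth() fetches. The Bitmapset and List calls are real PostgreSQL APIs; the function, its parameters, and the field layout are illustrative, not the committed code.

      #include "postgres.h"
      #include "nodes/bitmapset.h"
      #include "nodes/pg_list.h"

      static List *
      ecs_for_relation_pair(List *eq_classes,
                            Bitmapset *rel1_eclass_indexes,
                            Bitmapset *rel2_eclass_indexes)
      {
          List       *result = NIL;
          Bitmapset  *matching = bms_intersect(rel1_eclass_indexes,
                                               rel2_eclass_indexes);
          int         i = -1;

          /* each surviving bit is an index into eq_classes shared by both rels */
          while ((i = bms_next_member(matching, i)) >= 0)
              result = lappend(result, list_nth(eq_classes, i));

          bms_free(matching);
          return result;
      }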