Commit log: root/src

* Clean up duplicate role and schema names in regression tests. (Tom Lane, 2018-03-15)

  Since these names are global, using the same ones in different regression
  tests creates a hazard of test failures if any two such scripts run
  concurrently. Let's establish a policy of not doing that. In the cases
  where a conflict existed, I chose to rename both sides: in principle one
  script or the other could've been left in possession of the common name,
  but that seems to just invite more trouble of the same sort.

  There are a number of places where scripts are using names that seem
  unduly generic, but in the absence of actual conflicts I left them alone.

  In addition, fix insert.sql's use of "someone_else" as a role name.
  That's a flat out violation of longstanding project policy, so back-patch
  that change to v10 where the usage appeared. The rest of this is just
  future-proofing, as no two of these scripts are actually run concurrently
  in the existing parallel_schedule.

  Conflicts of schema-qualified names also exist, but will be dealt with
  separately.

  Discussion: https://postgr.es/m/4627.1521070268@sss.pgh.pa.us

* test_ddl_deparse: Don't use pg_class as source for a matview. (Alvaro Herrera, 2018-03-15)

  Doing so causes a pg_upgrade of a database containing these objects to
  fail whenever pg_class changes. And it's pointless anyway: we have more
  interesting tables anyhow.

  Discussion: https://postgr.es/m/CAD5tBc+s8pW9WvH2+_z=B4x95FD4QuzZKcaMpff_9H4rS0VU1A@mail.gmail.com

* logical replication: fix OID type mapping mechanism. (Alvaro Herrera, 2018-03-14)

  The logical replication type map seems to have been misused by its only
  caller -- it would try to use the remote OID as input for local type
  routines, which unsurprisingly could result in bogus "cache lookup failed
  for type XYZ" errors, or random other type names being picked up if they
  happened to use the right OID. Fix that, changing
  Oid logicalrep_typmap_getid(Oid remoteid) to
  char *logicalrep_typmap_gettypname(Oid remoteid), which is more useful.
  If the remote type is not part of the typmap, this simply prints
  "unrecognized type" instead of choking trying to figure out -- a pointless
  exercise (because the only input for that comes from replication messages,
  which are not under the local node's control) and dangerous to boot, when
  called from within an error context callback.

  Once that is done, it comes to light that the local OID in the typmap
  entry was not being used for anything; the type/schema names are what we
  need, so remove local type OID from that struct. Once you do that, it
  becomes pointless to attach a callback to regular syscache invalidation.
  So remove that also.

  Reported-by: Dang Minh Huong
  Author: Masahiko Sawada
  Reviewed-by: Álvaro Herrera, Petr Jelínek, Dang Minh Huong, Atsushi Torikoshi
  Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A6BE964@BPXM05GP.gisp.nec.co.jp
  Discussion: https://postgr.es/m/75DB81BEEA95B445AE6D576A0A5C9E936A6C4B0A@BPXM05GP.gisp.nec.co.jp

* Log when a BRIN autosummarization request fails. (Alvaro Herrera, 2018-03-14)

  Autovacuum's 'workitem' request queue is of limited size, so requests can
  fail if they arrive more quickly than autovacuum can process them. Emit a
  log message when this happens, to provide better visibility of this.

  Backpatch to 10. While this represents an API change for
  AutoVacuumRequestWork, that function is not yet prepared to deal with
  external modules calling it, so there doesn't seem to be any risk (other
  than log spam, that is.)

  Author: Masahiko Sawada
  Reviewed-by: Fabrízio Mello, Ildar Musin, Álvaro Herrera
  Discussion: https://postgr.es/m/CAD21AoB1HrQhp6_4rTyHN5kWEJCEsG8YzsjZNt-ctoXSn5Uisw@mail.gmail.com

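  A sketch of what the caller-side change looks like, assuming
  AutoVacuumRequestWork() now returns false when the work-item queue is full
  (simplified; variable names and message text are illustrative, not the
  exact brin.c code):

      /* Request a summarization pass; warn if the queue had no room. */
      if (!AutoVacuumRequestWork(AVW_BRINSummarizeRange,
                                 RelationGetRelid(idxRel), lastRangeStart))
          ereport(LOG,
                  (errmsg("request for BRIN range summarization for index \"%s\" page %u was not recorded",
                          RelationGetRelationName(idxRel), lastRangeStart)));
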
* Fix double frees in ecpg. (Michael Meskes, 2018-03-14)

  Patch by Patrick Krecker <patrick@judicata.com>

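  The class of bug being fixed is the classic free-twice pattern. A minimal
  illustration (hypothetical names, not the actual ecpg code):

      #include <stdlib.h>

      static char *stmt_text;

      static void
      release_stmt(void)
      {
          free(stmt_text);
          stmt_text = NULL;   /* without this, a second call frees freed memory */
      }
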
* When updating reltuples after ANALYZE, just extrapolate from our sample. (Tom Lane, 2018-03-13)

  The existing logic for updating pg_class.reltuples trusted the sampling
  results only for the pages ANALYZE actually visited, preferring to
  believe the previous tuple density estimate for all the unvisited pages.
  While there's some rationale for doing that for VACUUM (first that VACUUM
  is likely to visit a very nonrandom subset of pages, and second that we
  know for sure that the unvisited pages did not change), there's no such
  rationale for ANALYZE: by assumption, it's looked at an unbiased random
  sample of the table's pages. Furthermore, in a very large table ANALYZE
  will have examined only a tiny fraction of the table's pages, meaning it
  cannot slew the overall density estimate very far at all.

  In a table that is physically growing, this causes reltuples to increase
  nearly proportionally to the change in relpages, regardless of what is
  actually happening in the table. This has been observed to cause
  reltuples to become so much larger than reality that it effectively shuts
  off autovacuum, whose threshold for doing anything is a fraction of
  reltuples. (Getting to the point where that would happen seems to require
  some additional, not well understood, conditions. But it's undeniable
  that if reltuples is seriously off in a large table, ANALYZE alone will
  not fix it in any reasonable number of iterations, especially not if the
  table is continuing to grow.)

  Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone, and
  in ANALYZE, just extrapolate from the sample pages on the assumption that
  they provide an accurate model of the whole table. If, by very bad luck,
  they don't, at least another ANALYZE will fix it; in the old logic a
  single bad estimate could cause problems indefinitely.

  In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
  altogether; it was never used for anything and now it's totally
  pointless. But keep it in the back branches, in case any third-party code
  is calling this function.

  Per bug #15005. Back-patch to all supported branches.

  David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

  Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

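  A minimal sketch of the extrapolation ANALYZE now relies on (hypothetical
  helper, not the actual vacuum.c code): scale the sampled live-tuple
  density up to the whole table.

      #include <math.h>

      static double
      analyze_reltuples(double total_pages, double scanned_pages,
                        double live_tuples_in_sample)
      {
          if (scanned_pages <= 0)
              return 0;
          /* density observed in the sample, applied to every page */
          return floor(live_tuples_in_sample / scanned_pages * total_pages + 0.5);
      }
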
* Avoid holding AutovacuumScheduleLock while rechecking table statistics. (Tom Lane, 2018-03-13)

  In databases with many tables, re-fetching the statistics takes some
  time, so that this behavior seriously decreases the available concurrency
  for multiple autovac workers. There's discussion afoot about more
  complete fixes, but a simple and back-patchable amelioration is to claim
  the table and release the lock before rechecking stats. If we find out
  there's no longer a reason to process the table, re-taking the lock to
  un-claim the table is cheap enough.

  (This patch is quite old, but got lost amongst a discussion of more
  aggressive fixes. It's not clear when or if such a fix will be accepted,
  but in any case it'd be unlikely to get back-patched. Let's do this now
  so we have some improvement for the back branches.)

  In passing, make the normal un-claim step take AutovacuumScheduleLock not
  AutovacuumLock, since that is what is documented to protect the
  wi_tableoid field. This wasn't an actual bug in view of the fact that
  readers of that field hold both locks, but it creates some concurrency
  penalty against operations that need only AutovacuumLock.

  Back-patch to all supported versions.

  Jeff Janes
  Discussion: https://postgr.es/m/26118.1520865816@sss.pgh.pa.us

* Set connection back to NULL after freeing it. (Michael Meskes, 2018-03-13)

  Patch by Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>

* Fix CREATE TABLE / LIKE with bigint identity column. (Peter Eisentraut, 2018-03-13)

  CREATE TABLE / LIKE with a bigint identity column would fail on platforms
  where long is 32 bits. Copying the sequence values used makeInteger(),
  which would truncate the 64-bit sequence data to 32 bits. To fix, use
  makeFloat() instead, like the parser. (This does not actually make use of
  floats, but stores the values as strings.)

  Bug: #15096
  Reviewed-by: Michael Paquier <michael@paquier.xyz>

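  A sketch of the technique as a hypothetical helper (the real fix edits the
  backend's sequence-option-copying code directly):

      #include "postgres.h"
      #include "nodes/value.h"

      /* Wrap an int64 sequence bound in a parse node without truncation:
       * a T_Float node stores the digits as a string, so no precision is
       * lost even where "long" is only 32 bits. */
      static Value *
      make_int64_value(int64 v)
      {
          char    buf[32];

          snprintf(buf, sizeof(buf), INT64_FORMAT, v);
          return makeFloat(pstrdup(buf));
      }
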
* Fix improper uses of canonicalize_qual(). (Tom Lane, 2018-03-11)

  One of the things canonicalize_qual() does is to remove constant-NULL
  subexpressions of top-level AND/OR clauses. It does that on the
  assumption that what it's given is a top-level WHERE clause, so that NULL
  can be treated like FALSE. Although this is documented down inside a
  subroutine of canonicalize_qual(), it wasn't mentioned in the
  documentation of that function itself, and some callers hadn't gotten
  that memo.

  Notably, commit d007a9505 caused get_relation_constraints() to apply
  canonicalize_qual() to CHECK constraints. That allowed constraint
  exclusion to misoptimize situations in which a CHECK constraint had a
  provably-NULL subclause, as seen in the regression test case added here,
  in which a child table that should be scanned is not. (Although this
  thinko is ancient, the test case doesn't fail before 9.2, for reasons
  I've not bothered to track down in detail. There may be related cases
  that do fail before that.)

  More recently, commit f0e44751d added an independent bug by applying
  canonicalize_qual() to index expressions, which is even sillier since
  those might not even be boolean. If they are, though, I think this could
  lead to making incorrect index entries for affected index expressions in
  v10. I haven't attempted to prove that though.

  To fix, add an "is_check" parameter to canonicalize_qual() to specify
  whether it should assume WHERE or CHECK semantics, and make it perform
  NULL-elimination accordingly. Adjust the callers to apply the right
  semantics, or remove the call entirely in cases where it's not known
  that the expression has one or the other semantics. I also removed the
  call in some cases involving partition expressions, where it should be a
  no-op because such expressions should be canonical already ... and was a
  no-op, independently of whether it could in principle have done
  something, because it was being handed the qual in implicit-AND format
  which isn't what it expects. In HEAD, add an Assert to catch that type of
  mistake in future.

  This represents an API break for external callers of
  canonicalize_qual(). While that's intentional in HEAD to make such
  callers think about which case applies to them, it seems like something
  we probably wouldn't be thanked for in released branches. Hence, in
  released branches, the extra parameter is added to a new function
  canonicalize_qual_ext(), and canonicalize_qual() is a wrapper that
  retains its old behavior.

  Patch by me with suggestions from Dean Rasheed. Back-patch to all
  supported branches.

  Discussion: https://postgr.es/m/24475.1520635069@sss.pgh.pa.us

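  A sketch of how call sites choose semantics under the HEAD signature
  described above (illustrative variable names, not actual planner code):

      /* WHERE clause: a constant-NULL subclause acts like FALSE,
       * so it may be removed from a top-level AND. */
      whereQual = canonicalize_qual(whereQual, false);

      /* CHECK constraint: NULL means "constraint passes", so the same
       * folding would be incorrect; request CHECK semantics instead. */
      checkQual = canonicalize_qual(checkQual, true);
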
* In initdb, don't bother trying max_connections = 10. (Tom Lane, 2018-03-08)

  The server won't actually start with that setting anymore, not since we
  raised the default max_wal_senders to 10. Per discussion, we don't wish
  to back down on that default, so instead raise the effective floor for
  max_connections (to 20). It's still possible to configure a smaller
  setting manually, but initdb won't set it that way.

  Since that change happened in v10, back-patch to v10.

  Kyotaro Horiguchi
  Discussion: https://postgr.es/m/20180209.170823.42008365.horiguchi.kyotaro@lab.ntt.co.jp

* Fix typo. (Alvaro Herrera, 2018-03-07)

  Author: Kyotaro HORIGUCHI
  Discussion: https://postgr.es/m/20180307.163428.209919771.horiguchi.kyotaro@lab.ntt.co.jp

* Refrain from duplicating data in reorderbuffers. (Alvaro Herrera, 2018-03-06)

  If a walsender exits leaving data in reorderbuffers, the next walsender
  that tries to decode the same transaction would append its decoded data
  in the same spill files without truncating them first, which effectively
  duplicates the data. Avoid that by removing any leftover reorderbuffer
  spill files when a walsender starts.

  Backpatch to 9.4; this bug has been there from the very beginning of
  logical decoding.

  Author: Craig Ringer, revised by me
  Reviewed by: Álvaro Herrera, Petr Jelínek, Masahiko Sawada

* Fix bogus Name assignment in CreateStatistics. (Alvaro Herrera, 2018-03-06)

  Apparently, it doesn't work to use a plain cstring as a Name datum: you
  may end up having random bytes because of failing to zero the bytes after
  the terminating \0, as indicated by valgrind. I introduced this bug in
  5564c1181548, so backpatch this fix to REL_10_STABLE, like that commit.

  While at it, fix a slightly misleading comment, pointed out by David
  Rowley.

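  The safe pattern, sketched (namestrcpy() copies the string and zero-fills
  the rest of the fixed-width NameData buffer; the statistics-object name
  here is illustrative):

      #include "postgres.h"
      #include "utils/builtins.h"

      NameData    stxname;

      namestrcpy(&stxname, "my_stats");
      /* values[Anum_pg_statistic_ext_stxname - 1] = NameGetDatum(&stxname); */
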
* Clone extended stats in CREATE TABLE (LIKE INCLUDING ALL). (Alvaro Herrera, 2018-03-05)

  The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
  cloning of extended statistics on the source table, but it failed to do
  so. Patch it up so that it does. Also include an INCLUDING STATISTICS
  option to the LIKE clause, so that the behavior can be requested
  individually, or excluded individually.

  While at it, reorder the INCLUDING options, both in code and in docs, in
  alphabetical order, which makes more sense than the
  feature-implementation order that was previously used.

  Backpatch this to Postgres 10, where extended statistics were introduced,
  because this is seen as an oversight in a fresh feature which is better
  to get consistent from the get-go instead of changing only in pg11.

  In pg11, comments on statistics objects are cloned too. In pg10 they are
  not, because I (Álvaro) was too much of a coward to change the parse node
  as required to support it. Also, in pg10 I chose not to renumber the
  parser symbols for the various INCLUDING options in LIKE, for the same
  reason. Any corresponding user-visible changes (docs) are backpatched,
  though.

  Reported-by: Stephen Froehlich
  Author: David Rowley
  Reviewed-by: Álvaro Herrera, Tomas Vondra
  Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com

* Fix pg_rewind to handle relation data files in tablespaces properly. (Fujii Masao, 2018-03-06)

  pg_rewind checks whether each file is a relation data file, from its
  path. Previously this check logic had a bug which made pg_rewind fail to
  recognize any relation data files in tablespaces, which also caused an
  assertion failure in pg_rewind.

  Back-patch to 9.5 where pg_rewind was added.

  Author: Takayuki Tsunakawa
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8D6C7A@G01JPEXMBYT05

* Fix assorted issues in convert_to_scalar(). (Tom Lane, 2018-03-03)

  If convert_to_scalar is passed a pair of datatypes it can't cope with,
  its former behavior was just to elog(ERROR). While this is OK so far as
  the core code is concerned, there's extension code that would like to use
  scalarltsel/scalargtsel/etc as selectivity estimators for operators that
  work on non-core datatypes, and this behavior is a show-stopper for that
  use-case. If we simply allow convert_to_scalar to return FALSE instead of
  outright failing, then the main logic of scalarltsel/scalargtsel will
  work fine for any operator that behaves like a scalar inequality
  comparison. The lack of conversion capability will mean that we can't
  estimate to better than histogram-bin-width precision, since the code
  will effectively assume that the comparison constant falls at the middle
  of its bin. But that's still a lot better than nothing. (Someday we
  should provide a way for extension code to supply a custom version of
  convert_to_scalar, but today is not that day.)

  While poking at this issue, we noted that the existing code for handling
  type bytea in convert_to_scalar is several bricks shy of a load. It
  assumes without checking that if the comparison value is type bytea, the
  bounds values are too; in the worst case this could lead to a crash. It
  also fails to detoast the input values, so that the comparison result is
  complete garbage if any input is toasted out-of-line, compressed, or even
  just short-header. I'm not sure how often such cases actually occur ---
  the bounds values, at least, are probably safe since they are elements of
  an array and hence can't be toasted. But that doesn't make this code OK.

  Back-patch to all supported branches, partly because author requested
  that, but mostly because of the bytea bugs. The change in API for the
  exposed routine convert_network_to_scalar() is theoretically a back-patch
  hazard, but it seems pretty unlikely that any third-party code is calling
  that function directly.

  Tomas Vondra, with some adjustments by me

  Discussion: https://postgr.es/m/b68441b6-d18f-13ab-b43b-9a72188a4e02@2ndquadrant.com

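  A sketch of how the selectivity logic can now degrade gracefully when no
  conversion exists (simplified from the histogram-selectivity code path;
  variable names are illustrative):

      double  low, high, val, binfrac;

      if (!convert_to_scalar(constval, consttype, &val,
                             lobound, hibound, boundstype,
                             &low, &high))
      {
          /* No way to place the constant within the bin: assume it
           * falls in the middle, i.e. histogram-bin-width precision. */
          binfrac = 0.5;
      }
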
* Fix VM buffer pin management in heap_lock_updated_tuple_rec(). (Tom Lane, 2018-03-02)

  Sloppy coding in this function could lead to leaking a VM buffer pin, or
  to attempting to free the same pin twice. Repair. While at it, reduce the
  code's tendency to free and reacquire the same page pin.

  Back-patch to 9.6; before that, this routine did not concern itself with
  VM pages.

  Amit Kapila and Tom Lane
  Discussion: https://postgr.es/m/CAA4eK1KJKwhc=isgTQHjM76CAdVswzNeAuZkh_cx-6QgGkSEgA@mail.gmail.com

* Make gistvacuumcleanup() count the actual number of index tuples. (Tom Lane, 2018-03-02)

  Previously, it just returned the heap tuple count, which might be only an
  estimate, and would be completely the wrong thing if the index is
  partial. Since this function scans every index page anyway to find free
  pages, it's practically free to count the surviving index tuples. Let's
  do that and return an accurate count.

  This is easily visible as a wrong reltuples value for a partial GiST
  index following VACUUM, so back-patch to all supported branches.

  Andrey Borodin, reviewed by Michail Nikolaev
  Discussion: https://postgr.es/m/151956654251.6915.675951950408204404.pgcf@coridan.postgresql.org

* Use ereport not elog for some corrupt-HOT-chain reports. (Tom Lane, 2018-03-01)

  These errors have been seen in the field in corrupted-data situations. It
  seems worthwhile to report them with ERRCODE_DATA_CORRUPTED, rather than
  the generic ERRCODE_INTERNAL_ERROR, for the benefit of log monitoring and
  tools like amcheck. However, use errmsg_internal so that the text strings
  still aren't translated; it seems unlikely to be worth translators' time
  to do so.

  Back-patch to 9.3, like the predecessor commit d70cf811f that introduced
  these elog calls originally (replacing Asserts).

  Peter Geoghegan
  Discussion: https://postgr.es/m/CAH2-Wzmn4-Pg-UGFwyuyK-wiTih9j32pwg_7T9iwqXpAUZr=Mg@mail.gmail.com

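  The reporting pattern being adopted, sketched with an illustrative
  message (a fragment; blkno and offnum stand for the corrupted tuple's
  location):

      ereport(ERROR,
              (errcode(ERRCODE_DATA_CORRUPTED),
               errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u)",
                               blkno, offnum)));
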
* Relax overly strict sanity check for upgraded ancient databases. (Alvaro Herrera, 2018-03-01)

  Commit 4800f16a7ad0 added some sanity checks to ensure we don't
  accidentally corrupt data, but in one of them we failed to consider the
  effects of a database upgraded from 9.2 or earlier, where a tuple
  exclusively locked prior to the upgrade has a slightly different bit
  pattern. Fix that by using the macro that we fixed in commit 74ebba84aeb6
  for similar situations.

  Reported-by: Alexandre Garcia
  Reviewed-by: Andres Freund
  Discussion: https://postgr.es/m/CAPYLKR6yxV4=pfW0Gwij7aPNiiPx+3ib4USVYnbuQdUtmkMaEA@mail.gmail.com

  Andres suspects that this bug may have wider ranging consequences, but I
  couldn't find anything.

* Fix IOS planning when only some index columns can return an attribute. (Tom Lane, 2018-03-01)

  Since 9.5, it's possible that some but not all columns of an index
  support returning the indexed value for index-only scans. If the same
  indexed column appears in index columns that behave both ways,
  check_index_only() supposed that it'd be OK to do an index-only scan
  testing that column; but that fails if we have to recheck the indexed
  condition on one of the columns that doesn't support this.

  In principle we could make this work by remapping the recheck expressions
  to pull the value from a column that does support returning the indexed
  value. But such cases are so weird and rare that, at least for now, it
  doesn't seem worth the trouble. Instead, just teach check_index_only that
  a value is returnable only if all the index columns containing it are
  returnable, rather than any of them.

  Per report from David Pereiro Lagares. Back-patch to 9.5 where the
  possibility of this situation appeared.

  Kyotaro Horiguchi
  Discussion: https://postgr.es/m/1516210494.1798.16.camel@nlpgo.com

* Rename base64 routines to avoid conflict with Solaris built-in functions. (Tom Lane, 2018-02-28)

  Solaris 11.4 has built-in functions named b64_encode and b64_decode.
  Rename ours to something else to avoid the conflict (fortunately, ours
  are static so the impact is limited).

  One could wish for less duplication of code in this area, but that would
  be a larger patch and not very suitable for back-patching. Since this is
  a portability fix, we want to put it into all supported branches.

  Report and initial patch by Rainer Orth, reviewed and adjusted a bit by
  Michael Paquier

  Discussion: https://postgr.es/m/ydd372wk28h.fsf@CeBiTec.Uni-Bielefeld.DE

* Remove restriction on SQL block length in isolationtester scanner. (Tom Lane, 2018-02-28)

  specscanner.l had a fixed limit of 1024 bytes on the length of individual
  SQL stanzas in an isolation test script. People are starting to run into
  that, so fix it by making the buffer resizable.

  Once we allow this in HEAD, it seems inevitable that somebody will try to
  back-patch a test that exceeds the old limit, so back-patch this change
  as a preventive measure.

  Daniel Gustafsson
  Discussion: https://postgr.es/m/8D628BE4-6606-4FF6-A3FF-8B2B0E9B43D0@yesql.se

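  A sketch of the usual growable-buffer idiom for a flex scanner
  (hypothetical names, not the actual specscanner.l code; error handling
  elided):

      #include <stdlib.h>

      static char   *litbuf = NULL;
      static size_t  litbufsize = 0;
      static size_t  litbufpos = 0;

      /* Append one character, doubling the buffer whenever it fills up. */
      static void
      addlitchar(char c)
      {
          if (litbufpos + 2 > litbufsize)
          {
              litbufsize = litbufsize ? litbufsize * 2 : 1024;
              litbuf = realloc(litbuf, litbufsize);
          }
          litbuf[litbufpos++] = c;
          litbuf[litbufpos] = '\0';
      }
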
* Fix up ecpg's configuration so it handles "long long int" in MSVC builds. (Tom Lane, 2018-02-27)

  Although configure-based builds correctly define HAVE_LONG_LONG_INT when
  appropriate (in both pg_config.h and ecpg_config.h), builds using the
  MSVC scripts failed to do so. This currently has no impact on the
  backend, since it uses that symbol nowhere; but it does prevent ecpg from
  supporting "long long int". Fix that.

  Also, adjust Solution.pm so that in the constructed ecpg_config.h file,
  the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related
  #defines, not the whole file. AFAICS this was a thinko on somebody's
  part: ENABLE_THREAD_SAFETY should always be defined in Windows builds,
  and in branches using USE_INTEGER_DATETIMES, the setting of that
  shouldn't depend on the compiler version either. If I'm wrong, I imagine
  the buildfarm will say so.

  Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes and
  Andrew Gierth. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org

* Use the correct tuplestore read pointer in a NamedTuplestoreScan. (Tom Lane, 2018-02-27)

  Tom Kazimiers reported that transition tables don't work correctly when
  they are scanned by more than one executor node. That's because commit
  18ce3a4ab allocated separate read pointers for each executor node, as it
  must, but failed to make them active at the appropriate times. Repair.

  Thomas Munro
  Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain

* Remove regression tests' CREATE FUNCTION commands for unused C functions. (Tom Lane, 2018-02-27)

  I removed these functions altogether in HEAD, in commit db3af9feb, and it
  emerges that that causes trouble for cross-branch upgrade testing. We
  could put back stub functions but that seems pretty silly. Instead,
  back-patch a minimal subset of db3af9feb, namely just removing the CREATE
  FUNCTION commands.

  Discussion: https://postgr.es/m/11927.1519756619@sss.pgh.pa.us

* Prevent dangling-pointer access when update trigger returns old tuple. (Tom Lane, 2018-02-27)

  A before-update row trigger may choose to return the "new" or "old" tuple
  unmodified. ExecBRUpdateTriggers failed to consider the second
  possibility, and would proceed to free the "old" tuple even if it was the
  one returned, leading to subsequent access to already-deallocated memory.
  In debug builds this reliably leads to an "invalid memory alloc request
  size" failure; in production builds it might accidentally work, but data
  corruption is also possible.

  This is a very old bug. There are probably a couple of reasons it hasn't
  been noticed up to now. It would be more usual to return NULL if one
  wanted to suppress the update action; returning "old" is significantly
  less efficient since the update will occur anyway. Also, none of the
  standard PLs would ever cause this because they all returned
  freshly-manufactured tuples even if they were just copying "old". But
  commit 4b93f5799 changed that for plpgsql, making it possible to see the
  bug with a plpgsql trigger. Still, this is certainly legal behavior for
  a trigger function, so it's ExecBRUpdateTriggers's fault not plpgsql's.

  It seems worth creating a test case that exercises returning "old"
  directly with a C-language trigger; testing this through plpgsql seems
  unreliable because its behavior might change again.

  Report and fix by Rushabh Lathia; regression test case by me. Back-patch
  to all supported branches.

  Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com

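  A minimal C trigger of the kind the new test exercises, sketched as a
  standalone module (simplified; the actual regression-test function
  differs):

      #include "postgres.h"
      #include "fmgr.h"
      #include "commands/trigger.h"

      PG_MODULE_MAGIC;

      PG_FUNCTION_INFO_V1(return_old_tuple);

      /* BEFORE UPDATE row trigger that returns the unmodified "old" tuple;
       * tg_trigtuple is the pre-update row in an UPDATE trigger. */
      Datum
      return_old_tuple(PG_FUNCTION_ARGS)
      {
          TriggerData *trigdata = (TriggerData *) fcinfo->context;

          return PointerGetDatum(trigdata->tg_trigtuple);
      }
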
* Stamp 10.3. (tag: REL_10_3) (Tom Lane, 2018-02-26)

* Schema-qualify references in test_ddl_deparse test script. (Tom Lane, 2018-02-26)

  This omission seems to be what is causing buildfarm failures on crake.

  Security: CVE-2018-1058

* Document security implications of search_path and the public schema. (Noah Misch, 2018-02-26)

  The ability to create like-named objects in different schemas opens up
  the potential for users to change the behavior of other users' queries,
  maliciously or accidentally. When you connect to a PostgreSQL server, you
  should remove from your search_path any schema for which a user other
  than yourself or superusers holds the CREATE privilege. If you do not,
  other users holding CREATE privilege can redefine the behavior of your
  commands, causing them to perform arbitrary SQL statements under your
  identity. "SET search_path = ..." and "SELECT
  pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one
  can use either as the first command of a session. As special exceptions,
  the following client applications behave as documented regardless of
  search_path settings and schema privileges: clusterdb createdb
  createlang createuser dropdb droplang dropuser ecpg (not programs it
  generates) initdb oid2name pg_archivecleanup pg_basebackup pg_config
  pg_controldata pg_ctl pg_dump pg_dumpall pg_isready pg_receivewal
  pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby pg_test_fsync
  pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb vacuumlo. Not
  included are core client programs that run user-specified SQL commands,
  namely psql and pgbench. PostgreSQL encourages non-core client
  applications to do likewise.

  Document this in the context of libpq connections, psql connections,
  dblink connections, ECPG connections, extension packaging, and schema
  usage patterns. The principal defense for applications is "SELECT
  pg_catalog.set_config('search_path', '', false)", and the principal
  defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC".
  Either one is sufficient to prevent attack. After a REVOKE, consider
  auditing the public schema for objects named like pg_catalog objects.

  Authors of SECURITY DEFINER functions use some of the same defenses, and
  the CREATE FUNCTION reference page already covered them thoroughly. This
  is a good opportunity to audit SECURITY DEFINER functions for robust
  security practice.

  Back-patch to 9.3 (all supported versions).

  Reviewed by Michael Paquier and Jonathan S. Katz. Reported by Arseniy
  Sharoglazov.

  Security: CVE-2018-1058

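  A sketch of the documented application-side defense, applied from C via
  libpq (the wrapper function is hypothetical; the key line is the
  set_config call):

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Run this as the first command of every session so no user-created
       * schema can capture unqualified names in later queries. */
      static int
      harden_search_path(PGconn *conn)
      {
          PGresult *res = PQexec(conn,
              "SELECT pg_catalog.set_config('search_path', '', false)");
          int ok = (PQresultStatus(res) == PGRES_TUPLES_OK);

          if (!ok)
              fprintf(stderr, "%s", PQerrorMessage(conn));
          PQclear(res);
          return ok;
      }
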
* Empty search_path in Autovacuum and non-psql/pgbench clients. (Noah Misch, 2018-02-26)

  This makes the client programs behave as documented regardless of the
  connect-time search_path and regardless of user-created objects. Today,
  a malicious user with CREATE permission on a search_path schema can take
  control of certain of these clients' queries and invoke arbitrary SQL
  functions under the client identity, often a superuser. This is
  exploitable in the default configuration, where all users have CREATE
  privilege on schema "public".

  This changes behavior of user-defined code stored in the database, like
  pg_index.indexprs and pg_extension_config_dump(). If they reach code
  bearing unqualified names, "does not exist" or "no schema has been
  selected to create in" errors might appear. Users may fix such errors by
  schema-qualifying affected names. After upgrading, consider watching
  server logs for these errors.

  The --table arguments of src/bin/scripts clients have been lax; for
  example, "vacuumdb -Zt pg_am\;CHECKPOINT" performed a checkpoint. That
  now fails, but for now, "vacuumdb -Zt 'pg_am(amname);CHECKPOINT'" still
  performs a checkpoint.

  Back-patch to 9.3 (all supported versions).

  Reviewed by Tom Lane, though this fix strategy was not his first choice.
  Reported by Arseniy Sharoglazov.

  Security: CVE-2018-1058

* Avoid using unsafe search_path settings during dump and restore. (Tom Lane, 2018-02-26)

  Historically, pg_dump has "set search_path = foo, pg_catalog" when
  dumping an object in schema "foo", and has also caused that setting to be
  used while restoring the object. This is problematic because functions
  and operators in schema "foo" could capture references meant to refer to
  pg_catalog entries, both in the queries issued by pg_dump and those
  issued during the subsequent restore run. That could result in
  dump/restore misbehavior, or in privilege escalation if a nefarious user
  installs trojan-horse functions or operators.

  This patch changes pg_dump so that it does not change the search_path
  dynamically. The emitted restore script sets the search_path to what was
  used at dump time, and then leaves it alone thereafter. Created objects
  are placed in the correct schema, regardless of the active search_path,
  by dint of schema-qualifying their names in the CREATE commands, as well
  as in subsequent ALTER and ALTER-like commands.

  Since this change requires a change in the behavior of pg_restore when
  processing an archive file made according to this new convention, bump
  the archive file version number; old versions of pg_restore will
  therefore refuse to process files made with new versions of pg_dump.

  Security: CVE-2018-1058

* Translation updates. (Peter Eisentraut, 2018-02-26)

  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: cf38d09075039ec3151f5226b4002569d05c489c

* Fix filtering of unsupported relations in logical replication. (Peter Eisentraut, 2018-02-23)

  In the pgoutput plugin, skip changes for relations that are not
  publishable, per is_publishable_class(). This concerns in particular
  materialized views and information_schema tables.

  While those relations cannot be part of a publication, per existing
  checks, they will be considered by a FOR ALL TABLES publication. A
  subscription would not actually apply changes for those relations, again
  per existing checks, but trying to match incoming changes to local tables
  on the subscriber would lead to errors if no matching local table exists.
  Skipping those changes on the publisher avoids sending useless changes
  and eliminates the error.

  Bug: #15044
  Reported-by: Chad Trabant <chad@iris.washington.edu>
  Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>

* Synchronize doc/ copies of src/test/examples/. (Noah Misch, 2018-02-23)

  This is mostly cosmetic, but it might fix build failures, on some
  platform, when copying from the documentation. Back-patch to 9.3 (all
  supported versions).

* Fix planner failures with overlapping mergejoin clauses in an outer join. (Tom Lane, 2018-02-23)

  Given overlapping or partially redundant join clauses, for example

      t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x

  the planner's EquivalenceClass machinery will ordinarily refactor the
  clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't
  see multiple references to the same EquivalenceClass in a list of join
  equality clauses. However, if the join is outer, it's incorrect to derive
  a restriction clause on the outer side from the join conditions, so the
  clause refactoring does not happen and we end up with overlapping join
  conditions. The code that attempted to deal with such cases had several
  subtle bugs, which could result in "left and right pathkeys do not match
  in mergejoin" or "outer pathkeys do not match mergeclauses" planner
  errors, if the selected join plan type was a mergejoin. (It does not
  appear that any actually incorrect plan could have been emitted.)

  The core of the problem really was failure to recognize that the outer
  and inner relations' pathkeys have different relationships to the
  mergeclause list. A join's mergeclause list is constructed by reference
  to the outer pathkeys, so it will always be ordered the same as the outer
  pathkeys, but this cannot be presumed true for the inner pathkeys. If the
  inner sides of the mergeclauses contain multiple references to the same
  EquivalenceClass ({t2.x} in the above example) then a simplistic
  rendering of the required inner sort order is like "ORDER BY t2.x, t2.x",
  but the pathkey machinery recognizes that the second sort column is
  redundant and throws it away. The mergejoin planning code failed to
  account for that behavior properly.

  One error was to try to generate cut-down versions of the mergeclause
  list from cut-down versions of the inner pathkeys in the same way as the
  initial construction of the mergeclause list from the outer pathkeys was
  done; this could lead to choosing a mergeclause list that fails to match
  the outer pathkeys. The other problem was that the pathkey cross-checking
  code in create_mergejoin_plan treated the inner and outer pathkey lists
  identically, whereas actually the expectations for them must be
  different. That led to false "pathkeys do not match" failures in some
  cases, and in principle could have led to failure to detect bogus plans
  in other cases, though there is no indication that such bogus plans could
  be generated.

  Reported by Alexander Kuzmenkov, who also reviewed this patch. This has
  been broken for years (back to around 8.3 according to my testing), so
  back-patch to all supported branches.

  Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru

* Backport: Mark assorted GUC variables as PGDLLIMPORT. (Andres Freund, 2018-02-22)

  This backpatches 935dee9ad5a8d12f4d3b772a6e6c99d245e5ad44 to the branches
  requested by extension authors.

  Original-Author: Metin Doslu
  Original-Committer: Robert Haas
  Author: Brian Cloutier

* Repair pg_upgrade's failure to preserve relfrozenxid for matviews. (Tom Lane, 2018-02-21)

  This oversight led to data corruption in matviews, manifesting as "could
  not access status of transaction" before our most recent releases, and
  "found xmin from before relfrozenxid" errors since then.

  The proximate cause of the problem seems to have been confusion between
  the task of preserving dropped-column status and the task of preserving
  frozenxid status. Those are required for distinct sets of relkinds, and
  the reasoning was entirely undocumented in the source code. In hopes of
  forestalling future errors of the same kind, try to improve the
  commentary in this area.

  In passing, also improve the remarkably unhelpful comments around
  pg_upgrade's set_frozenxids(). That's not actually buggy AFAICS, but good
  luck figuring out what it does from the old comments.

  Per report from Claudio Freire. It appears that bug #14852 from Alexey
  Ermakov is an earlier report of the same issue, and there may be other
  cases that we failed to identify at the time.

  Patch by me based on analysis by Andres Freund. The bug dates back to the
  introduction of matviews, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAGTBQpbrY9CdRGGhyBZ9yqY4jWaGC85rUF4X+R7d-aim=mBNsw@mail.gmail.com
  Discussion: https://postgr.es/m/20171013115320.28049.86457@wrigleys.postgresql.org

* Fix pg_dump's logic for eliding sequence limits that match the defaults. (Tom Lane, 2018-02-20)

  The previous coding here applied atoi() to strings that could represent
  values too large to fit in an int. If the overflowed value happened to
  match one of the cases it was looking for, it would drop that limit
  value from the output, leading to incorrect restoration of the sequence.

  Avoid the unsafe behavior, and also make the logic cleaner by explicitly
  calculating the default min/max values for the appropriate kind of
  sequence.

  Reported and patched by Alexey Bashtanov, though I whacked his patch
  around a bit. Back-patch to v10 where the faulty logic was added.

  Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7a33@imap.cc

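  A sketch of the safer approach (a hypothetical helper; the real pg_dump.c
  code also handles minimum values and parses the catalog strings with
  64-bit-safe routines rather than atoi()):

      #include "postgres_fe.h"
      #include "catalog/pg_type.h"

      /* Default upper bound for an ascending sequence of the given type. */
      static int64
      sequence_default_max(Oid seqtype)
      {
          return (seqtype == INT2OID) ? PG_INT16_MAX :
                 (seqtype == INT4OID) ? PG_INT32_MAX : PG_INT64_MAX;
      }
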
* Fix misbehavior of CTE-used-in-a-subplan during EPQ rechecks. (Tom Lane, 2018-02-19)

  An updating query that reads a CTE within an InitPlan or SubPlan could
  get incorrect results if it updates rows that are concurrently being
  modified. This is caused by CteScanNext supposing that nothing inside its
  recursive ExecProcNode call could change which read pointer is selected
  in the CTE's shared tuplestore. While that's normally true because of
  scoping considerations, it can break down if an EPQ plan tree gets built
  during the call, because EvalPlanQualStart builds execution trees for all
  subplans whether they're going to be used during the recheck or not. And
  it seems like a pretty shaky assumption anyway, so let's just reselect
  our own read pointer here.

  Per bug #14870 from Andrei Gorita. This has been broken since CTEs were
  implemented, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org

* Fix broken logic for reporting PL/Python function names in errcontext. (Tom Lane, 2018-02-14)

  plpython_error_callback() reported the name of the function associated
  with the topmost PL/Python execution context. This was not merely wrong
  if there were nested PL/Python contexts, but it risked a core dump if the
  topmost one is an inline code block rather than a named function. That
  will have proname = NULL, and so we were passing a NULL pointer to
  snprintf("%s"). It seems that none of the PL/Python-testing machines in
  the buildfarm will dump core for that, but some platforms do, as reported
  by Marina Polyakova.

  Investigation finds that there actually is an existing regression test
  that used to prove that the behavior was wrong, though apparently no one
  had noticed that it was printing the wrong function name. It stopped
  showing the problem in 9.6 when we adjusted psql to not print CONTEXT by
  default for NOTICE messages. The problem is masked (if your platform
  avoids the core dump) in error cases, because PL/Python will throw away
  the originally generated error info in favor of a new traceback produced
  at the outer level.

  Repair by using ErrorContextCallback.arg to pass the correct context to
  the error callback. Add a regression test illustrating correct behavior.

  Back-patch to all supported branches, since they're all broken this way.

  Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru

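  The repair pattern, sketched (an ErrorContextCallback carries an arg
  pointer precisely so the callback need not consult global state; exec_ctx
  here stands for the execution context owned by the current call):

      ErrorContextCallback plerrcontext;

      plerrcontext.callback = plpython_error_callback;
      plerrcontext.arg = exec_ctx;            /* the context this call owns */
      plerrcontext.previous = error_context_stack;
      error_context_stack = &plerrcontext;
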
* Fix assorted errors in pg_dump's handling of extended statistics objects. (Tom Lane, 2018-02-11)

  pg_dump supposed that a stats object necessarily shares the same schema
  as its underlying table, and that it doesn't have a separate owner. These
  things may have been true during early development of the feature, but
  they are not true as of v10 release.

  Failure to track the object's schema separately turns out to have only
  limited consequences, because pg_get_statisticsobjdef() always
  schema-qualifies the target object name in the generated CREATE
  STATISTICS command (a decision out of step with the rest of ruleutils.c,
  but I digress). Therefore the restored object would be in the right
  schema, so that the only problem is that the TOC entry would be
  mislabeled as to schema. That could lead to wrong decisions for
  schema-selective restores, for example.

  The ownership issue is a bit more serious: not only was the TOC entry
  potentially mislabeled as to owner, but pg_dump didn't bother to issue an
  ALTER OWNER command at all, so that after restore the stats object would
  continue to be owned by the restoring superuser.

  A final point is that decisions as to whether to dump a stats object or
  not were driven by whether the underlying table was dumped or not. While
  that's not wrong on its face, it won't scale nicely to the planned future
  extension to cross-table statistics. Moreover, that design decision comes
  out of the view of stats objects as being auxiliary to a particular
  table, like a rule or trigger, which is exactly where the above problems
  came from. Since we're now treating stats objects more like independent
  objects in their own right, they ought to behave like standalone objects
  for this purpose too. So change to using the generic
  selectDumpableObject() logic for them (which presently amounts to "dump
  if containing schema is to be dumped").

  Along the way to fixing this, restructure so that getExtendedStatistics
  collects the identity info (only) for all extended stats objects in one
  query, and then for each object actually being dumped, we retrieve the
  definition in dumpStatisticsExt. This is necessary to ensure that
  schema-qualification in the generated CREATE STATISTICS command happens
  with respect to the search path that pg_dump will now be using at restore
  time (ie, the schema the stats object is in, not that of the underlying
  table). It's probably also significantly faster in the typical scenario
  where only a minority of tables have extended stats.

  Back-patch to v10 where extended stats were introduced.

  Discussion: https://postgr.es/m/18272.1518328606@sss.pgh.pa.us

* Stamp 10.2. (tag: REL_10_2) (Tom Lane, 2018-02-05)

* Translation updates. (Peter Eisentraut, 2018-02-05)

  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 8d2416b5dc311bbf942125650a2d2c162a4f5453

* Ensure that all temp files made during pg_upgrade are non-world-readable. (Tom Lane, 2018-02-05)

  pg_upgrade has always attempted to ensure that the transient dump files
  it creates are inaccessible except to the owner. However, refactoring in
  commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
  output; since then, that file was protected according to the process's
  default umask. Since that file may contain role passwords (hopefully
  encrypted, but passwords nonetheless), this is a particularly unfortunate
  oversight. Prudent users of pg_upgrade on multiuser systems would
  probably run it under a umask tight enough that the issue is moot, but
  perhaps some users are depending only on pg_upgrade's umask changes to
  protect their data.

  To fix this in a future-proof way, let's just tighten the umask at
  process start. There are no files pg_upgrade needs to write at a weaker
  security level; and if there were, transiently relaxing the umask around
  where they're created would be a safer approach.

  Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
  Back-patch to all supported branches.

  Security: CVE-2018-1053

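  The future-proof tightening amounts to one call early in main(), sketched
  here:

      #include <sys/stat.h>

      /* Mask out all group and other permission bits (umask 077), so every
       * file pg_upgrade creates is readable only by its owner. */
      umask(S_IRWXG | S_IRWXO);
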
* Fix RelationBuildPartitionKey's processing of partition key expressions. (Tom Lane, 2018-02-05)

  Failure to advance the list pointer while reading partition expressions
  from a list results in invoking an input function with inappropriate
  data, possibly leading to crashes or, with carefully crafted input,
  disclosure of arbitrary backend memory.

  Bug discovered independently by Álvaro Herrera and David Rowley. This
  patch is by Álvaro but owes something to David's proposed fix. Back-patch
  to v10 where the issue was introduced.

  Security: CVE-2018-1052

* Be more wary about shm_toc_lookup failure. (Tom Lane, 2018-02-02)

  Commit 445dbd82a basically missed the point of commit d46633506, which
  was that we shouldn't allow shm_toc_lookup() failure to lead to a core
  dump or assertion crash, because the odds of such a failure should never
  be considered negligible. It's correct that we can't expect the
  PARALLEL_KEY_ERROR_QUEUE TOC entry to be there if we have no workers. But
  if we have no workers, we're not going to do anything in this function
  with the lookup result anyway, so let's just skip it. That lets the code
  use the easy-to-prove-safe noError=false case, rather than anything
  requiring effort to review.

  Back-patch to v10, like the previous commit.

  Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us

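  A sketch of the resulting shape of the code (simplified; a fragment where
  pcxt is the ParallelContext being set up):

      char   *error_queue_space;

      /* Skip the lookup entirely when there are no workers; then the
       * strict form (noError = false) is self-evidently safe. */
      if (pcxt->nworkers > 0)
      {
          error_queue_space =
              shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, false);
          /* ... attach to each worker's error queue ... */
      }
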
* Fix application of identity values in some cases. (Peter Eisentraut, 2018-02-02)

  Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that
  identity values were not applied in some further cases, including logical
  replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD COLUMN. To
  fix all that, apply the identity column expression in
  build_column_default() instead of repeating the same logic at each call
  site.

  For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
  completely ignored that existing rows for the new column should have
  values filled in from the identity sequence. The coding using
  build_column_default() fails for this because the sequence ownership
  isn't registered until after ALTER TABLE, and we can't do it before
  because we don't have the column in the catalog yet. So we specially
  remember in ColumnDef the sequence name that we decided on and build a
  custom NextValueExpr using that.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

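  A sketch of the consolidated branch in build_column_default() (a
  simplified fragment; att_tup, rel, and attrno come from the surrounding
  function, and names are believed to match the v10 tree):

      if (att_tup->attidentity)
      {
          NextValueExpr *nve = makeNode(NextValueExpr);

          /* Generate the next identity value from the owned sequence. */
          nve->seqid = getOwnedSequence(RelationGetRelid(rel), attrno);
          nve->typeId = att_tup->atttypid;
          return (Node *) nve;
      }
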
* Fix possible failure to mark hash metapage dirty. (Robert Haas, 2018-02-01)

  Report and suggested fix by Lixian Zou. Amit Kapila put it in the form of
  a patch and reviewed.

  Discussion: http://postgr.es/m/151739848647.1239.12528851873396651946@wrigleys.postgresql.org