path: root/src/backend
* Update FSM on WAL replay of page all-visible/frozen  (Alvaro Herrera, 2018-08-15)

We aren't very strict about keeping FSM up to date on WAL replay, because per-page freespace values aren't critical in replicas (can't write to heap in a replica; and if the replica is promoted, the values would be updated by VACUUM anyway). However, VACUUM since 9.6 can skip processing pages marked all-visible or all-frozen, and if such pages are recorded in FSM with wrong values, those values are blindly propagated to FSM's upper layers by VACUUM's FreeSpaceMapVacuum. (This rationale assumes that crashes are not very frequent, because those would cause outdated FSM to occur in the primary.)

Even when the FSM is outdated in a standby, things are not too bad normally, because most per-page FSM values will be zero (other than those propagated with the base-backup that created the standby); only once the remaining free space is less than 0.2*BLCKSZ is the per-page value maintained by WAL replay of heap ins/upd/del. However, if wal_log_hints=on causes complete FSM pages to be propagated to a standby via full-page images, many too-optimistic per-page values can end up being registered in the standby.

Incorrect per-page values aren't critical in most cases, since an inserter that is given a page that doesn't actually contain the claimed free space will update FSM with the correct value, and retry until it finds a usable page. However, if there are many such updates to do, an inserter can spend a long time doing them before a usable page is found; in a heavily trafficked insert-only table with many concurrent inserters this has been observed to cause several-second stalls, causing visible application malfunction.

To fix this problem, it seems sufficient to have heap_xlog_visible (replay of setting all-visible and all-frozen VM bits for a heap page) update the FSM value for the page being processed. This fixes the per-page counters together with making the page skippable to vacuum, so when vacuum does FreeSpaceMapVacuum, the values propagated to FSM upper layers are the correct ones, avoiding the problem.

While at it, apply the same fix to heap_xlog_clean (replay of tuple removal by HOT pruning and vacuum). This makes any space freed by the cleaning available earlier than the next vacuum in the promoted replica.

Backpatch to 9.6, where this problem was diagnosed on an insert-only table with all-frozen pages, which were introduced as a concept in that release. Theoretically it could apply with all-visible pages to older branches, but there's been no report of that and it doesn't backpatch cleanly anyway.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20180802172857.5skoexsilnjvgruk@alvherre.pgsql

* Clean up assorted misuses of snprintf()'s result value.  (Tom Lane, 2018-08-15)

Fix a small number of places that were testing the result of snprintf() but doing so incorrectly. The right test for buffer overrun, per C99, is "result >= bufsize" not "result > bufsize". Some places were also checking for failure with "result == -1", but the standard only says that a negative value is delivered on failure.

(Note that this only makes these places correct if snprintf() delivers C99-compliant results. But at least now these places are consistent with all the other places where we assume that.)

Also, make psql_start_test() and isolation_start_test() check for buffer overrun while constructing their shell commands. There seems like a higher risk of overrun, with more severe consequences, here than there is for the individual file paths that are made elsewhere in the same functions, so this seemed like a worthwhile change.

Also fix guc.c's do_serialize() to initialize errno = 0 before calling vsnprintf. In principle, this should be unnecessary because vsnprintf should have set errno if it returns a failure indication ... but the other two places this coding pattern is cribbed from don't assume that, so let's be consistent.

These errors are all very old, so back-patch as appropriate. I think that only the shell command overrun cases are even theoretically reachable in practice, but there's not much point in erroneous error checks.

Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us

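A minimal standalone sketch of the C99 checks described above (generic C, not the PostgreSQL code): overrun is detected with "result >= bufsize", and failure by any negative return value.

    /* Sketch of C99-correct snprintf() result checking; not PostgreSQL code. */
    #include <stdio.h>

    static int build_path(char *buf, size_t bufsize, const char *dir, const char *file)
    {
        int len = snprintf(buf, bufsize, "%s/%s", dir, file);

        if (len < 0)
            return -1;                  /* failure: any negative value, not just -1 */
        if ((size_t) len >= bufsize)
            return -1;                  /* overrun: ">= bufsize", not "> bufsize" */
        return len;                     /* full string written and NUL-terminated */
    }

    int main(void)
    {
        char buf[16];

        if (build_path(buf, sizeof(buf), "/tmp", "a-rather-long-name") < 0)
            puts("buffer too small");
        return 0;
    }
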
* Remove obsolete netbsd dynloader code  (Peter Eisentraut, 2018-08-13)

dlopen() has been documented since NetBSD 1.1 (1995).

* Remove obsolete openbsd dynloader code  (Peter Eisentraut, 2018-08-13)

dlopen() has been documented since OpenBSD 2.0 (1996).

* Remove obsolete freebsd dynloader code  (Peter Eisentraut, 2018-08-13)

dlopen() has been documented since FreeBSD 3.0 (1998).

* Remove obsolete linux dynloader code  (Peter Eisentraut, 2018-08-13)

This has been obsolete probably since the late 1990s.

* Remove obsolete darwin dynloader code  (Peter Eisentraut, 2018-08-13)

not needed since macOS 10.3 (2003)

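All five dynloader removals above rely on the same portable interface. Here is a generic POSIX dlopen()/dlsym() sketch; the library and symbol names are illustrative placeholders, and this is not the PostgreSQL dynloader code.

    /* Generic POSIX dynamic-loading sketch; library name is platform-specific. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL);

        if (handle == NULL)
        {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up a symbol and call it through a function pointer. */
        double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
        if (cosine == NULL)
        {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("cos(0.0) = %f\n", cosine(0.0));
        dlclose(handle);
        return 0;
    }
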
* Remove obsolete comment  (Peter Eisentraut, 2018-08-13)

The sequence name is no longer stored in the sequence relation, since 1753b1b027035029c2a2a1649065762fafbf63f3.

* Make autovacuum more aggressive to remove orphaned temp tables  (Michael Paquier, 2018-08-13)

Commit dafa084, added in 10, made the removal of temporary orphaned tables more aggressive. This commit goes a step further by adding a flag in each backend's MyProc which tracks any temporary namespace currently in use. The flag is set when the namespace gets created and can be reset if the temporary namespace has been created in a transaction or sub-transaction which is aborted.

The flag value assignment is assumed to be atomic, so this can be done in a lock-less fashion like other flags already present in PGPROC such as databaseId or backendId; still, the fact that the temporary namespace and table created remain locked until the transaction creating them commits acts as a barrier for other backends.

This new flag gets used by autovacuum to discard orphaned tables more aggressively by additionally checking for the database a backend is connected to as well as its temporary namespace in use, removing orphaned temporary relations even if a backend reuses the same slot as one which created temporary relations in a past session.

The base idea of this patch comes from Robert Haas; its first version was written by Tsunakawa Takayuki and then heavily reviewed by me.

Author: Tsunakawa Takayuki
Reviewed-by: Michael Paquier, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F8A4DC6@G01JPEXMBYT05
Backpatch: 11-, as PGPROC gains a new flag and we don't want silent ABI breakages on already released versions.

* Adjust comment atop ExecShutdownNode.  (Amit Kapila, 2018-08-13)

After commits a315b967cc and b805b63ac2, part of the comment atop ExecShutdownNode is redundant. Adjust it.

Author: Amit Kapila
Backpatch-through: 10 where both the mentioned commits are present.
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info

* Prohibit shutting down resources if there is a possibility of back up.  (Amit Kapila, 2018-08-13)

Currently, we release the asynchronous resources as soon as it is evident that no more rows will be needed, e.g. when a Limit is filled. This can be problematic especially for custom and foreign scans where we can scan backward. Fix that by disallowing the shutting down of resources in such cases.

Reported-by: Robert Haas
Analysed-by: Robert Haas and Amit Kapila
Author: Amit Kapila
Reviewed-by: Robert Haas
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info

* Avoid query-lifetime memory leaks in XMLTABLE (bug #15321)  (Andrew Gierth, 2018-08-13)

Multiple calls to XMLTABLE in a query (e.g. laterally applying it to a table with an xml column, an important use-case) were leaking large amounts of memory into the per-query context, blowing up memory usage.

Repair by reorganizing memory context usage in nodeTableFuncscan; use the usual per-tuple context for row-by-row evaluations instead of perValueCxt, and use the explicitly created context -- renamed from perValueCxt to perTableCxt -- for arguments and state for each individual table-generation operation.

Backpatch to PG10 where this code was introduced.

Original report by IRC user begriffs; analysis and patch by me. Reviewed by Tom Lane and Pavel Stehule.

Discussion: https://postgr.es/m/153394403528.10284.7530399040974170549@wrigleys.postgresql.org

* Fix wrong order of operations in inheritance_planner.  (Tom Lane, 2018-08-11)

When considering a partitioning parent rel, we should stop processing that subroot as soon as we've done adjust_appendrel_attrs and any securityQuals updates. The rest of this is unnecessary, and indeed adding duplicate subquery RTEs to the subroot is *wrong*. As the code stood, the children of that partition ended up with two sets of copied subquery RTEs, confusing matters greatly. Even more hilarity ensued if all of the children got excluded by constraint exclusion, so that the extra RTEs didn't make it back into the parent rtable.

Per fuzz testing by Andreas Seltenreich. Back-patch to v11 where this got broken (by commit 0a480502b, it looks like).

Discussion: https://postgr.es/m/87va8g7vq0.fsf@ansel.ydns.eu

* Revert changes in execMain.c from commit 16828d5c0273b  (Andrew Dunstan, 2018-08-10)

These changes were put in at some stage of the development process, but are unnecessary and should not have made it into the final patch. Mea culpa.

Per gripe from Andres Freund

Backpatch to REL_11_STABLE

* Handle parallel index builds on mapped relations.  (Peter Geoghegan, 2018-08-10)

Commit 9da0cc35284, which introduced parallel CREATE INDEX, failed to propagate relmapper.c backend local cache state to parallel worker processes. This could result in parallel index builds against mapped catalog relations where the leader process (participating as a worker) scans the new, pristine relfilenode, while worker processes scan the obsolescent relfilenode. When this happened, the final index structure was typically not consistent with the owning table's structure. The final index structure could contain entries formed from both heap relfilenodes. Only rebuilds on mapped catalog relations that occur as part of a VACUUM FULL or CLUSTER could become corrupt in practice, since their mapped relation relfilenode swap is what allows the inconsistency to arise.

On master, fix the problem by propagating the required relmapper.c backend state as part of standard parallel initialization (Cf. commit 29d58fd3). On v11, simply disallow builds against mapped catalog relations by deeming them parallel unsafe.

Author: Peter Geoghegan
Reported-By: "death lock"
Reviewed-By: Tom Lane, Amit Kapila
Bug: #15309
Discussion: https://postgr.es/m/153329671686.1405.18298309097348420351@wrigleys.postgresql.org
Backpatch: 11-, where parallel CREATE INDEX was introduced.

* Improve TRUNCATE by avoiding early lock queue  (Michael Paquier, 2018-08-10)

A caller of TRUNCATE could previously queue for an access exclusive lock on a relation it may not have permission to truncate, potentially interfering with users authorized to work on it. This can be very intrusive depending on the lock attempted to be taken. For example, pg_authid could be blocked, preventing any authentication attempt to happen on a PostgreSQL instance.

This commit fixes the case of TRUNCATE so that RangeVarGetRelidExtended is used with a callback doing the necessary ACL checks at an earlier stage, avoiding lock queuing issues; an immediate failure happens for unprivileged users instead of waiting on a lock that would not be taken.

This is rather similar to the type of work done in cbe24a6 for CLUSTER, and the code of TRUNCATE is this time refactored so that there are no user-facing changes. As with the commit for CLUSTER, no back-patch is done.

Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180806165816.GA19883@paquier.xyz

* Fix typo in SP-GiST error message  (Alexander Korotkov, 2018-08-10)

Error message didn't match the actual check. Fix that. Compression of leaf SP-GiST values was introduced in 11. So, backpatch.

Discussion: https://postgr.es/m/20180810.100742.15469435.horiguchi.kyotaro%40lab.ntt.co.jp
Author: Kyotaro Horiguchi
Backpatch-through: 11

* Spell "partitionwise" consistently.Heikki Linnakangas2018-08-09
| | | | | | | | | I'm not sure which spelling is better, "partitionwise" or "partition-wise", but everywhere else we spell it "partitionwise", so be consistent. Tatsuro Yamada reported the one in README, I found the other one with grep. Discussion: https://www.postgresql.org/message-id/d25ebf36-5a6d-8b2c-1ff3-d6f022a56000@lab.ntt.co.jp
* Restrict access to reindex of shared catalogs for non-privileged users  (Michael Paquier, 2018-08-09)

A database owner running a database-level REINDEX could previously also perform the operation on shared system catalogs without being an owner of them, which allowed them to block resources they should not have access to. The same goes for a schema owner. For example, PostgreSQL would go unresponsive and even block authentication if a lock on pg_authid is waited for.

This commit makes sure that a user running REINDEX SYSTEM, DATABASE or SCHEMA only works on a relation if one of the following holds:
- The user is a superuser
- The user is the table owner
- The user is the database/schema owner, only if the relation worked on is not shared.

Robert has worded most of the documentation changes, and I have coded the core part.

Reported-by: Lloyd Albin, Jeremy Schneider
Author: Michael Paquier, Robert Haas
Reviewed by: Nathan Bossart, Kyotaro Horiguchi
Discussion: https://postgr.es/m/152512087100.19803.12733865831237526317@wrigleys.postgresql.org
Discussion: https://postgr.es/m/20180805211059.GA2185@paquier.xyz
Backpatch-through: 11- as the current behavior has been around for a very long time and could be disruptive for already released branches.

* Remove bogus Assert in make_partitionedrel_pruneinfo().  (Tom Lane, 2018-08-08)

This Assert thought that a given rel couldn't be both leaf and non-leaf, but it turns out that in some unusual plan trees that's wrong, so remove it.

The lack of testing for cases like that is quite concerning --- there is little reason for confidence that there aren't other bugs in the area. But developing a stable test case seems rather difficult, and in any case we don't need this Assert.

David Rowley

Discussion: https://postgr.es/m/CAJGNTeOkdk=UVuMugmKL7M=owgt4nNr1wjxMg1F+mHsXyLCzFA@mail.gmail.com

* Don't run atexit callbacks in quickdie signal handlers.  (Heikki Linnakangas, 2018-08-08)

exit() is not async-signal safe. Even if the libc implementation is, 3rd party libraries might have installed unsafe atexit() callbacks. After receiving SIGQUIT, we really just want to exit as quickly as possible, so we don't really want to run the atexit() callbacks anyway.

The original report by Jimmy Yih was a self-deadlock in startup_die(). However, this patch doesn't address that scenario; the signal handling while waiting for the startup packet is more complicated. But at least this alleviates similar problems in the SIGQUIT handlers, like that reported by Asim R P later in the same thread.

Backpatch to 9.3 (all supported versions).

Discussion: https://www.postgresql.org/message-id/CAOMx_OAuRUHiAuCg2YgicZLzPVv5d9_H4KrL_OFsFP%3DVPekigA%40mail.gmail.com

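A standalone C illustration of the hazard described above (generic code, not the postmaster's quickdie() itself): exit() runs atexit() callbacks, which is unsafe inside a signal handler, while _exit() terminates immediately without running them.

    /* Illustration of exit() vs _exit() in a signal handler; not PostgreSQL code. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void cleanup(void)
    {
        /* A real callback might take locks or call malloc() --
         * not async-signal safe, so it must not run from a signal handler. */
        printf("atexit callback ran\n");
    }

    static void on_sigquit(int signo)
    {
        (void) signo;
        /* exit(2) would run cleanup() here; _exit(2) skips all atexit()
         * callbacks and terminates the process immediately. */
        _exit(2);
    }

    int main(void)
    {
        atexit(cleanup);
        signal(SIGQUIT, on_sigquit);
        raise(SIGQUIT);         /* terminates via _exit(); cleanup() never runs */
        return 0;
    }
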
* Match RelOptInfos by relids not pointer equality.  (Tom Lane, 2018-08-08)

Commit 1c2cb2744 added some code that tried to detect whether two RelOptInfos were the "same" rel by pointer comparison; but it turns out that inheritance_planner breaks that, through its shenanigans with copying some relations forward into new subproblems. Compare relid sets instead.

Add a regression test case to exercise this area.

Problem reported by Rushabh Lathia; diagnosis and fix by Amit Langote, modified a bit by me.

Discussion: https://postgr.es/m/CAGPqQf3anJGj65bqAQ9edDr8gF7qig6_avRgwMT9MsZ19COUPw@mail.gmail.com

* Don't record FDW user mappings as members of extensions.  (Tom Lane, 2018-08-07)

CreateUserMapping has a recordDependencyOnCurrentExtension call that's been there since extensions were introduced (very possibly my fault). However, there's no support anywhere else for user mappings as members of extensions, nor are they listed as a possible member object type in the documentation. Nor does it really seem like a good idea for user mappings to belong to extensions when roles don't. Hence, remove the bogus call.

(As we saw in bug #15310, the lack of any pg_dump support for this case ensures that any such membership record would silently disappear during pg_upgrade. So there's probably no need for us to do anything else about cleaning up after this mistake.)

Discussion: https://postgr.es/m/27952.1533667213@sss.pgh.pa.us

* Fix incorrect initialization of BackendActivityBuffer.  (Tom Lane, 2018-08-07)

Since commit c8e8b5a6e, this has been zeroed out using the wrong length. In practice the length would always be too small, leading to not zeroing the whole buffer rather than clobbering additional memory; and that's pretty harmless, both because shmem would likely start out as zeroes and because we'd reinitialize any given entry before use.

Still, it's bogus, so fix it.

Reported by Petru-Florin Mihancea (bug #15312)

Discussion: https://postgr.es/m/153363913073.1303.6518849192351268091@wrigleys.postgresql.org

* Remove support for tls-unique channel binding.  (Heikki Linnakangas, 2018-08-05)

There are some problems with the tls-unique channel binding type. It's not supported by all SSL libraries, and strictly speaking it's not defined for TLS 1.3 at all, even though at least in OpenSSL, the functions used for it still seem to work with TLS 1.3 connections. And since we had no mechanism to negotiate what channel binding type to use, there would be awkward interoperability issues if a server only supported some channel binding types. tls-server-end-point seems feasible to support with any SSL library, so let's just stick to that.

This removes the scram_channel_binding libpq option altogether, since there is now only one supported channel binding type.

This also removes all the channel binding tests from the SSL test suite. They were really just testing the scram_channel_binding option, which is now gone. Channel binding is used if both client and server support it, so it is used in the existing tests. It would be good to have some tests specifically for channel binding, to make sure it really is used, and for the different combinations of a client and a server that do or don't support it. The current set of settings we have make it hard to write such tests, but I did test those things manually, by disabling HAVE_BE_TLS_GET_CERTIFICATE_HASH and/or HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH.

I also removed the SCRAM_CHANNEL_BINDING_TLS_END_POINT constant. This is a matter of taste, but IMO it's more readable to just use the "tls-server-end-point" string.

Refactor the checks on whether the SSL library supports the functions needed for tls-server-end-point channel binding. Now the server won't advertise, and the client won't choose, the SCRAM-SHA-256-PLUS variant, if compiled with an OpenSSL version too old to support it.

In passing, add some sanity checks to check that the chosen SASL mechanism, SCRAM-SHA-256 or SCRAM-SHA-256-PLUS, matches whether the SCRAM exchange used channel binding or not. For example, the client might select the non-channel-binding variant SCRAM-SHA-256, but use channel binding in the SCRAM message anyway. It's harmless from a security point of view, I believe, and I'm not sure if there are some other conditions that would cause the connection to fail, but it seems better to be strict about these things and check explicitly.

Discussion: https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi

* Fix INSERT ON CONFLICT UPDATE through a view that isn't just SELECT *.  (Tom Lane, 2018-08-04)

When expanding an updatable view that is an INSERT's target, the rewriter failed to rewrite Vars in the ON CONFLICT UPDATE clause. This accidentally worked if the view was just "SELECT * FROM ...", as the transformation would be a no-op in that case. With more complicated view targetlists, this omission would often lead to "attribute ... has the wrong type" errors or even crashes, as reported by Mario De Frutos Dieguez.

Fix by adding code to rewriteTargetView to fix up the data structure correctly. The easiest way to update the exclRelTlist list is to rebuild it from scratch looking at the new target relation, so factor the code for that out of transformOnConflictClause to make it sharable.

In passing, avoid duplicate permissions checks against the EXCLUDED pseudo-relation, and prevent useless view expansion of that relation's dummy RTE. The latter is only known to happen (after this patch) in cases where the query would fail later due to not having any INSTEAD OF triggers for the view. But by exactly that token, it would create an unintended and very poorly tested state of the query data structure, so it seems like a good idea to prevent it from happening at all.

This has been broken since ON CONFLICT was introduced, so back-patch to 9.5.

Dean Rasheed, based on an earlier patch by Amit Langote; comment-kibitzing and back-patching by me

Discussion: https://postgr.es/m/CAFYwGJ0xfzy8jaK80hVN2eUWr6huce0RU8AgU04MGD00igqkTg@mail.gmail.com

* Reset properly errno before calling write()  (Michael Paquier, 2018-08-05)

6cb3372 enforces errno to ENOSPC when fewer bytes than expected have been written and errno is unset, but it forgot to properly reset errno before doing a system call to write(), causing errno to potentially come from a previous system call.

Reported-by: Tom Lane
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/31797.1533326676@sss.pgh.pa.us

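A hedged sketch of the pattern this commit is about, in plain C (the helper is hypothetical, not the PostgreSQL code): clear errno before write(), and if a short write leaves errno unset, report ENOSPC rather than whatever a previous syscall left behind.

    /* Illustrative sketch of the errno handling described above; not PostgreSQL code. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write exactly len bytes or fail, never reporting a stale errno value. */
    static int write_all(int fd, const void *buf, size_t len)
    {
        errno = 0;                      /* don't inherit errno from earlier calls */
        ssize_t written = write(fd, buf, len);

        if (written != (ssize_t) len)
        {
            if (errno == 0)
                errno = ENOSPC;         /* short write with no error code: treat as out of space */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        const char msg[] = "hello\n";

        if (write_all(STDOUT_FILENO, msg, strlen(msg)) != 0)
            fprintf(stderr, "write failed: %s\n", strerror(errno));
        return 0;
    }
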
* Add table relcache invalidation to index builds.  (Peter Geoghegan, 2018-08-03)

It's necessary to make sure that owning tables have a relcache invalidation prior to advancing the command counter to make newly-entered catalog tuples for the index visible. inval.c must be able to maintain the consistency of the local caches in the event of transaction abort. There is usually only a problem when CREATE INDEX transactions abort, since there is a generic invalidation once we reach index_update_stats().

This bug is of long standing. Problems were made much more likely by the addition of parallel CREATE INDEX (commit 9da0cc35284), but it is strongly suspected that similar problems can be triggered without involving plan_create_index_workers(). (plan_create_index_workers() triggers a relcache build or rebuild, which previously only happened in rare edge cases.)

Author: Peter Geoghegan
Reported-By: Luca Ferrari
Diagnosed-By: Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/CAKoxK+5fVodiCtMsXKV_1YAKXbzwSfp7DgDqUmcUAzeAhf=HEQ@mail.gmail.com
Backpatch: 9.3-

* Fix buffer usage stats for parallel nodes.  (Amit Kapila, 2018-08-03)

The buffer usage stats are accounted only for the execution phase of the node. For Gather and Gather Merge nodes, such stats are accumulated at the time of shutdown of workers, which is done after execution of the node, so we missed accounting them for such nodes. Fix it by treating nodes as running while we shut them down.

We can also miss accounting for a Limit node when Gather or Gather Merge is beneath it, because it can finish the execution before shutting down such nodes. So we allow a Limit node to shut down the resources before it completes the execution.

In passing, fix the gather node code to allow workers to shut down as soon as we find that all the tuples from the workers have been retrieved. The original code used to do that, but it was accidentally removed by commit 01edb5c7fc.

Reported-by: Adrien Nayrat
Author: Amit Kapila and Robert Haas
Reviewed-by: Robert Haas and Andres Freund
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info

* Match the buffer usage tracking for leader and worker backends.  (Amit Kapila, 2018-08-03)

In the leader backend, we don't track the buffer usage for the ExecutorStart phase, whereas in worker backends we track it for the ExecutorStart phase as well. This leads to different values for buffer usage stats for parallel and non-parallel queries. Change the code so that worker backends also start tracking buffer usage after ExecutorStart.

Author: Amit Kapila and Robert Haas
Reviewed-by: Robert Haas and Andres Freund
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info

* Fix run-time partition pruning for appends with multiple source rels.  (Tom Lane, 2018-08-01)

The previous coding here supposed that if run-time partitioning applied to a particular Append/MergeAppend plan, then all child plans of that node must be members of a single partitioning hierarchy. This is totally wrong, since an Append could be formed from a UNION ALL: we could have multiple hierarchies sharing the same Append, or child plans that aren't part of any hierarchy.

To fix, restructure the related plan-time and execution-time data structures so that we can have a separate list or array for each partitioning hierarchy. Also track subplans that are not part of any hierarchy, and make sure they don't get pruned.

Per reports from Phil Florent and others. Back-patch to v11, since the bug originated there.

David Rowley, with a lot of cosmetic adjustments by me; thanks also to Amit Langote for review.

Discussion: https://postgr.es/m/HE1PR03MB17068BB27404C90B5B788BCABA7B0@HE1PR03MB1706.eurprd03.prod.outlook.com

* Fix logical replication slot initialization  (Alvaro Herrera, 2018-08-01)

This was broken in commit 9c7d06d60680, which inadvertently gave the wrong value to fast_forward in one StartupDecodingContext call. Fix by flipping the value. Add a test for the obvious error, namely trying to initialize a replication slot with a nonexistent output plugin.

While at it, move the CreateDecodingContext call earlier, so that any errors are reported before sending the CopyBoth message.

Author: Dave Cramer <davecramer@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CADK3HHLVkeRe1v4P02-5hj55H3_yJg3AEtpXyEY5T3wuzO2jSg@mail.gmail.com

* Fix unnoticed variable shadowing in previous commit  (Alvaro Herrera, 2018-08-01)

Per buildfarm.

* Fix per-tuple memory leak in partition tuple routing  (Alvaro Herrera, 2018-08-01)

Some operations were being done in a longer-lived memory context, causing intra-query leaks. It's not noticeable unless you're doing a large COPY, but if you are, it eats enough memory to cause a problem.

Co-authored-by: Kohei KaiGai <kaigai@heterodb.com>
Co-authored-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAOP8fzYtVFWZADq4c=KoTAqgDrHWfng+AnEPEZccyxqxPVbbWQ@mail.gmail.com

* Allow multi-inserts during COPY into a partitioned table  (Peter Eisentraut, 2018-08-01)

CopyFrom allows multi-inserts to be used for non-partitioned tables, but this was disabled for partitioned tables. The reason for this appeared to be that the tuple may not belong to the same partition as the previous tuple did. Not allowing multi-inserts here greatly slowed down imports into partitioned tables. These could take twice as long as a copy to an equivalent non-partitioned table. It seems wise to do something about this, so this change allows the multi-inserts by flushing the so-far inserted tuples to the partition when the next tuple does not belong to the same partition, or when the buffer fills. This improves performance when the next tuple in the stream commonly belongs to the same partition as the previous tuple.

In cases where the target partition changes on every tuple, using multi-inserts slightly slows the performance. To get around this we track the average size of the batches that have been inserted and adaptively enable or disable multi-inserts based on the size of the batch. Some testing was done and the regression only seems to exist when the average size of the insert batch is close to 1, so let's just enable multi-inserts when the average size is at least 1.3. More performance testing might reveal a better number for this, but since the slowdown was only 1-2% it does not seem critical enough to spend too much time calculating it. In any case it may depend on other factors rather than just the size of the batch.

Allowing multi-inserts for partitions required a bit of work around the per-tuple memory contexts as we must flush the tuples when the next tuple does not belong to the same partition, in which case there is no good time to reset the per-tuple context, as we've already built the new tuple by this time. In order to work around this we maintain two per-tuple contexts and just switch between them every time the partition changes and reset the old one. This does mean that the first of each batch of tuples is not allocated in the same memory context as the others, but that does not matter since we only reset the context once the previous batch has been inserted.

Author: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>

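A much-simplified sketch of the adaptive idea described above (standalone code with hypothetical names, not the actual CopyFrom implementation; the 1.3 threshold is the one quoted in the commit message, while the running-average formula here is only illustrative).

    /* Simplified, hypothetical sketch of an adaptive multi-insert toggle. */
    #include <stdbool.h>
    #include <stdio.h>

    #define AVG_THRESHOLD 1.3           /* threshold quoted in the commit message */

    typedef struct BatchStats
    {
        double avg_batch_size;          /* running average of flushed batch sizes */
        bool   use_multi_insert;
    } BatchStats;

    /* Update the running average after flushing a batch of 'ntuples' rows
     * and decide whether multi-inserts stay enabled. */
    static void record_flush(BatchStats *stats, int ntuples)
    {
        /* exponential moving average; the smoothing factor is illustrative */
        stats->avg_batch_size = 0.75 * stats->avg_batch_size + 0.25 * ntuples;
        stats->use_multi_insert = (stats->avg_batch_size >= AVG_THRESHOLD);
    }

    int main(void)
    {
        BatchStats stats = { .avg_batch_size = 0.0, .use_multi_insert = true };

        /* every incoming tuple switches partition: batches of size 1 */
        for (int i = 0; i < 10; i++)
            record_flush(&stats, 1);
        printf("multi-insert enabled: %s\n", stats.use_multi_insert ? "yes" : "no");

        /* long runs of tuples for the same partition: batches of size 100 */
        for (int i = 0; i < 10; i++)
            record_flush(&stats, 100);
        printf("multi-insert enabled: %s\n", stats.use_multi_insert ? "yes" : "no");
        return 0;
    }
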
* Further fixes for quoted-list GUC values in pg_dump and ruleutils.c.  (Tom Lane, 2018-07-31)

Commits 742869946 et al turn out to be a couple bricks shy of a load. We were dumping the stored values of GUC_LIST_QUOTE variables as they appear in proconfig or setconfig catalog columns. However, although that quoting rule looks a lot like SQL-identifier double quotes, there are two critical differences: empty strings ("") are legal, and depending on which variable you're considering, values longer than NAMEDATALEN might be valid too. So the current technique fails altogether on empty-string list entries (as reported by Steven Winfield in bug #15248) and it also risks truncating file pathnames during dump/reload of GUC values that are lists of pathnames.

To fix, split the stored value without any downcasing or truncation, and then emit each element as a SQL string literal.

This is a tad annoying, because we now have three copies of the comma-separated-string splitting logic in varlena.c as well as a fourth one in dumputils.c. (Not to mention the randomly-different-from-those splitting logic in libpq...) I looked at unifying these, but it would be rather a mess unless we're willing to tweak the API definitions of SplitIdentifierString, SplitDirectoriesString, or both. That might be worth doing in future; but it seems pretty unsafe for a back-patched bug fix, so for now accept the duplication.

Back-patch to all supported branches, as the previous fix was.

Discussion: https://postgr.es/m/7585.1529435872@sss.pgh.pa.us

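A hypothetical standalone sketch of the "emit each element as a SQL string literal" idea described above (generic C, not the pg_dump or ruleutils.c code): each element is wrapped in single quotes with embedded quotes doubled, so empty strings and long paths survive unchanged.

    /* Illustrative sketch of emitting list elements as SQL string literals. */
    #include <stdio.h>

    /* Print one element as a SQL string literal, doubling embedded quotes. */
    static void emit_sql_literal(const char *elem)
    {
        putchar('\'');
        for (const char *p = elem; *p != '\0'; p++)
        {
            if (*p == '\'')
                putchar('\'');          /* double the quote inside the literal */
            putchar(*p);
        }
        putchar('\'');
    }

    int main(void)
    {
        /* e.g. a list value with an empty element and a long path */
        const char *elements[] = { "", "/very/long/tablespace/path", "pg_catalog" };

        for (int i = 0; i < 3; i++)
        {
            if (i > 0)
                printf(", ");
            emit_sql_literal(elements[i]);
        }
        putchar('\n');  /* prints: '', '/very/long/tablespace/path', 'pg_catalog' */
        return 0;
    }
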
* Remove dead code left behind by 1b6801051.  (Tom Lane, 2018-07-30)

* Verify range bounds to bms_add_range when necessary  (Alvaro Herrera, 2018-07-30)

Now that the bms_add_range boundary protections are gone, some alternative ones are needed in a few places.

Author: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>
Discussion: https://postgr.es/m/3437ccf8-a144-55ff-1e2f-fc16b437823b@lab.ntt.co.jp

* Change bms_add_range to be a no-op for empty ranges  (Alvaro Herrera, 2018-07-30)

In commit 84940644de93, bms_add_range was added with an API to fail with an error if an empty range was specified. This seems arbitrary and unhelpful, so turn that case into a no-op instead. Callers that require further verification on the arguments or result can apply them by themselves.

This fixes the bug that partition pruning throws an API error for a case involving the default partition of a default partition, as in the included test case.

Reported-by: Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/16590.1532622503@sss.pgh.pa.us

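An illustrative standalone sketch of the behavioral change (a toy one-word bitmap set, not PostgreSQL's Bitmapset code): an empty range simply returns the input set unchanged instead of raising an error.

    /* Toy bitmap-set sketch showing "empty range is a no-op"; not PostgreSQL code. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t bitset64;          /* toy one-word bitmap set */

    static bitset64 bitset_add_range(bitset64 s, int lower, int upper)
    {
        if (upper < lower)
            return s;                   /* empty range: no-op, not an error */
        for (int i = lower; i <= upper; i++)
            s |= UINT64_C(1) << i;
        return s;
    }

    int main(void)
    {
        bitset64 s = 0;

        s = bitset_add_range(s, 2, 4);  /* sets bits 2..4 */
        s = bitset_add_range(s, 5, 4);  /* empty range: leaves s unchanged */
        printf("0x%llx\n", (unsigned long long) s);     /* prints 0x1c */
        return 0;
    }
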
* Set ActiveSnapshot when logically replaying inserts  (Alvaro Herrera, 2018-07-30)

Input functions for the inserted tuples may require a snapshot, when they are replayed by native logical replication. An example is a domain with a constraint using a SQL-language function, which prior to this commit failed to apply on the subscriber side.

Reported-by: Mai Peng <maily.peng@webedia-group.com>
Co-authored-by: Minh-Quan TRAN <qtran@itscaro.me>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/4EB4BD78-BFC3-4D04-B8DA-D53DF7160354@webedia-group.com
Discussion: https://postgr.es/m/153211336163.1404.11721804383024050689@wrigleys.postgresql.org

* Add ssl_library preset parameter  (Peter Eisentraut, 2018-07-30)

This allows querying the SSL implementation used on the server side. It's analogous to using PQsslAttribute(conn, "library") in libpq.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>

* Mark variable used only in assertion with PG_USED_FOR_ASSERTS_ONLY  (Tomas Vondra, 2018-07-29)

Perpendicular lines always intersect, so the line_interpt_line() return value in line_closept_point() was used only in an assertion, triggering compiler warnings in non-assert builds.

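A generic C sketch of the pattern being applied (USED_FOR_ASSERTS_ONLY below is a stand-in for PostgreSQL's PG_USED_FOR_ASSERTS_ONLY macro, and the geometry is made up): in builds where assertions compile away, a variable whose only use is inside an assertion would otherwise be reported as set but unused.

    /* Generic sketch of the assert-only-variable pattern; not the geo_ops.c code. */
    #include <assert.h>
    #include <stdbool.h>

    #if defined(__GNUC__)
    #define USED_FOR_ASSERTS_ONLY __attribute__((unused))
    #else
    #define USED_FOR_ASSERTS_ONLY
    #endif

    static bool compute_intersection(double *x, double *y)
    {
        *x = 0.0;
        *y = 0.0;
        return true;                    /* perpendicular lines always intersect */
    }

    void closest_point(void)
    {
        double x, y;
        bool found USED_FOR_ASSERTS_ONLY = compute_intersection(&x, &y);

        /* 'found' has no other use; without the marker, builds where assert()
         * expands to nothing warn about a set-but-unused variable. */
        assert(found);
    }

    int main(void)
    {
        closest_point();
        return 0;
    }
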
* Restore handling of -0 in the C field of lines in line_construct().  (Tomas Vondra, 2018-07-29)

Commit a7dc63d904 inadvertently removed this bit originally introduced by 43fe90f66a, causing regression test failures on some platforms, due to producing {1,-1,-0} instead of {1,-1,0}.

* Fix two oversights from 9ebe0572 which refactored cluster_rel  (Michael Paquier, 2018-07-29)

The recheck option became a no-op as ClusterOption failed to set proper values for each element. There was a second code path where local options got overwritten.

Both issues have been spotted by Coverity.

* Document security implications of qualified names.  (Noah Misch, 2018-07-28)

Commit 5770172cb0c9df9e6ce27c507b449557e5b45124 documented secure schema usage, and that advice suffices for using unqualified names securely. Document, in typeconv-func primarily, the additional issues that arise with qualified names. Back-patch to 9.3 (all supported versions).

Reviewed by Jonathan S. Katz.

Discussion: https://postgr.es/m/20180721012446.GA1840594@rfd.leadboat.com

* Provide separate header file for built-in float types  (Tomas Vondra, 2018-07-29)

Some data types under adt/ have separate header files, but most simple ones do not, and their public functions are defined in builtins.h. As the patches improving geometric types will require making additional functions public, this seems like a good opportunity to create a header for float types.

Commit 1acf757255 made _cmp functions public to solve NaN issues locally for GiST indexes. This patch reworks it in favour of a more widely applicable API. The API uses inline functions, as they are easier to use compared to macros, and avoid double-evaluation hazards.

Author: Emre Hasegeli
Reviewed-by: Kyotaro Horiguchi
Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com

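A small generic C example of the double-evaluation hazard mentioned above (not the contents of the new header): a comparison macro evaluates its arguments twice, while a static inline function evaluates them once.

    /* Macro double-evaluation vs. inline function; generic illustration only. */
    #include <stdio.h>

    #define CMP_MACRO(a, b) (((a) > (b)) - ((a) < (b)))

    static inline int cmp_inline(double a, double b)
    {
        return (a > b) - (a < b);
    }

    static int calls = 0;

    static double next_value(void)
    {
        calls++;
        return (double) calls;
    }

    int main(void)
    {
        calls = 0;
        (void) CMP_MACRO(next_value(), 0.0);    /* next_value() runs twice */
        printf("macro: %d call(s)\n", calls);   /* prints 2 */

        calls = 0;
        (void) cmp_inline(next_value(), 0.0);   /* next_value() runs once */
        printf("inline: %d call(s)\n", calls);  /* prints 1 */
        return 0;
    }
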
* Refactor geometric functions and operators  (Tomas Vondra, 2018-07-29)

The primary goal of this patch is to eliminate duplicate code and share code between different geometric data types more often, to prepare the ground for additional patches. Until now the code reuse was limited, probably because the simpler types (line and point) were implemented after the more complex ones.

The changes are quite extensive and can be summarised as:

  * Eliminate SQL-level function calls.
  * Re-use more functions to implement others.
  * Unify internal function names and signatures.
  * Remove private functions from geo_decls.h.
  * Replace should-not-happen checks with assertions.
  * Add comments describing various functions.
  * Remove some unreachable code.
  * Define delimiter symbols of line datatype like the other ones.
  * Remove the GEODEBUG macro and printf() calls.
  * Unify code style of a few oddly formatted lines.

While the goal was to cause minimal user-visible changes, it was not possible to keep the original behavior in all cases - for example when handling NaN values, or when reusing code makes the functions return consistent results.

Author: Emre Hasegeli
Reviewed-by: Kyotaro Horiguchi, me
Discussion: https://www.postgresql.org/message-id/CAE2gYzxF7-5djV6-cEvqQu-fNsnt%3DEqbOURx7ZDg%2BVv6ZMTWbg%40mail.gmail.com

* Reduce path length for locking leaf B-tree pages during insertion  (Alexander Korotkov, 2018-07-28)

In our B-tree implementation, the appropriate leaf page for a new tuple insertion is acquired using the _bt_search() function. This function always returns the leaf page locked in shared mode. In order to obtain an exclusive lock, the caller has to relock the page. This commit makes the _bt_search() function lock the leaf page immediately in exclusive mode when needed. That removes an unnecessary relock and, in turn, reduces lock contention for B-tree leaf pages. Our experiments on multi-core systems showed acceleration up to 4.5 times in a corner case.

Discussion: https://postgr.es/m/CAPpHfduAMDFMNYTCN7VMBsFg_hsf0GqiqXnt%2BbSeaJworwFoig%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Yoshikazu Imai, Simon Riggs, Peter Geoghegan

* Fix grammar in README.tuplock  (Alvaro Herrera, 2018-07-27)

Author: Brad DeJong
Discussion: https://postgr.es/m/CAJnrtnxrA4FqZi0Z6kGPQKMiZkWv2xxgSDQ+hv1jDrf8WCKjjw@mail.gmail.com

* Use key and partdesc from PartitionDispatch where possible.  (Robert Haas, 2018-07-27)

Instead of repeatedly fishing the data out of the relcache entry, let's use the version that we cached in the PartitionDispatch. We could alternatively rip out the PartitionDispatch fields altogether, but it doesn't make much sense to have them and not use them; before this patch, partdesc was set but altogether unused. Amit Langote and I both thought using them was a little better than removing them, so this patch takes that approach.

Discussion: http://postgr.es/m/CA+TgmobFnxcaW-Co-XO8=yhJ5pJXoNkCj6Z7jm9Mwj9FGv-D7w@mail.gmail.com