path: root/contrib/postgres_fdw
Commit message / Author / Age
...
* Revert changes for SSL compression in libpq (Michael Paquier, 2021-03-10)
  This partially reverts 096bbf7 and 9d2d457, undoing the libpq changes, as they could cause breakage in distributions that share one single libpq version across multiple major versions of Postgres for extensions and applications linking to it.

  Note that the backend is unchanged here: it still disables SSL compression while simplifying the underlying catalogs that tracked whether compression was enabled for an SSL connection.

  Per discussion with Tom Lane and Daniel Gustafsson.
  Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
* Switch back sslcompression to be a normal input field in libpq (Michael Paquier, 2021-03-09)
  Per buildfarm member crake, any server including a postgres_fdw server with this option set would fail pg_upgrade, as the option became hidden in f9264d1 when it was turned into a debug option, making the restore of the FDW server fail. This makes the option visible again in libpq, but still inactive, to fix the upgrade issue.

  Discussion: https://postgr.es/m/YEbq15JKJwIX+S6m@paquier.xyz
* Remove support for SSL compression (Michael Paquier, 2021-03-09)
  PostgreSQL disabled compression as of e3bdb2d, and the documentation has recommended against using it since. Additionally, SSL compression has been disabled in OpenSSL since version 1.1.0, and was disabled in many distributions long before that. The most recent TLS version, TLSv1.3, disallows compression at the protocol level.

  This commit removes the feature itself, removing support for the libpq parameter sslcompression (the parameter is still listed for compatibility with existing connection strings, just ignored), and removes the equivalent field in pg_stat_ssl and, de facto, PgBackendSSLStatus.

  Note that, on top of removing the ability to activate compression by configuration, compression is actively disabled in both frontend and backend to avoid overrides from local configurations.

  A TAP test is added for the deprecated SSL parameters to check backwards compatibility.

  Bump catalog version.

  Author: Daniel Gustafsson
  Reviewed-by: Peter Eisentraut, Magnus Hagander, Michael Paquier
  Discussion: https://postgr.es/m/7E384D48-11C5-441B-9EC3-F7DB1F8518F6@yesql.se
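  A minimal sketch of the resulting behavior (foreign server name hypothetical): the option name stays accepted while having no effect, and the reporting field is gone.

    -- "sslcompression" remains a valid option name, but is now ignored
    ALTER SERVER pgserver OPTIONS (ADD sslcompression '0');
    -- pg_stat_ssl no longer has a "compression" column after this commit
    SELECT * FROM pg_stat_ssl;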
* Allow specifying CRL directory (Peter Eisentraut, 2021-02-18)
  Add another method to specify CRLs, the hashed directory method, for both the server and the client side. This offers a way for the server or libpq to load only the CRLs that are required to verify a certificate. The CRL directory is specified by separate GUC variables or connection options, ssl_crl_dir and sslcrldir, alongside the existing ssl_crl_file and sslcrl, so both methods can be used at the same time.

  Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/20200731.173911.904649928639357911.horikyota.ntt@gmail.com
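  A sketch of the new client-side option (server name and path hypothetical); sslcrldir can be passed through a postgres_fdw server definition like any other libpq connection option:

    -- the directory is expected to hold CRLs under OpenSSL hashed names
    CREATE SERVER secure_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'db.example.com', dbname 'app', sslcrldir '/etc/postgresql/crl');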
* Fix tuple routing to initialize batching only for inserts (Tomas Vondra, 2021-02-18)
  A cross-partition update on a partitioned table is implemented as a delete followed by an insert. With foreign partitions, this was causing issues, because the FDW and core may disagree on when to enable batching: postgres_fdw was only allowing batching for plain inserts (CMD_INSERT), while core was trying to batch the insert component of the cross-partition update. Fix by restricting core to apply batching only to plain CMD_INSERT queries.

  It's possible to allow batching for cross-partition updates, but that will require more extensive changes, so it is better left for a separate patch.

  Author: Amit Langote
  Reviewed-by: Tomas Vondra, Takayuki Tsunakawa
  Discussion: https://postgr.es/m/20200628151002.7x5laxwpgvkyiu3q@development
* Fix permission checks on constraint violation errors on partitions. (Heikki Linnakangas, 2021-02-08)
  If a cross-partition UPDATE violates a constraint on the target partition, and the columns in the new partition are in a different physical order than in the parent, the error message can reveal columns that the user does not have SELECT permission on. A similar bug was fixed earlier in commit 804b6b6db4.

  The cause of the bug is that the callers of the ExecBuildSlotValueDescription() function got confused when constructing the list of modified columns. If the tuple was routed from a parent, we converted the tuple to the parent's format, but the list of modified columns was grabbed directly from the child's RTE entry.

  ExecUpdateLockMode() had a similar issue. That led to confusion over which columns are key columns, leading to the wrong tuple lock being taken on tables referenced by foreign keys when a row is updated with INSERT ON CONFLICT UPDATE. A new isolation test is added for that corner case.

  With this patch, the ri_RangeTableIndex field is no longer set for partitions that don't have an entry in the range table. Previously, it was set to the RTE entry of the parent relation, but that was confusing.

  NOTE: This modifies the ResultRelInfo struct, replacing the ri_PartitionRoot field with ri_RootResultRelInfo. That's a bit risky to backpatch, because it breaks any extensions accessing the field. The change that ri_RangeTableIndex is not set for partitions could potentially break extensions, too. The ResultRelInfos are visible to FDWs at least, and this patch required small changes to postgres_fdw. Nevertheless, this seems like the least bad option. I don't think these fields are widely used in extensions; I don't think there are FDWs out there that use the FDW "direct update" API, other than postgres_fdw. If there are, you will get a compilation error, so hopefully it is caught quickly.

  Backpatch to 11, where support for both cross-partition UPDATEs and unique indexes on partitioned tables was added.

  Reviewed-by: Amit Langote
  Security: CVE-2021-3393
* postgres_fdw: Fix assertion in estimate_path_cost_size(). (Etsuro Fujita, 2021-02-05)
  Commit 08d2d58a2 added an assertion assuming that the retrieved_rows estimate for a foreign relation, which is re-used to cost pre-sorted foreign paths with local stats, is set to at least one row in estimate_path_cost_size(). That isn't correct, because if the relation is a foreign table with tuples=0, the estimate is set to 0 there when not using remote estimates.

  Per bug #16807 from Alexander Lakhin. Back-patch to v13, where the aforementioned commit went in.

  Author: Etsuro Fujita
  Reviewed-by: Kyotaro Horiguchi
  Discussion: https://postgr.es/m/16807-9fe4e08fbaa5c7ce%40postgresql.org
* postgres_fdw: Fix tests for CLOBBER_CACHE_ALWAYS. (Fujii Masao, 2021-01-30)
  The regression tests added in commits 708d165ddb and 411ae64997 caused buildfarm failures when CLOBBER_CACHE_ALWAYS was enabled. This commit stabilizes those tests.

  The foreign server connections established by postgres_fdw behave differently depending on whether CLOBBER_CACHE_ALWAYS is enabled. If it's not enabled, those connections are cached. If it is enabled, connections established outside a transaction block are not cached (i.e., they are immediately closed at the end of the query that established them), so the subsequent postgres_fdw_get_connections() cannot list them and postgres_fdw_disconnect() cannot close them (because they are already closed). Connections established inside a transaction block are cached either way, but with CLOBBER_CACHE_ALWAYS enabled they are immediately marked as invalid, which causes the subsequent postgres_fdw_get_connections() to return a different result in the "valid" column.

  This commit prevents those behavioral differences from affecting the regression tests. Per buildfarm failure on trilobite. Original patch by Bharath Rupireddy; I (Fujii Masao) extracted the regression test fix from that and revised it a bit.

  Reported-by: Tom Lane
  Author: Bharath Rupireddy
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/2688508.1611865371@sss.pgh.pa.us
* Fix memory leak when deallocating prepared statement in postgres_fdw (Michael Paquier, 2021-01-26)
  The leak is minor, so no backpatch is done. Oversight in 21734d2.

  Reported-by: Tom Lane
* postgres_fdw: Fix test failure with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS (Fujii Masao, 2021-01-26)
  Roles created by regression tests should have names starting with "regress_", and the test introduced in commit 411ae64997 did not do that. Per buildfarm member longfin.

  Discussion: https://postgr.es/m/73fc5ae4-3c54-1262-4533-f8c547de2e60@oss.nttdata.com
* postgres_fdw: Stabilize regression test for postgres_fdw_disconnect_all(). (Fujii Masao, 2021-01-26)
  The regression test added in commit 411ae64997 caused buildfarm failures because the order of the warning messages output by the test was not stable. To fix this, this commit temporarily sets client_min_messages to ERROR when performing the part of the test that generates those warnings.

  Per buildfarm failures.
  Discussion: https://postgr.es/m/2147113.1611644754@sss.pgh.pa.us
* postgres_fdw: Add functions to discard cached connections. (Fujii Masao, 2021-01-26)
  This commit introduces two new functions, postgres_fdw_disconnect() and postgres_fdw_disconnect_all(). The former discards the cached connections to the specified foreign server; the latter discards all cached connections. If a connection is in use in the current transaction, it is not closed and a warning message is emitted.

  These functions are useful, for example, when users want to explicitly close foreign server connections that are no longer necessary, preventing them from eating up the foreign server's connection capacity.

  Author: Bharath Rupireddy, tweaked a bit by Fujii Masao
  Reviewed-by: Alexey Kondratov, Zhijie Hou, Zhihong Yu, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACVvrp5=AVp2PupEm+nAC8S4buqR3fJMmaCoc7ftT0aD2A@mail.gmail.com
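  A minimal usage sketch ("loopback" is a hypothetical server name):

    SELECT postgres_fdw_disconnect('loopback');  -- discard cached connections to one server
    SELECT postgres_fdw_disconnect_all();        -- discard all cached connections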
* Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar. (Tom Lane, 2021-01-21)
  Previously, pull_varnos() took the relids of a PlaceHolderVar as being equal to the relids in its contents, but that fails to account for the possibility that we have to postpone evaluation of the PHV due to outer joins. This could result in a malformed plan. The known cases end up triggering the "failed to assign all NestLoopParams to plan nodes" sanity check in createplan.c, but other symptoms may be possible.

  The right value to use is the join level we actually intend to evaluate the PHV at. We can get that from the ph_eval_at field of the associated PlaceHolderInfo. However, there are some places that call pull_varnos() before the PlaceHolderInfos have been created; in that case, fall back to the conservative assumption that the PHV will be evaluated at its syntactic level. (In principle this might result in missing some legal optimization, but I'm not aware of any cases where it's an issue in practice.) Things are also a bit ticklish for calls occurring during deconstruct_jointree(), but AFAICS the ph_eval_at fields should have reached their final values by the time we need them.

  The main problem in making this work is that pull_varnos() has no way to get at the PlaceHolderInfos. We can fix that easily, if a bit tediously, in HEAD by passing it the planner "root" pointer. In the back branches that'd cause an unacceptable API/ABI break for extensions, so leave the existing entry points alone and add new ones with the additional parameter. (If an old entry point is called and encounters a PHV, it'll fall back to using the syntactic level, again possibly missing some valid optimization.)

  Back-patch to v12. The computation is surely also wrong before that, but it appears that we cannot reach a bad plan thanks to join order restrictions imposed on the subquery that the PlaceHolderVar came from. The error only became reachable when commit 4be058fe9 allowed trivial subqueries to be collapsed out completely, eliminating their join order restrictions.

  Per report from Stephan Springl.
  Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
* Implement support for bulk inserts in postgres_fdw (Tomas Vondra, 2021-01-20)
  Extends the FDW API to allow batching inserts into foreign tables. That is usually much more efficient than inserting individual rows, due to the high latency of each round-trip to the foreign server.

  It was possible to implement something similar through the regular FDW API, but it was inconvenient, and there were issues with reporting the number of actually inserted rows, etc. This extends the FDW API with two new functions:

  * GetForeignModifyBatchSize - allows the FDW to pick an optimal batch size
  * ExecForeignBatchInsert - inserts a batch of rows at once

  Currently, only INSERT queries support batching. Support for DELETE and UPDATE may be added in the future.

  This also implements batching for postgres_fdw. The batch size may be specified using the "batch_size" option at both the server and table level.

  The initial patch version was written by me, but it was rewritten and improved in many ways by Takayuki Tsunakawa.

  Author: Takayuki Tsunakawa
  Reviewed-by: Tomas Vondra, Amit Langote
  Discussion: https://postgr.es/m/20200628151002.7x5laxwpgvkyiu3q@development
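  A minimal sketch of the postgres_fdw option (server and table names hypothetical):

    ALTER SERVER pgserver OPTIONS (ADD batch_size '100');            -- server-level default
    ALTER FOREIGN TABLE remote_items OPTIONS (ADD batch_size '10');  -- per-table override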
* postgres_fdw: Add function to list cached connections to foreign servers. (Fujii Masao, 2021-01-18)
  This commit adds the function postgres_fdw_get_connections(), which returns the foreign server names of all the open connections that postgres_fdw established from the local session to foreign servers, along with whether each connection is valid or not.

  This function is useful for checking all the open foreign server connections. If the result shows connections that should be dropped, they can be closed explicitly using the function that an upcoming commit will add.

  This commit bumps the version of postgres_fdw to 1.1 since it adds a new function.

  Author: Bharath Rupireddy, tweaked by Fujii Masao
  Reviewed-by: Zhijie Hou, Alexey Kondratov, Zhihong Yu, Fujii Masao
  Discussion: https://postgr.es/m/2d5cb0b3-a6e8-9bbb-953f-879f47128faa@oss.nttdata.com
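  A minimal usage sketch:

    SELECT server_name, valid FROM postgres_fdw_get_connections();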
* postgres_fdw: Save foreign server OID in connection cache entry. (Fujii Masao, 2021-01-15)
  The foreign server OID stored in the connection cache entry is used as a lookup key to get the server name directly. Previously, since the connection cache entry did not contain the server OID, postgres_fdw first had to look up the user mapping to obtain the server OID before getting the server name. So if the corresponding user mapping had been dropped, postgres_fdw could raise the error "cache lookup failed for user mapping" while looking up the user mapping, and fail to get the server name even though the server itself had not been dropped yet.

  Author: Bharath Rupireddy
  Reviewed-by: Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACVRZPUB7ZwqLn-6DY8C_UmPs6084gSpHA92YBv++1AJXA@mail.gmail.com
* Replace remaining uses of "whitelist". (Thomas Munro, 2021-01-05)
  Instead describe the action that the list effects, or just use "list" where the meaning is obvious from context.

  Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
  Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue%40alap3.anarazel.de
* Update copyright for 2021 (Bruce Momjian, 2021-01-02)
  Backpatch-through: 9.5
* postgres_fdw: Fix connection leak. (Fujii Masao, 2020-12-28)
  In postgres_fdw, cached connections to foreign servers were not closed until the local session exited if the user mappings or foreign servers that those connections depended on were dropped, so those connections could be leaked.

  To fix that connection leak, after a change to a pg_foreign_server or pg_user_mapping catalog entry, this commit makes postgres_fdw close the connections depending on that entry immediately if the current transaction has not used those connections yet. Otherwise, the connections are marked as invalid and closed at the end of the current transaction, since they cannot be closed in the midst of a transaction that is using them. Closed connections will be re-established at the next opportunity if necessary.

  Back-patch to all supported branches.

  Author: Bharath Rupireddy
  Reviewed-by: Zhihong Yu, Zhijie Hou, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACVNcGH_6qLY-4_tXz8JLvA+4yeBThRfxMz7Oxbk1aHcpQ@mail.gmail.com
* Improve hash_create()'s API for some added robustness. (Tom Lane, 2020-12-15)
  Invent a new flag bit HASH_STRINGS to specify C-string hashing, which was formerly the default; and add assertions insisting that exactly one of the bits HASH_STRINGS, HASH_BLOBS, and HASH_FUNCTION be set. This is in hopes of preventing recurrences of the type of oversight fixed in commit a1b8aa1e4 (i.e., mistakenly omitting HASH_BLOBS).

  Also, when HASH_STRINGS is specified, insist that the keysize be more than 8 bytes. This is a heuristic, but it should catch accidental use of HASH_STRINGS for integer or pointer keys. (Nearly all existing use-cases set the keysize to NAMEDATALEN or more, so there's little reason to think this restriction should be problematic.)

  Tweak hash_create() to insist that the HASH_ELEM flag be set, and remove the defaults it had for keysize and entrysize. Since those defaults were undocumented and basically useless, no callers omitted HASH_ELEM anyway.

  Also, remove memset's zeroing the HASHCTL parameter struct from those callers that had one. This has never been really necessary, and while it wasn't a bad coding convention it was confusing that some callers did it and some did not. We might as well save a few cycles by standardizing on "not".

  Also improve the documentation for hash_create().

  In passing, improve reinit.c's usage of a hash table by storing the key as a binary Oid rather than a string; and, since that's a temporary hash table, allocate it in CurrentMemoryContext for neatness.

  Discussion: https://postgr.es/m/590625.1607878171@sss.pgh.pa.us
* Support subscripting of arbitrary types, not only arrays. (Tom Lane, 2020-12-09)
  This patch generalizes the subscripting infrastructure so that any data type can be subscripted, if it provides a handler function to define what that means. Traditional variable-length (varlena) arrays all use array_subscript_handler(), while the existing fixed-length types that support subscripting use raw_array_subscript_handler(). It's expected that other types that want to use subscripting notation will define their own handlers. (This patch provides no such new features, though; it only lays the foundation for them.)

  To do this, move the parser's semantic processing of subscripts (including coercion to whatever data type is required) into a method callback supplied by the handler. On the execution side, replace the ExecEvalSubscriptingRef* layer of functions with direct calls to callback-supplied execution routines. (Thus, essentially no new run-time overhead should be caused by this patch. Indeed, there is room to remove some overhead by supplying specialized execution routines. This patch does a little bit in that line, but more could be done.)

  Additional work is required here and there to remove formerly hard-wired assumptions about the result type, collation, etc of a SubscriptingRef expression node; and to remove assumptions that the subscript values must be integers.

  One useful side-effect of this is that we now have a less squishy mechanism for identifying whether a data type is a "true" array: instead of wiring in weird rules about typlen, we can look to see if pg_type.typsubscript == F_ARRAY_SUBSCRIPT_HANDLER. For this to be bulletproof, we have to forbid user-defined types from using that handler directly; but there seems no good reason for them to do so.

  This patch also removes assumptions that the number of subscripts is limited to MAXDIM (6), or indeed has any hard-wired limit. That limit still applies to types handled by array_subscript_handler or raw_array_subscript_handler, but to discourage other dependencies on this constant, I've moved it from c.h to utils/array.h.

  Dmitry Dolgov, reviewed at various times by Tom Lane, Arthur Zakirov, Peter Eisentraut, Pavel Stehule
  Discussion: https://postgr.es/m/CA+q6zcVDuGBv=M0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w@mail.gmail.com
  Discussion: https://postgr.es/m/CA+q6zcVovR+XY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA@mail.gmail.com
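  For instance, the new catalog column makes the "true array" test expressible in plain SQL; a sketch:

    -- list types whose subscripting handler is the varlena-array one
    SELECT typname FROM pg_type WHERE typsubscript = 'array_subscript_handler'::regproc;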
* Move per-agg and per-trans duplicate finding to the planner. (Heikki Linnakangas, 2020-11-24)
  This has the advantage that the cost estimates for aggregates can count the number of calls to transition and final functions correctly.

  Bump catalog version, because views can contain Aggrefs.

  Reviewed-by: Andres Freund
  Discussion: https://www.postgresql.org/message-id/b2e3536b-1dbc-8303-c97e-89cb0b4a9a48%40iki.fi
* Fix and simplify some usages of TimestampDifference(). (Tom Lane, 2020-11-10)
  Introduce TimestampDifferenceMilliseconds() to simplify callers that would rather have the difference in milliseconds, instead of the select()-oriented seconds-and-microseconds format. This gets rid of at least one integer division per call, and it eliminates some apparently-easy-to-mess-up arithmetic.

  Two of these call sites were in fact wrong:

  * pg_prewarm's autoprewarm_main() forgot to multiply the seconds by 1000, thus ending up with a delay 1000X shorter than intended. That doesn't quite make it a busy-wait, but close.

  * postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute microseconds not milliseconds, thus ending up with a delay 1000X longer than intended. Somebody along the way had noticed this problem but misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather than fixing the units. This was relatively harmless in context, because we don't care that much about exactly how long this delay is; still, it's wrong.

  There are a few more callers of TimestampDifference() that don't have a direct need for seconds-and-microseconds, but can't use TimestampDifferenceMilliseconds() either because they do need microsecond precision or because they might possibly deal with intervals long enough to overflow 32-bit milliseconds. It might be worth inventing another API to improve that, but that seems outside the scope of this patch; so those callers are untouched here.

  Given the fact that we are fixing some bugs, and the likelihood that future patches might want to back-patch code that uses this new API, back-patch to all supported branches.

  Alexey Kondratov and Tom Lane
  Discussion: https://postgr.es/m/3b1c053a21c07c1ed5e00be3b2b855ef@postgrespro.ru
* In security-restricted operations, block enqueue of at-commit user code. (Noah Misch, 2020-11-09)
  Specifically, this blocks DECLARE ... WITH HOLD and firing of deferred triggers within index expressions and materialized view queries. An attacker having permission to create non-temp objects in at least one schema could execute arbitrary SQL functions under the identity of the bootstrap superuser.

  One can work around the vulnerability by disabling autovacuum and not manually running ANALYZE, CLUSTER, REINDEX, CREATE INDEX, VACUUM FULL, or REFRESH MATERIALIZED VIEW. (Don't restore from pg_dump, since it runs some of those commands.) Plain VACUUM (without FULL) is safe, and all commands are fine when a trusted user owns the target object. Performance may degrade quickly under this workaround, however.

  Back-patch to 9.5 (all supported versions).

  Reviewed by Robert Haas. Reported by Etienne Stalmans.
  Security: CVE-2020-25695
* postgres_fdw: Restructure connection retry logic. (Fujii Masao, 2020-10-16)
  Commit 32a9c0bdf introduced connection retry logic into postgres_fdw, using a goto statement for the retry. This commit gets rid of that goto to make the code simpler and easier to read.

  When getting out of PG_CATCH() for the retry, the error state should be cleaned up and the memory context should be reset, but commit 32a9c0bdf forgot to do that. This commit also fixes that bug.

  Previously, only PQstatus()==CONNECTION_BAD was checked to detect connection failure. But this could cause false detection in the case where an error other than connection failure (e.g., out-of-memory) is thrown after a broken connection is detected in libpq and CONNECTION_BAD is set. To fix this issue, this commit changes the logic so that it also checks that the error's sqlstate is ERRCODE_CONNECTION_FAILURE.

  Author: Fujii Masao
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/2943611.1602375376@sss.pgh.pa.us
* Fixup some appendStringInfo and appendPQExpBuffer calls (David Rowley, 2020-10-15)
  A number of places were using appendStringInfo() when they could have been using appendStringInfoString() instead. While there's no functionality change there, it's just more efficient to use appendStringInfoString() when no formatting is required. Likewise for some appendStringInfoString() calls which were just appending a single char. We can just use appendStringInfoChar() for that.

  Additionally, many places were using appendPQExpBuffer() when they could have used appendPQExpBufferStr(). Change those too.

  Patch by Zhijie Hou, but further searching by me found significantly more places that deserved the same treatment.

  Author: Zhijie Hou, David Rowley
  Discussion: https://postgr.es/m/cb172cf4361e4c7ba7167429070979d4@G08CNEXMBPEKD05.g08.fujitsu.local
* Include result relation info in direct modify ForeignScan nodes. (Heikki Linnakangas, 2020-10-14)
  FDWs that can perform an UPDATE/DELETE remotely using the "direct modify" set of APIs need to access the ResultRelInfo of the target table. That's currently available in EState.es_result_relation_info, but the next commit will remove that field.

  This commit adds a new resultRelation field in ForeignScan, to store the target relation's RT index, and the corresponding ResultRelInfo in ForeignScanState. The FDW's PlanDirectModify callback is expected to set 'resultRelation' along with 'operation'. The core code doesn't need them for anything; they are for the convenience of the FDW's Begin- and IterateDirectModify callbacks.

  Authors: Amit Langote, Etsuro Fujita
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com
* Band-aid new postgres_fdw test case to remove error text dependency. (Tom Lane, 2020-10-10)
  Buildfarm member lorikeet is still failing the test from commit 32a9c0bdf, but now it's down to the should-have-foreseen-it problem that the error message isn't what the expected-output file expects. Let's see if we can get stable results by printing just the SQLSTATE. I believe we'll reliably see ERRCODE_CONNECTION_FAILURE, since pgfdw_report_error() will report that for any libpq-originated error.

  There may be a better way to do this, but I'd like to get the buildfarm back to green before we discuss further improvements.

  Discussion: https://postgr.es/m/E1kPc9v-0005L4-2l@gemulon.postgresql.org
  Discussion: https://postgr.es/m/2621622.1602184554@sss.pgh.pa.us
* postgres_fdw: reestablish new connection if cached one is detected as broken. (Fujii Masao, 2020-10-06)
  In postgres_fdw, once remote connections are established, they are cached and re-used for subsequent queries and transactions. There can be cases where those cached connections become unavailable, for example after a restart of the remote server. Previously, in such cases, if a new remote transaction failed to start because the cached connection was broken, an error was reported and the query accessing the remote server failed.

  This commit improves postgres_fdw so that a new connection is remade if a broken connection is detected when starting a new remote transaction. This is useful to avoid unnecessary query failures when a connection is broken but can be reestablished.

  Author: Bharath Rupireddy, tweaked a bit by Fujii Masao
  Reviewed-by: Ashutosh Bapat, Tatsuhito Kasahara, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d=78YQ@mail.gmail.com
* Remove support for postfix (right-unary) operators. (Tom Lane, 2020-09-17)
  This feature has been a thorn in our sides for a long time, causing many grammatical ambiguity problems. It doesn't seem worth the pain to continue to support it, so remove it.

  There are some follow-on improvements we can make in the grammar, but this commit only removes the bare minimum number of productions, plus assorted backend support code.

  Note that pg_dump and psql continue to have full support, since they may be used against older servers. However, pg_dump warns about postfix operators. There is also a check in pg_upgrade.

  Documentation-wise, I (tgl) largely removed the "left unary" terminology in favor of saying "prefix operator", which is a more standard and IMO less confusing term.

  I included a catversion bump, although no initial catalog data changes here, to mark the boundary at which oprkind = 'r' stopped being valid in pg_operator.

  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
* Remove factorial operators, leaving only the factorial() function. (Tom Lane, 2020-09-17)
  The "!" operator is our only built-in postfix operator. Remove it, on the way to removal of grammar support for postfix operators.

  There is also a "!!" prefix operator, but since it's been marked deprecated for most of its existence, we might as well remove it too. Also zap the SQL alias function numeric_fac(), which seems to have equally little reason to live.

  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
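  The replacement in queries is the function form; a sketch:

    SELECT 5!;            -- postfix form: now a syntax error
    SELECT factorial(5);  -- returns 120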
* Add support for partitioned tables and indexes in REINDEX (Michael Paquier, 2020-09-08)
  Until now, REINDEX was not able to work with partitioned tables and indexes, forcing users to reindex partitions one by one. This extends REINDEX INDEX and REINDEX TABLE so that they can accept a partitioned index or table as input, respectively, and reindex all the partitions assigned to them that have physical storage (foreign tables, partitioned tables, and partitioned indexes are discarded).

  This shares some logic with schema- and database-level REINDEX, as each partition gets processed in its own transaction after building a list of relations to work on. This choice has the advantage of minimizing the number of invalid indexes in the event of an in-flight cancellation or failure of REINDEX CONCURRENTLY, as the only indexes handled at once in a single REINDEX CONCURRENTLY loop are the ones from the partition being worked on.

  Isolation tests are added to emulate some cases I bumped into while developing this feature, particularly with the concurrent drop of a leaf partition being reindexed. However, this is rather limited, as LOCK would cause REINDEX to block in the first transaction building the list of partitions.

  Per its multi-transaction nature, this new flavor cannot run in a transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.

  Author: Justin Pryzby, Michael Paquier
  Reviewed-by: Anastasia Lubennikova
  Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com
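  A minimal sketch (table and index names hypothetical); because each partition is processed in its own transaction, these cannot run inside a transaction block:

    REINDEX TABLE measurement;        -- reindexes every leaf partition of the table
    REINDEX INDEX measurement_pkey;   -- reindexes every partition of the index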
* Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE. (Tom Lane, 2020-08-30)
  Historically, we've considered the state with relpages and reltuples both zero as indicating that we do not know the table's tuple density. This is problematic because it's impossible to distinguish "never yet vacuumed" from "vacuumed and seen to be empty". In particular, a user cannot use VACUUM or ANALYZE to override the planner's normal heuristic that an empty table should not be believed to be empty because it is probably about to get populated. That heuristic is a good safety measure, so I don't care to abandon it, but there should be a way to override it if the table is indeed intended to stay empty.

  Hence, represent the initial state of ignorance by setting reltuples to -1 (relpages is still set to zero), and apply the minimum-ten-pages heuristic only when reltuples is still -1. If the table is empty, VACUUM or ANALYZE (but not CREATE INDEX) will override that to reltuples = relpages = 0, and then we'll plan on that basis.

  This requires a bunch of fiddly little changes, but we can get rid of some ugly kluges that were formerly needed to maintain the old definition.

  One notable point is that FDWs' GetForeignRelSize methods will see baserel->tuples = -1 when no ANALYZE has been done on the foreign table. That seems like a net improvement, since those methods were formerly also in the dark about what baserel->tuples = 0 really meant. Still, it is an API change.

  I bumped catversion because code predating this change would get confused by seeing reltuples = -1.

  Discussion: https://postgr.es/m/F02298E0-6EF4-49A1-BCB6-C484794D9ACC@thebuild.com
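  A small sketch of the visible behavior change:

    CREATE TABLE t (x int);
    SELECT reltuples FROM pg_class WHERE relname = 't';  -- now -1: density unknown
    ANALYZE t;
    SELECT reltuples FROM pg_class WHERE relname = 't';  -- 0: known to be empty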
* Make xact.h usable in frontend. (Heikki Linnakangas, 2020-08-17)
  xact.h included utils/datetime.h, which cannot be used in the frontend (it includes fmgr.h, which needs Datum). But xact.h only needs the definition of TimestampTz from it, which is available directly in datatype/timestamp.h. Change xact.h to include that instead of utils/datetime.h, so that it can be used in client programs.
* Revert "Use CP_SMALL_TLIST for hash aggregate"Jeff Davis2020-07-12
| | | | | | | | | | This reverts commit 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0 due to a performance regression. It will be replaced by a new approach in an upcoming commit. Reported-by: Andres Freund Discussion: https://postgr.es/m/20200614181418.mx4bvljmfkkhoqzl@alap3.anarazel.de Backpatch-through: 13
* Use CP_SMALL_TLIST for hash aggregate (Tomas Vondra, 2020-05-31)
  Commit 1f39bce021 added disk-based hash aggregation, which may spill incoming tuples to disk. It however did not request projection to make the tuples as narrow as possible, which may mean having to spill much more data than necessary (increasing I/O, pushing other stuff from page cache, etc.).

  This adds CP_SMALL_TLIST in places that may use hash aggregation - we do that only for AGG_HASHED. It's unnecessary for AGG_SORTED, because that either uses explicit Sort (which already does projection) or pre-sorted input (which does not need spilling to disk).

  Author: Tomas Vondra
  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/20200519151202.u2p2gpiawoaznsv2%40development
* Initial pgindent and pgperltidy run for v13. (Tom Lane, 2020-05-14)
  Includes some manual cleanup of places that pgindent messed up, most of which weren't per project style anyway.

  Notably, it seems some people didn't absorb the style rules of commit c9d297751, because there were a bunch of new occurrences of function calls with a newline just after the left paren, all with faulty expectations about how the rest of the call would get indented.
* Rename connection parameters to control min/max SSL protocol version in libpq (Michael Paquier, 2020-04-30)
  The libpq parameters ssl{max|min}protocolversion are renamed to use underscores, to become ssl_{max|min}_protocol_version. The related environment variables still use the names introduced in commit ff8ca5f, which added the feature.

  Per complaint from Peter Eisentraut (this was also mentioned by me in the original patch review, but the issue got discarded).

  Author: Daniel Gustafsson
  Reviewed-by: Peter Eisentraut, Michael Paquier
  Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com
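  A sketch with the renamed parameters (server name hypothetical); assuming postgres_fdw passes these libpq options through, they can appear in a server definition:

    CREATE SERVER tls_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'db.example.com',
                 ssl_min_protocol_version 'TLSv1.2',
                 ssl_max_protocol_version 'TLSv1.3');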
* Consider Incremental Sort paths at additional places (Tomas Vondra, 2020-04-07)
  Commit d2d8a229bc introduced Incremental Sort, but it was considered only in create_ordered_paths() as an alternative to regular Sort. There are many other places that require sorted input and might benefit from considering Incremental Sort too.

  This patch modifies a number of those places, but not all. The concern is that just adding Incremental Sort to any place that already adds Sort may increase the number of paths considered, negatively affecting planning time, without any benefit. So we've taken a more conservative approach, based on analysis of which places do affect a set of queries that did seem practical. This means some less common queries may not benefit from Incremental Sort yet.

  Author: Tomas Vondra
  Reviewed-by: James Coleman
  Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com
* Fix compile failure. (Tom Lane, 2020-02-24)
  I forgot that some compilers won't handle #if constructs within ereport() calls. Duplicating most of the call is annoying but simple. Per buildfarm.
* Account explicitly for long-lived FDs that are allocated outside fd.c. (Tom Lane, 2020-02-24)
  The comments in fd.c have long claimed that all file allocations should go through that module, but in reality that's not always practical. fd.c doesn't supply APIs for invoking some FD-producing syscalls like pipe() or epoll_create(); and the APIs it does supply for non-virtual FDs are mostly insistent on releasing those FDs at transaction end; and in some cases the actual open() call is in code that can't be made to use fd.c, such as libpq.

  This has led to a situation where, in a modern server, there are likely to be seven or so long-lived FDs per backend process that are not known to fd.c. Since NUM_RESERVED_FDS is only 10, that meant we had *very* few spare FDs if max_files_per_process is >= the system ulimit and fd.c had opened all the files it thought it safely could. The contrib/postgres_fdw regression test, in particular, could easily be made to fall over by running it under a restrictive ulimit.

  To improve matters, invent functions Acquire/Reserve/ReleaseExternalFD that allow outside callers to tell fd.c that they have or want to allocate a FD that's not directly managed by fd.c. Add calls to track all the fixed FDs in a standard backend session, so that we are honestly guaranteeing that NUM_RESERVED_FDS FDs remain unused below the EMFILE limit in a backend's idle state. The coding rules for these functions say that there's no need to call them in code that just allocates one FD over a fairly short interval; we can dip into NUM_RESERVED_FDS for such cases. That means that there aren't all that many places where we need to worry. But postgres_fdw and dblink must use this facility to account for long-lived FDs consumed by libpq connections. There may be other places where it's worth doing such accounting, too, but this seems like enough to solve the immediate problem.

  Internally to fd.c, "external" FDs are limited to max_safe_fds/3 FDs. (Callers can choose to ignore this limit, but of course it's unwise to do so except for fixed file allocations.) I also reduced the limit on "allocated" files to max_safe_fds/3 FDs (it had been max_safe_fds/2). Conceivably a smarter rule could be used here --- but in practice, on reasonable systems, max_safe_fds should be large enough that this isn't much of an issue, so KISS for now.

  To avoid possible regression in the number of external or allocated files that can be opened, increase FD_MINFREE and the lower limit on max_files_per_process a little bit; we now insist that the effective "ulimit -n" be at least 64.

  This seems like pretty clearly a bug fix, but in view of the lack of field complaints, I'll refrain from risking a back-patch.

  Discussion: https://postgr.es/m/E1izCmM-0005pV-Co@gemulon.postgresql.org
* Add connection parameters to control SSL protocol min/max in libpq (Michael Paquier, 2020-01-28)
  These two new parameters, named sslminprotocolversion and sslmaxprotocolversion, allow control of, respectively, the minimum and the maximum version of the SSL protocol used for an SSL connection attempt. The default setting is to allow any version for both bounds, causing libpq to rely on the bounds set by the backend when negotiating the protocol to use for an SSL connection. The bounds are checked when the values are set, at the earliest stage possible, as this makes the checks independent of any SSL implementation.

  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier, Cary Huang
  Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
* In postgres_fdw, don't try to ship MULTIEXPR updates to remote server. (Tom Lane, 2020-01-26)
  In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)", we'd conclude that the statement could be directly executed remotely, because the sub-SELECT is in a resjunk tlist item that's not examined for shippability. Currently that ends up crashing if the sub-SELECT contains any remote Vars. Prevent the crash by deeming MULTIEXPR Params to be unshippable.

  This is a bit of a brute-force solution, since if the sub-SELECT *doesn't* contain any remote Vars, the current execution technology would work; but that's not a terribly common use-case for this syntax, I think. In any case, we generally don't try to ship sub-SELECTs, so it won't surprise anybody that this doesn't end up as a remote direct update. I'd be inclined to see if that general limitation can be fixed before worrying about this case further.

  Per report from Lukáš Sobotka. Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform remote direct updates then, so the case didn't arise anyway.

  Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
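  The statement shape in question (table and column names hypothetical); with this fix it executes locally instead of crashing when pushed to the remote server:

    UPDATE remote_tab SET (x, y) = (SELECT a, b FROM local_tab WHERE local_tab.id = remote_tab.id);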
* Only superuser can set sslcert/sslkey in postgres_fdw user mappings (Andrew Dunstan, 2020-01-13)
  Otherwise there is a security risk.

  Discussion: https://postgr.es/m/20200109103014.GA4192@msg.df7cb.de
* Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings (Andrew Dunstan, 2020-01-09)
  This allows different users to authenticate with different certificates.

  Author: Craig Ringer
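  A minimal sketch (user, server, and file paths hypothetical):

    CREATE USER MAPPING FOR alice SERVER pgserver
        OPTIONS (sslcert '/home/alice/client.crt', sslkey '/home/alice/client.key');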
* Update copyrights for 2020 (Bruce Momjian, 2020-01-01)
  Backpatch-through: update all files in master, backpatch legal files through 9.4
* Adjust test case added by commit 6136e94dc. (Tom Lane, 2019-12-20)
  Per project policy, transient roles created by regression test cases should be named "regress_something", to reduce the risks of running such cases against installed servers. And no such role should ever be left behind after running a test.

  Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
* libpq should expose GSS-related parameters even when not implemented. (Tom Lane, 2019-12-20)
  We realized years ago that it's better for libpq to accept all connection parameters syntactically, even if some are ignored or restricted due to lack of the feature in a particular build. However, that lesson from the SSL support was for some reason never applied to the GSSAPI support. This is causing various buildfarm members to have problems with a test case added by commit 6136e94dc, and it's just a bad idea from a user-experience standpoint anyway, so fix it.

  While at it, fix some places where parameter-related infrastructure was added with the aid of a dartboard, or perhaps with the aid of the anti-pattern "add new stuff at the end". It should be safe to rearrange the contents of struct pg_conn even in released branches, since that's private to libpq (and we'd have to move some fields in some builds to fix this, anyway).

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
* Superuser can permit passwordless connections on postgres_fdw (Andrew Dunstan, 2019-12-20)
  Currently postgres_fdw doesn't permit a non-superuser to connect to a foreign server without specifying a password, or to use an authentication mechanism that doesn't use the password. This is to avoid using the settings and identity of the user running Postgres.

  However, this doesn't make sense for all authentication methods. We therefore allow a superuser to set "password_required 'false'" for user mappings for the postgres_fdw. The superuser must ensure that the foreign server won't try to rely solely on the server identity (e.g. trust, peer, ident) or use an authentication mechanism that relies on the password settings (e.g. md5, scram-sha-256).

  This feature is a prelude to better support for sslcert and sslkey settings in user mappings.

  Author: Craig Ringer.
  Discussion: https://postgr.es/m/075135da-545c-f958-fed0-5dcb462d6dae@2ndQuadrant.com
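  A minimal sketch (names hypothetical); requires superuser, and the foreign server must then authenticate the mapped user by some non-password method such as a client certificate:

    ALTER USER MAPPING FOR app_user SERVER pgserver OPTIONS (ADD password_required 'false');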
* Further adjust EXPLAIN's choices of table alias names. (Tom Lane, 2019-12-11)
  This patch causes EXPLAIN to always assign a separate table alias to the parent RTE of an append relation (inheritance set); before, such RTEs were ignored if not actually scanned by the plan. Since the child RTEs now always have that same alias to start with (cf. commit 55a1954da), the net effect is that the parent RTE usually gets the alias used or implied by the query text, and the children all get that alias with "_N" appended. (The exception to "usually" is if there are duplicate aliases in different subtrees of the original query; then some of those original RTEs will also have "_N" appended.)

  This results in more uniform output for partitioned-table plans than we had before: the partitioned table itself gets the original alias, and all child tables have aliases with "_N", rather than the previous behavior where one of the children would get an alias without "_N".

  The reason for giving the parent RTE an alias, even if it isn't scanned by the plan, is that we now use the parent's alias to qualify Vars that refer to an appendrel output column and appear above the Append or MergeAppend that computes the appendrel. But below the append, Vars refer to some one of the child relations, and are displayed that way. This seems clearer than the old behavior where a Var that could carry values from any child relation was displayed as if it referred to only one of them.

  While at it, change ruleutils.c so that the code paths used by EXPLAIN deal in Plan trees not PlanState trees. This effectively reverts a decision made in commit 1cc29fe7c, which seemed like a good idea at the time to make ruleutils.c consistent with explain.c. However, it's problematic because we'd really like to allow executor startup pruning to remove all the children of an append node when possible, leaving no child PlanState to resolve Vars against. (That's not done here, but will be in the next patch.) This requires different handling of subplans and initplans than before, but is otherwise a pretty straightforward change.

  Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
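  A rough illustration of the resulting aliasing (table and plan shape schematic, not verbatim output):

    EXPLAIN (COSTS OFF) SELECT * FROM parted p;
    --  Append
    --    ->  Seq Scan on parted_child1 p_1
    --    ->  Seq Scan on parted_child2 p_2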