path: root/contrib/postgres_fdw
Commit history for contrib/postgres_fdw (commit message, author, date; newest first):

* postgres_fdw: Restructure connection retry logic.  (Fujii Masao, 2020-10-16)

  Commit 32a9c0bdf introduced connection retry logic into postgres_fdw.
  Previously it used a goto statement for the retry. This commit gets rid of
  that goto to make the code simpler and easier to read.

  When getting out of PG_CATCH() for the retry, the error state should be
  cleaned up and the memory context should be reset, but commit 32a9c0bdf
  forgot to do that. This commit also fixes that bug.

  Previously only PQstatus() == CONNECTION_BAD was checked to detect a
  connection failure. But this could cause false detection in the case where
  some error other than a connection failure (e.g., out of memory) was thrown
  after a broken connection had been detected in libpq and CONNECTION_BAD had
  been set. To fix this, the logic now also checks that the error's sqlstate
  is ERRCODE_CONNECTION_FAILURE.

  Author: Fujii Masao
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/2943611.1602375376@sss.pgh.pa.us

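  As a rough sketch of the check described above (simplified, not the exact
  connection.c code; `entry->conn`, `ccxt`, and `retry` stand in for the cached
  connection, the caller's memory context, and the retry flag):

      PG_TRY();
      {
          /* attempt to begin a remote transaction on the cached connection */
          ...
      }
      PG_CATCH();
      {
          MemoryContext ecxt = MemoryContextSwitchTo(ccxt);
          ErrorData  *errdata = CopyErrorData();

          /*
           * Retry only if libpq reports the connection as bad AND the error
           * really is a connection failure; anything else is re-thrown.
           */
          if (errdata->sqlerrcode != ERRCODE_CONNECTION_FAILURE ||
              PQstatus(entry->conn) != CONNECTION_BAD)
          {
              MemoryContextSwitchTo(ecxt);
              PG_RE_THROW();
          }

          /* Clean up the error state and reset the context before retrying. */
          FreeErrorData(errdata);
          FlushErrorState();
          retry = true;
      }
      PG_END_TRY();
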
* Fixup some appendStringInfo and appendPQExpBuffer calls  (David Rowley, 2020-10-15)

  A number of places were using appendStringInfo() when they could have been
  using appendStringInfoString() instead. While there's no functionality
  change there, it's just more efficient to use appendStringInfoString() when
  no formatting is required. Likewise for some appendStringInfoString() calls
  which were just appending a single char. We can just use
  appendStringInfoChar() for that.

  Additionally, many places were using appendPQExpBuffer() when they could
  have used appendPQExpBufferStr(). Change those too.

  Patch by Zhijie Hou, but further searching by me found significantly more
  places that deserved the same treatment.

  Author: Zhijie Hou, David Rowley
  Discussion: https://postgr.es/m/cb172cf4361e4c7ba7167429070979d4@G08CNEXMBPEKD05.g08.fujitsu.local

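  An illustrative fragment of the pattern (not taken from the patch; `buf` and
  `relid` are assumed local variables, and lib/stringinfo.h is assumed to be
  included):

      StringInfoData buf;

      initStringInfo(&buf);
      appendStringInfo(&buf, "relation %u", relid); /* formatting actually needed */
      appendStringInfoString(&buf, " WHERE ");      /* plain string: skip format parsing */
      appendStringInfoChar(&buf, '(');              /* single character: cheapest of all */
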
* Include result relation info in direct modify ForeignScan nodes.  (Heikki Linnakangas, 2020-10-14)

  FDWs that can perform an UPDATE/DELETE remotely using the "direct modify"
  set of APIs need to access the ResultRelInfo of the target table. That's
  currently available in EState.es_result_relation_info, but the next commit
  will remove that field.

  This commit adds a new resultRelation field in ForeignScan, to store the
  target relation's RT index, and the corresponding ResultRelInfo in
  ForeignScanState. The FDW's PlanDirectModify callback is expected to set
  'resultRelation' along with 'operation'. The core code doesn't need them
  for anything; they are for the convenience of FDWs' Begin- and
  IterateDirectModify callbacks.

  Authors: Amit Langote, Etsuro Fujita
  Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGEmiib8FLiHMhKB%2BCH5dRgHSLc5N5wnvc4kym%2BZYpQEQ%40mail.gmail.com

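  A minimal sketch of what a PlanDirectModify callback is now expected to do,
  per the description above (variable names are illustrative, not the exact
  postgres_fdw code):

      /* In the FDW's PlanDirectModify callback, after building the ForeignScan: */
      fscan->operation = operation;             /* CMD_UPDATE or CMD_DELETE */
      fscan->resultRelation = resultRelation;   /* RT index of the target table */
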
* Band-aid new postgres_fdw test case to remove error text dependency.  (Tom Lane, 2020-10-10)

  Buildfarm member lorikeet is still failing the test from commit 32a9c0bdf,
  but now it's down to the should-have-foreseen-it problem that the error
  message isn't what the expected-output file expects. Let's see if we can
  get stable results by printing just the SQLSTATE. I believe we'll reliably
  see ERRCODE_CONNECTION_FAILURE, since pgfdw_report_error() will report that
  for any libpq-originated error.

  There may be a better way to do this, but I'd like to get the buildfarm
  back to green before we discuss further improvements.

  Discussion: https://postgr.es/m/E1kPc9v-0005L4-2l@gemulon.postgresql.org
  Discussion: https://postgr.es/m/2621622.1602184554@sss.pgh.pa.us

* postgres_fdw: reestablish new connection if cached one is detected as broken.  (Fujii Masao, 2020-10-06)

  In postgres_fdw, once remote connections are established, they are cached
  and re-used for subsequent queries and transactions. There can be cases
  where those cached connections become unavailable, for example after a
  restart of the remote server. In those cases, previously an error was
  reported and the query accessing the remote server failed if a new remote
  transaction failed to start because the cached connection was broken.

  This commit improves postgres_fdw so that a new connection is made if a
  broken connection is detected when starting a new remote transaction. This
  is useful to avoid unnecessary failures of queries when the connection is
  broken but can be reestablished.

  Author: Bharath Rupireddy, tweaked a bit by Fujii Masao
  Reviewed-by: Ashutosh Bapat, Tatsuhito Kasahara, Fujii Masao
  Discussion: https://postgr.es/m/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d=78YQ@mail.gmail.com

* Remove support for postfix (right-unary) operators.  (Tom Lane, 2020-09-17)

  This feature has been a thorn in our sides for a long time, causing many
  grammatical ambiguity problems. It doesn't seem worth the pain to continue
  to support it, so remove it.

  There are some follow-on improvements we can make in the grammar, but this
  commit only removes the bare minimum number of productions, plus assorted
  backend support code.

  Note that pg_dump and psql continue to have full support, since they may be
  used against older servers. However, pg_dump warns about postfix operators.
  There is also a check in pg_upgrade.

  Documentation-wise, I (tgl) largely removed the "left unary" terminology in
  favor of saying "prefix operator", which is a more standard and IMO less
  confusing term.

  I included a catversion bump, although no initial catalog data changes
  here, to mark the boundary at which oprkind = 'r' stopped being valid in
  pg_operator.

  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com

* Remove factorial operators, leaving only the factorial() function.  (Tom Lane, 2020-09-17)

  The "!" operator is our only built-in postfix operator. Remove it, on the
  way to removal of grammar support for postfix operators.

  There is also a "!!" prefix operator, but since it's been marked deprecated
  for most of its existence, we might as well remove it too.

  Also zap the SQL alias function numeric_fac(), which seems to have equally
  little reason to live.

  Mark Dilger, based on work by myself and Robert Haas; review by John Naylor
  Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com

* Add support for partitioned tables and indexes in REINDEX  (Michael Paquier, 2020-09-08)

  Until now, REINDEX was not able to work with partitioned tables and
  indexes, forcing users to reindex partitions one by one. This extends
  REINDEX INDEX and REINDEX TABLE so that they can accept a partitioned index
  or table as input, respectively, reindexing all the partitions assigned to
  them that have physical storage (foreign tables, partitioned tables and
  indexes are then discarded).

  This shares some logic with schema and database REINDEX, as each partition
  gets processed in its own transaction after building a list of relations to
  work on. This choice has the advantage of minimizing the number of invalid
  indexes to one partition with REINDEX CONCURRENTLY in the event of a
  cancellation or failure in flight, as the only indexes handled at once in a
  single REINDEX CONCURRENTLY loop are the ones from the partition being
  worked on.

  Isolation tests are added to emulate some cases I bumped into while
  developing this feature, particularly with the concurrent drop of a leaf
  partition being reindexed. However, this is rather limited as LOCK would
  cause REINDEX to block in the first transaction building the list of
  partitions.

  Per its multi-transaction nature, this new flavor cannot run in a
  transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.

  Author: Justin Pryzby, Michael Paquier
  Reviewed-by: Anastasia Lubennikova
  Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com

* Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.  (Tom Lane, 2020-08-30)

  Historically, we've considered the state with relpages and reltuples both
  zero as indicating that we do not know the table's tuple density. This is
  problematic because it's impossible to distinguish "never yet vacuumed"
  from "vacuumed and seen to be empty". In particular, a user cannot use
  VACUUM or ANALYZE to override the planner's normal heuristic that an empty
  table should not be believed to be empty because it is probably about to
  get populated. That heuristic is a good safety measure, so I don't care to
  abandon it, but there should be a way to override it if the table is indeed
  intended to stay empty.

  Hence, represent the initial state of ignorance by setting reltuples to -1
  (relpages is still set to zero), and apply the minimum-ten-pages heuristic
  only when reltuples is still -1. If the table is empty, VACUUM or ANALYZE
  (but not CREATE INDEX) will override that to reltuples = relpages = 0, and
  then we'll plan on that basis.

  This requires a bunch of fiddly little changes, but we can get rid of some
  ugly kluges that were formerly needed to maintain the old definition.

  One notable point is that FDWs' GetForeignRelSize methods will see
  baserel->tuples = -1 when no ANALYZE has been done on the foreign table.
  That seems like a net improvement, since those methods were formerly also
  in the dark about what baserel->tuples = 0 really meant. Still, it is an
  API change.

  I bumped catversion because code predating this change would get confused
  by seeing reltuples = -1.

  Discussion: https://postgr.es/m/F02298E0-6EF4-49A1-BCB6-C484794D9ACC@thebuild.com

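  A hedged sketch of how an FDW's GetForeignRelSize callback might cope with
  the new convention (the function name and the fallback value are
  illustrative only, not part of this commit):

      static void
      exampleGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel,
                               Oid foreigntableid)
      {
          /* baserel->tuples = -1 now means "never analyzed", not "known empty" */
          if (baserel->tuples < 0)
              baserel->tuples = 1000;       /* arbitrary default estimate */

          baserel->rows = baserel->tuples;  /* before applying local restrictions */
      }
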
* Make xact.h usable in frontend.  (Heikki Linnakangas, 2020-08-17)

  xact.h included utils/datetime.h, which cannot be used in the frontend (it
  includes fmgr.h, which needs Datum). But xact.h only needs the definition
  of TimestampTz from it, which is available directly in
  datatype/timestamp.h. Change xact.h to include that instead of
  utils/datetime.h, so that it can be used in client programs.

* Revert "Use CP_SMALL_TLIST for hash aggregate"Jeff Davis2020-07-12
| | | | | | | | | | This reverts commit 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0 due to a performance regression. It will be replaced by a new approach in an upcoming commit. Reported-by: Andres Freund Discussion: https://postgr.es/m/20200614181418.mx4bvljmfkkhoqzl@alap3.anarazel.de Backpatch-through: 13
* Use CP_SMALL_TLIST for hash aggregate  (Tomas Vondra, 2020-05-31)

  Commit 1f39bce021 added disk-based hash aggregation, which may spill
  incoming tuples to disk. It however did not request projection to make the
  tuples as narrow as possible, which may mean having to spill much more data
  than necessary (increasing I/O, pushing other stuff from page cache, etc.).

  This adds CP_SMALL_TLIST in places that may use hash aggregation - we do
  that only for AGG_HASHED. It's unnecessary for AGG_SORTED, because that
  either uses explicit Sort (which already does projection) or pre-sorted
  input (which does not need spilling to disk).

  Author: Tomas Vondra
  Reviewed-by: Jeff Davis
  Discussion: https://postgr.es/m/20200519151202.u2p2gpiawoaznsv2%40development

* Initial pgindent and pgperltidy run for v13.  (Tom Lane, 2020-05-14)

  Includes some manual cleanup of places that pgindent messed up, most of
  which weren't per project style anyway.

  Notably, it seems some people didn't absorb the style rules of commit
  c9d297751, because there were a bunch of new occurrences of function calls
  with a newline just after the left paren, all with faulty expectations
  about how the rest of the call would get indented.

* Rename connection parameters to control min/max SSL protocol version in libpq  (Michael Paquier, 2020-04-30)

  The libpq parameters ssl{max|min}protocolversion are renamed to use
  underscores, to become ssl_{max|min}_protocol_version. The related
  environment variables still use the names introduced in commit ff8ca5f that
  added the feature.

  Per complaint from Peter Eisentraut (this was also mentioned by me in the
  original patch review but the issue got discarded).

  Author: Daniel Gustafsson
  Reviewed-by: Peter Eisentraut, Michael Paquier
  Discussion: https://postgr.es/m/b319e449-318d-e691-4997-1327e166fcc4@2ndquadrant.com

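  A small self-contained libpq example using the renamed parameters (the host
  and protocol versions shown are placeholders, not values from this commit):

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          /* Pin the TLS version range for this connection attempt. */
          PGconn *conn = PQconnectdb("host=example.org dbname=postgres "
                                     "sslmode=require "
                                     "ssl_min_protocol_version=TLSv1.2 "
                                     "ssl_max_protocol_version=TLSv1.3");

          if (PQstatus(conn) != CONNECTION_OK)
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

          PQfinish(conn);
          return 0;
      }
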
* Consider Incremental Sort paths at additional places  (Tomas Vondra, 2020-04-07)

  Commit d2d8a229bc introduced Incremental Sort, but it was considered only
  in create_ordered_paths() as an alternative to regular Sort. There are many
  other places that require sorted input and might benefit from considering
  Incremental Sort too.

  This patch modifies a number of those places, but not all. The concern is
  that just adding Incremental Sort to any place that already adds Sort may
  increase the number of paths considered, negatively affecting planning
  time, without any benefit. So we've taken a more conservative approach,
  based on analysis of which places affect a set of queries that seemed
  practical. This means some less common queries may not benefit from
  Incremental Sort yet.

  Author: Tomas Vondra
  Reviewed-by: James Coleman
  Discussion: https://postgr.es/m/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com

* Fix compile failure.  (Tom Lane, 2020-02-24)

  I forgot that some compilers won't handle #if constructs within ereport()
  calls. Duplicating most of the call is annoying but simple. Per buildfarm.

* Account explicitly for long-lived FDs that are allocated outside fd.c.  (Tom Lane, 2020-02-24)

  The comments in fd.c have long claimed that all file allocations should go
  through that module, but in reality that's not always practical. fd.c
  doesn't supply APIs for invoking some FD-producing syscalls like pipe() or
  epoll_create(); and the APIs it does supply for non-virtual FDs are mostly
  insistent on releasing those FDs at transaction end; and in some cases the
  actual open() call is in code that can't be made to use fd.c, such as
  libpq.

  This has led to a situation where, in a modern server, there are likely to
  be seven or so long-lived FDs per backend process that are not known to
  fd.c. Since NUM_RESERVED_FDS is only 10, that meant we had *very* few spare
  FDs if max_files_per_process is >= the system ulimit and fd.c had opened
  all the files it thought it safely could. The contrib/postgres_fdw
  regression test, in particular, could easily be made to fall over by
  running it under a restrictive ulimit.

  To improve matters, invent functions Acquire/Reserve/ReleaseExternalFD that
  allow outside callers to tell fd.c that they have or want to allocate a FD
  that's not directly managed by fd.c. Add calls to track all the fixed FDs
  in a standard backend session, so that we are honestly guaranteeing that
  NUM_RESERVED_FDS FDs remain unused below the EMFILE limit in a backend's
  idle state. The coding rules for these functions say that there's no need
  to call them in code that just allocates one FD over a fairly short
  interval; we can dip into NUM_RESERVED_FDS for such cases. That means that
  there aren't all that many places where we need to worry. But postgres_fdw
  and dblink must use this facility to account for long-lived FDs consumed by
  libpq connections. There may be other places where it's worth doing such
  accounting, too, but this seems like enough to solve the immediate problem.

  Internally to fd.c, "external" FDs are limited to max_safe_fds/3 FDs.
  (Callers can choose to ignore this limit, but of course it's unwise to do
  so except for fixed file allocations.) I also reduced the limit on
  "allocated" files to max_safe_fds/3 FDs (it had been max_safe_fds/2).
  Conceivably a smarter rule could be used here --- but in practice, on
  reasonable systems, max_safe_fds should be large enough that this isn't
  much of an issue, so KISS for now.

  To avoid possible regression in the number of external or allocated files
  that can be opened, increase FD_MINFREE and the lower limit on
  max_files_per_process a little bit; we now insist that the effective
  "ulimit -n" be at least 64.

  This seems like pretty clearly a bug fix, but in view of the lack of field
  complaints, I'll refrain from risking a back-patch.

  Discussion: https://postgr.es/m/E1izCmM-0005pV-Co@gemulon.postgresql.org

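  A hedged sketch of the accounting pattern described above for a long-lived
  libpq connection (the error wording and the keywords/values arrays are
  illustrative, not the exact postgres_fdw code):

      /* Reserve an "external" FD slot before opening the libpq connection. */
      if (!AcquireExternalFD())
          ereport(ERROR,
                  (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
                   errmsg("could not connect to server: too many open file descriptors")));

      conn = PQconnectdbParams(keywords, values, false);
      if (PQstatus(conn) != CONNECTION_OK)
      {
          PQfinish(conn);
          ReleaseExternalFD();    /* give the slot back on failure */
          /* ... report the connection error ... */
      }

      /* ... and call ReleaseExternalFD() again when the connection is closed. */
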
* Add connection parameters to control SSL protocol min/max in libpq  (Michael Paquier, 2020-01-28)

  These two new parameters, named sslminprotocolversion and
  sslmaxprotocolversion, allow control of, respectively, the minimum and the
  maximum version of the SSL protocol used for an SSL connection attempt. The
  default setting is to allow any version for both the minimum and the
  maximum bounds, causing libpq to rely on the bounds set by the backend when
  negotiating the protocol to use for an SSL connection. The bounds are
  checked when the values are set, at the earliest stage possible, as this
  makes the checks independent of any SSL implementation.

  Author: Daniel Gustafsson
  Reviewed-by: Michael Paquier, Cary Huang
  Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se

* In postgres_fdw, don't try to ship MULTIEXPR updates to remote server.  (Tom Lane, 2020-01-26)

  In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)", we'd
  conclude that the statement could be directly executed remotely, because
  the sub-SELECT is in a resjunk tlist item that's not examined for
  shippability. Currently that ends up crashing if the sub-SELECT contains
  any remote Vars. Prevent the crash by deeming MULTIEXPR Params to be
  unshippable.

  This is a bit of a brute-force solution, since if the sub-SELECT *doesn't*
  contain any remote Vars, the current execution technology would work; but
  that's not a terribly common use-case for this syntax, I think. In any
  case, we generally don't try to ship sub-SELECTs, so it won't surprise
  anybody that this doesn't end up as a remote direct update. I'd be inclined
  to see if that general limitation can be fixed before worrying about this
  case further.

  Per report from Lukáš Sobotka.

  Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform remote
  direct updates then, so the case didn't arise anyway.

  Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com

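  Conceptually, the fix amounts to something like the following check in
  postgres_fdw's expression-shippability walker (a sketch only, not the
  literal patch hunk):

      case T_Param:
          {
              Param  *p = (Param *) node;

              /* MULTIEXPR Params are computed locally; never ship them. */
              if (p->paramkind == PARAM_MULTIEXPR)
                  return false;
              /* ... other Param handling ... */
          }
          break;
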
* Only superuser can set sslcert/sslkey in postgres_fdw user mappings  (Andrew Dunstan, 2020-01-13)

  Otherwise there is a security risk.

  Discussion: https://postgr.es/m/20200109103014.GA4192@msg.df7cb.de

* Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings  (Andrew Dunstan, 2020-01-09)

  This allows different users to authenticate with different certificates.

  Author: Craig Ringer

* Update copyrights for 2020  (Bruce Momjian, 2020-01-01)

  Backpatch-through: update all files in master, backpatch legal files
  through 9.4

* Adjust test case added by commit 6136e94dc.  (Tom Lane, 2019-12-20)

  Per project policy, transient roles created by regression test cases should
  be named "regress_something", to reduce the risks of running such cases
  against installed servers. And no such role should ever be left behind
  after running a test.

  Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us

* libpq should expose GSS-related parameters even when not implemented.  (Tom Lane, 2019-12-20)

  We realized years ago that it's better for libpq to accept all connection
  parameters syntactically, even if some are ignored or restricted due to
  lack of the feature in a particular build. However, that lesson from the
  SSL support was for some reason never applied to the GSSAPI support. This
  is causing various buildfarm members to have problems with a test case
  added by commit 6136e94dc, and it's just a bad idea from a user-experience
  standpoint anyway, so fix it.

  While at it, fix some places where parameter-related infrastructure was
  added with the aid of a dartboard, or perhaps with the aid of the
  anti-pattern "add new stuff at the end". It should be safe to rearrange the
  contents of struct pg_conn even in released branches, since that's private
  to libpq (and we'd have to move some fields in some builds to fix this,
  anyway).

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us

* Superuser can permit passwordless connections on postgres_fdw  (Andrew Dunstan, 2019-12-20)

  Currently postgres_fdw doesn't permit a non-superuser to connect to a
  foreign server without specifying a password, or to use an authentication
  mechanism that doesn't use the password. This is to avoid using the
  settings and identity of the user running Postgres.

  However, this doesn't make sense for all authentication methods. We
  therefore allow a superuser to set "password_required 'false'" for user
  mappings for the postgres_fdw. The superuser must ensure that the foreign
  server won't try to rely solely on the server identity (e.g. trust, peer,
  ident) or use an authentication mechanism that relies on the password
  settings (e.g. md5, scram-sha-256).

  This feature is a prelude to better support for sslcert and sslkey settings
  in user mappings.

  Author: Craig Ringer.
  Discussion: https://postgr.es/m/075135da-545c-f958-fed0-5dcb462d6dae@2ndQuadrant.com

* Further adjust EXPLAIN's choices of table alias names.  (Tom Lane, 2019-12-11)

  This patch causes EXPLAIN to always assign a separate table alias to the
  parent RTE of an append relation (inheritance set); before, such RTEs were
  ignored if not actually scanned by the plan. Since the child RTEs now
  always have that same alias to start with (cf. commit 55a1954da), the net
  effect is that the parent RTE usually gets the alias used or implied by the
  query text, and the children all get that alias with "_N" appended. (The
  exception to "usually" is if there are duplicate aliases in different
  subtrees of the original query; then some of those original RTEs will also
  have "_N" appended.)

  This results in more uniform output for partitioned-table plans than we had
  before: the partitioned table itself gets the original alias, and all child
  tables have aliases with "_N", rather than the previous behavior where one
  of the children would get an alias without "_N".

  The reason for giving the parent RTE an alias, even if it isn't scanned by
  the plan, is that we now use the parent's alias to qualify Vars that refer
  to an appendrel output column and appear above the Append or MergeAppend
  that computes the appendrel. But below the append, Vars refer to some one
  of the child relations, and are displayed that way. This seems clearer than
  the old behavior where a Var that could carry values from any child
  relation was displayed as if it referred to only one of them.

  While at it, change ruleutils.c so that the code paths used by EXPLAIN deal
  in Plan trees not PlanState trees. This effectively reverts a decision made
  in commit 1cc29fe7c, which seemed like a good idea at the time to make
  ruleutils.c consistent with explain.c. However, it's problematic because
  we'd really like to allow executor startup pruning to remove all the
  children of an append node when possible, leaving no child PlanState to
  resolve Vars against. (That's not done here, but will be in the next
  patch.) This requires different handling of subplans and initplans than
  before, but is otherwise a pretty straightforward change.

  Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp

* Fix handling of multiple AFTER ROW triggers on a foreign table.  (Etsuro Fujita, 2019-12-10)

  AfterTriggerExecute() retrieves a fresh tuple or pair of tuples from a
  tuplestore and then stores the tuple(s) in the passed-in slot(s) if
  AFTER_TRIGGER_FDW_FETCH, while it uses the most-recently-retrieved tuple(s)
  stored in the slot(s) if AFTER_TRIGGER_FDW_REUSE. This was done correctly
  before 12, but commit ff11e7f4b broke it by mistakenly clearing the
  tuple(s) stored in the slot(s) in that function, leading to an assertion
  failure as reported in bug #16139 from Alexander Lakhin.

  Also, fix some other issues with the aforementioned commit in passing:

  * For tg_newslot, which is a slot added to the TriggerData struct by the
    commit to store new updated tuples, it didn't ensure the slot was NULL if
    there was no such tuple.

  * The commit failed to update the documentation about the trigger
    interface.

  Author: Etsuro Fujita
  Backpatch-through: 12
  Discussion: https://postgr.es/m/16139-94f9ccf0db6119ec%40postgresql.org

* Further sync postgres_fdw's "Relations" output with the rest of EXPLAIN.  (Tom Lane, 2019-12-03)

  EXPLAIN generally only adds schema qualifications to table names when
  VERBOSE is specified. In postgres_fdw's "Relations" output, table names
  were always so qualified, but that was an implementation restriction: in
  the original coding, we didn't have access to the verbose flag at the time
  the string was generated. After the code rearrangement of commit 4526951d5,
  we do have that info available at the right time, so make this output
  follow the normal rule.

  Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us

* Fix EXPLAIN's column alias output for mismatched child tables.  (Tom Lane, 2019-12-02)

  If an inheritance/partitioning parent table is assigned some column alias
  names in the query, EXPLAIN mapped those aliases onto the child tables'
  columns by physical position, resulting in bogus output if a child table's
  columns aren't one-for-one with the parent's.

  To fix, make expand_single_inheritance_child() generate a correctly
  re-mapped column alias list, rather than just copying the parent RTE's
  alias node. (We have to fill the alias field, not just adjust the eref
  field, because ruleutils.c will ignore eref in favor of looking at the real
  column names.)

  This means that child tables will now always have alias fields in plan
  rtables, where before they might not have. That results in a rather
  substantial set of regression test output changes: EXPLAIN will now always
  show child tables with aliases that match the parent table (usually with
  "_N" appended for uniqueness). But that seems like a net positive for
  understandability, since the parent alias corresponds to something that
  actually appeared in the original query, while the child table names
  didn't. (Note that this does not change anything for cases where an
  explicit table alias was written in the query for the parent table; it just
  makes cases without such aliases behave similarly to that.) Hence, while we
  could avoid these subsidiary changes if we made inherit.c more complicated,
  we choose not to.

  Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us

* Make postgres_fdw's "Relations" output agree with the rest of EXPLAIN.Tom Lane2019-12-02
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The relation aliases shown in the "Relations" line for a foreign scan didn't always agree with those used in the rest of EXPLAIN's output. The regression test result changes appearing here provide examples. It's really impossible for postgres_fdw to duplicate EXPLAIN's alias assignment logic during postgresGetForeignRelSize(), because of the de-duplication that EXPLAIN does on a global basis --- and anyway, trying to duplicate that would be unmaintainable. Instead, just put numeric rangetable indexes into the string, and convert those to table names/aliases in postgresExplainForeignScan, which does have access to the results of ruleutils.c's alias assignment logic. Aside from being more reliable, this shifts some work from planning to EXPLAIN, which is a good tradeoff for performance. (I also changed from using StringInfo to using psprintf, which makes the code slightly simpler and reduces its memory consumption.) A kluge required by this solution is that we have to reverse-engineer the rtoffset applied by setrefs.c. If that logic ever fails (presumably because the member tables of a join got offset by different amounts), we'll need some more cooperation with setrefs.c to keep things straight. But for now, there's no need for that. Arguably this is a back-patchable bug fix, but since this is a mostly cosmetic issue and there have been no field complaints, I'll refrain for now. Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us
* Make the order of the header file includes consistent.  (Amit Kapila, 2019-11-25)

  Similar to commits 14aec03502, 7e735035f2 and dddf4cdc33, this commit makes
  the order of header file inclusion consistent in more places.

  Author: Vignesh C
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com

* Add regression test for two-phase transaction in postgres_fdw  (Michael Paquier, 2019-11-13)

  postgres_fdw does not support two-phase transactions, so let's add a small
  negative test case to check that. Note that this is checked using an
  end-of-xact callback to ensure a proper connection cleanup with the foreign
  server, which is called before checking whether a server is able to handle
  2PC with max_prepared_xacts, so this test does not need an alternate output
  file.

  Author: Gilles Darold
  Discussion: https://postgr.es/m/20191108090507.GC1768@paquier.xyz

* postgres_fdw: Fix error message for PREPARE TRANSACTION.  (Etsuro Fujita, 2019-11-08)

  Currently, postgres_fdw does not support preparing a remote transaction for
  two-phase commit even in the case where the remote transaction is
  read-only, but the old error message appeared to imply that that was not
  supported only if the remote transaction modified remote tables. Change the
  message so as to include the case where the remote transaction is
  read-only. Also fix a comment above the message.

  Also add a note about the lack of support for PREPARE TRANSACTION to the
  postgres_fdw documentation.

  Reported-by: Gilles Darold
  Author: Gilles Darold and Etsuro Fujita
  Reviewed-by: Michael Paquier and Kyotaro Horiguchi
  Backpatch-through: 9.4
  Discussion: https://postgr.es/m/08600ed3-3084-be70-65ba-279ab19618a5%40darold.net

* Split all OBJS style lines in makefiles into one-line-per-entry style.  (Andres Freund, 2019-11-05)

  When maintaining or merging patches, one of the most common sources of
  conflicts is the list of objects in makefiles. Especially when the split
  across lines has been changed on both sides, which is somewhat common due
  to attempting to stay below 80 columns, those conflicts are unnecessarily
  laborious to resolve.

  By splitting, and alphabetically sorting, OBJS style lines into one object
  per line, conflicts should be less frequent, and easier to resolve when
  they still occur.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de

* PG_FINALLY  (Peter Eisentraut, 2019-11-01)

  This gives an alternative way of catching exceptions, for the common case
  where the cleanup code is the same in the error and non-error cases. So
  instead of

      PG_TRY();
      {
          ... code that might throw ereport(ERROR) ...
      }
      PG_CATCH();
      {
          cleanup();
          PG_RE_THROW();
      }
      PG_END_TRY();
      cleanup();

  one can write

      PG_TRY();
      {
          ... code that might throw ereport(ERROR) ...
      }
      PG_FINALLY();
      {
          cleanup();
      }
      PG_END_TRY();

  Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com

* Make the order of the header file includes consistent in contrib modules.  (Amit Kapila, 2019-10-24)

  The basic rule we follow here is to always first include 'postgres.h' or
  'postgres_fe.h', whichever is applicable, then system header includes, and
  then Postgres header includes. In addition, the Postgres header includes
  are kept in order based on their ASCII value. We generally follow these
  rules, but the code has deviated in many places. This commit makes it
  consistent just for contrib modules. Later commits will enforce similar
  rules in other parts of the code.

  Author: Vignesh C
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com

* Rationalize use of list_concat + list_copy combinations.  (Tom Lane, 2019-08-12)

  In the wake of commit 1cff1b95a, the result of list_concat no longer shares
  the ListCells of the second input. Therefore, we can replace
  "list_concat(x, list_copy(y))" with just "list_concat(x, y)".

  To improve call sites that were list_copy'ing the first argument, or both
  arguments, invent "list_concat_copy()" which produces a new list sharing no
  ListCells with either input. (This is a bit faster than
  "list_concat(list_copy(x), y)" because it makes the result list the right
  size to start with.)

  In call sites that were not list_copy'ing the second argument, the new
  semantics mean that we are usually leaking the second List's storage, since
  typically there is no remaining pointer to it. We considered inventing
  another list_copy variant that would list_free the second input, but
  concluded that for most call sites it isn't worth worrying about, given the
  relative compactness of the new List representation. (Note that in cases
  where such leakage would happen, the old code already leaked the second
  List's header; so we're only discussing the size of the leak not whether
  there is one. I did adjust two or three places that had been troubling to
  free that header so that they manually free the whole second List.)

  Patch by me; thanks to David Rowley for review.

  Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us

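  An illustrative fragment of the rewrites described above (the list elements
  elem1/elem2/elem3 are placeholder pointers, not from the patch):

      List   *a = list_make2(elem1, elem2);
      List   *b = list_make1(elem3);
      List   *c;

      /* old: a = list_concat(a, list_copy(b));  new: */
      a = list_concat(a, b);            /* b's cells are copied; b itself is untouched */

      /* old: c = list_concat(list_copy(a), b);  new: */
      c = list_concat_copy(a, b);       /* fresh list, shares no cells with a or b */
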
* Fix inconsistencies and typos in the tree, take 9  (Michael Paquier, 2019-08-05)

  This addresses more issues with code comments, variable names and
  unreferenced variables.

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/7ab243e0-116d-3e44-d120-76b3df7abefd@gmail.com

* Use appendBinaryStringInfo in more places where the length is known  (David Rowley, 2019-07-23)

  When we already know the length that we're going to append, then it makes
  sense to use appendBinaryStringInfo instead of appendStringInfoString so
  that the append can be performed with a simple memcpy() using a known
  length rather than having to first perform a strlen() call to obtain the
  length.

  Discussion: https://postgr.es/m/CAKJS1f8+FRAM1s5+mAa3isajeEoAaicJ=4e0WzrH3tAusbbiMQ@mail.gmail.com

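  For instance, a sketch of the substitution (the buffer and clause variables
  are hypothetical):

      /* The literal's length is known at compile time: a plain memcpy() suffices. */
      appendBinaryStringInfo(&buf, " WHERE ", 7);

      /* Length not known in advance: appendStringInfoString must strlen() first. */
      appendStringInfoString(&buf, deparsed_clause);
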
* Represent Lists as expansible arrays, not chains of cons-cells.  (Tom Lane, 2019-07-15)

  Originally, Postgres Lists were a more or less exact reimplementation of
  Lisp lists, which consist of chains of separately-allocated cons cells,
  each having a value and a next-cell link. We'd hacked that once before
  (commit d0b4399d8) to add a separate List header, but the data was still in
  cons cells. That makes some operations -- notably list_nth() -- O(N), and
  it's bulky because of the next-cell pointers and per-cell palloc overhead,
  and it's very cache-unfriendly if the cons cells end up scattered around
  rather than being adjacent.

  In this rewrite, we still have List headers, but the data is in a resizable
  array of values, with no next-cell links. Now we need at most two palloc's
  per List, and often only one, since we can allocate some values in the same
  palloc call as the List header. (Of course, extending an existing List may
  require repalloc's to enlarge the array. But this involves just O(log N)
  allocations not O(N).)

  Of course this is not without downsides. The key difficulty is that
  addition or deletion of a list entry may now cause other entries to move,
  which it did not before.

  For example, that breaks foreach() and sister macros, which historically
  used a pointer to the current cons-cell as loop state. We can repair those
  macros transparently by making their actual loop state be an integer list
  index; the exposed "ListCell *" pointer is no longer state carried across
  loop iterations, but is just a derived value. (In practice, modern
  compilers can optimize things back to having just one loop state value, at
  least for simple cases with inline loop bodies.) In principle, this is a
  semantics change for cases where the loop body inserts or deletes list
  entries ahead of the current loop index; but I found no such cases in the
  Postgres code.

  The change is not at all transparent for code that doesn't use foreach()
  but chases lists "by hand" using lnext(). The largest share of such code in
  the backend is in loops that were maintaining "prev" and "next" variables
  in addition to the current-cell pointer, in order to delete list cells
  efficiently using list_delete_cell(). However, we no longer need a
  previous-cell pointer to delete a list cell efficiently. Keeping a
  next-cell pointer doesn't work, as explained above, but we can improve
  matters by changing such code to use a regular foreach() loop and then
  using the new macro foreach_delete_current() to delete the current cell.
  (This macro knows how to update the associated foreach loop's state so that
  no cells will be missed in the traversal.)

  There remains a nontrivial risk of code assuming that a ListCell * pointer
  will remain good over an operation that could now move the list contents.
  To help catch such errors, list.c can be compiled with a new define symbol
  DEBUG_LIST_MEMORY_USAGE that forcibly moves list contents whenever that
  could possibly happen. This makes list operations significantly more
  expensive so it's not normally turned on (though it is on by default if
  USE_VALGRIND is on).

  There are two notable API differences from the previous code:

  * lnext() now requires the List's header pointer in addition to the current
    cell's address.

  * list_delete_cell() no longer requires a previous-cell argument.

  These changes are somewhat unfortunate, but on the other hand code using
  either function needs inspection to see if it is assuming anything it
  shouldn't, so it's not all bad.

  Programmers should be aware of these significant performance changes:

  * list_nth() and related functions are now O(1); so there's no major
    access-speed difference between a list and an array.

  * Inserting or deleting a list element now takes time proportional to the
    distance to the end of the list, due to moving the array elements.
    (However, it typically *doesn't* require palloc or pfree, so except in
    long lists it's probably still faster than before.) Notably, lcons() used
    to be about the same cost as lappend(), but that's no longer true if the
    list is long. Code that uses lcons() and list_delete_first() to maintain
    a stack might usefully be rewritten to push and pop at the end of the
    list rather than the beginning.

  * There are now list_insert_nth...() and list_delete_nth...() functions
    that add or remove a list cell identified by index. These have the
    data-movement penalty explained above, but there's no search penalty.

  * list_concat() and variants now copy the second list's data into storage
    belonging to the first list, so there is no longer any sharing of cells
    between the input lists. The second argument is now declared
    "const List *" to reflect that it isn't changed.

  This patch just does the minimum needed to get the new implementation in
  place and fix bugs exposed by the regression tests. As suggested by the
  foregoing, there's a fair amount of followup work remaining to do.

  Also, the ENABLE_LIST_COMPAT macros are finally removed in this commit.
  Code using those should have been gone a dozen years ago.

  Patch by me; thanks to David Rowley, Jesper Pedersen, and others for
  review.

  Discussion: https://postgr.es/m/11587.1550975080@sss.pgh.pa.us

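  As a short sketch of the new deletion idiom mentioned above (the list and
  the predicate function are hypothetical):

      ListCell   *lc;

      foreach(lc, mylist)
      {
          Node   *item = (Node *) lfirst(lc);

          if (no_longer_wanted(item))
              mylist = foreach_delete_current(mylist, lc);  /* keeps the loop index valid */
      }
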
* Use appendStringInfoString and appendPQExpBufferStr where possible  (David Rowley, 2019-07-04)

  This changes various places where appendPQExpBuffer was used when it was
  possible to use appendPQExpBufferStr instead, and likewise for
  appendStringInfo and appendStringInfoString. This is really just a
  stylistic improvement, but there are also small performance gains to be had
  from doing this.

  Discussion: http://postgr.es/m/CAKJS1f9P=M-3ULmPvr8iCno8yvfDViHibJjpriHU8+SXUgeZ=w@mail.gmail.com

* postgres_fdw: Remove redundancy in postgresAcquireSampleRowsFunc().  (Etsuro Fujita, 2019-07-03)

  Previously, in the loop in postgresAcquireSampleRowsFunc() to iterate
  fetching rows from a given remote table, we redundantly 1) determined the
  fetch size by parsing the table's server/table-level options and then 2)
  constructed the fetch command; remove that redundancy.

  Author: Etsuro Fujita
  Reviewed-by: Julien Rouhaud
  Discussion: https://postgr.es/m/CAPmGK17_urk9qkLV65_iYMFw64z5qhdfhY=tMVV6Jg4KNYx8+w@mail.gmail.com

* pgindent run prior to branching v12.  (Tom Lane, 2019-07-01)

  pgperltidy and reformat-dat-files too, though the latter didn't find
  anything to change.

* Fix many typos and inconsistencies  (Michael Paquier, 2019-07-01)

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/af27d1b3-a128-9d62-46e0-88f424397f44@gmail.com

* postgres_fdw: Fix costing of pre-sorted foreign paths with local stats.  (Etsuro Fujita, 2019-06-14)

  Commit aa09cd242 modified estimate_path_cost_size() so that it reuses
  cached costs of a basic foreign path for a given foreign-base/join relation
  when costing pre-sorted foreign paths for that relation, but it incorrectly
  re-computed retrieved_rows, an estimated number of rows fetched from the
  remote side, which is needed for costing both the basic and pre-sorted
  foreign paths. To fix, handle retrieved_rows the same way as the cached
  costs: store in that relation's fpinfo the retrieved_rows estimate computed
  for costing the basic foreign path, and reuse it when costing the
  pre-sorted foreign paths. Also, reuse the rows/width estimates stored in
  that relation's fpinfo when costing the pre-sorted foreign paths, to make
  the code consistent.

  In commit ffab494a4, to extend the costing mentioned above to the
  foreign-grouping case, I made a change to add_foreign_grouping_paths() to
  store in a given foreign-grouped relation's RelOptInfo the rows estimate
  for that relation for reuse, but this patch makes that change unnecessary
  since we already store the row estimate in that relation's fpinfo, which
  this patch reuses when costing a foreign path for that relation with the
  sortClause ordering; remove that change.

  In passing, fix thinko in commit 7012b132d: in estimate_path_cost_size(),
  the width estimate for a given foreign-grouped relation to be stored in
  that relation's fpinfo was reset incorrectly when costing a basic foreign
  path for that relation with local stats.

  Apply the patch to HEAD only to avoid destabilizing existing plan choices.

  Author: Etsuro Fujita
  Discussion: https://postgr.es/m/CAPmGK17jaJLPDEkgnP2VmkOg=5wT8YQ1CqssU8JRpZ_NSE+dqQ@mail.gmail.com

* postgres_fdw: Account for triggers in non-direct remote UPDATE planning.  (Etsuro Fujita, 2019-06-13)

  Previously, in postgresPlanForeignModify, we planned an UPDATE operation on
  a foreign table so that we transmit only columns that were explicitly
  targets of the UPDATE, so as to avoid unnecessary data transmission. But if
  there were BEFORE ROW UPDATE triggers on the foreign table, those triggers
  might change values for non-target columns, in which case we would miss
  sending changed values for those columns. Prevent optimizing away
  transmitting all columns if there are BEFORE ROW UPDATE triggers on the
  foreign table.

  This is an oversight in commit 7cbe57c34, which added triggers on foreign
  tables, so apply the patch all the way back to 9.4 where that came in.

  Author: Shohei Mochizuki
  Reviewed-by: Amit Langote
  Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp

* postgres_fdw: Reorder C includes.  (Etsuro Fujita, 2019-06-11)

  Reorder header files in postgres_fdw.c and connection.c in alphabetical
  order.

  Author: Etsuro Fujita
  Reviewed-by: Alvaro Herrera
  Discussion: https://postgr.es/m/CAPmGK17ZmNb-EELqu8LmMh2t2uFdbfWNVDEfDO5-bpejHPONMQ@mail.gmail.com

* Fix typos.  (Amit Kapila, 2019-05-26)

  Reported-by: Alexander Lakhin
  Author: Alexander Lakhin
  Reviewed-by: Amit Kapila and Tom Lane
  Discussion: https://postgr.es/m/7208de98-add8-8537-91c0-f8b089e2928c@gmail.com

* Phase 2 pgindent run for v12.  (Tom Lane, 2019-05-22)

  Switch to 2.1 version of pg_bsd_indent. This formats multiline function
  declarations "correctly", that is with additional lines of parameter
  declarations indented to match where the first line's left parenthesis is.

  Discussion: https://postgr.es/m/CAEepm=0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug@mail.gmail.com

* Initial pgindent run for v12.  (Tom Lane, 2019-05-22)

  This is still using the 2.0 version of pg_bsd_indent. I thought it would be
  good to commit this separately, so as to document the differences between
  2.0 and 2.1 behavior.

  Discussion: https://postgr.es/m/16296.1558103386@sss.pgh.pa.us