path: root/contrib/postgres_fdw/connection.c
* Update copyright for 2018 (Bruce Momjian, 2018-01-02)

  Backpatch-through: certain files through 9.3
* postgres_fdw: Judge password use by run-as user, not session user. (Robert Haas, 2017-12-05)

  This is a backward incompatibility which should be noted in the release
  notes for PostgreSQL 11.

  For security reasons, we require that a postgres_fdw foreign table use
  password authentication when accessing a remote server, so that an
  unprivileged user cannot usurp the server's credentials. Superusers are
  exempt from this requirement, because we assume they are entitled to usurp
  the server's credentials or, at least, can find some other way to do it.

  But what should happen when the foreign table is accessed by a view owned
  by a user different from the session user? Is it the view owner that must
  be a superuser in order to avoid the requirement of using a password, or
  the session user? Historically it was the latter, but this patch makes it
  the former instead. This allows superusers to delegate to other users the
  right to select from a foreign table that doesn't use password
  authentication by creating a view over the foreign table and handing out
  rights to the view. It is also more consistent with the idea that access
  to a view should use the view owner's privileges rather than the session
  user's privileges.

  The upshot of this change is that a superuser selecting from a view
  created by a non-superuser may now get an error complaining that no
  password was used, while a non-superuser selecting from a view created by
  a superuser will no longer receive such an error.

  No documentation changes are present in this patch because the wording of
  the documentation already suggests that it works this way. We should
  perhaps adjust the documentation in the back-branches, but that's a task
  for another patch.

  Originally proposed by Jeff Janes, but with different semantics; adjusted
  to work like this by me per discussion.

  Discussion: http://postgr.es/m/CA+TgmoaY4HsVZJv5SqEjCKLDwtCTSwXzKpRftgj50wmMMBwciA@mail.gmail.com
* Re-establish postgres_fdw connections after server or user mapping changes. (Tom Lane, 2017-07-21)

  Previously, postgres_fdw would keep on using an existing connection even
  if the user did ALTER SERVER or ALTER USER MAPPING commands that should
  affect connection parameters. Teach it to watch for catcache invals on
  these catalogs and re-establish connections when the relevant catalog
  entries change. Per bug #14738 from Michal Lis.

  In passing, clean up some rather crufty decisions in commit ae9bfc5d6
  about where fields of ConnCacheEntry should be reset. We now reset all the
  fields whenever we open a new connection.

  Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself. Back-patch to
  9.3 where postgres_fdw appeared.

  Discussion: https://postgr.es/m/20170710113917.7727.10247@wrigleys.postgresql.org
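  A minimal sketch of the invalidation-callback approach described above. The
  entry fields (invalidated, server_hashvalue, mapping_hashvalue) and the
  ConnectionHash table are assumed names for illustration, not the committed
  code:

    #include "postgres.h"
    #include "utils/hsearch.h"
    #include "utils/inval.h"
    #include "utils/syscache.h"

    /*
     * Mark cached connections stale when pg_foreign_server or
     * pg_user_mapping entries change; the next connection lookup can then
     * discard and re-establish them.  Assumes a ConnCacheEntry with fields
     * conn, invalidated, server_hashvalue, and mapping_hashvalue.
     */
    static void
    pgfdw_inval_callback_sketch(Datum arg, int cacheid, uint32 hashvalue)
    {
        HASH_SEQ_STATUS scan;
        ConnCacheEntry *entry;

        hash_seq_init(&scan, ConnectionHash);
        while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)) != NULL)
        {
            if (entry->conn == NULL)
                continue;

            /* hashvalue == 0 means a cache-wide reset: invalidate everything */
            if (hashvalue == 0 ||
                (cacheid == FOREIGNSERVEROID &&
                 entry->server_hashvalue == hashvalue) ||
                (cacheid == USERMAPPINGOID &&
                 entry->mapping_hashvalue == hashvalue))
                entry->invalidated = true;
        }
    }

    /* registered once, when the connection cache is first created */
    static void
    register_inval_callbacks(void)
    {
        CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                      pgfdw_inval_callback_sketch, (Datum) 0);
        CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                      pgfdw_inval_callback_sketch, (Datum) 0);
    }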
* Phase 3 of pgindent updates. (Tom Lane, 2017-06-21)

  Don't move parenthesized lines to the left, even if that means they flow
  past the right margin.

  By default, BSD indent lines up statement continuation lines that are
  within parentheses so that they start just to the right of the preceding
  left parenthesis. However, traditionally, if that resulted in the
  continuation line extending to the right of the desired right margin, then
  indent would push it left just far enough to not overrun the margin, if it
  could do so without making the continuation line start to the left of the
  current statement indent. That makes for a weird mix of indentations
  unless one has been completely rigid about never violating the 80-column
  limit.

  This behavior has been pretty universally panned by Postgres developers.
  Hence, disable it with indent's new -lpl switch, so that parenthesized
  lines are always lined up with the preceding left paren.

  This patch is much less interesting than the first round of indent
  changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
* Fix low-probability leaks of PGresult objects in the backend. (Tom Lane, 2017-06-15)

  We had three occurrences of essentially the same coding pattern wherein we
  tried to retrieve a query result from a libpq connection without blocking.
  In the case where PQconsumeInput failed (typically indicating a lost
  connection), all three loops simply gave up and returned, forgetting to
  clear any previously-collected PGresult object. Since those are malloc'd
  not palloc'd, the oversight results in a process-lifespan memory leak.

  One instance, in libpqwalreceiver, is of little significance because the
  walreceiver process would just quit anyway if its connection fails. But we
  might as well fix it. The other two instances, in postgres_fdw, are
  somewhat more worrisome because at least in principle the scenario could
  be repeated, allowing the amount of memory leaked to build up to something
  worth worrying about. Moreover, in these cases the loops contain
  CHECK_FOR_INTERRUPTS calls, as well as other calls that could potentially
  elog(ERROR), providing another way to exit without having cleared the
  PGresult. Here we need to add PG_TRY logic similar to what exists in quite
  a few other places in postgres_fdw.

  Coverity noted the libpqwalreceiver bug; I found the other two cases by
  checking all calls of PQconsumeInput.

  Back-patch to all supported versions as appropriate (9.2 lacks
  postgres_fdw, so this is really quite unexciting for that branch).

  Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us
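  A minimal sketch of the PG_TRY cleanup pattern described above, loosely
  modeled on postgres_fdw's result-polling loop; the function name is
  assumed, and this is an illustration rather than the committed code:

    #include "postgres.h"
    #include "libpq-fe.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/latch.h"

    static PGresult *
    get_result_without_leaking(PGconn *conn)
    {
        PGresult   *volatile last_res = NULL;

        PG_TRY();
        {
            for (;;)
            {
                PGresult   *res;

                while (PQisBusy(conn))
                {
                    /* sleep until the socket is readable or the latch is set */
                    int         wc = WaitLatchOrSocket(MyLatch,
                                                       WL_LATCH_SET | WL_SOCKET_READABLE,
                                                       PQsocket(conn), -1L,
                                                       PG_WAIT_EXTENSION);

                    ResetLatch(MyLatch);
                    CHECK_FOR_INTERRUPTS();     /* can elog(ERROR) */

                    if ((wc & WL_SOCKET_READABLE) && !PQconsumeInput(conn))
                        ereport(ERROR,
                                (errmsg("could not read from remote server: %s",
                                        PQerrorMessage(conn))));
                }

                res = PQgetResult(conn);
                if (res == NULL)
                    break;              /* query is complete */

                PQclear(last_res);      /* discard any earlier result */
                last_res = res;
            }
        }
        PG_CATCH();
        {
            /* the fix: don't leak a malloc'd PGresult on error exit */
            PQclear(last_res);
            PG_RE_THROW();
        }
        PG_END_TRY();

        return last_res;
    }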
* postgres_fdw: Allow cancellation of transaction control commands. (Robert Haas, 2017-06-07)

  Commit f039eaac7131ef2a4cf63a10cf98486f8bcd09d2, later back-patched with
  commit 1b812afb0eafe125b820cc3b95e7ca03821aa675, allowed many of the
  queries issued by postgres_fdw to fetch remote data to respond to cancel
  interrupts in a timely fashion. However, it didn't do anything about the
  transaction control commands, which remained noninterruptible.

  Improve the situation by changing do_sql_command() to retrieve query
  results using pgfdw_get_result(), which uses the asynchronous interface to
  libpq so that it can check for interrupts every time libpq returns
  control. Since this might result in a situation where we can no longer be
  sure that the remote transaction state matches the local transaction
  state, add a facility to force all levels of the local transaction to
  abort if we've lost track of the remote state; without this, an
  apparently-successful commit of the local transaction might fail to commit
  changes made on the remote side. Also, add a 60-second timeout for queries
  issued during transaction abort; if that expires, give up and mark the
  state of the connection as unknown. Drop all such connections when we exit
  the local transaction. Together, these changes mean that if we're aborting
  the local toplevel transaction anyway, we can just drop the remote
  connection in lieu of waiting (possibly for a very long time) for it to
  complete an abort.

  This still leaves quite a bit of room for improvement. PQcancel() has no
  asynchronous interface, so if we get stuck sending the cancel request
  we'll still hang. Also, PQsetnonblocking() is not used, which means we
  could block uninterruptibly when sending a query. There might be some
  other optimizations possible as well. Nonetheless, this allows us to
  escape a wait for an unresponsive remote server quickly in many more cases
  than previously.

  Report by Suraj Kharage. Patch by me and Rafia Sabih. Review and testing
  by Amit Kapila and Tushar Ahuja.

  Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
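  A sketch of the interruptible command execution described above, assuming
  a result-polling helper like the one sketched under the previous entry and
  postgres_fdw's existing pgfdw_report_error() helper; not the committed
  code:

    static void
    do_sql_command_sketch(PGconn *conn, const char *sql)
    {
        PGresult   *res;

        /* send asynchronously so control returns to us immediately */
        if (!PQsendQuery(conn, sql))
            pgfdw_report_error(ERROR, NULL, conn, false, sql);

        /* poll for the result, checking for interrupts while we wait */
        res = get_result_without_leaking(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            pgfdw_report_error(ERROR, res, conn, true, sql);
        PQclear(res);
    }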
* chomp PQerrorMessage() in backend uses (Peter Eisentraut, 2017-02-27)

  PQerrorMessage() returns an error message with a trailing newline, but in
  backend use (dblink, postgres_fdw, libpqwalreceiver), we want to have the
  error message without that for emitting via ereport(). To simplify that,
  add a function pchomp() that returns a pstrdup'ed string with the trailing
  newline characters removed.
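  A sketch of what such a pchomp() amounts to; an illustration rather than
  the committed implementation:

    #include "postgres.h"

    /*
     * Copy the input minus any trailing newlines, palloc'd in the current
     * memory context, so the result is safe to pass to ereport().
     */
    static char *
    pchomp_sketch(const char *in)
    {
        size_t      n = strlen(in);

        while (n > 0 && in[n - 1] == '\n')
            n--;
        return pnstrdup(in, n);
    }

    /*
     * Typical backend usage with libpq would then look like:
     *     ereport(ERROR,
     *             (errmsg("%s", pchomp_sketch(PQerrorMessage(conn)))));
     */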
* Update copyright via script for 2017Bruce Momjian2017-01-03
|
* Improve dblink error message when remote does not provide it (Joe Conway, 2016-12-21)

  When dblink or postgres_fdw detects an error on the remote side of the
  connection, it will try to construct a local error message as best it can
  using libpq's PQresultErrorField(). When no primary message is available,
  it was bailing out with an unhelpful "unknown error". Make that message
  better and more style guide compliant. Per discussion on hackers.

  Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3.

  Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
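  A rough sketch of the fallback being described, with assumed variable
  names (res, conn) inside an error-reporting routine; not the committed
  code:

    char   *message_primary = res ?
        PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY) : NULL;

    /*
     * If the remote side supplied no primary message (for example, the
     * connection dropped or libpq handed back a null PGresult), fall back
     * on libpq's connection-level error string rather than reporting an
     * unhelpful "unknown error".
     */
    if (message_primary == NULL || message_primary[0] == '\0')
        message_primary = PQerrorMessage(conn);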
* Rename WAIT_* constants to PG_WAIT_*. (Robert Haas, 2016-10-05)

  Windows apparently has a constant named WAIT_TIMEOUT, and some of these
  other names are pretty generic, too. Insert "PG_" at the front of each
  name in order to disambiguate.

  Michael Paquier
* Extend framework from commit 53be0b1ad to report latch waits. (Robert Haas, 2016-10-04)

  WaitLatch, WaitLatchOrSocket, and WaitEventSetWait now take an additional
  wait_event_info parameter; legal values are defined in pgstat.h. This
  makes it possible to uniquely identify every point in the core code where
  we are waiting for a latch; extensions can pass WAIT_EXTENSION.

  Because latches were the major wait primitive not previously covered by
  this patch, it is now possible to see information in pg_stat_activity on a
  large number of important wait events not previously addressed, such as
  ClientRead, ClientWrite, and SyncRep.

  Unfortunately, many of the wait events added by this patch will fail to
  appear in pg_stat_activity because they're only used in background
  processes which don't currently appear in pg_stat_activity. We should fix
  this either by creating a separate view for such information, or else by
  deciding to include them in pg_stat_activity after all.

  Michael Paquier and Robert Haas, reviewed by Alexander Korotkov and Thomas
  Munro.
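  A sketch of the extended call signature, using the renamed constant from
  the entry above (PG_WAIT_EXTENSION); the timeout and wake flags are
  assumptions for illustration, not code from this commit:

    #include "postgres.h"
    #include "miscadmin.h"
    #include "pgstat.h"
    #include "storage/latch.h"

    /*
     * Wait up to one second for the process latch, reporting the wait in
     * pg_stat_activity as an extension-defined wait event.
     */
    static void
    wait_a_bit(void)
    {
        int         rc = WaitLatch(MyLatch,
                                   WL_LATCH_SET | WL_TIMEOUT,
                                   1000L,               /* timeout in ms */
                                   PG_WAIT_EXTENSION);  /* new wait_event_info arg */

        ResetLatch(MyLatch);
        CHECK_FOR_INTERRUPTS();

        if (rc & WL_TIMEOUT)
            elog(DEBUG1, "latch wait timed out");
    }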
* pgindent run for 9.6 (Robert Haas, 2016-06-09)
* Fix multiple problems in postgres_fdw query cancellation logic. (Robert Haas, 2016-05-16)

  First, even if we cancel a query, we still have to roll back the
  containing transaction; otherwise, the session will be left in a failed
  transaction state. Second, we need to support canceling queries when
  aborting a subtransaction as well as when aborting a toplevel transaction.

  Etsuro Fujita, reviewed by Michael Paquier
* Allow queries submitted by postgres_fdw to be canceled. (Robert Haas, 2016-04-21)

  This fixes a problem which is not new, but with the advent of direct
  foreign table modification in 0bf3ae88af330496517722e391e7c975e6bad219,
  it's somewhat more likely to be annoying than previously. So, arrange for
  a local query cancelation to propagate to the remote side.

  Michael Paquier, reviewed by Etsuro Fujita. Original report by Thom Brown.
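  A sketch of propagating a local cancel request to the remote server,
  roughly along the lines this commit describes; the helper name is assumed,
  and this is not the committed code:

    #include "postgres.h"
    #include "libpq-fe.h"

    static void
    cancel_remote_query_sketch(PGconn *conn)
    {
        PGcancel   *cancel;
        char        errbuf[256];

        if ((cancel = PQgetCancel(conn)) != NULL)
        {
            /* PQcancel blocks while sending the request */
            if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
                ereport(WARNING,
                        (errmsg("could not send cancel request: %s", errbuf)));
            PQfreeCancel(cancel);
        }

        /* the caller still needs to drain any pending results on conn */
    }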
* Use %u not %d to print OIDs. (Tom Lane, 2016-02-08)

  Oversight in commit 96198d94c.

  Etsuro Fujita
* Add missing quotation mark. (Robert Haas, 2016-01-28)

  This fix accidentally got left out of the previous commit.
* Avoid multiple foreign server connections when all use same user mapping. (Robert Haas, 2016-01-28)

  Previously, postgres_fdw's connection cache was keyed by user OID and
  server OID, but this can lead to multiple connections when it's not really
  necessary. In particular, if all relevant users are mapped to the public
  user mapping, then their connection options are certainly the same, so one
  connection can be used for all of them.

  While we're cleaning things up here, drop the "server" argument to
  GetConnection(), which isn't really needed. This saves a few cycles
  because callers no longer have to look this up; the function itself does,
  but only when establishing a new connection, not when reusing an existing
  one.

  Ashutosh Bapat, with a few small changes by me.
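  A sketch of keying the connection cache by user mapping; the struct layout
  and helper name are assumptions for illustration, not the committed code:

    #include "postgres.h"
    #include "foreign/foreign.h"
    #include "libpq-fe.h"
    #include "utils/hsearch.h"

    /* one cache entry per user mapping rather than per (user, server) pair */
    typedef Oid ConnCacheKey;           /* user mapping OID */

    typedef struct ConnCacheEntry
    {
        ConnCacheKey key;               /* hash key (must be first) */
        PGconn     *conn;               /* connection, or NULL */
        /* ... transaction-depth and invalidation bookkeeping ... */
    } ConnCacheEntry;

    static HTAB *ConnectionHash = NULL;

    /* look up (or create) the entry for the resolved user mapping */
    static ConnCacheEntry *
    lookup_connection(UserMapping *user)
    {
        ConnCacheKey key = user->umid;
        bool        found;

        return (ConnCacheEntry *) hash_search(ConnectionHash, &key,
                                              HASH_ENTER, &found);
    }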
* Update copyright for 2016 (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1
* Create an infrastructure for parallel computation in PostgreSQL. (Robert Haas, 2015-04-30)

  This does four basic things. First, it provides convenience routines to
  coordinate the startup and shutdown of parallel workers. Second, it
  synchronizes various pieces of state (e.g. GUCs, combo CID mappings,
  transaction snapshot) from the parallel group leader to the worker
  processes. Third, it prohibits various operations that would result in
  unsafe changes to that state while parallelism is active. Finally, it
  propagates events that would result in an ErrorResponse, NoticeResponse,
  or NotifyResponse message being sent to the client from the parallel
  workers back to the master, from which they can then be sent on to the
  client.

  Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke.
  Suggestions and review from Andres Freund, Heikki Linnakangas, Noah Misch,
  Simon Riggs, Euler Taveira, and Jim Nasby.
* Update copyright for 2015 (Bruce Momjian, 2015-01-06)

  Backpatch certain files through 9.0
* Improve hash_create's API for selecting simple-binary-key hash functions. (Tom Lane, 2014-12-18)

  Previously, if you wanted anything besides C-string hash keys, you had to
  specify a custom hashing function to hash_create(). Nearly all such
  callers were specifying tag_hash or oid_hash, which is tedious and rather
  error-prone, since a caller could easily miss the opportunity to optimize
  by using hash_uint32 when appropriate.

  Replace this with a design whereby callers using simple binary-data keys
  just specify HASH_BLOBS and don't need to mess with specific support
  functions. hash_create() itself will take care of optimizing when the key
  size is four bytes.

  This nets out saving a few hundred bytes of code space, and offers a
  measurable performance improvement in tidbitmap.c (which was not
  exploiting the opportunity to use hash_uint32 for its 4-byte keys). There
  might be some wins elsewhere too, I didn't analyze closely.

  In future we could look into offering a similar optimized hashing function
  for 8-byte keys. Under this design that could be done in a centralized and
  machine-independent fashion, whereas getting it right for keys of
  platform-dependent sizes would've been notationally painful before.

  For the moment, the old way still works fine, so as not to break source
  code compatibility for loadable modules. Eventually we might want to
  remove tag_hash and friends from the exported API altogether, since
  there's no real need for them to be explicitly referenced from outside
  dynahash.c.

  Teodor Sigaev and Tom Lane
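  A sketch of the simplified API, reusing the connection-cache names sketched
  earlier as the example; with HASH_BLOBS, hash_create() picks a suitable
  hash function for the 4-byte Oid key itself:

    /* in the code path that first creates the connection cache */
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(ConnCacheKey);         /* a 4-byte Oid */
    ctl.entrysize = sizeof(ConnCacheEntry);
    ConnectionHash = hash_create("postgres_fdw connections", 8, &ctl,
                                 HASH_ELEM | HASH_BLOBS);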
* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)

  This includes removing tabs after periods in C comments, which was applied
  to back branches, so this change should not affect backpatching.
* Improve connection-failure error handling in contrib/postgres_fdw. (Tom Lane, 2014-02-03)

  postgres_fdw tended to say "unknown error" if it tried to execute a
  command on an already-dead connection, because some paths in libpq just
  return a null PGresult for such cases. Out-of-memory might result in that,
  too. To fix, pass the PGconn to pgfdw_report_error, and look at its
  PQerrorMessage() string if we can't get anything out of the PGresult.

  Also, fix the transaction-exit logic to reliably drop a dead connection.
  It was attempting to do that already, but it assumed that only connection
  cache entries with xact_depth > 0 needed to be examined. The folly in that
  is that if we fail while issuing START TRANSACTION, we'll not have bumped
  xact_depth. (At least for the case I was testing, this fix masks the other
  problem; but it still seems like a good idea to have the PGconn fallback
  logic.)

  Per investigation of bug #9087 from Craig Lucas. Backpatch to 9.3 where
  this code was introduced.
* Update copyright for 2014 (Bruce Momjian, 2014-01-07)

  Update all files in head, and files COPYRIGHT and legal.sgml in all back
  branches.
* pgindent run for release 9.3 (Bruce Momjian, 2013-05-29)

  This is the first run of the Perl-based pgindent script. Also update
  pgindent instructions.
* Document cross-version compatibility issues for contrib/postgres_fdw. (Tom Lane, 2013-03-22)

  One of the use-cases for postgres_fdw is extracting data from older PG
  servers, so cross-version compatibility is important. Document what we can
  do here, and further annotate some of the coding choices that create
  compatibility constraints. In passing, remove one unnecessary
  incompatibility with old servers, namely assuming that we didn't need to
  quote the timezone name 'UTC'.
* Fix postgres_fdw's issues with inconsistent interpretation of data values. (Tom Lane, 2013-03-11)

  For datatypes whose output formatting depends on one or more GUC settings,
  we have to worry about whether the other server will interpret the value
  the same way it was meant. pg_dump has been aware of this hazard for a
  long time, but postgres_fdw needs to deal with it too.

  To fix data retrieval from the remote server, set the necessary remote GUC
  settings at connection startup. (We were already assuming that settings
  made then would persist throughout the remote session.)

  To fix data transmission to the remote server, temporarily force the
  relevant GUCs to the right values when we're about to convert any data
  values to text for transmission.

  This is all pretty grotty, and not very cheap either. It's tempting to
  think of defining one uber-GUC that would override any settings that might
  render printed data values unportable. But of course, older remote servers
  wouldn't know any such thing and would still need this logic.

  While at it, revert commit f7951eef89be78c50ea2241f593d76dfefe176c9, since
  this provides a real fix. (The timestamptz given in the error message
  returned from the "remote" server will now reliably be shown in UTC.)
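  A sketch of the session-startup settings this implies, loosely following
  postgres_fdw's connection setup and the do_sql_command() helper mentioned
  in the entries above; the exact values and version cutoffs are assumptions
  for illustration, not the committed code:

    static void
    configure_remote_session_sketch(PGconn *conn)
    {
        int         remoteversion = PQserverVersion(conn);

        /* force output formats the local server can parse back unambiguously */
        do_sql_command(conn, "SET datestyle = ISO");
        if (remoteversion >= 80400)
            do_sql_command(conn, "SET intervalstyle = postgres");
        do_sql_command(conn, remoteversion >= 90000
                       ? "SET extra_float_digits = 3"
                       : "SET extra_float_digits = 2");

        /* timestamptz values are exchanged in UTC (see the revert note above) */
        do_sql_command(conn, "SET timezone = 'UTC'");

        /* per the search-path commit below: qualify everything else explicitly */
        do_sql_command(conn, "SET search_path = pg_catalog");
    }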
* Support writable foreign tables. (Tom Lane, 2013-03-10)

  This patch adds the core-system infrastructure needed to support updates
  on foreign tables, and extends contrib/postgres_fdw to allow updates
  against remote Postgres servers. There's still a great deal of room for
  improvement in optimization of remote updates, but at least there's basic
  functionality there now.

  KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather
  heavily revised by Tom Lane.
* Adjust postgres_fdw's search path handling. (Tom Lane, 2013-02-22)

  Set the remote session's search path to exactly "pg_catalog" at session
  start, then schema-qualify only names that aren't in that schema. This
  greatly reduces clutter in the generated SQL commands, as seen in the
  regression test changes. Per discussion.

  Also, rethink use of FirstNormalObjectId as the "built-in object" cutoff
  --- FirstBootstrapObjectId is safer, since the former will accept objects
  in information_schema for instance.
* Need to decorate XactIsoLevel as PGDLLIMPORT for postgres_fdw. (Tom Lane, 2013-02-21)

  Per buildfarm.
* Add postgres_fdw contrib module. (Tom Lane, 2013-02-21)

  There's still a lot of room for improvement, but it basically works, and
  we need this to be present before we can do anything much with the
  writable-foreign-tables patch. So let's commit it and get on with testing.

  Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane