path: root/src
* Prohibit pushing subqueries containing window function calculation to workers. (Amit Kapila, 2018-09-04)

  Allowing window function calculation in workers leads to inconsistent
  results, because if the input row ordering is not fully deterministic,
  the output of window functions might vary across workers. The fix is to
  treat them as parallel-restricted. In passing, improve the coding
  pattern in max_parallel_hazard_walker so that it has a chain of
  mutually-exclusive if ... else if ... else if ... else if ... IsA tests.

  Reported-by: Marko Tiikkaja
  Bug: 15324
  Author: Amit Kapila
  Reviewed-by: Tom Lane
  Backpatch-through: 9.6
  Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
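  For illustration, a minimal sketch of the mutually-exclusive IsA chain
  described above. This is an assumption-laden fragment, not the committed
  code: max_parallel_hazard_test and PROPARALLEL_RESTRICTED do exist in the
  planner, but the surrounding cases are elided here.

      if (IsA(node, RestrictInfo))
      {
          /* ... existing case: recurse into the contained clause ... */
      }
      else if (IsA(node, WindowFunc))
      {
          /* new: window functions are treated as parallel-restricted */
          if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context))
              return true;
      }
      /* ... further mutually-exclusive IsA tests ... */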
* During the split, set checksum on an empty hash index page. (Amit Kapila, 2018-09-04)

  On a split, we allocate a new splitpoint's worth of bucket pages, wherein
  we initialize the last page with zeros, which is fine; but we forgot to
  set the checksum for that last page. We decided to back-patch this fix
  only through 10 because we don't have an easy way to test it in prior
  versions. Another reason is that the hash-index code was changed heavily
  in 10, so it is not advisable to push the fix without testing it in prior
  versions.

  Author: Amit Kapila
  Reviewed-by: Yugo Nagata
  Backpatch-through: 10
  Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
* Fix initial sync of slot parent directory when restoring status (Michael Paquier, 2018-09-02)

  At the beginning of recovery, information from replication slots is
  recovered from disk to memory. In order to ensure the durability of the
  information, the status file as well as its parent directory are synced.
  It happens that the sync on the parent directory was done directly using
  the status file path, which is logically incorrect, and the current code
  has been doing a sync on the same object twice in a row.

  Reported-by: Konstantin Knizhnik
  Diagnosed-by: Konstantin Knizhnik
  Author: Michael Paquier
  Discussion: https://postgr.es/m/9eb1a6d5-b66f-2640-598d-c5ea46b8f68a@postgrespro.ru
  Backpatch-through: 9.4
* Avoid using potentially-under-aligned page buffers. (Tom Lane, 2018-09-01)

  There's a project policy against using plain "char buf[BLCKSZ]" local or
  static variables as page buffers; preferred style is to palloc or malloc
  each buffer to ensure it is MAXALIGN'd. However, that policy's been
  ignored in an increasing number of places. We've apparently got away with
  it so far, probably because (a) relatively few people use platforms on
  which misalignment causes core dumps and/or (b) the variables chance to
  be sufficiently aligned anyway. But this is not something to rely on.
  Moreover, even if we don't get a core dump, we might be paying a lot of
  cycles for misaligned accesses.

  To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock that
  the compiler must allocate with sufficient alignment, and use those in
  place of plain char arrays. I used these types even for variables where
  there's no risk of a misaligned access, since ensuring proper alignment
  should make kernel data transfers faster. I also changed some places
  where we had been palloc'ing short-lived buffers, for coding style
  uniformity and to save palloc/pfree overhead.

  Since this seems to be a live portability hazard (despite the lack of
  field reports), back-patch to all supported versions.

  Patch by me; thanks to Michael Paquier for review.

  Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
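  The union trick relies on the compiler aligning the whole object to its
  strictest member. A minimal sketch of the idea, close to but not
  guaranteed to match the committed definition in c.h:

      #include <stdint.h>

      #define BLCKSZ 8192            /* normally comes from the build config */

      typedef union PGAlignedBlock
      {
          char    data[BLCKSZ];      /* the page buffer itself */
          double  force_align_d;     /* forces double alignment */
          int64_t force_align_i64;   /* forces 8-byte integer alignment */
      } PGAlignedBlock;

  A local variable is then declared as "PGAlignedBlock buf;" and accessed
  through buf.data, instead of declaring "char buf[BLCKSZ];".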
* Ignore server-side delays when enforcing wal_sender_timeout. (Noah Misch, 2018-08-31)

  Healthy clients of servers having poor I/O performance, such as buildfarm
  members hamster and tern, saw unexpected timeouts. That disagreed with
  documentation. This fix adds one gettimeofday() call whenever
  ProcessRepliesIfAny() finds no client reply messages. Back-patch to 9.4;
  the bug's symptom is rare and mild, and the code all moved between 9.3
  and 9.4.

  Discussion: https://postgr.es/m/20180826034600.GA1105084@rfd.leadboat.com
* Ensure correct minimum consistent point on standbys (Michael Paquier, 2018-08-31)

  Commit 8d68ee6 improved the startup process's calculation of the minimum
  consistent point, ensuring that all available WAL gets replayed when
  doing crash recovery. However, it also introduced an incorrect
  calculation of the minimum recovery point for non-startup processes,
  which can cause incorrect page references on a standby when, for example,
  the background writer flushed a couple of pages to disk but did not
  update the control file to let a subsequent crash recovery replay to
  where it should have.

  The only case where this has been reported to be a problem is when a
  standby needs to calculate the latest removed xid when replaying a btree
  deletion record, so one would need connections on a standby that happen
  just after recovery has thought it reached a consistent point. Using a
  background worker which is started after the consistent point is reached
  would be the easiest way to get into problems if it connects to a
  database. Having clients which attempt to connect periodically could also
  be a problem, but the odds of seeing this problem are much lower.

  The fix is simple: give non-startup processes access to the minimum
  recovery point written in the control file, so that they use it as their
  reference, while the startup process still initializes its own reference
  of the minimum consistent point. This preserves the fix for the original
  problem, where incorrect page references appeared after a post-promotion
  crash.

  Reported-by: Alexander Kukushkin
  Diagnosed-by: Alexander Kukushkin
  Author: Michael Paquier
  Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin
  Discussion: https://postgr.es/m/153492341830.1368.3936905691758473953@wrigleys.postgresql.org
  Backpatch-through: 9.3
* Make checksum_impl.h safe to compile with -fstrict-aliasing. (Tom Lane, 2018-08-31)

  In general, Postgres requires -fno-strict-aliasing with compilers that
  implement C99 strict aliasing rules. There's little hope of getting rid
  of that overall. But it seems like it would be a good idea if
  storage/checksum_impl.h in particular didn't depend on it, because that
  header is explicitly intended to be included by external programs. We
  don't have a lot of control over the compiler switches that an external
  program might use, as shown by Michael Banck's report of failure in a
  privately-modified version of pg_verify_checksums.

  Hence, switch to using a union in place of willy-nilly pointer casting
  inside this file. I think this makes the code a bit more readable anyway.

  checksum_impl.h hasn't changed since it was introduced in 9.3, so
  back-patch all the way.

  Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
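  To see why the union helps, recall that C99 forbids reading a char buffer
  through a pointer of an unrelated type, while reads through union members
  (or via memcpy) are allowed. A standalone sketch of the pattern, not the
  actual checksum_impl.h code, which puns a whole page at a time:

      #include <stdint.h>
      #include <string.h>

      typedef union
      {
          char     bytes[sizeof(uint32_t)];
          uint32_t value;
      } AliasSafeWord;

      /* Read a uint32 out of a byte buffer without casting char *
         to uint32_t *, which strict aliasing rules disallow. */
      static uint32_t
      read_word(const char *buf)
      {
          AliasSafeWord w;

          memcpy(w.bytes, buf, sizeof(w.bytes));
          return w.value;
      }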
* Stop bgworkers during fast shutdown with postmaster in startup phase (Michael Paquier, 2018-08-29)

  When a postmaster gets into its phase PM_STARTUP, it would start
  background workers using BgWorkerStart_PostmasterStart mode immediately,
  which would cause problems for a fast shutdown as the postmaster forgot
  to send SIGTERM to already-started background workers. Smart and
  immediate shutdowns already handled this correctly; fast shutdown was the
  only mode that missed it.

  Author: Alexander Kukushkin
  Reviewed-by: Michael Paquier
  Discussion: https://postgr.es/m/CAFh8B=mvnD8+DZUfzpi50DoaDfZRDfd7S=gwj5vU9GYn8UvHkA@mail.gmail.com
  Backpatch-through: 9.5
* Make pg_restore's identify_locking_dependencies() more bulletproof. (Tom Lane, 2018-08-28)

  This function had a blacklist of dump object types that it believed
  needed exclusive lock ... but we hadn't maintained that, so that it was
  missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of which need
  (or should be treated as needing) exclusive lock. Since the same
  oversight seems likely in future, let's reverse the sense of the test so
  that the code has a whitelist of safe object types; better to wrongly
  assume a command can't be run in parallel than the opposite. Currently
  the only POST_DATA object type that's safe is CREATE INDEX ... and that
  list hasn't changed in a long time.

  Back-patch to 9.5 where RLS came in.

  Discussion: https://postgr.es/m/11450.1535483506@sss.pgh.pa.us
* Avoid quadratic slowdown in regexp match/split functions. (Andrew Gierth, 2018-08-28)

  regexp_matches, regexp_split_to_table and regexp_split_to_array all work
  by compiling a list of match positions as character offsets (NOT byte
  positions) in the source string.

  Formerly, they then used text_substr to extract the matched text; but in
  a multi-byte encoding, that counts the characters in the string, and the
  characters needed to reach the starting byte position, on every call.
  Accordingly, the performance degraded as the product of the input string
  length and the number of match positions, such that splitting a string of
  a few hundred kbytes could take many minutes.

  Repair by keeping the wide-character copy of the input string available
  (only in the case where encoding_max_length is not 1) after performing
  the match operation, and extracting substrings from that instead. This
  reduces the complexity to being linear in the number of result bytes,
  discounting the actual regexp match itself (which is not affected by this
  patch).

  In passing, remove cleanup using retail pfree() which was obsoleted by
  commit ff428cded (Feb 2008) which made cleanup of SRF multi-call contexts
  automatic. Also increase (to ~134 million) the maximum number of matches
  and provide an error message when it is reached.

  Backpatch all the way because this has been wrong forever.

  Analysis and patch by me; review by Kaiting Chen.

  Discussion: https://postgr.es/m/87pnyn55qh.fsf@news-spur.riddles.org.uk
  see also https://postgr.es/m/87lg996g4r.fsf@news-spur.riddles.org.uk
* Fix missing dependency for pg_dump's ENABLE ROW LEVEL SECURITY items. (Tom Lane, 2018-08-27)

  The archive should show a dependency on the item's table, but it failed
  to include one. This could cause failures in parallel restore due to
  emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring the
  table's data. In practice the odds of a problem seem low, since you would
  typically need to have set FORCE ROW LEVEL SECURITY as well, and you'd
  also need a very high --jobs count to have any chance of this happening.
  That probably explains the lack of field reports.

  Still, it's a bug, so back-patch to 9.5 where RLS was introduced.

  Discussion: https://postgr.es/m/19784.1535390902@sss.pgh.pa.us
* Make syslogger more robust against failures in opening CSV log files. (Tom Lane, 2018-08-26)

  The previous coding figured it'd be good enough to postpone opening the
  first CSV log file until we got a message we needed to write there. This
  is unsafe, though, because if the open fails we end up in infinite
  recursion trying to report the failure. Instead make the CSV log file
  management code look as nearly as possible like the longstanding logic
  for the stderr log file. In particular, open it immediately at postmaster
  startup (if enabled), or when we get a SIGHUP in which we find that
  log_destination has been changed to enable CSV logging.

  It seems OK to fail if a postmaster-start-time open attempt fails, as
  we've long done for the stderr log file. But we can't die if we fail to
  open a CSV log file during SIGHUP, so we're still left with a problem. In
  that case, write any output meant for the CSV log file to the stderr log
  file. (This will also cover race-condition cases in which backends send
  CSV log data before or after we have the CSV log file open.)

  This patch also fixes an ancient oversight that, if CSV logging was
  turned off during a SIGHUP, we never actually closed the last CSV log
  file.

  In passing, remember to reset whereToSendOutput = DestNone during
  syslogger start, since (unlike all other postmaster children) it's forked
  before the postmaster has done that. This made for a platform-dependent
  difference in error reporting behavior between the syslogger and other
  children: except on Windows, it'd report problems to the original
  postmaster stderr as well as the normal error log file(s). It's barely
  possible that that was intentional at some point; but it doesn't seem
  likely to be desirable in production, and the platform dependency
  definitely isn't desirable.

  Per report from Alexander Kukushkin. It's been like this for a long time,
  so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAFh8B==iLUD_gqC-dAENS0V+kVrCeGiKujtKqSQ7++S-caaChw@mail.gmail.com
* Fix lexing of standard multi-character operators in edge cases. (Andrew Gierth, 2018-08-23)

  Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
  and 865f14a2d (which added support for the standard => notation for named
  arguments) created a class of lexer tokens which look like
  multi-character operators but which have their own token IDs distinct
  from Op. However, longest-match rules meant that following any of these
  tokens with another operator character, as in (1<>-1), would cause them
  to be incorrectly returned as Op.

  The error here isn't immediately obvious, because the parser would
  usually still find the correct operator via the Op token, but there were
  more subtle problems:

  1. If immediately followed by a comment or +-, >= <= <> would be given
     the old precedence of Op rather than the correct new precedence;

  2. If followed by a comment, != would be returned as Op rather than as
     NOT_EQUAL, causing it not to be found at all;

  3. If followed by a comment or +-, the => token for named arguments would
     be lexed as Op, causing the argument to be mis-parsed as a simple
     expression, usually causing an error.

  Fix by explicitly checking for the operators in the {operator} code block
  in addition to all the existing special cases there.

  Backpatch to 9.5 where the problem was introduced.

  Analysis and patch by me; review by Tom Lane.

  Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
* Reduce an unnecessary O(N^3) loop in lexer. (Andrew Gierth, 2018-08-23)

  The lexer's handling of operators contained an O(N^3) hazard when dealing
  with long strings of + or - characters; it seems hard to prevent this
  case from being O(N^2), but the additional N multiplier was not needed.

  Backpatch all the way since this has been there since 7.x, and it
  presents at least a mild hazard in that trying to do Bind, PREPARE or
  EXPLAIN on a hostile query could take excessive time (without honouring
  cancels or timeouts) even if the query was never executed.
* In libpq, don't look up all the hostnames at once. (Tom Lane, 2018-08-23)

  Historically, we looked up the target hostname in connectDBStart, so that
  PQconnectPoll did not need to do DNS name resolution. The patches that
  added multiple-target-host support to libpq preserved this division of
  labor; but it's really nonsensical now, because it means that if any one
  of the target hosts fails to resolve in DNS, the connection fails. That
  negates the no-single-point-of-failure goal of the feature. Additionally,
  DNS lookups aren't exactly cheap, but the code did them all even if the
  first connection attempt succeeds.

  Hence, rearrange so that PQconnectPoll does the lookups, and only looks
  up a hostname when it's time to try that host. This does mean that
  PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
  that, you should be using hostaddr, as the documentation has always
  specified. It seems fairly unlikely that any applications would really
  care whether the lookup occurs inside PQconnectStart or PQconnectPoll.

  In addition to calling out that fact explicitly, do some other minor
  wordsmithing in the docs around the multiple-target-host feature.

  Since this seems like a bug in the multiple-target-host feature,
  backpatch to v10 where that was introduced. In the back branches, avoid
  moving any existing fields of struct pg_conn, just in case any
  third-party code is looking into that struct.

  Tom Lane, reviewed by Fabien Coelho

  Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
* Do not dump identity sequences with excluded parent table (Michael Paquier, 2018-08-22)

  This commit prevents a crash of pg_dump caused by the exclusion of a
  table which has identity columns, as the table would be correctly
  excluded but not its identity sequence. To fix that, identity sequences
  are now excluded whenever their parent table is. Knowing about such
  sequences has no meaning without their parent table anyway.

  Reported-by: Andy Abelisto
  Author: David Rowley
  Reviewed-by: Peter Eisentraut, Michael Paquier
  Discussion: https://postgr.es/m/153479393218.1316.8472285660264976457@wrigleys.postgresql.org
  Backpatch-through: 10
* Fix typo (Alvaro Herrera, 2018-08-21)
* fix typo (Alvaro Herrera, 2018-08-21)
* Fix set of NLS translation issues (Michael Paquier, 2018-08-21)

  While monitoring the code, a couple of issues related to string
  translation have shown up:
  - Some routines for auto-updatable views return an error string, which
    was sometimes left untranslated. A comment regarding string translation
    is added for each routine to help with future features.
  - GSSAPI authentication missed two translations.
  - vacuumdb handles non-translated strings.

  Reported-by: Kyotaro Horiguchi
  Author: Kyotaro Horiguchi
  Reviewed-by: Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/20180810.152131.31921918.horiguchi.kyotaro@lab.ntt.co.jp
  Backpatch-through: 9.3
* Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands. (Tom Lane, 2018-08-17)

  Previously, this code blindly followed the common coding pattern of
  passing PQserverVersion(AH->connection) as the server-version parameter
  of fmtQualifiedId. That works as long as we have a connection; but in
  pg_restore with text output, we don't. Instead we got a zero from
  PQserverVersion, which fmtQualifiedId interpreted as "server is too old
  to have schemas", and so the name went unqualified. That still
  accidentally managed to work in many cases, which is probably why this
  ancient bug went undetected for so long. It only became obvious in the
  wake of the changes to force dump/restore to execute with restricted
  search_path.

  In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's
  server-version behavioral dependency, and just making it schema-qualify
  all the time. We no longer support pg_dump from servers old enough to
  need the ability to omit schema name, let alone restoring to them.
  (Also, the few callers outside pg_dump already didn't work with
  pre-schema servers.)

  In older branches, that's not an acceptable solution, so instead just
  tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify
  its output regardless of server version.

  Per bug #15338 from Oleg somebody. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org
* Set scan direction appropriately for SubPlans (bug #15336) (Andrew Gierth, 2018-08-17)

  When executing a SubPlan in an expression, the EState's direction field
  was left alone, resulting in an attempt to execute the subplan backwards
  if it was encountered during a backwards scan of a cursor. Also, though
  much less likely, it was possible to reach the execution of an InitPlan
  while in backwards-scan state.

  Repair by saving/restoring estate->es_direction and forcing forward scan
  mode in the relevant places.

  Backpatch all the way, since this has been broken since 8.3 (prior to
  commit c7ff7663e, SubPlans had their own EStates rather than sharing the
  parent plan's, so there was no confusion over scan direction).

  Per bug #15336 reported by Vladimir Baranoff; analysis and patch by me,
  review by Tom Lane.

  Discussion: https://postgr.es/m/153449812167.1304.1741624125628126322@wrigleys.postgresql.org
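  The save/restore pattern it describes looks roughly like this sketch
  (a fragment with the surrounding executor code elided, so treat the exact
  placement as an assumption):

      /* force the subplan to run forward, whatever the outer scan does */
      ScanDirection save_direction = estate->es_direction;

      estate->es_direction = ForwardScanDirection;
      /* ... execute the SubPlan / InitPlan here ... */
      estate->es_direction = save_direction;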
* pg_upgrade: issue helpful error message for use on standbys (Bruce Momjian, 2018-08-17)

  Commit 777e6ddf1723306bd2bf8fe6f804863f459b0323 checked for a shut down
  message from a standby and allowed it to continue. This patch reports a
  helpful error message in these cases, suggesting to use rsync as
  documented.

  Diagnosed-by: Martín Marqués
  Discussion: https://postgr.es/m/CAPdiE1xYCow-reLjrhJ9DqrMu-ppNq0ChUUEvVdxhdjGRD5_eA@mail.gmail.com
  Backpatch-through: 9.3
* Close the file descriptor in ApplyLogicalMappingFile (Tomas Vondra, 2018-08-16)

  The function was forgetting to close the file descriptor, resulting in
  failures like this:

      ERROR:  53000: exceeded maxAllocatedDescs (492) while trying to open
      file "pg_logical/mappings/map-4000-4eb-1_60DE1E08-5376b5-537c6b"
      LOCATION:  OpenTransientFile, fd.c:2161

  Simply close the file at the end, and backpatch to 9.4 (where logical
  decoding was introduced). While at it, fix a nearby typo.

  Discussion: https://www.postgresql.org/message-id/flat/738a590a-2ce5-9394-2bef-7b1caad89b37%402ndquadrant.com
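  The shape of the fix is the usual transient-file pattern; a hedged
  fragment, not the committed hunk (OpenTransientFile's argument list
  varies across branches, and the flags shown are an assumption):

      int fd = OpenTransientFile(path, O_RDONLY | PG_BINARY);

      if (fd < 0)
          ereport(ERROR,
                  (errcode_for_file_access(),
                   errmsg("could not open file \"%s\": %m", path)));

      /* ... read and apply the rewrite mapping entries ... */

      CloseTransientFile(fd);    /* the call that was missing */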
* Make snprintf.c follow the C99 standard for snprintf's result value. (Tom Lane, 2018-08-15)

  C99 says that the result should be the number of bytes that would have
  been emitted given a large enough buffer, not the number we actually were
  able to put in the buffer. It's time to make our substitute
  implementation comply with that. Not doing so results in inefficiency in
  buffer-enlargement cases, and also poses a portability hazard for
  third-party code that might expect C99-compliant snprintf behavior within
  Postgres.

  In passing, remove useless tests for str == NULL; neither C99 nor
  predecessor standards ever allowed that except when count == 0, so I see
  no reason to expend cycles on making that a non-crash case for this
  implementation. Also, don't waste a byte in pg_vfprintf's local I/O
  buffer; this might have performance benefits by allowing aligned writes
  during flushbuffer calls.

  Back-patch of commit 805889d7d. There was some concern about this
  possibly breaking code that assumes pre-C99 behavior, but there is much
  more risk (and reality, in our own code) of code that assumes C99
  behavior and hence fails to detect buffer overrun without this.

  Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
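  A small standalone example of the C99 semantics, and of the overrun test
  ("result >= bufsize") that the companion cleanup commit below
  standardizes on:

      #include <stdio.h>

      int
      main(void)
      {
          char buf[8];
          /* C99: returns the length that WOULD have been written,
             so a result >= sizeof(buf) signals truncation. */
          int needed = snprintf(buf, sizeof(buf), "%s", "hello, world");

          if (needed >= (int) sizeof(buf))
              printf("truncated: need %d bytes plus the NUL\n", needed);
          return 0;
      }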
* Update FSM on WAL replay of page all-visible/frozen (Alvaro Herrera, 2018-08-15)

  We aren't very strict about keeping FSM up to date on WAL replay, because
  per-page freespace values aren't critical in replicas (can't write to
  heap in a replica; and if the replica is promoted, the values would be
  updated by VACUUM anyway). However, VACUUM since 9.6 can skip processing
  pages marked all-visible or all-frozen, and if such pages are recorded in
  FSM with wrong values, those values are blindly propagated to FSM's upper
  layers by VACUUM's FreeSpaceMapVacuum. (This rationale assumes that
  crashes are not very frequent, because those would cause outdated FSM to
  occur in the primary.)

  Even when the FSM is outdated in standby, things are not too bad
  normally, because most per-page FSM values will be zero (other than those
  propagated with the base-backup that created the standby); only once the
  remaining free space is less than 0.2*BLCKSZ is the per-page value
  maintained by WAL replay of heap ins/upd/del. However, if
  wal_log_hints=on causes complete FSM pages to be propagated to a standby
  via full-page images, many too-optimistic per-page values can end up
  being registered in the standby.

  Incorrect per-page values aren't critical in most cases, since an
  inserter that is given a page that doesn't actually contain the claimed
  free space will update FSM with the correct value, and retry until it
  finds a usable page. However, if there are many such updates to do, an
  inserter can spend a long time doing them before a usable page is found;
  in a heavily trafficked insert-only table with many concurrent inserters
  this has been observed to cause several-second stalls, causing visible
  application malfunction.

  To fix this problem, it seems sufficient to have heap_xlog_visible
  (replay of setting all-visible and all-frozen VM bits for a heap page)
  update the FSM value for the page being processed. This fixes the
  per-page counters together with making the page skippable to vacuum, so
  when vacuum does FreeSpaceMapVacuum, the values propagated to FSM upper
  layers are the correct ones, avoiding the problem.

  While at it, apply the same fix to heap_xlog_clean (replay of tuple
  removal by HOT pruning and vacuum). This makes any space freed by the
  cleaning available earlier than the next vacuum in the promoted replica.

  Backpatch to 9.6, where this problem was diagnosed on an insert-only
  table with all-frozen pages, which were introduced as a concept in that
  release. Theoretically it could apply with all-visible pages to older
  branches, but there's been no report of that and it doesn't backpatch
  cleanly anyway.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Discussion: https://postgr.es/m/20180802172857.5skoexsilnjvgruk@alvherre.pgsql
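  For orientation, the replay-side FSM refresh it describes follows a
  pattern already used elsewhere in heapam replay; a hedged fragment (the
  variable names and the exact placement inside heap_xlog_visible are
  assumptions):

      /* after replaying the VM-bit change for this heap page */
      if (BufferIsValid(buffer))
      {
          Size freespace = PageGetHeapFreeSpace(BufferGetPage(buffer));

          UnlockReleaseBuffer(buffer);
          /* refresh this page's slot in the free space map */
          XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
      }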
* Clean up assorted misuses of snprintf()'s result value. (Tom Lane, 2018-08-15)

  Fix a small number of places that were testing the result of snprintf()
  but doing so incorrectly. The right test for buffer overrun, per C99, is
  "result >= bufsize" not "result > bufsize". Some places were also
  checking for failure with "result == -1", but the standard only says that
  a negative value is delivered on failure.

  (Note that this only makes these places correct if snprintf() delivers
  C99-compliant results. But at least now these places are consistent with
  all the other places where we assume that.)

  Also, make psql_start_test() and isolation_start_test() check for buffer
  overrun while constructing their shell commands. There seems like a
  higher risk of overrun, with more severe consequences, here than there is
  for the individual file paths that are made elsewhere in the same
  functions, so this seemed like a worthwhile change.

  Also fix guc.c's do_serialize() to initialize errno = 0 before calling
  vsnprintf. In principle, this should be unnecessary because vsnprintf
  should have set errno if it returns a failure indication ... but the
  other two places this coding pattern is cribbed from don't assume that,
  so let's be consistent.

  These errors are all very old, so back-patch as appropriate. I think that
  only the shell command overrun cases are even theoretically reachable in
  practice, but there's not much point in erroneous error checks.

  Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
* pg_upgrade: fix shutdown check for standby servers (Bruce Momjian, 2018-08-14)

  Commit 244142d32afd02e7408a2ef1f249b00393983822 only tested for the
  pg_controldata output for primary servers, but standby servers have
  different "Database cluster state" output, so check for that too.

  Diagnosed-by: Michael Paquier
  Discussion: https://postgr.es/m/20180810164240.GM13638@paquier.xyz
  Backpatch-through: 9.3
* Remove obsolete comment (Peter Eisentraut, 2018-08-13)

  The sequence name is no longer stored in the sequence relation, since
  1753b1b027035029c2a2a1649065762fafbf63f3.
* Fix libpq's implementation of per-host connection timeouts. (Tom Lane, 2018-08-13)

  Commit 5f374fe7a attempted to turn the connect_timeout from an overall
  maximum time limit into a per-host limit, but it didn't do a great job of
  that. The timer would only get restarted if we actually detected timeout
  within connectDBComplete(), not if we changed our attention to a new host
  for some other reason. In that case the old timeout continued to run,
  possibly causing a premature timeout failure for the new host.

  Fix that, and also tweak the logic so that if we do get a timeout, we
  advance to the next available IP address, not to the next host name.
  There doesn't seem to be a good reason to assume that all the IP
  addresses supplied for a given host name will necessarily fail the same
  way as the current one. Moreover, this conforms better to the
  admittedly-vague documentation statement that the timeout is "per
  connection attempt". I changed that to "per host name or IP address" to
  be clearer. (Note that reconnections to the same server, such as for
  switching protocol version or SSL status, don't get their own separate
  timeout; that was true before and remains so.)

  Also clarify documentation about the interpretation of connect_timeout
  values less than 2.

  This seems like a bug, so back-patch to v10 where this logic came in.

  Tom Lane, reviewed by Fabien Coelho

  Discussion: https://postgr.es/m/5735.1533828184@sss.pgh.pa.us
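  From the client's side the feature looks like this; a minimal,
  self-contained example (the host names and database are made up):

      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          /* Two candidate hosts; with this fix each host (in fact each
             resolved IP address) gets its own 5-second timeout. */
          PGconn *conn = PQconnectdb(
              "host=db1.example.com,db2.example.com "
              "connect_timeout=5 dbname=postgres");

          if (PQstatus(conn) != CONNECTION_OK)
              fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
          PQfinish(conn);
          return 0;
      }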
* Adjust comment atop ExecShutdownNode. (Amit Kapila, 2018-08-13)

  After commits a315b967cc and b805b63ac2, part of the comment atop
  ExecShutdownNode is redundant. Adjust it.

  Author: Amit Kapila
  Backpatch-through: 10 where both the mentioned commits are present.
  Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
* Prohibit shutting down resources if there is a possibility of back up. (Amit Kapila, 2018-08-13)

  Currently, we release the asynchronous resources as soon as it is evident
  that no more rows will be needed, e.g. when a Limit is filled. This can
  be problematic especially for custom and foreign scans where we can scan
  backward. Fix that by disallowing the shutting down of resources in such
  cases.

  Reported-by: Robert Haas
  Analysed-by: Robert Haas and Amit Kapila
  Author: Amit Kapila
  Reviewed-by: Robert Haas
  Backpatch-through: 9.6 where this code was introduced
  Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
* Avoid query-lifetime memory leaks in XMLTABLE (bug #15321) (Andrew Gierth, 2018-08-13)

  Multiple calls to XMLTABLE in a query (e.g. laterally applying it to a
  table with an xml column, an important use-case) were leaking large
  amounts of memory into the per-query context, blowing up memory usage.

  Repair by reorganizing memory context usage in nodeTableFuncscan; use the
  usual per-tuple context for row-by-row evaluations instead of
  perValueCxt, and use the explicitly created context -- renamed from
  perValueCxt to perTableCxt -- for arguments and state for each individual
  table-generation operation.

  Backpatch to PG10 where this code was introduced.

  Original report by IRC user begriffs; analysis and patch by me. Reviewed
  by Tom Lane and Pavel Stehule.

  Discussion: https://postgr.es/m/153394403528.10284.7530399040974170549@wrigleys.postgresql.org
* Don't run atexit callbacks in quickdie signal handlers. (Heikki Linnakangas, 2018-08-08)

  exit() is not async-signal safe. Even if the libc implementation is, 3rd
  party libraries might have installed unsafe atexit() callbacks. After
  receiving SIGQUIT, we really just want to exit as quickly as possible, so
  we don't really want to run the atexit() callbacks anyway.

  The original report by Jimmy Yih was a self-deadlock in startup_die().
  However, this patch doesn't address that scenario; the signal handling
  while waiting for the startup packet is more complicated. But at least
  this alleviates similar problems in the SIGQUIT handlers, like that
  reported by Asim R P later in the same thread.

  Backpatch to 9.3 (all supported versions).

  Discussion: https://www.postgresql.org/message-id/CAOMx_OAuRUHiAuCg2YgicZLzPVv5d9_H4KrL_OFsFP%3DVPekigA%40mail.gmail.com
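  The distinction the commit exploits: _exit() terminates immediately and
  is async-signal safe, while exit() runs atexit() handlers first. A tiny
  standalone illustration, not the server's actual quickdie code:

      #include <signal.h>
      #include <unistd.h>

      static void
      quickdie_like(int signo)
      {
          /* exit(2) would run atexit() callbacks, which may be unsafe
             inside a signal handler; _exit(2) skips them entirely. */
          _exit(2);
      }

      int
      main(void)
      {
          signal(SIGQUIT, quickdie_like);
          for (;;)
              pause();
      }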
* Don't record FDW user mappings as members of extensions. (Tom Lane, 2018-08-07)

  CreateUserMapping has a recordDependencyOnCurrentExtension call that's
  been there since extensions were introduced (very possibly my fault).
  However, there's no support anywhere else for user mappings as members of
  extensions, nor are they listed as a possible member object type in the
  documentation. Nor does it really seem like a good idea for user mappings
  to belong to extensions when roles don't. Hence, remove the bogus call.

  (As we saw in bug #15310, the lack of any pg_dump support for this case
  ensures that any such membership record would silently disappear during
  pg_upgrade. So there's probably no need for us to do anything else about
  cleaning up after this mistake.)

  Discussion: https://postgr.es/m/27952.1533667213@sss.pgh.pa.us
* Fix incorrect initialization of BackendActivityBuffer. (Tom Lane, 2018-08-07)

  Since commit c8e8b5a6e, this has been zeroed out using the wrong length.
  In practice the length would always be too small, leading to not zeroing
  the whole buffer rather than clobbering additional memory; and that's
  pretty harmless, both because shmem would likely start out as zeroes and
  because we'd reinitialize any given entry before use. Still, it's bogus,
  so fix it.

  Reported by Petru-Florin Mihancea (bug #15312)

  Discussion: https://postgr.es/m/153363913073.1303.6518849192351268091@wrigleys.postgresql.org
* Fix pg_upgrade to handle event triggers in extensions correctly. (Tom Lane, 2018-08-07)

  pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands for
  all objects that are members of extensions. It forgot to do so for event
  triggers, as per bug #15310 from Nick Barnes. Back-patch to 9.3 where
  event triggers were introduced.

  Haribabu Kommi

  Discussion: https://postgr.es/m/153360083872.1395.4593932457718151600@wrigleys.postgresql.org
* Ensure pg_dump_sort.c sorts null vs non-null namespace consistently. (Tom Lane, 2018-08-07)

  The original coding here (which is, I believe, my fault) supposed that it
  didn't need to concern itself with the possibility that one object of a
  given type-priority has a namespace while another doesn't. But that's not
  reliably true anymore, if it ever was; and if it does happen then it's
  possible that DOTypeNameCompare returns self-inconsistent comparison
  results. That leads to unspecified behavior in qsort() and a resultant
  weird output order from pg_dump.

  This should end up being only a cosmetic problem, because any ordering
  constraints that actually matter should be enforced by the later
  dependency-based sort. Still, it's a bug, so back-patch.

  Report and fix by Jacob Champion, though I editorialized on his patch to
  the extent of making NULL sort after non-NULL, for consistency with our
  usual sorting definitions.

  Discussion: https://postgr.es/m/CABAq_6Hw+V-Kj7PNfD5tgOaWT_-qaYkc+SRmJkPLeUjYXLdxwQ@mail.gmail.com
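  "Self-inconsistent" here means the comparator failed to impose a total
  order once a namespace pointer could be NULL. A standalone sketch of the
  NULL-sorts-last convention (the function name is hypothetical, not
  pg_dump_sort.c's actual code):

      #include <string.h>

      /* Compare two possibly-NULL namespace names; NULL sorts after
         non-NULL, and two NULLs compare equal, so qsort() always sees
         a consistent total order. */
      static int
      nullable_name_cmp(const char *a, const char *b)
      {
          if (a == NULL)
              return (b == NULL) ? 0 : 1;
          if (b == NULL)
              return -1;
          return strcmp(a, b);
      }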
* Stamp 10.5. (tag: REL_10_5) (Tom Lane, 2018-08-06)
* Translation updates (Peter Eisentraut, 2018-08-06)

  Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 3463878fc340cb1436153b18a300f8cfdcb12adb
* Fix failure to reset libpq's state fully between connection attempts. (Tom Lane, 2018-08-06)

  The logic in PQconnectPoll() did not take care to ensure that all of a
  PGconn's internal state variables were reset before trying a new
  connection attempt. If we got far enough in the connection sequence to
  have changed any of these variables, and then decided to try a new server
  address or server name, the new connection might be completed with some
  state that really only applied to the failed connection.

  While this has assorted bad consequences, the only one that is clearly a
  security issue is that password_needed didn't get reset, so that if the
  first server asked for a password and the second didn't,
  PQconnectionUsedPassword() would return an incorrect result. This could
  be leveraged by unprivileged users of dblink or postgres_fdw to allow
  them to use server-side login credentials that they should not be able
  to use.

  Other notable problems include the possibility of forcing a v2-protocol
  connection to a server capable of supporting v3, or overriding
  "sslmode=prefer" to cause a non-encrypted connection to a server that
  would have accepted an encrypted one. Those are certainly bugs but it's
  harder to paint them as security problems in themselves. However, forcing
  a v2-protocol connection could result in libpq having a wrong idea of the
  server's standard_conforming_strings setting, which opens the door to
  SQL-injection attacks. The extent to which that's actually a problem,
  given the prerequisite that the attacker needs control of the client's
  connection parameters, is unclear.

  These problems have existed for a long time, but became more easily
  exploitable in v10, both because it introduced easy ways to force libpq
  to abandon a connection attempt at a late stage and then try another one
  (rather than just giving up), and because it provided an easy way to
  specify multiple target hosts.

  Fix by rearranging PQconnectPoll's state machine to provide centralized
  places to reset state properly when moving to a new target host or when
  dropping and retrying a connection to the same host.

  Tom Lane, reviewed by Noah Misch. Our thanks to Andrew Krasichkov for
  finding and reporting the problem.

  Security: CVE-2018-10915
* Adjust error message (Peter Eisentraut, 2018-08-06)

  Makes it look more similar to other ones, and avoids the need for
  pluralization.
* Fix INSERT ON CONFLICT UPDATE through a view that isn't just SELECT *. (Tom Lane, 2018-08-04)

  When expanding an updatable view that is an INSERT's target, the rewriter
  failed to rewrite Vars in the ON CONFLICT UPDATE clause. This
  accidentally worked if the view was just "SELECT * FROM ...", as the
  transformation would be a no-op in that case. With more complicated view
  targetlists, this omission would often lead to "attribute ... has the
  wrong type" errors or even crashes, as reported by Mario De Frutos
  Dieguez.

  Fix by adding code to rewriteTargetView to fix up the data structure
  correctly. The easiest way to update the exclRelTlist list is to rebuild
  it from scratch looking at the new target relation, so factor the code
  for that out of transformOnConflictClause to make it sharable.

  In passing, avoid duplicate permissions checks against the EXCLUDED
  pseudo-relation, and prevent useless view expansion of that relation's
  dummy RTE. The latter is only known to happen (after this patch) in cases
  where the query would fail later due to not having any INSTEAD OF
  triggers for the view. But by exactly that token, it would create an
  unintended and very poorly tested state of the query data structure, so
  it seems like a good idea to prevent it from happening at all.

  This has been broken since ON CONFLICT was introduced, so back-patch to
  9.5.

  Dean Rasheed, based on an earlier patch by Amit Langote;
  comment-kibitzing and back-patching by me

  Discussion: https://postgr.es/m/CAFYwGJ0xfzy8jaK80hVN2eUWr6huce0RU8AgU04MGD00igqkTg@mail.gmail.com
* Reset errno properly before calling write() (Michael Paquier, 2018-08-05)

  Commit 6cb3372 forces errno to ENOSPC when fewer bytes than expected have
  been written and errno is unset, but it forgot to reset errno before the
  write() system call, so errno could be left over from a previous system
  call.

  Reported-by: Tom Lane
  Author: Michael Paquier
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/31797.1533326676@sss.pgh.pa.us
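  The resulting convention looks roughly like this; a hedged sketch of the
  pattern, not the committed hunk:

      #include <errno.h>
      #include <unistd.h>

      /* Write len bytes; on a short write with errno unset, report
         out-of-disk-space rather than a stale errno value. */
      static int
      write_all_or_enospc(int fd, const void *buf, size_t len)
      {
          errno = 0;                  /* the reset this commit adds */
          if (write(fd, buf, len) != (ssize_t) len)
          {
              if (errno == 0)         /* short write, nothing reported */
                  errno = ENOSPC;
              return -1;
          }
          return 0;
      }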
* Add table relcache invalidation to index builds. (Peter Geoghegan, 2018-08-03)

  It's necessary to make sure that owning tables have a relcache
  invalidation prior to advancing the command counter to make newly-entered
  catalog tuples for the index visible. inval.c must be able to maintain
  the consistency of the local caches in the event of transaction abort.
  There is usually only a problem when CREATE INDEX transactions abort,
  since there is a generic invalidation once we reach index_update_stats().

  This bug is of long standing. Problems were made much more likely by the
  addition of parallel CREATE INDEX (commit 9da0cc35284), but it is
  strongly suspected that similar problems can be triggered without
  involving plan_create_index_workers(). (plan_create_index_workers()
  triggers a relcache build or rebuild, which previously only happened in
  rare edge cases.)

  Author: Peter Geoghegan
  Reported-By: Luca Ferrari
  Diagnosed-By: Andres Freund
  Reviewed-By: Andres Freund
  Discussion: https://postgr.es/m/CAKoxK+5fVodiCtMsXKV_1YAKXbzwSfp7DgDqUmcUAzeAhf=HEQ@mail.gmail.com
  Backpatch: 9.3-
* Remove no-longer-appropriate special case in psql's \conninfo code. (Tom Lane, 2018-08-03)

  \conninfo prints the results of PQhost() and some other libpq functions.
  It used to override the PQhost() result with the hostaddr parameter if
  that'd been given, but that's unhelpful when multiple hosts were listed
  in the connection string. Furthermore, it seems unnecessary in the wake
  of commit 1944cdc98, since PQhost does any useful substitution itself. So
  let's just remove the extra code and print PQhost()'s result without any
  editorialization.

  Back-patch to v10, as 1944cdc98 (just) was.

  Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
* Change libpq's internal uses of PQhost() to inspect host field directly. (Tom Lane, 2018-08-03)

  Commit 1944cdc98 changed PQhost() to return the hostaddr value when that
  is specified and host isn't. This is a good idea in general, but
  fe-auth.c and related files contain PQhost() calls for which it isn't.
  Specifically, when we compare SSL certificates or other server identity
  information to the host field, we do not want to use hostaddr instead;
  that's not what's documented, that's not what happened pre-v10, and it
  doesn't seem like a good idea.

  Instead, we can just look at connhost[].host directly. This does what we
  want in v10 and up; in particular, if neither host nor hostaddr were
  given, the host field will be replaced with the default host name. That
  seems useful, and it's likely the reason that these places were coded to
  call PQhost() originally (since pre-v10, the stored field was not
  replaced with the default).

  Back-patch to v10, as 1944cdc98 (just) was.

  Discussion: https://postgr.es/m/23287.1533227021@sss.pgh.pa.us
* libpq: PQhost to return active connected host or hostaddr (Tom Lane, 2018-08-03)

  Previously, PQhost didn't return the connected host details when the
  connection type was CHT_HOST_ADDRESS (i.e., via hostaddr). Instead, it
  returned the complete host connection parameter (which could contain
  multiple hosts) or the default host details, which was confusing and
  arguably incorrect.

  Change this to return the actually connected host or hostaddr
  irrespective of the connection type. When hostaddr but no host was
  specified, hostaddr is now returned. Never return the original host
  connection parameter, and document that PQhost cannot be relied on before
  the connection is established. PQport is similarly changed to always
  return the active connection port and never the original connection
  parameter.

  Back-patch of commit 1944cdc98273dbb8439ad9b387ca2858531afcf0 into the
  v10 branch.

  Author: Hari Babu <kommi.haribabu@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
  Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
* Fix buffer usage stats for parallel nodes. (Amit Kapila, 2018-08-03)

  Buffer usage stats are accounted only for the execution phase of a node.
  For Gather and Gather Merge nodes, such stats are accumulated when the
  workers are shut down, which happens after the node's execution, so we
  missed accounting them for such nodes. Fix it by treating the nodes as
  still running while we shut them down.

  We can also miss accounting for a Limit node when Gather or Gather Merge
  is beneath it, because it can finish the execution before shutting down
  such nodes. So we allow a Limit node to shut down the resources before it
  completes the execution.

  In passing, fix the gather node code to allow workers to shut down as
  soon as we find that all the tuples from the workers have been retrieved.
  The original code used to do that, but it was accidentally removed by
  commit 01edb5c7fc.

  Reported-by: Adrien Nayrat
  Author: Amit Kapila and Robert Haas
  Reviewed-by: Robert Haas and Andres Freund
  Backpatch-through: 9.6 where this code was introduced
  Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
* Match the buffer usage tracking for leader and worker backends. (Amit Kapila, 2018-08-03)

  In the leader backend, we don't track the buffer usage for the
  ExecutorStart phase, whereas in worker backends we track it for
  ExecutorStart as well. This leads to different values of the buffer usage
  stats for parallel and non-parallel queries. Change the code so that
  worker backends also start tracking buffer usage only after
  ExecutorStart.

  Author: Amit Kapila and Robert Haas
  Reviewed-by: Robert Haas and Andres Freund
  Backpatch-through: 9.6 where this code was introduced
  Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
* Fix libpq's code for searching .pgpass; rationalize empty-list-item cases. (Tom Lane, 2018-08-01)

  Before v10, we always searched ~/.pgpass using the host parameter, and
  nothing else, to match to the "hostname" field of ~/.pgpass. (However,
  null host or host matching DEFAULT_PGSOCKET_DIR was replaced by
  "localhost".) In v10, this got broken by commit 274bb2b38, repaired by
  commit bdac9836d, and broken again by commit 7b02ba62e; in the code
  actually shipped, we'd search with hostaddr if both that and host were
  specified --- though oddly, *not* if only hostaddr were specified. Since
  this is directly contrary to the documentation, and not
  backwards-compatible, it's clearly a bug.

  However, the change wasn't totally without justification, even though it
  wasn't done quite right, because the pre-v10 behavior has arguably been
  buggy since we added hostaddr. If hostaddr is specified and host isn't,
  the pre-v10 code will search ~/.pgpass for "localhost", and ship that
  password off to a server that most likely isn't local at all. That's
  unhelpful at best, and could be a security breach at worst.

  Therefore, rather than just revert to that old behavior, let's define the
  behavior as "search with host if provided, else with hostaddr if
  provided, else search for localhost". (As before, a host name matching
  DEFAULT_PGSOCKET_DIR is replaced by localhost.) This matches the behavior
  of the actual connection code, so that we don't pick up an inappropriate
  password; and it allows useful searches to happen when only hostaddr is
  given.

  While we're messing around here, ensure that empty elements within a host
  or hostaddr list select the same behavior as a totally-empty field would;
  for instance "host=a,,b" is equivalent to "host=a,/tmp,b" if
  DEFAULT_PGSOCKET_DIR is /tmp. Things worked that way in some cases
  already, but not consistently so, which contributed to the confusion
  about what key ~/.pgpass would get searched with.

  Update documentation accordingly, and also clarify some nearby text.

  Back-patch to v10 where the host/hostaddr list functionality was
  introduced.

  Discussion: https://postgr.es/m/30805.1532749137@sss.pgh.pa.us
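  For reference, the ~/.pgpass format being matched here is one entry per
  line, hostname:port:database:username:password (the values below are
  invented for illustration):

      # looked up under the host parameter if given, else hostaddr,
      # else "localhost" (also used for the default socket directory)
      db1.example.com:5432:appdb:alice:s3cret
      10.0.0.5:5432:appdb:alice:s3cret
      localhost:5432:appdb:alice:localsecret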