path: root/src
Commit message (Author, Date)

* Fix typo in comment. (Heikki Linnakangas, 2017-06-21)

    Etsuro Fujita

* pg_upgrade: start/stop new server after pg_resetwal (Bruce Momjian, 2017-06-20)

    When commit 0f33a719fdbb5d8c43839ea0d2c90cd03e2af2d2 removed the instructions to start/stop the new cluster before running rsync, it became possible for pg_resetwal/pg_resetxlog to leave the final WAL record at wal_level=minimum, preventing upgraded standby servers from reconnecting.

    This patch fixes that by having pg_upgrade unconditionally start/stop the new cluster after pg_resetwal/pg_resetxlog has run.

    Backpatch through 9.2 since, though the instructions were added in PG 9.5, they worked all the way back to 9.2.

    Discussion: https://postgr.es/m/20170620171844.GC24975@momjian.us
    Backpatch-through: 9.2

* Fix materialized-view documentation oversights. (Tom Lane, 2017-06-19)

    When materialized views were added, psql's \d commands were made to treat them as a separate object category ... but not everyplace in the documentation or comments got the memo.

    Noted by David Johnston. Back-patch to 9.3 where matviews came in.

    Discussion: https://postgr.es/m/CAKFQuwb27M3VXRhHErjCpkWwN9eKThbqWb1=trtoXi9_ejqPXQ@mail.gmail.com

* Avoid regressions in foreign-key-based selectivity estimates. (Tom Lane, 2017-06-19)

    David Rowley found that the "use the smallest per-column selectivity" heuristic applied in some cases by get_foreign_key_join_selectivity() was badly off if the FK columns are independent, producing estimates much worse than we got before that code was added in 9.6.

    One case where that heuristic was used was for LEFT and FULL outer joins with the referenced rel on the outside of the join. But we should not really need to special-case those here. eqjoinsel() never has had such a special case; the correction is applied by calc_joinrel_size_estimate() instead. Let's just estimate such cases like inner joins and rely on that later adjustment. (I think there was something of a thinko here, in that the comments seem to be thinking about the selectivity as defined for semi/anti joins; but that shouldn't apply to left/full joins.) Add a regression test exercising such a case to show that this is sane in at least some cases.

    The other case where we used that heuristic was for SEMI/ANTI outer joins, either if the referenced rel was on the outside, or if it was on the inside but was part of a join within the RHS. In either case, the FK doesn't give us a lot of traction towards estimating the selectivity. To ensure that we don't have regressions from what happened before 9.6, let's punt by ignoring the FK in such cases and applying the traditional selectivity calculation. (We might be able to improve on that later, but for now I just want to be sure it's not worse than 9.5.)

    Report and patch by David Rowley, simplified a bit by me. Back-patch to 9.6 where this code was added.

    Discussion: https://postgr.es/m/CAKJS1f8NO8oCDcxrteohG6O72uU1saEVT9qX=R8pENr5QWerXw@mail.gmail.com

* On Windows, make pg_dump use binary mode for compressed plain text output. (Tom Lane, 2017-06-19)

    The combination of -Z -Fp and output to stdout resulted in corrupted output data, because we left stdout in text mode, resulting in newline conversion being done on the compressed stream. Switch stdout to binary mode for this case, at the same place where we do it for non-text output formats.

    Report and patch by Kuntal Ghosh, tested by Ashutosh Sharma and Neha Sharma. Back-patch to all supported branches.

    Discussion: https://postgr.es/m/CAGz5QCJPvbBjXAmJuGx1B_41yVCetAJhp7rtaDf7XQGWuB1GSw@mail.gmail.com

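    A minimal standalone sketch of the kind of fix described (the helper name is invented; it is not the actual pg_dump change): on Windows the C runtime translates "\n" to "\r\n" on text-mode streams, so stdout must be switched to binary mode before compressed bytes are written through it.

        /* Hypothetical helper, not pg_dump code: put stdout into binary mode on Windows. */
        #include <stdio.h>

        #ifdef _WIN32
        #include <io.h>
        #include <fcntl.h>
        #endif

        static void
        set_stdout_binary(void)
        {
        #ifdef _WIN32
            /* stop the CRT from mangling 0x0A bytes inside the compressed stream */
            _setmode(_fileno(stdout), _O_BINARY);
        #endif
        }
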
* Fix leaking of small spilled subtransactions during logical decoding. (Andres Freund, 2017-06-18)

    When, during logical decoding, a transaction gets too big, its contents get spilled to disk. Not just the top-transaction gets spilled, but *also* all of its subtransactions, even if they're not that large themselves. Unfortunately we didn't clean up such small spilled subtransactions from disk.

    Fix that, by keeping better track of whether a transaction has been spilled to disk.

    Author: Andres Freund
    Reported-By: Dmitriy Sarafannikov, Fabrízio de Royes Mello
    Discussion: https://postgr.es/m/1457621358.355011041@f382.i.mail.ru
        https://postgr.es/m/CAFcNs+qNMhNYii4nxpO6gqsndiyxNDYV0S=JNq0v_sEE+9PHXg@mail.gmail.com
    Backpatch: 9.4-, where logical decoding was introduced

* Fix dependency when changing a function's argument/return type. (Heikki Linnakangas, 2017-06-16)

    When a new base type is created using the old-style procedure of first creating the input/output functions with "opaque" in place of the base type, the "opaque" argument/return type is changed to the final base type, on CREATE TYPE. However, we did not create a pg_depend record when doing that, so the functions were left not depending on the type.

    Fixes bug #14706, reported by Karen Huddleston.

    Discussion: https://www.postgresql.org/message-id/20170614232259.1424.82774@wrigleys.postgresql.org

* Fix low-probability leaks of PGresult objects in the backend. (Tom Lane, 2017-06-15)

    We had three occurrences of essentially the same coding pattern wherein we tried to retrieve a query result from a libpq connection without blocking. In the case where PQconsumeInput failed (typically indicating a lost connection), all three loops simply gave up and returned, forgetting to clear any previously-collected PGresult object. Since those are malloc'd not palloc'd, the oversight results in a process-lifespan memory leak.

    One instance, in libpqwalreceiver, is of little significance because the walreceiver process would just quit anyway if its connection fails. But we might as well fix it. The other two instances, in postgres_fdw, are somewhat more worrisome because at least in principle the scenario could be repeated, allowing the amount of memory leaked to build up to something worth worrying about. Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS calls, as well as other calls that could potentially elog(ERROR), providing another way to exit without having cleared the PGresult. Here we need to add PG_TRY logic similar to what exists in quite a few other places in postgres_fdw.

    Coverity noted the libpqwalreceiver bug; I found the other two cases by checking all calls of PQconsumeInput.

    Back-patch to all supported versions as appropriate (9.2 lacks postgres_fdw, so this is really quite unexciting for that branch).

    Discussion: https://postgr.es/m/22620.1497486981@sss.pgh.pa.us

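    For illustration, a hedged standalone sketch of the non-blocking retrieval loop being described (the function name and structure are assumed, not the postgres_fdw code); the point is that any previously collected PGresult gets PQclear'd before bailing out when PQconsumeInput() reports a lost connection.

        #include <stddef.h>
        #include <libpq-fe.h>

        static PGresult *
        get_result_nonblocking(PGconn *conn)
        {
            PGresult   *last_result = NULL;
            PGresult   *res;

            for (;;)
            {
                /* Pump the connection until a complete result is available. */
                while (PQisBusy(conn))
                {
                    if (!PQconsumeInput(conn))
                    {
                        /* Connection lost: free anything collected so far. */
                        PQclear(last_result);
                        return NULL;
                    }
                    /* A real caller would wait on PQsocket(conn) here, not spin. */
                }

                res = PQgetResult(conn);
                if (res == NULL)
                    break;          /* command complete; keep the last result */

                PQclear(last_result);
                last_result = res;
            }

            return last_result;
        }
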
* Assert that we don't invent relfilenodes or type OIDs in binary upgrade. (Tom Lane, 2017-06-12)

    During pg_upgrade's restore run, all relfilenode choices should be overridden by commands in the dump script. If we ever find ourselves choosing a relfilenode in the ordinary way, someone blew it. Likewise for pg_type OIDs. Since pg_upgrade might well succeed anyway, if there happens not to be a conflict during the regression test run, we need assertions here to keep us on the straight and narrow.

    We might someday be able to remove the assertion in GetNewRelFileNode, if pg_upgrade is rewritten to remove its assumption that old and new relfilenodes always match. But it's hard to see how to get rid of the pg_type OID constraint, since those OIDs are embedded in user tables in some cases.

    Back-patch as far as 9.5, because of the risk of back-patches breaking something here even if it works in HEAD. I'd prefer to go back further, but 9.4 fails both assertions due to get_rel_infos()'s use of a temporary table. We can't use the later-branch solution of a CTE for compatibility reasons (cf commit 5d16332e9), and it doesn't seem worth inventing some other way to do the query. (I did check, by dint of changing the Asserts to elog(WARNING), that there are no other cases of unwanted OID assignments during 9.4's regression test run.)

    Discussion: https://postgr.es/m/19785.1497215827@sss.pgh.pa.us

* Take PROVE_FLAGS from the command line but not the environment (Andrew Dunstan, 2017-06-10)

    This reverts commit 56b6ef893fee9e9bf47d927a02f4d1ea911f4d9c and instead makes vcregress.pl parse out PROVE_FLAGS from a command line argument when doing a TAP test, thus making it consistent with the makefile treatment.

    Discussion: https://postgr.es/m/c26a7416-2fb9-34ab-7991-618c922f896e%402ndquadrant.com

    Backpatch to 9.4 like previous patch.

* Clear auth context correctly when re-connecting after failed auth attempt. (Heikki Linnakangas, 2017-06-07)

    If authentication over an SSL connection fails, with sslmode=prefer, libpq will reconnect without SSL and retry. However, we did not clear the variables related to GSS, SSPI, and SASL authentication state, when reconnecting. Because of that, the second authentication attempt would always fail with a "duplicate GSS/SASL authentication request" error. pg_SSPI_startup did not check for duplicate authentication requests like the corresponding GSS and SASL functions, so with SSPI, you would leak some memory instead.

    Another way this could manifest itself, on version 10, is if you list multiple hostnames in the "host" parameter. If the first server requests Kerberos or SCRAM authentication, but it fails, the attempts to connect to the other servers will also fail with "duplicate authentication request" errors.

    To fix, move the clearing of authentication state from closePGconn to pgDropConnection, so that it is cleared also when re-connecting.

    Patch by Michael Paquier, with some kibitzing by me.

    Backpatch down to 9.3. 9.2 has the same bug, but the code around closing the connection is somewhat different, so that this patch doesn't apply. To fix this in 9.2, I think we would need to back-port commit 210eb9b743 first, and then apply this patch. However, given that we only bumped into this in our own testing, we haven't heard any reports from users about this, and that 9.2 will be end-of-lifed in a couple of months anyway, it doesn't seem worth the risk and trouble.

    Discussion: https://www.postgresql.org/message-id/CAB7nPqRuOUm0MyJaUy9L3eXYJU3AKCZ-0-03=-aDTZJGV4GyWw@mail.gmail.com

* Unify SIGHUP handling between normal and walsender backends. (Andres Freund, 2017-06-05)

    Because walsender and normal backends share the same main loop it's problematic to have two different flag variables, set in signal handlers, indicating a pending configuration reload. Only certain walsender commands reach code paths checking for the variable (START_[LOGICAL_]REPLICATION, CREATE_REPLICATION_SLOT ... LOGICAL, notably not base backups).

    This is a bug present since the introduction of walsender, but has gotten worse in releases since then which allow walsender to do more.

    A later patch, not slated for v10, will similarly unify SIGHUP handling in other types of processes as well.

    Author: Petr Jelinek, Andres Freund
    Reviewed-By: Michael Paquier
    Discussion: https://postgr.es/m/20170423235941.qosiuoyqprq4nu7v@alap3.anarazel.de
    Backpatch: 9.2-, bug is present since 9.0

* Prevent possibility of panics during shutdown checkpoint. (Andres Freund, 2017-06-05)

    When the checkpointer writes the shutdown checkpoint, it checks afterwards whether any WAL has been written since it started and throws a PANIC if so. At that point, only walsenders are still active, so one might think this could not happen, but walsenders can also generate WAL, for instance in BASE_BACKUP and logical decoding related commands (e.g. via hint bits). So they can trigger this panic if such a command is run while the shutdown checkpoint is being written.

    To fix this, divide the walsender shutdown into two phases. First, checkpointer, itself triggered by postmaster, sends a PROCSIG_WALSND_INIT_STOPPING signal to all walsenders. If the backend is idle or runs an SQL query, this causes the backend to shut down; if logical replication is in progress, all existing WAL records are processed, followed by a shutdown. Otherwise this causes the walsender to switch to the "stopping" state. In this state, the walsender will reject any further replication commands. The checkpointer begins the shutdown checkpoint once all walsenders are confirmed as stopping. When the shutdown checkpoint finishes, the postmaster sends us SIGUSR2. This instructs walsender to send any outstanding WAL, including the shutdown checkpoint record, wait for it to be replicated to the standby, and then exit.

    Author: Andres Freund, based on an earlier patch by Michael Paquier
    Reported-By: Fujii Masao, Andres Freund
    Reviewed-By: Michael Paquier
    Discussion: https://postgr.es/m/20170602002912.tqlwn4gymzlxpvs2@alap3.anarazel.de
    Backpatch: 9.4, where logical decoding was introduced

* Have walsenders participate in procsignal infrastructure. (Andres Freund, 2017-06-05)

    The non-participation in procsignal was a problem for both changes in master, e.g. parallelism not working for normal statements run in walsender backends, and older branches, e.g. recovery conflicts and catchup interrupts not working for logical decoding walsenders.

    This commit thus replaces the previous WalSndXLogSendHandler with procsignal_sigusr1_handler. In branches since db0f6cad48 that can lead to additional SetLatch calls, but that only rarely seems to make a difference.

    Author: Andres Freund
    Reviewed-By: Michael Paquier
    Discussion: https://postgr.es/m/20170421014030.fdzvvvbrz4nckrow@alap3.anarazel.de
    Backpatch: 9.4, earlier commits don't seem to benefit sufficiently

* Fix thinko in previous openssl change (Andrew Dunstan, 2017-06-05)

* Fix record length computation in pg_waldump/xlogdump. (Andres Freund, 2017-06-05)

    The current method of computing the record length (excluding the length of full-page images) has been wrong since the WAL format was revamped in 2c03216d831160bedd72d45f712601b6f7d03f1c. Only the main record's length was counted, but that can be significantly too little if there's data associated with further blocks.

    Fix by computing the record length as total_length - fpi_length.

    Reported-By: Chen Huajun
    Bug: #14687
    Reviewed-By: Heikki Linnakangas
    Discussion: https://postgr.es/m/20170603165939.1436.58887@wrigleys.postgresql.org
    Backpatch: 9.5-

* Find openssl lib files in right directory for MSVC (Andrew Dunstan, 2017-06-05)

    Some openssl builds put their lib files in a VC subdirectory, others do not. Cater for both cases.

    Backpatch to all live branches.

    From an offline discussion with Leonardo Cecchi.

* Assorted translatable string fixes (Alvaro Herrera, 2017-06-04)

    Mark our rusage reportage string translatable; remove quotes from type names; unify formatting of very similar messages.

* Allow parallelism in COPY (query) TO ...; (Andres Freund, 2017-06-02)

    Previously this was not allowed, as copy.c didn't set the CURSOR_OPT_PARALLEL_OK flag when planning the query. Set it.

    While the lack of parallel query for COPY isn't strictly speaking a bug, it does prevent parallelism from being used in a facility commonly used to run long running queries. Thus backpatch to 9.6.

    Author: Andres Freund
    Discussion: https://postgr.es/m/20170531231958.ihanapplorptykzm@alap3.anarazel.de
    Backpatch: 9.6, where parallelism was introduced.

* Always use -fPIC, not -fpic, when building shared libraries with gcc. (Tom Lane, 2017-06-01)

    On some platforms, -fpic fails for sufficiently large shared libraries. We've mostly not hit that boundary yet, but there are some extensions such as Citus and pglogical where it's becoming a problem.

    A bit of research suggests that the penalty for -fPIC is small, in the single-digit-percentage range --- and there's none at all on popular platforms such as x86_64. So let's just default to -fPIC everywhere and provide one less thing for extension developers to worry about.

    Per complaint from Christoph Berg. Back-patch to all supported branches. (I did not bother to touch the recently-removed Makefiles for sco and unixware in the back branches, though. We'd have no way to test that it doesn't break anything on those platforms.)

    Discussion: https://postgr.es/m/20170529155850.qojdfrwkkqnjb3ap@msg.df7cb.de

* Fix wording in amvalidate error messages (Alvaro Herrera, 2017-05-30)

    Remove some gratuitous message differences by making the AM name previously embedded in each message be a %s instead. While at it, get rid of terminology that's unclear and unnecessary in one message.

    Discussion: https://postgr.es/m/20170523001557.bq2hbq7hxyvyw62q@alvherre.pgsql

* Try to ensure that stats collector's receive buffer size is at least 100KB. (Tom Lane, 2017-05-29)

    Back-patch of commit 8b0b6303e991079726e83d17401405e94da11564.

    Discussion: https://postgr.es/m/22173.1494788088@sss.pgh.pa.us

* Prevent running pg_resetwal/pg_resetxlog against wrong-version data dirs. (Tom Lane, 2017-05-29)

    pg_resetwal (formerly pg_resetxlog) doesn't insist on finding a matching version number in pg_control, and that seems like an important thing to preserve since recovering from corrupt pg_control is a prime reason to need to run it. However, that means you can try to run it against a data directory of a different major version, which is at best useless and at worst disastrous. So as to provide some protection against that type of pilot error, inspect PG_VERSION at startup and refuse to do anything if it doesn't match. PG_VERSION is read-only after initdb, so it's unlikely to get corrupted, and even if it were corrupted it would be easy to fix by hand.

    This hazard has been there all along, so back-patch to all supported branches.

    Michael Paquier, with some kibitzing by me

    Discussion: https://postgr.es/m/f4b8eb91-b934-8a0d-b3cc-68f06e2279d1@enterprisedb.com

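    A rough standalone sketch of the kind of check described (the PG_VERSION file name is real; the helper and its error handling are invented for illustration): read PG_VERSION from the data directory and compare it with the major version the tool was built for before touching pg_control.

        #include <stdio.h>
        #include <string.h>
        #include <stdbool.h>

        static bool
        datadir_version_matches(const char *datadir, const char *expected_major)
        {
            char        path[1024];
            char        line[64];
            FILE       *fp;

            snprintf(path, sizeof(path), "%s/PG_VERSION", datadir);
            fp = fopen(path, "r");
            if (fp == NULL)
                return false;       /* treat a missing file as a mismatch */

            if (fgets(line, sizeof(line), fp) == NULL)
                line[0] = '\0';
            fclose(fp);

            /* strip the trailing newline before comparing, e.g. "9.6\n" vs "9.6" */
            line[strcspn(line, "\r\n")] = '\0';

            return strcmp(line, expected_major) == 0;
        }
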
* Allow NumericOnly to be "+ FCONST". (Tom Lane, 2017-05-29)

    The NumericOnly grammar production accepted ICONST, + ICONST, - ICONST, FCONST, and - FCONST, but for some reason not + FCONST. This led to strange inconsistencies like

        regression=# set random_page_cost = +4;
        SET
        regression=# set random_page_cost = 4000000000;
        SET
        regression=# set random_page_cost = +4000000000;
        ERROR: syntax error at or near "4000000000"

    (because 4000000000 is too large to be an ICONST). While there's no actual functional reason to need to write a "+", if we allow it for integers it seems like we should allow it for numerics too.

    It's been like that forever, so back-patch to all supported branches.

    Discussion: https://postgr.es/m/30908.1496006184@sss.pgh.pa.us

* Move autogenerated array types out of the way during ALTER ... RENAME. (Tom Lane, 2017-05-26)

    Commit 9aa3c782c added code to allow CREATE TABLE/CREATE TYPE to not fail when the desired type name conflicts with an autogenerated array type, by dint of renaming the array type out of the way. But I (tgl) overlooked that the same case arises in ALTER TABLE/TYPE RENAME. Fix that too. Back-patch to all supported branches.

    Report and patch by Vik Fearing, modified a bit by me

    Discussion: https://postgr.es/m/0f4ade49-4f0b-a9a3-c120-7589f01d1eb8@2ndquadrant.com

* Fix pg_dump to not emit invalid SQL for an empty operator class. (Tom Lane, 2017-05-26)

    If an operator class has no operators or functions, and doesn't need a STORAGE clause, we emitted "CREATE OPERATOR CLASS ... AS ;" which is syntactically invalid. Fix by forcing a STORAGE clause to be emitted anyway in this case.

    (At some point we might consider changing the grammar to allow CREATE OPERATOR CLASS without an opclass_item_list. But probably we'd want to omit the AS in that case, so that wouldn't fix this pg_dump issue anyway.)

    It's been like this all along, so back-patch to all supported branches.

    Daniel Gustafsson, tweaked by me to avoid a dangling-pointer bug

    Discussion: https://postgr.es/m/D9E5FC64-7A37-4F3D-B946-7E4FB468F88A@yesql.se

* Tighten checks for whitespace in functions that parse identifiers etc. (Tom Lane, 2017-05-24)

    This patch replaces isspace() calls with scanner_isspace() in functions that are likely to be presented with non-ASCII input. isspace() has the small advantage that it will correctly recognize no-break space in single-byte encodings (such as LATIN1); but it cannot work successfully for any multibyte character, and depending on platform it might return false positive results for some fragments of multibyte characters. That's disastrous for functions that are trying to discard whitespace between valid strings, as noted in bug #14662 from Justin Muise. Even treating no-break space as whitespace is pretty questionable for the usages touched here, because the core scanner would think it is an identifier character.

    Affected functions are parse_ident(), parseNameAndArgTypes (underlying regprocedurein() and siblings), SplitIdentifierString (used for parsing GUCs and options that are qualified names or lists of names), and SplitDirectoriesString (used for parsing GUCs that are lists of directories).

    All the functions adjusted here are parsing SQL identifiers and similar constructs, so it's reasonable to insist that their definition of whitespace match the core scanner. So we can hope that this won't cause many backwards-compatibility problems. I've left alone isspace() calls in places that aren't really expecting any non-ASCII input characters, such as float8in().

    Back-patch to all supported branches.

    Discussion: https://postgr.es/m/10129.1495302480@sss.pgh.pa.us

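    A minimal standalone illustration of the idea (a simplified stand-in, not the backend's scanner_isspace()): treat only the ASCII whitespace characters the SQL lexer itself recognizes as separators, so a byte that is part of a multibyte character can never be mistaken for whitespace.

        #include <stdbool.h>

        /* simplified stand-in for an ASCII-only whitespace test */
        static bool
        ascii_isspace(char ch)
        {
            return ch == ' ' || ch == '\t' || ch == '\n' ||
                   ch == '\r' || ch == '\f';
        }
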
* Update URLs in pgindent source and README (Magnus Hagander, 2017-05-23)

    Website and buildfarm are https, not http, and the ftp protocol will be shut down shortly.

* Fix precision and rounding issues in money multiplication and division. (Tom Lane, 2017-05-21)

    The cash_div_intX functions applied rint() to the result of the division. That's not merely useless (because the result is already an integer) but it causes precision loss for values larger than 2^52 or so, because of the forced conversion to float8.

    On the other hand, the cash_mul_fltX functions neglected to apply rint() to their multiplication results, thus possibly causing off-by-one outputs.

    Per C standard, arithmetic between any integral value and a float value is performed in float format. Thus, cash_mul_flt4 and cash_div_flt4 produced answers good to only about six digits, even when the float value is exact. We can improve matters noticeably by widening the float inputs to double. (It's tempting to consider using "long double" arithmetic if available, but that's probably too much of a stretch for a back-patched fix.)

    Also, document that cash_div_intX operators truncate rather than round.

    Per bug #14663 from Richard Pistole. Back-patch to all supported branches.

    Discussion: https://postgr.es/m/22403.1495223615@sss.pgh.pa.us

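    An assumed-name sketch of the arithmetic described above (not the actual cash_mul_flt4 code): widen the float factor to double before multiplying the 64-bit cash value, and round the result instead of truncating, so low-order digits are not lost.

        #include <math.h>
        #include <stdint.h>

        static int64_t
        cash_times_float4(int64_t cash, float factor)
        {
            /* do the multiply in double precision, not single */
            double      result = (double) cash * (double) factor;

            /* round to the nearest integer instead of truncating */
            return (int64_t) rint(result);
        }
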
* Fix typo in comment. (Heikki Linnakangas, 2017-05-18)

    Daniel Gustafsson

* Make psql handle EOF during COPY FROM STDIN properly on all platforms. (Tom Lane, 2017-05-17)

    When stdin is a terminal, it's possible to end a COPY FROM STDIN with a keyboard EOF signal (typically control-D), and then keep on issuing SQL commands. One would expect another COPY FROM STDIN to work as well, but on some platforms it did not. This turns out to be because we were not resetting the stream's feof() flag, and BSD-ish versions of fread() and fgets() won't attempt to read more data if that's set. The misbehavior is observed on BSDen (including macOS), but not Linux, Windows, or SysV-ish Unixen, which makes this a portability bug not just a missing feature.

    Add a clearerr() call to fix the behavior, and improve the prompt that's issued when copying from a TTY to mention that EOF signals work.

    It's been like this forever, so back-patch to all supported branches.

    Thomas Munro

    Discussion: https://postgr.es/m/CAEepm=0MCGfYf=JAMiYhO6JPtv9-3ZfBo8fcGeCZ8oMzaw+Z+Q@mail.gmail.com

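    A simplified sketch of the mechanism (not psql's actual copy.c code): after the user ends a terminal COPY FROM STDIN with control-D, clear the stream's EOF flag so later reads from the same stdin work again on BSD-derived platforms.

        #include <stdio.h>

        static void
        finish_terminal_copy(FILE *copystream)
        {
            /* reset the sticky EOF indicator so subsequent fgets() calls read again */
            if (feof(copystream))
                clearerr(copystream);
        }
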
* Fix new warnings from GCC 7 (Peter Eisentraut, 2017-05-16)

    This addresses the new warning types -Wformat-truncation and -Wformat-overflow that are part of -Wall, via -Wformat, in GCC 7.

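    As a hypothetical illustration (not code from the patch), this is the shape of code that tends to draw -Wformat-truncation: GCC 7 can see that the formatted result may not fit the destination buffer, and the warning typically goes away once the buffer is sized for the worst case or the snprintf() return value is checked.

        #include <stdio.h>

        void
        demo(const char *base, int ordinal)
        {
            char        small[8];
            char        big[64];

            /* likely warns: "%s_%d" can need far more than 8 bytes */
            snprintf(small, sizeof(small), "%s_%d", base, ordinal);

            /* typical fix: size the buffer for the worst case (or check the return value) */
            snprintf(big, sizeof(big), "%s_%d", base, ordinal);
        }
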
* In SSL tests, don't scribble on permissions of a repo file. (Tom Lane, 2017-05-15)

    Modifying the permissions of a persistent file isn't really much nicer than modifying its contents, even if git doesn't currently notice it. Adjust the test script to make a copy and set the permissions of that instead.

    Michael Paquier, per a gripe from me. Back-patch to 9.5 where these tests were introduced.

    Discussion: https://postgr.es/m/14836.1494885946@sss.pgh.pa.us

* Fix unsafe reference into relcache in constructed CommentStmt. (Tom Lane, 2017-05-15)

    The CommentStmt made by RebuildConstraintComment() has to pstrdup the relation name, else it will contain a dangling pointer after that relcache entry is flushed. (I'm less sure that pstrdup'ing conname is necessary, but let's be safe.) Failure to do this leads to weird errors or crashes, as reported by Marko Elezovic.

    Bug introduced by commit e42375fc8, so back-patch to 9.5 as that was.

    Fix by David Rowley, regression test by Michael Paquier

    Discussion: https://postgr.es/m/DB6PR03MB30775D58E732D4EB0C13725B9AE00@DB6PR03MB3077.eurprd03.prod.outlook.com

* Make stats regression test more robust in the face of parallel query. (Tom Lane, 2017-05-14)

    Commit 60690a6fe attempted to fix the wait_for_stats() function in this test so that it would wait properly if the tenk2 scans were done in parallel workers instead of the main session (typically as a consequence of force_parallel_mode being turned on). However, we made it test for whether the main session's actions had been reported by looking for inserts on 'trunc_stats_test'. This is the Wrong Thing, because those aren't the last updates we expect the main session to do.

    As shown by recent failures on buildfarm member frogmouth, it's entirely likely that the trunc_stats_test updates will be reported in a separate message from later updates, which means there can be a window in which wait_for_stats() will exit but not all the updates we are expecting to see will have arrived. We should test for the last updates we're expecting, namely those on 'trunc_stats_test4'.

    Unfortunately, I doubt that this explains frogmouth's failures, because there's no reason to believe that it's running the tenk2 queries in parallel. Still, the test is wrong on its own terms, so fix and back-patch to 9.6 where parallel query came in.

* Avoid superfluous work for commits during logical slot creation. (Andres Freund, 2017-05-13)

    Before 955a684e0401 logical decoding snapshot maintenance needed to cope with transactions it might not have seen in their entirety. For such transactions we had to assume they modified the catalog (it could have happened before we were watching), and thus a new snapshot had to be built, and distributed to concurrently running transactions.

    That's problematic because building a new snapshot isn't that cheap, especially as the array of committed transactions needs to be sorted. When creating a slot on a server with a lot of transactions, this could make logical slot creation infeasibly expensive.

    After 955a684e0401 there's no need to deal with transactions that aren't guaranteed to be fully observable. That allows us to avoid building snapshots for transactions that haven't modified the catalog, even before reaching consistency.

    While this isn't necessarily a bugfix, slot creation being impossible in some production workloads is severe enough to warrant backpatching.

    Author: Andres Freund, based on a quite different patch from Petr Jelinek
    Analyzed-By: Petr Jelinek
    Reviewed-By: Petr Jelinek
    Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
    Backpatch: 9.4-, where logical decoding has been introduced

* Fix race condition leading to hanging logical slot creation. (Andres Freund, 2017-05-13)

    The snapshot assembly during the creation of logical slots relied on waiting for transactions in xl_running_xacts to end, by checking for their commit/abort records.

    Unfortunately, despite locking, it is possible to see an xl_running_xacts record listing transactions as running that have already WAL-logged a commit/abort record, as the locking just prevents the ProcArray from being adjusted, and the commit record has to be logged first. That led to either delayed or hanging snapshot creation, because snapbuild.c would wait "forever" to see commit/abort records for some transactions. That hang resolved only if an xl_running_xacts record without any running transactions happened to be logged, far from certain on a busy server.

    It's impractical to prevent that via more heavyweight locking; the likelihood of deadlocks and significantly increased contention would be too big.

    Instead change the initial snapshot creation to be solely based on tracking the oldest running transaction via xl_running_xacts->oldestRunningXid - that actually ends up significantly simplifying the code. That has two disadvantages:

    1) Because we cannot fully "trust" the contents of xl_running_xacts, we cannot use it to build the initial snapshot. Instead we have to wait twice for all running transactions to finish.

    2) Previously a slot, unless the race occurred, could be created once all transactions perceived as running (based on commit/abort records) had finished; now we have to wait for the next xl_running_xacts record. To address that, trigger logging a new xl_running_xacts record from within snapbuild.c exactly when necessary.

    Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the stupider ideas of a certain Mr Freund, so we can't change it in a minor release. As this is going to be backpatched, we have to hack around a bit to keep on-disk compatibility. A later commit will rejigger that on master.

    Author: Andres Freund, based on a quite different patch from Petr Jelinek
    Analyzed-By: Petr Jelinek
    Reviewed-By: Petr Jelinek
    Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
    Backpatch: 9.4-, where logical decoding has been introduced

* Avoid searching for callback functions in CallSyscacheCallbacks(). (Tom Lane, 2017-05-12)

    We have now grown enough registerable syscache-invalidation callback functions that the original assumption that there would be few of them is causing performance problems. In particular, let's fix things so that CallSyscacheCallbacks doesn't have to search the whole array to find which callback(s) to invoke for a given cache ID. Preserve the original behavior that callbacks are called in order of registration, just in case there's someplace that depends on that (which I doubt).

    In support of this, export the number of syscaches from syscache.h. People could have found that out anyway from the enum, but adding a #define makes that much safer.

    This provides a useful additional speedup in Mathieu Fenniak's logical-decoding test case, although we're reaching the point of diminishing returns there. I think any further improvement will have to come from reducing the number of cache invalidations that are triggered in the first place. Still, we can hope that this change gives some incremental benefit for all invalidation scenarios.

    Back-patch to 9.4 where logical decoding was introduced.

    Discussion: https://postgr.es/m/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com

* Reduce initial size of RelfilenodeMapHash. (Tom Lane, 2017-05-12)

    A test case provided by Mathieu Fenniak shows that hash_seq_search'ing this hashtable can consume a very significant amount of overhead during logical decoding, which triggers frequent cache invalidation. Testing suggests that the actual population of the hashtable is often no more than a few dozen entries, so we can cut the overhead just by dropping the initial number of buckets down from 1024 --- I chose to cut it to 64. (In situations where we do have a significant number of entries, we shouldn't get any real penalty from doing this, as the dynahash.c code will resize the hashtable automatically.)

    This gives a further factor-of-two savings in Mathieu's test case. That may be overly optimistic for real-world benefit, as real cases may have larger average table populations, but it's hard to see it turning into a net negative for any workload.

    Back-patch to 9.4 where relfilenodemap.c was introduced.

    Discussion: https://postgr.es/m/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com

* Avoid searching for the target catcache in CatalogCacheIdInvalidate. (Tom Lane, 2017-05-12)

    A test case provided by Mathieu Fenniak shows that the initial search for the target catcache in CatalogCacheIdInvalidate consumes a very significant amount of overhead in cases where cache invalidation is triggered but has little useful work to do. There is no good reason for that search to exist at all, as the index array maintained by syscache.c allows direct lookup of the catcache from its ID. We just need a frontend function in syscache.c, matching the division of labor for most other cache-accessing operations.

    While there's more that can be done in this area, this patch alone reduces the runtime of Mathieu's example by 2X. We can hope that it offers some useful benefit in other cases too, although usually cache invalidation overhead is not such a striking fraction of the total runtime.

    Back-patch to 9.4 where logical decoding was introduced. It might be worth going further back, but presently the only case we know of where cache invalidation is really a significant burden is in logical decoding. Also, older branches have fewer catcaches, reducing the possible benefit.

    (Note: although this nominally changes catcache's API, we have always documented CatalogCacheIdInvalidate as a private function, so I would have little sympathy for an external module calling it directly. So backpatching should be fine.)

    Discussion: https://postgr.es/m/CAHoiPjzea6N0zuCi=+f9v_j94nfsy6y8SU7-=bp4=7qw6_i=Rg@mail.gmail.com

* Honor PROVE_FLAGS environment setting (Andrew Dunstan, 2017-05-12)

    On MSVC builds and on back branches that means removing the hardcoded --verbose setting. On master for Unix that means removing the empty setting in the global Makefile so that the value can be acquired from the environment as well as from the make arguments.

    Backpatch to 9.4 where we introduced TAP tests

* Add libxml2 include path for MSVC builds (Andrew Dunstan, 2017-05-12)

    On Unix this path is detected via the use of xml2-config, but that's not available on Windows. This means that users building with libxml2 will no longer need to move things around from the standard libxml2 installation for MSVC builds.

    Backpatch to all live branches.

* Increase MAX_SYSCACHE_CALLBACKS to provide more room for extensions. (Tom Lane, 2017-05-11)

    Increase from the historical value of 32 to 64. We are up to 31 callers of CacheRegisterSyscacheCallback() in HEAD, so if they were all to be exercised in one process that would leave only one slot for add-on modules. It's probably not possible for that to happen, but still we clearly need more daylight here. (At some point it might be worth making the array dynamically resizable; but since we've never heard a complaint of "out of syscache_callback_list slots" happening in the field, I doubt it's worth it yet.)

    Back-patch as far as 9.4, which is where we increased the companion limit MAX_RELCACHE_CALLBACKS (cf commit f01d1ae3a). It's not as urgent in released branches, which have only a couple dozen call sites in core, but it still seems that somebody might hit the limit before these branches die.

    Discussion: https://postgr.es/m/12184.1494450131@sss.pgh.pa.us

* psql: Add missing translation markers (Peter Eisentraut, 2017-05-10)

* Ignore PQcancel errors properly (Alvaro Herrera, 2017-05-09)

    Add a (void) cast to all PQcancel() calls that purposefully don't check the return value, to keep compilers and static checkers happy.

    Per Coverity.

* Stamp 9.6.3. (tag: REL9_6_3) (Tom Lane, 2017-05-08)

* Further patch rangetypes_selfuncs.c's statistics slot management. (Tom Lane, 2017-05-08)

    Values in a STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM slot are float8, not of the type of the column the statistics are for. This bug is at least partly the fault of sloppy specification comments for get_attstatsslot()/free_attstatsslot(): the type OID they want is that of the stavalues entries, not of the underlying column. (I double-checked other callers and they seem to get this right.) Adjust the comments to be more correct.

    Per buildfarm.

    Security: CVE-2017-7484

* Fix possibly-uninitialized variable. (Tom Lane, 2017-05-08)

    Oversight in e2d4ef8de et al (my fault not Peter's). Per buildfarm.

    Security: CVE-2017-7484

* Match pg_user_mappings limits to information_schema.user_mapping_options. (Noah Misch, 2017-05-08)

    Both views replace the umoptions field with NULL when the user does not meet qualifications to see it. They used different qualifications, and pg_user_mappings' documented qualifications did not match its implemented qualifications. Make its documentation and implementation match those of user_mapping_options. One might argue for stronger qualifications, but these have long, documented tenure. pg_user_mappings has always exhibited this problem, so back-patch to 9.2 (all supported versions).

    Michael Paquier and Feike Steenbergen. Reviewed by Jeff Janes. Reported by Andrew Wheelwright.

    Security: CVE-2017-7486

* Restore PGREQUIRESSL recognition in libpq. (Noah Misch, 2017-05-08)

    Commit 65c3bf19fd3e1f6a591618e92eb4c54d0b217564 moved handling of the, already then, deprecated requiressl parameter into conninfo_storeval(). The default PGREQUIRESSL environment variable was, however, lost in the change, resulting in a potentially silent acceptance of a non-SSL connection even when it was set. Its documentation remained. Restore its implementation. Also amend the documentation to mark PGREQUIRESSL as deprecated for those not following the link to requiressl. Back-patch to 9.3, where commit 65c3bf1 first appeared.

    Behavior has been more complex when the user provides both deprecated and non-deprecated settings. Before commit 65c3bf1, libpq operated according to the first of these found:

        requiressl=1
        PGREQUIRESSL=1
        sslmode=*
        PGSSLMODE=*

    (Note requiressl=0 didn't override sslmode=*; it would only suppress PGREQUIRESSL=1 or a previous requiressl=1. PGREQUIRESSL=0 had no effect whatsoever.)

    Starting with commit 65c3bf1, libpq ignored PGREQUIRESSL, and order of precedence changed to this:

        last of requiressl=* or sslmode=*
        PGSSLMODE=*

    Starting now, adopt the following order of precedence:

        last of requiressl=* or sslmode=*
        PGSSLMODE=*
        PGREQUIRESSL=1

    This retains the 65c3bf1 behavior for connection strings that contain both requiressl=* and sslmode=*. It retains the 65c3bf1 change that either connection string option overrides both environment variables. For the first time, PGSSLMODE has precedence over PGREQUIRESSL; this avoids reducing security of "PGREQUIRESSL=1 PGSSLMODE=verify-full" configurations originating under v9.3 and later.

    Daniel Gustafsson

    Security: CVE-2017-7485

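    A hedged sketch of the precedence rules spelled out above (the helper name is invented; this is not libpq's fe-connect.c): a connection-string setting wins over both environment variables, PGSSLMODE wins over PGREQUIRESSL, and PGREQUIRESSL=1 alone behaves like sslmode=require.

        #include <stdlib.h>

        static const char *
        resolve_sslmode(const char *conninfo_setting)
        {
            const char *env_sslmode;
            const char *env_requiressl;

            /* 1. Anything given in the connection string takes precedence. */
            if (conninfo_setting != NULL)
                return conninfo_setting;

            /* 2. PGSSLMODE beats PGREQUIRESSL. */
            env_sslmode = getenv("PGSSLMODE");
            if (env_sslmode != NULL && env_sslmode[0] != '\0')
                return env_sslmode;

            /* 3. PGREQUIRESSL=1 is honored again, as the lowest-priority source. */
            env_requiressl = getenv("PGREQUIRESSL");
            if (env_requiressl != NULL && env_requiressl[0] == '1')
                return "require";

            /* 4. Otherwise fall back to libpq's default. */
            return "prefer";
        }
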