...
* Update line count totals for psql help displays. (Tom Lane, 2016-08-18)
  As usual, we've been pretty awful about maintaining these counts. They're not all that critical, perhaps, but let's get them right at release time. Also fix 9.5, which I notice is just as bad. It's probably wrong further back, but the lack of --help=foo options before 9.5 makes it too painful to count.
* In plpgsql, don't try to convert int2vector or oidvector to expanded array. (Tom Lane, 2016-08-18)
  These types are storage-compatible with real arrays, but they don't support toasting, so of course they can't support expansion either.
  Per bug #14289 from Michael Overmeyer. Back-patch to 9.5 where expanded arrays were introduced.
  Report: <20160818174414.1529.37913@wrigleys.postgresql.org>
* Update Windows timezone mapping from Windows 7 and 10 (Magnus Hagander, 2016-08-18)
  This adds a couple of new timezones that are present in the newer versions of Windows. It also updates comments to reference UTC rather than GMT, as this change has been made in Windows.
  Michael Paquier
* Fix deletion of speculatively inserted TOAST on conflict (Andres Freund, 2016-08-17)
  INSERT .. ON CONFLICT runs a pre-check of the possible conflicting constraints before performing the actual speculative insertion. If the inserted tuple included TOASTed columns, the ON CONFLICT condition was handled correctly when the conflict was caught by the pre-check; but if two transactions entered the speculative insertion phase at the same time, one would have to retry, and the code for aborting a speculative insertion did not delete the speculatively inserted TOAST datums correctly. TOAST deletion would fail with "ERROR: attempted to delete invisible tuple", because we attempted to remove the TOAST tuples using simple_heap_delete, which reasoned that the given tuples should not be visible to the command that wrote them.
  This commit changes heap_abort_speculative(), the function that aborts the conflicting tuple, to delete the associated TOAST datums itself via toast_delete. As before, the inserted TOAST rows are not marked as being speculative.
  This commit also adds an isolationtester spec test exercising the relevant code path. Unfortunately 9.5 cannot handle two waiting sessions, and thus cannot execute this test.
  Reported-By: Viren Negi, Oskari Saarenmaa
  Author: Oskari Saarenmaa, edited a bit by me
  Bug: #14150
  Discussion: <20160519123338.12513.20271@wrigleys.postgresql.org>
  Backpatch: 9.5, where ON CONFLICT was introduced
* Properly re-initialize replication slot shared memory upon creation. (Andres Freund, 2016-08-17)
  Slot creation did not clear all fields of the slot. At server start the memory is zeroed, but when a physical replication slot was created in the shared memory of a previously existing logical slot, catalog_xmin would not be cleared. That in turn would prevent vacuum from doing its duties.
  To fix, initialize all the fields. To make similar future bugs less likely, zero all of ReplicationSlotPersistentData, and re-order the rest of the initialization to be in struct member order.
  Analysis: Andrew Gierth
  Reported-By: md@chewy.com
  Author: Michael Paquier
  Discussion: <20160705173502.1398.70934@wrigleys.postgresql.org>
  Backpatch: 9.4, where replication slots were introduced
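  For illustration only, not code from this commit: a minimal standalone C sketch of the initialization pattern the fix adopts, with invented struct and field names. Zeroing the whole persistent struct before assigning fields means a stale value (such as a catalog_xmin left behind by a previous occupant of the shared-memory slot) cannot survive re-creation.
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* stand-in for ReplicationSlotPersistentData; names are invented */
      typedef struct SlotPersistentData
      {
          char      name[64];
          uint32_t  catalog_xmin;   /* the field that previously stayed stale */
          int       persistency;
      } SlotPersistentData;

      static void
      slot_create(SlotPersistentData *slot, const char *name, int persistency)
      {
          /* wipe everything first, so no field can be forgotten ... */
          memset(slot, 0, sizeof(*slot));

          /* ... then assign the fields we actually want, in member order */
          snprintf(slot->name, sizeof(slot->name), "%s", name);
          slot->persistency = persistency;
      }

      int
      main(void)
      {
          SlotPersistentData slot;

          memset(&slot, 0, sizeof(slot));
          slot.catalog_xmin = 12345;      /* leftover from a prior logical slot */

          slot_create(&slot, "physical_slot", 0);
          printf("catalog_xmin after re-create: %u\n", (unsigned) slot.catalog_xmin);
          return 0;
      }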
* Fix -e option in contrib/intarray/bench/bench.pl. (Tom Lane, 2016-08-17)
  As implemented, -e ran an EXPLAIN but then discarded the output, which certainly seems pointless. Make it print to stdout instead. It's been like that forever, so back-patch to all supported branches.
  Daniel Gustafsson, reviewed by Andreas Scherbaum
  Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>
* Fix assorted places in psql to print version numbers >= 10 in new style. (Tom Lane, 2016-08-16)
  This is somewhat cosmetic, since as long as you know what you are looking at, "10.0" is a serviceable substitute for "10". But there is a potential for confusion between version numbers with minor numbers and those without --- we don't want people asking "why is psql saying 10.0 when my server is 10.2". Therefore, back-patch as far as practical, which turns out to be 9.3. I could have redone the patch to use fprintf(stderr) in place of psql_error(), but it seems more work than is warranted for branches that will be EOL or nearly so by the time v10 comes out.
  Although only psql seems to contain any code that needs this, I chose to put the support function into fe_utils, since it seems likely we'll need it in other client programs in future. (In 9.3-9.5, use dumputils.c, the predecessor of fe_utils/string_utils.c.)
  In HEAD, also fix the backend code that whines about loadable-library version mismatch. I don't see much need to back-patch that.
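  For illustration only, not psql's actual helper: a standalone C sketch of rendering a PQserverVersion()-style integer in the post-10 style, two parts for version 10 and later and three parts before that.
      #include <stdio.h>

      /* Render a server version number: two parts ("10", "10.2") for 10+,
       * three parts ("9.6.3") for older releases. */
      static void
      format_version(int vernum, int include_minor, char *buf, size_t buflen)
      {
          if (vernum >= 100000)
          {
              if (include_minor)
                  snprintf(buf, buflen, "%d.%d", vernum / 10000, vernum % 10000);
              else
                  snprintf(buf, buflen, "%d", vernum / 10000);
          }
          else
          {
              if (include_minor)
                  snprintf(buf, buflen, "%d.%d.%d",
                           vernum / 10000, (vernum / 100) % 100, vernum % 100);
              else
                  snprintf(buf, buflen, "%d.%d",
                           vernum / 10000, (vernum / 100) % 100);
          }
      }

      int
      main(void)
      {
          char buf[32];

          format_version(90603, 1, buf, sizeof(buf));   /* "9.6.3" */
          puts(buf);
          format_version(100002, 1, buf, sizeof(buf));  /* "10.2", not "10.0.2" */
          puts(buf);
          format_version(100000, 0, buf, sizeof(buf));  /* "10", not "10.0" */
          puts(buf);
          return 0;
      }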
* Remove bogus dependencies on NUMERIC_MAX_PRECISION. (Tom Lane, 2016-08-14)
  NUMERIC_MAX_PRECISION is a purely arbitrary constraint on the precision and scale you can write in a numeric typmod. It might once have had something to do with the allowed range of a typmod-less numeric value, but at least since 9.1 we've allowed, and documented that we allowed, any value that would physically fit in the numeric storage format; which is something over 100000 decimal digits, not 1000.
  Hence, get rid of numeric_in()'s use of NUMERIC_MAX_PRECISION as a limit on the allowed range of the exponent in scientific-format input. That was especially silly in view of the fact that you can enter larger numbers as long as you don't use 'e' to do it. Just constrain the value enough to avoid localized overflow, and let make_result be the final arbiter of what is too large. Likewise adjust ecpg's equivalent of this code.
  Also get rid of numeric_recv()'s use of NUMERIC_MAX_PRECISION to limit the number of base-NBASE digits it would accept. That created a dump/restore hazard for binary COPY without doing anything useful; the wire-format limit on number of digits (65535) is about as tight as we would want.
  In HEAD, also get rid of pg_size_bytes()'s unnecessary intimacy with what the numeric range limit is. That code doesn't exist in the back branches.
  Per gripe from Aravind Kumar. Back-patch to all supported branches, since they all contain the documentation claim about allowed range of NUMERIC (cf commit cabf5d84b).
  Discussion: <2895.1471195721@sss.pgh.pa.us>
* Fix inappropriate printing of never-measured times in EXPLAIN. (Tom Lane, 2016-08-12)
  EXPLAIN (ANALYZE, TIMING OFF) would print an elapsed time of zero for a trigger function: no measurement had been taken, but the field was printed anyway. This isn't what EXPLAIN does elsewhere, so suppress it.
  In the same vein, EXPLAIN (ANALYZE, BUFFERS) with a non-text output format would print buffer I/O timing numbers even when no measurement had been taken because track_io_timing is off. That seems not per policy, either, so change it.
  Back-patch to 9.2 where these features were introduced.
  Maksim Milyutin
  Discussion: <081c0540-ecaa-bd29-3fd2-6358f3b359a9@postgrespro.ru>
* Code cleanup in SyncRepWaitForLSN() (Simon Riggs, 2016-08-12)
  Commit 14e8803f1 removed LWLocks when accessing MyProc->syncRepState but didn't clean up the surrounding code and comments. Clean up and backpatch to 9.5, to keep the code similar.
  Julien Rouhaud, improved by suggestion from Michael Paquier, implemented trivially by myself.
* Correct TABLESAMPLE docs (Simon Riggs, 2016-08-12)
  The original wording was correct but did not convey the intended meaning.
  Reported by Patrik Wenger
* Add ID property to replication slots' sect2 (Alvaro Herrera, 2016-08-11)
* Fix busted Assert for CREATE MATVIEW ... WITH NO DATA. (Tom Lane, 2016-08-11)
  Commit 874fe3aea changed the command tag returned for CREATE MATVIEW/CREATE TABLE AS ... WITH NO DATA, but missed that there was code in spi.c that expected the command tag to always be "SELECT". Fortunately, the consequence was only an Assert failure, so this oversight should have no impact in production builds.
  Since this code path was evidently un-exercised, add a regression test.
  Per report from Shivam Saxena. Back-patch to 9.3, like the previous commit.
  Michael Paquier
  Report: <97218716-480B-4527-B5CD-D08D798A0C7B@dresources.com>
* Doc: write some documentation for adminpack. (Tom Lane, 2016-08-10)
  Previous contents of adminpack.sgml were rather far short of project norms. Not to mention being outright wrong about the signature of pg_file_read().
* Fix typo (Peter Eisentraut, 2016-08-09)
* Doc: clarify description of CREATE/ALTER FUNCTION ... SET FROM CURRENT. (Tom Lane, 2016-08-09)
  Per discussion with David Johnston.
* Stamp 9.5.4. [tag: REL9_5_4] (Tom Lane, 2016-08-08)
* Last-minute updates for release notes. (Tom Lane, 2016-08-08)
  Security: CVE-2016-5423, CVE-2016-5424
* Fix several one-byte buffer over-reads in to_number (Peter Eisentraut, 2016-08-08)
  Several places in NUM_numpart_from_char(), which is called from the SQL function to_number(text, text), could accidentally read one byte past the end of the input buffer (which comes from the input text datum and is not null-terminated).
  1. One leading space character would be skipped, but there was no check that the input was at least one byte long. This does not happen in practice, but for defensiveness, add a check anyway.
  2. Commit 4a3a1e2cf apparently accidentally duplicated the code that skips one space character (so that two spaces might be skipped), but there was no overflow check before skipping the second byte. Fix by removing that duplicate code.
  3. A logic error would allow a one-byte over-read when looking for a trailing sign (S) placeholder.
  In each case, the extra byte cannot be read out directly, but looking at it might cause a crash.
  The third item was discovered by Piotr Stefaniak, the first two were found and analyzed by Tom Lane and Peter Eisentraut.
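  For illustration only, not the patched formatting.c code: a standalone C sketch of the defensive pattern the fix applies, checking the remaining length before every read of a buffer that is not null-terminated.
      #include <stdio.h>

      /* Skip at most one leading space, never reading past buf + len. */
      static int
      skip_one_leading_space(const char *buf, int len)
      {
          int pos = 0;

          if (pos < len && buf[pos] == ' ')   /* length check BEFORE the read */
              pos++;

          /*
           * A second, unchecked "if (buf[pos] == ' ') pos++;" here is exactly
           * the kind of duplicated skip that can read one byte past the end.
           */
          return pos;
      }

      int
      main(void)
      {
          const char data[1] = {' '};          /* one byte, not null-terminated */

          printf("skipped %d byte(s)\n", skip_one_leading_space(data, 1));
          return 0;
      }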
* Translation updates (Peter Eisentraut, 2016-08-08)
  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: f1a1631efd7a51f9b1122f22cf688a3124bf1342
* Fix two errors with nested CASE/WHEN constructs. (Tom Lane, 2016-08-08)
  ExecEvalCase() tried to save a cycle or two by passing &econtext->caseValue_isNull as the isNull argument to its sub-evaluation of the CASE value expression. If that subexpression itself contained a CASE, then *isNull was an alias for econtext->caseValue_isNull within the recursive call of ExecEvalCase(), leading to confusion about whether the inner call's caseValue was null or not. In the worst case this could lead to a core dump due to dereferencing a null pointer. Fix by not assigning to the global variable until control comes back from the subexpression. Also, avoid using the passed-in isNull pointer transiently for evaluation of WHEN expressions. (Either one of these changes would have been sufficient to fix the known misbehavior, but it's clear now that each of these choices was in itself dangerous coding practice and best avoided. There do not seem to be any similar hazards elsewhere in execQual.c.)
  Also, it was possible for inlining of a SQL function that implements the equality operator used for a CASE comparison to result in one CASE expression's CaseTestExpr node being inserted inside another CASE expression. This would certainly result in wrong answers since the improperly nested CaseTestExpr would be caused to return the inner CASE's comparison value not the outer's. If the CASE values were of different data types, a crash might result; moreover such situations could be abused to allow disclosure of portions of server memory. To fix, teach inline_function to check for "bare" CaseTestExpr nodes in the arguments of a function to be inlined, and avoid inlining if there are any.
  Heikki Linnakangas, Michael Paquier, Tom Lane
  Report: https://github.com/greenplum-db/gpdb/pull/327
  Report: <4DDCEEB8.50602@enterprisedb.com>
  Security: CVE-2016-5423
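  For illustration only, with invented names rather than the executor's real data structures: a standalone C sketch of the aliasing hazard and of the fix's approach of evaluating into locals and publishing to the shared per-context fields only after the subexpression returns.
      #include <stdbool.h>
      #include <stdio.h>

      /* invented stand-in for the executor's per-query expression context */
      struct eval_context
      {
          int  case_value;
          bool case_value_isnull;   /* consulted by CaseTestExpr-like nodes */
      };

      /*
       * Unsafe variant (not shown): passing &ctx->case_value_isnull as the
       * output pointer of the recursive evaluation lets an inner CASE clobber
       * the flag the outer CASE is still relying on.  Safe shape: evaluate
       * into locals, publish only after control comes back.
       */
      static int
      eval_case(struct eval_context *ctx, int depth, bool *isnull)
      {
          int  value;
          bool value_isnull;

          if (depth == 0)
          {
              value = 42;
              value_isnull = false;
          }
          else
              value = eval_case(ctx, depth - 1, &value_isnull);  /* nested CASE */

          ctx->case_value = value;               /* publish only now */
          ctx->case_value_isnull = value_isnull;

          *isnull = value_isnull;
          return value;
      }

      int
      main(void)
      {
          struct eval_context ctx = {0, true};
          bool isnull;
          int  value = eval_case(&ctx, 2, &isnull);

          printf("value=%d isnull=%d\n", value, (int) isnull);
          return 0;
      }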
* Obstruct shell, SQL, and conninfo injection via database and role names. (Noah Misch, 2016-08-08)
  Due to simplistic quoting and confusion of database names with conninfo strings, roles with the CREATEDB or CREATEROLE option could escalate to superuser privileges when a superuser next ran certain maintenance commands. The new coding rule for PQconnectdbParams() calls, documented at conninfo_array_parse(), is to pass expand_dbname=true and wrap literal database names in a trivial connection string. Escape zero-length values in appendConnStrVal(). Back-patch to 9.1 (all supported versions).
  Nathan Bossart, Michael Paquier, and Noah Misch. Reviewed by Peter Eisentraut. Reported by Nathan Bossart.
  Security: CVE-2016-5424
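  For illustration only, not the actual appendConnStrVal() implementation: a standalone C sketch of libpq's documented conninfo quoting rules, wrapping a literal database name in single quotes and backslash-escaping embedded quotes and backslashes so it can be passed as dbname=<value> with expand_dbname enabled.
      #include <stdio.h>
      #include <string.h>

      /* Append a single-quoted conninfo value to an existing "key=" prefix. */
      static void
      append_conninfo_value(char *out, size_t outlen, const char *val)
      {
          size_t      n = strlen(out);
          const char *p;

          if (n + 1 < outlen)
              out[n++] = '\'';            /* always quote; '' covers empty values */
          for (p = val; *p != '\0' && n + 2 < outlen; p++)
          {
              if (*p == '\'' || *p == '\\')
                  out[n++] = '\\';        /* escape with backslash inside quotes */
              out[n++] = *p;
          }
          if (n + 1 < outlen)
              out[n++] = '\'';            /* closing quote */
          out[n] = '\0';
      }

      int
      main(void)
      {
          char conninfo[256] = "dbname=";

          append_conninfo_value(conninfo, sizeof(conninfo), "evil'db\\name");
          printf("%s\n", conninfo);       /* dbname='evil\'db\\name' */
          return 0;
      }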
* Promote pg_dumpall shell/connstr quoting functions to src/fe_utils. (Noah Misch, 2016-08-08)
  Rename these newly-extern functions with terms more typical of their new neighbors. No functional changes; a subsequent commit will use them in more places. Back-patch to 9.1 (all supported versions). Back branches lack src/fe_utils, so instead rename the functions in place; the subsequent commit will copy them into the other programs using them.
  Security: CVE-2016-5424
* Fix Windows shell argument quoting. (Noah Misch, 2016-08-08)
  The incorrect quoting may have permitted arbitrary command execution. At a minimum, it gave broader control over the command line to actors supposed to have control over a single argument. Back-patch to 9.1 (all supported versions).
  Security: CVE-2016-5424
* Reject, in pg_dumpall, names containing CR or LF. (Noah Misch, 2016-08-08)
  These characters prematurely terminate Windows shell command processing, causing the shell to execute a prefix of the intended command. The chief alternative to rejecting these characters was to bypass the Windows shell with CreateProcess(), but the ability to use such names has little value. Back-patch to 9.1 (all supported versions).
  This change formally revokes support for these characters in database names and role names. Don't document this; the error message is self-explanatory, and too few users would benefit. A future major release may forbid creation of databases and roles so named. For now, check only at known weak points in pg_dumpall. Future commits will, without notice, reject affected names from other frontend programs.
  Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and --file arguments. Unlike the effects on role name arguments and database names, this does not reflect a broad policy change. A migration to CreateProcess() could lift these two restrictions.
  Reviewed by Peter Eisentraut.
  Security: CVE-2016-5424
* Field conninfo strings throughout src/bin/scripts. (Noah Misch, 2016-08-08)
  These programs nominally accepted conninfo strings, but they would proceed to use the original dbname parameter as though it were an unadorned database name. This caused "reindexdb dbname=foo" to issue an SQL command that always failed, and other programs printed a conninfo string in error messages that purported to print a database name. Fix both problems by using PQdb() to retrieve actual database names. Continue to print the full conninfo string when reporting a connection failure. It is informative there, and if the database name is the sole problem, the server-side error message will include the name. Beyond those user-visible fixes, this allows a subsequent commit to synthesize and use conninfo strings without that implementation detail leaking into messages. As a side effect, the "vacuuming database" message now appears after, not before, the connection attempt. Back-patch to 9.1 (all supported versions).
  Reviewed by Michael Paquier and Peter Eisentraut.
  Security: CVE-2016-5424
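  For illustration only (assumes libpq is available; build with -lpq): a C sketch of the approach, accepting either a bare database name or a conninfo string but deriving the name used in subsequent messages from PQdb() instead of echoing the raw argument.
      #include <stdio.h>
      #include <stdlib.h>
      #include <libpq-fe.h>

      int
      main(int argc, char **argv)
      {
          const char *target = (argc > 1) ? argv[1] : "dbname=postgres";
          PGconn     *conn = PQconnectdb(target);

          if (PQstatus(conn) != CONNECTION_OK)
          {
              /* on failure, the full string is the informative thing to print */
              fprintf(stderr, "could not connect using \"%s\": %s",
                      target, PQerrorMessage(conn));
              PQfinish(conn);
              return 1;
          }

          /* once connected, talk about the actual database, not the argument */
          printf("operating on database \"%s\"\n", PQdb(conn));

          PQfinish(conn);
          return 0;
      }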
* Introduce a psql "\connect -reuse-previous=on|off" option. (Noah Misch, 2016-08-08)
  The decision to reuse values of parameters from a previous connection has been based on whether the new target is a conninfo string. Add this means of overriding that default. This feature arose as one component of a fix for security vulnerabilities in pg_dump, pg_dumpall, and pg_upgrade, so back-patch to 9.1 (all supported versions). In 9.3 and later, comment paragraphs that required update had already-incorrect claims about behavior when no connection is open; fix those problems.
  Security: CVE-2016-5424
* Sort out paired double quotes in \connect, \password and \crosstabview. (Noah Misch, 2016-08-08)
  In arguments, these meta-commands wrongly treated each pair as closing the double quoted string. Make the behavior match the documentation. This is a compatibility break, but I more expect to find software with untested reliance on the documented behavior than software reliant on today's behavior. Back-patch to 9.1 (all supported versions).
  Reviewed by Tom Lane and Peter Eisentraut.
  Security: CVE-2016-5424
* Release notes for 9.5.4, 9.4.9, 9.3.14, 9.2.18, 9.1.23. (Tom Lane, 2016-08-07)
* Fix misestimation of n_distinct for a nearly-unique column with many nulls. (Tom Lane, 2016-08-07)
  If ANALYZE found no repeated non-null entries in its sample, it set the column's stadistinct value to -1.0, intending to indicate that the entries are all distinct. But what this value actually means is that the number of distinct values is 100% of the table's rowcount, and thus it was overestimating the number of distinct values by however many nulls there are. This could lead to very poor selectivity estimates, as for example in a recent report from Andreas Joseph Krogh. We should discount the stadistinct value by whatever we've estimated the nulls fraction to be. (That is what will happen if we choose to use a negative stadistinct for a column that does have repeated entries, so this code path was just inconsistent.)
  In addition to fixing the stadistinct entries stored by several different ANALYZE code paths, adjust the logic where get_variable_numdistinct() forces an "all distinct" estimate on the basis of finding a relevant unique index. Unique indexes don't reject nulls, so there's no reason to assume that the null fraction doesn't apply.
  Back-patch to all supported branches. Back-patching is a bit of a judgment call, but this problem seems to affect only a few users (else we'd have identified it long ago), and it's bad enough when it does happen that destabilizing plan choices in a worse direction seems unlikely.
  Patch by me, with documentation wording suggested by Dean Rasheed
  Report: <VisenaEmail.26.df42f82acae38a58.156463942b8@tc7-visena>
  Discussion: <16143.1470350371@sss.pgh.pa.us>
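  A small standalone illustration (numbers invented) of the discount described above: stadistinct = -1.0 means "100% of rows are distinct", so with a high null fraction it must be scaled by the non-null fraction before being turned into a distinct-value count.
      #include <stdio.h>

      int
      main(void)
      {
          double reltuples = 1000000.0;   /* rows in the table */
          double nullfrac  = 0.90;        /* 90% of the column is NULL */
          double stadistinct_old = -1.0;                     /* "all rows distinct" */
          double stadistinct_new = -1.0 * (1.0 - nullfrac);  /* discounted by nulls */

          printf("old estimate: %.0f distinct values\n", -stadistinct_old * reltuples);
          printf("new estimate: %.0f distinct values\n", -stadistinct_new * reltuples);
          return 0;
      }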
* Don't propagate a null subtransaction snapshot up to parent transaction. (Tom Lane, 2016-08-07)
  This oversight could cause logical decoding to fail to decode an outer transaction containing changes, if a subtransaction had an XID but no actual changes. Per bug #14279 from Marko Tiikkaja. Patch by Marko based on analysis by Andrew Gierth.
  Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>
* In B-tree page deletion, clean up properly after page deletion failure. (Tom Lane, 2016-08-06)
  In _bt_unlink_halfdead_page(), we might fail to find an immediate left sibling of the target page, perhaps because of corruption of the page sibling links. The code intends to cope with this by just abandoning the deletion attempt; but what actually happens is that it fails outright due to releasing the same buffer lock twice. (And error recovery masks a second problem, which is possible leakage of a pin on another page.)
  Seems to have been introduced by careless refactoring in commit efada2b8e. Since there are multiple cases to consider, let's make releasing the buffer lock in the failure case the responsibility of _bt_unlink_halfdead_page() not its caller.
  Also, avoid fetching the leaf page's left-link again after we've dropped lock on the page. This is probably harmless, but it's not exactly good coding practice.
  Per report from Kyotaro Horiguchi. Back-patch to 9.4 where the faulty code was introduced.
  Discussion: <20160803.173116.111915228.horiguchi.kyotaro@lab.ntt.co.jp>
* Teach libpq to decode server version correctly from future servers. (Tom Lane, 2016-08-05)
  Beginning with the next development cycle, PG servers will report two-part not three-part version numbers. Fix libpq so that it will compute the correct numeric representation of such server versions for reporting by PQserverVersion(). It's desirable to get this into the field and back-patched ASAP, so that older clients are more likely to understand the new server version numbering by the time any such servers are in the wild.
  (The results with an old client would probably not be catastrophic anyway for a released server; for example "10.1" would be interpreted as 100100 which would be wrong in detail but would not likely cause an old client to misbehave badly. But "10devel" or "10beta1" would result in sversion==0 which at best would result in disabling all use of modern features.)
  Extracted from a patch by Peter Eisentraut; comments added by me
  Patch: <802ec140-635d-ad86-5fdf-d3af0e260c22@2ndquadrant.com>
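  For illustration only, not libpq's actual code: a standalone C sketch of mapping a reported server_version string onto the PQserverVersion() integer convention, treating a two-part "10.x" number correctly and giving "10devel" a usable value instead of zero.
      #include <stdio.h>
      #include <stdlib.h>

      static int
      parse_server_version(const char *str)
      {
          char *end;
          long  major = strtol(str, &end, 10);
          long  minor = 0,
                rev = 0;

          if (end == str)
              return 0;                       /* no leading number at all */
          if (*end == '.')
              minor = strtol(end + 1, &end, 10);

          if (major >= 10)
              return (int) (major * 10000 + minor);         /* "10.1" -> 100001 */

          if (*end == '.')
              rev = strtol(end + 1, &end, 10);
          return (int) (major * 10000 + minor * 100 + rev); /* "9.6.3" -> 90603 */
      }

      int
      main(void)
      {
          printf("%d\n", parse_server_version("9.6.3"));    /* 90603 */
          printf("%d\n", parse_server_version("10.1"));     /* 100001 */
          printf("%d\n", parse_server_version("10devel"));  /* 100000, not 0 */
          return 0;
      }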
* Add note about unused arguments for pg_replication_origin_xact_reset() in docs. (Fujii Masao, 2016-08-06)
  In 9.5, two arguments were introduced into pg_replication_origin_xact_reset() by mistake; they are actually not used at all. We cannot fix this issue in 9.5 anymore because it would need a catalog version bump. Instead, add a note about those unused arguments to the documentation.
  Reviewed-By: Andres Freund
* Update time zone data files to tzdata release 2016f. (Tom Lane, 2016-08-05)
  DST law changes in Kemerovo and Novosibirsk. Historical corrections for Azerbaijan, Belarus, and Morocco. Asia/Novokuznetsk and Asia/Novosibirsk now use numeric time zone abbreviations instead of invented ones. Zones for Antarctic bases and other locations that have been uninhabited for portions of the time span known to the tzdata database now report "-00" rather than "zzz" as the zone abbreviation for those time spans.
  Also, I decided to remove some of the timezone/data/ files that we don't use. At one time that subdirectory was a complete copy of what IANA distributes in the tzdata tarballs, but that hasn't been true for a long time. There seems no good reason to keep shipping those specific files but not others; they're just bloating our tarballs.
* Fix bogus coding in WaitForBackgroundWorkerShutdown(). (Tom Lane, 2016-08-04)
  Some conditions resulted in "return" directly out of a PG_TRY block, which left the exception stack dangling, and to add insult to injury failed to restore the state of set_latch_on_sigusr1.
  This is a bug only in 9.5; in HEAD it was accidentally fixed by commit db0f6cad4, which removed the surrounding PG_TRY block. However, I (tgl) chose to apply the patch to HEAD as well, because the old coding was gratuitously different from WaitForBackgroundWorkerStartup(), and there would indeed have been no bug if it were done like that to start with.
  Dmitry Ivanov
  Discussion: <1637882.WfYN5gPf1A@abook>
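  For illustration only: a standalone miniature analogue in C (not PostgreSQL's real elog.h macros) of why returning out of a PG_TRY-style block is unsafe. The TRY macro saves the previous jump-buffer pointer and the matching END macro restores it, so an early return bypasses the restore and leaves the global pointer aimed at a stack frame that no longer exists.
      #include <setjmp.h>
      #include <stdio.h>

      static jmp_buf *exception_stack = NULL;   /* analogue of PG_exception_stack */

      #define MY_TRY() \
          do { \
              jmp_buf *save_stack = exception_stack; \
              jmp_buf  local_buf; \
              if (setjmp(local_buf) == 0) \
              { \
                  exception_stack = &local_buf

      #define MY_CATCH() \
              } \
              else \
              { \
                  exception_stack = save_stack

      #define MY_END_TRY() \
              } \
              exception_stack = save_stack; \
          } while (0)

      int
      main(void)
      {
          int shut_down = 0;

          MY_TRY();
          {
              /* A "return" here would skip MY_END_TRY(), leaving exception_stack
               * pointing at local_buf after its frame is gone.  Instead, record
               * the outcome and fall through. */
              shut_down = 1;
          }
          MY_CATCH();
          {
              /* error path: the macro already restored exception_stack */
          }
          MY_END_TRY();

          printf("shut_down=%d, exception_stack restored: %s\n",
                 shut_down, exception_stack == NULL ? "yes" : "no");
          return 0;
      }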
* doc: Remove documentation of nonexistent information schema columns (Peter Eisentraut, 2016-08-03)
  These were probably copied in by accident.
  From: Clément Prévost <prevostclement@gmail.com>
* Remove duplicate InitPostmasterChild() call while starting a bgworker. (Tom Lane, 2016-08-02)
  This is apparently harmless on Windows, but on Unix it results in an assertion failure. We'd not noticed because this code doesn't get used on Unix unless you build with -DEXEC_BACKEND. Bug was evidently introduced by sloppy refactoring in commit 31c453165.
  Thomas Munro
  Discussion: <CAEepm=1VOnbVx4wsgQFvj94hu9jVt2nVabCr7QiooUSvPJXkgQ@mail.gmail.com>
* doc: OS collation changes can break indexes (Bruce Momjian, 2016-08-02)
  Discussion: 20160702155517.GD18610@momjian.us
  Reviewed-by: Christoph Berg
  Backpatch-through: 9.1
* Block interrupts during HandleParallelMessages(). (Tom Lane, 2016-08-02)
  As noted by Alvaro, there are CHECK_FOR_INTERRUPTS() calls in the shm_mq.c functions called by HandleParallelMessages(). I believe they're all unreachable since we always pass nowait = true, but it doesn't seem like a great idea to assume that no such call will ever be reachable from HandleParallelMessages(). If that did happen, there would be a risk of a recursive call to HandleParallelMessages(), which it does not appear to be designed for --- for example, there's nothing that would prevent out-of-order processing of received messages. And certainly such cases cannot easily be tested. So let's prevent it by holding off interrupts for the duration of the function.
  Back-patch to 9.5 which contains identical code.
  Discussion: <14869.1470083848@sss.pgh.pa.us>
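  For illustration only: a standalone C analogue (not the backend's miscadmin.h machinery) of the hold-off pattern, bumping a counter before doing work that can reach interrupt-checking code so the handler cannot be entered recursively.
      #include <stdio.h>

      static volatile int interrupt_holdoff_count = 0;   /* analogue of HOLD/RESUME */
      static volatile int message_pending = 1;

      static void handle_messages(void);

      static void
      check_for_interrupts(void)
      {
          /* analogue of CHECK_FOR_INTERRUPTS -> ProcessInterrupts */
          if (message_pending && interrupt_holdoff_count == 0)
              handle_messages();
      }

      static void
      handle_messages(void)
      {
          interrupt_holdoff_count++;      /* analogue of HOLD_INTERRUPTS() */

          printf("handling message\n");
          check_for_interrupts();         /* reachable via shm_mq-like helpers;
                                           * the holdoff keeps this from re-entering */
          message_pending = 0;

          interrupt_holdoff_count--;      /* analogue of RESUME_INTERRUPTS() */
      }

      int
      main(void)
      {
          check_for_interrupts();
          return 0;
      }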
* Sync 9.5 version of access/transam/parallel.c with HEAD. (Tom Lane, 2016-08-02)
  This back-patches commit a5fe473ad (notably, marking ParallelMessagePending as volatile, which is not particularly optional). I also back-patched some previous cosmetic changes to remove unnecessary diffs between the two branches. I'm unsure how much of this code is actually reachable in 9.5, but to the extent that it is reachable, it needs to be maintained, and minimizing cross-branch diffs will make that easier.
* Fix pg_dump's handling of public schema with both -c and -C options. (Tom Lane, 2016-08-02)
  Since -c plus -C requests dropping and recreating the target database as a whole, not dropping individual objects in it, we should assume that the public schema already exists and need not be created. The previous coding considered only the state of the -c option, so it would emit "CREATE SCHEMA public" anyway, leading to an unexpected error in restore.
  Back-patch to 9.2. Older versions did not accept -c with -C so the issue doesn't arise there. (The logic being patched here dates to 8.0, cf commit 2193121fa, so it's not really wrong that it didn't consider the case at the time.)
  Note that versions before 9.6 will still attempt to emit REVOKE/GRANT on the public schema; but that happens without -c/-C too, and doesn't seem to be the focus of this complaint. I considered extending this stanza to also skip the public schema's ACL, but that would be a misfeature, as it'd break cases where users intentionally changed that ACL. The real fix for this aspect is Stephen Frost's work to not dump built-in ACLs, and that's not going to get back-ported.
  Per bugs #13804 and #14271. Solution found by David Johnston and later rediscovered by me.
  Report: <20151207163520.2628.95990@wrigleys.postgresql.org>
  Report: <20160801021955.1430.47434@wrigleys.postgresql.org>
* doc: Whitespace fixes in man pages (Peter Eisentraut, 2016-08-02)
* Don't CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch. (Tom Lane, 2016-08-01)
  This coding pattern creates a race condition, because if an interesting interrupt happens after we've checked InterruptPending but before we reset our latch, the latch-setting done by the signal handler would get lost, and then we might block at WaitLatch in the next iteration without ever noticing the interrupt condition. You can put the CHECK_FOR_INTERRUPTS before WaitLatch or after ResetLatch, but not between them.
  Aside from fixing the bugs, add some explanatory comments to latch.h to perhaps forestall the next person from making the same mistake.
  In HEAD, also replace gather_readnext's direct call of HandleParallelMessages with CHECK_FOR_INTERRUPTS. It does not seem clean or useful for this one caller to bypass ProcessInterrupts and go straight to HandleParallelMessages; not least because that fails to consider the InterruptPending flag, resulting in useless work both here (if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call (if it is).
  This thinko seems to have been introduced in the initial coding of storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all the subsequent parallel-query support logic. Back-patch relevant hunks to 9.4 to extirpate the error everywhere.
  Discussion: <1661.1469996911@sss.pgh.pa.us>
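  A schematic C sketch of the ordering rule, assuming PostgreSQL backend headers and the 9.5-era WaitLatch() signature; the loop and the work_is_available() check are hypothetical, shown only to mark where CHECK_FOR_INTERRUPTS() may and may not go.
      /* hypothetical wait loop inside a backend */
      static void
      wait_for_work(void)
      {
          for (;;)
          {
              if (work_is_available())     /* hypothetical state check */
                  break;

              /*
               * WRONG place for CHECK_FOR_INTERRUPTS(): between WaitLatch()
               * and ResetLatch().  An interrupt arriving after the check but
               * before ResetLatch() has its latch-set discarded, so the next
               * WaitLatch() sleeps without noticing the pending interrupt.
               */
              WaitLatch(MyLatch, WL_LATCH_SET | WL_POSTMASTER_DEATH, -1L);
              ResetLatch(MyLatch);

              CHECK_FOR_INTERRUPTS();      /* safe here, after ResetLatch() ... */
              /* ... or equivalently just before WaitLatch(), never between. */
          }
      }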
* Fixed array checking code for "unsigned long long" datatypes in libecpg. (Michael Meskes, 2016-08-01)
* Fix pg_basebackup so that it accepts 0 as a valid compression level. (Fujii Masao, 2016-08-01)
  The help message for pg_basebackup specifies that the numbers 0 through 9 are accepted as valid values of the -Z option. But, previously, -Z 0 was rejected as an invalid compression level. Per discussion, it's better to make pg_basebackup treat 0 as a valid compression level meaning no compression, like pg_dump.
  Back-patch to all supported versions.
  Reported-By: Jeff Janes
  Reviewed-By: Amit Kapila
  Discussion: CAMkU=1x+GwjSayc57v6w87ij6iRGFWt=hVfM0B64b1_bPVKRqg@mail.gmail.com
* Doc: remove claim that hash index creation depends on effective_cache_size. (Tom Lane, 2016-07-31)
  This text was added by commit ff213239c, and not long thereafter obsoleted by commit 4adc2f72a (which made the test depend on NBuffers instead); but nobody noticed the need for an update. Commit 9563d5b5e adds some further dependency on maintenance_work_mem, but the existing verbiage seems to cover that with about as much precision as we really want here. Let's just take it all out rather than leaving ourselves open to more errors of omission in future. (That solution makes this change back-patchable, too.)
  Noted by Peter Geoghegan.
  Discussion: <CAM3SWZRVANbj9GA9j40fAwheQCZQtSwqTN1GBTVwRrRbmSf7cg@mail.gmail.com>
* pgbench docs: fix incorrect "last two" fields text (Bruce Momjian, 2016-07-30)
  Reported-by: Alexander Law
  Discussion: 5786638C.8080508@gmail.com
  Backpatch-through: 9.4
* doc: apply hyphen fix that was not backpatched (Bruce Momjian, 2016-07-30)
  The HEAD patch was 42ec6c2da699e8e0b1774988fa97297a2cdf716c.
  Reported-by: Alexander Law
  Discussion: 5785FBE7.7060508@gmail.com
  Backpatch-through: 9.1
* Fix pq_putmessage_noblock() to not block. (Tom Lane, 2016-07-29)
  An evident copy-and-pasteo in commit 2bd9e412f broke the non-blocking aspect of pq_putmessage_noblock(), causing it to behave identically to pq_putmessage(). That function is nowadays used only in walsender.c, so that the net effect was to cause walsenders to hang up waiting for the receiver in situations where they should not.
  Kyotaro Horiguchi
  Patch: <20160728.185228.58375982.horiguchi.kyotaro@lab.ntt.co.jp>