path: root/src/backend
Commit log (newest first):
...
* Avoid calling PageGetSpecialPointer() on an all-zeros page. (Heikki Linnakangas, 2015-07-27)

  That was otherwise harmless, but tripped the new assertion in PageGetSpecialPointer(). Reported by Amit Langote. Backpatch to 9.5, where the assertion was added.
* Remove false comment about speculative insertion. (Heikki Linnakangas, 2015-07-27)

  There is no full discussion of speculative insertions in the executor README. There is a high-level explanation in execIndexing.c, but it doesn't seem necessary to refer to it from here.

  Peter Geoghegan
* Fix oversight in flattening of subqueries with empty FROM. (Tom Lane, 2015-07-26)

  I missed a restriction that commit f4abd0241de20d5d6a79b84992b9e88603d44134 should have enforced: we can't pull up an empty-FROM subquery if it's under an outer join, because then we'd need to wrap its output columns in PlaceHolderVars. As the code currently stands, the PHVs end up with empty relid sets, which doesn't work (and is correctly caught by an Assert).

  It's possible that this could be fixed by assigning the PHVs the relid sets of the parent FromExpr/JoinExpr, but getting that to work is more complication than I care to add right now; indeed it's likely that we'll never bother, since pulling up empty-FROM subqueries is a rather marginal optimization anyway.

  Per report from Andreas Seltenreich. Back-patch to 9.5 where the faulty code was added.
* Make entirely-dummy appendrels get marked as such in set_append_rel_size. (Tom Lane, 2015-07-26)

  The planner generally expects that the estimated rowcount of any relation is at least one row, *unless* it has been proven empty by constraint exclusion or similar mechanisms, which is marked by installing a dummy path as the rel's cheapest path (cf. IS_DUMMY_REL). When I split up allpaths.c's processing of base rels into separate set_base_rel_sizes and set_base_rel_pathlists steps, the intention was that dummy rels would get marked as such during the "set size" step; this is what justifies an Assert in indxpath.c's get_loop_count that other relations should either be dummy or have positive rowcount.

  Unfortunately I didn't get that quite right for append relations: if all the child rels have been proven empty then set_append_rel_size would come up with a rowcount of zero, which is correct, but it didn't then do set_dummy_rel_pathlist. (We would have ended up with the right state after set_append_rel_pathlist, but that's too late, if we generate indexpaths for some other rel first.)

  In addition to fixing the actual bug, I installed an Assert enforcing this convention in set_rel_size; that then allows simplification of a couple of now-redundant tests for zero rowcount in set_append_rel_size. Also, to cover the possibility that third-party FDWs have been careless about not returning a zero rowcount estimate, apply clamp_row_est to whatever an FDW comes up with as the rows estimate.

  Per report from Andreas Seltenreich. Back-patch to 9.2. Earlier branches did not have the separation between set_base_rel_sizes and set_base_rel_pathlists steps, so there was no intermediate state where an appendrel would have had inconsistent rowcount and pathlist. It's possible that adding the Assert to set_rel_size would be a good idea in older branches too; but since they're not under development any more, it's likely not worth the trouble.
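  A hedged sketch of the clamping idea. The planner's clamp_row_est() serves this role; the exact body below is an assumption for illustration, not the shipped code:

      #include <math.h>
      #include <stdio.h>

      /* Round a row estimate to an integer and force at least one row, so a
       * careless FDW's zero or negative estimate cannot violate the planner's
       * "nonzero rowcount unless proven dummy" convention. */
      static double
      clamp_row_est_sketch(double nrows)
      {
          if (nrows <= 1.0)
              return 1.0;
          return rint(nrows);
      }

      int
      main(void)
      {
          printf("%g %g %g\n",
                 clamp_row_est_sketch(0.0),     /* -> 1  */
                 clamp_row_est_sketch(-5.0),    /* -> 1  */
                 clamp_row_est_sketch(42.4));   /* -> 42 */
          return 0;
      }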
* Check the relevant index element in ON CONFLICT unique index inference. (Andres Freund, 2015-07-26)

  ON CONFLICT unique index inference had a thinko that could affect cases where the user-supplied inference clause required that an attribute match a particular (user specified) collation and/or opclass.

  infer_collation_opclass_match() has to check for opclass and/or collation matches and that the attribute is in the list of attributes or expressions known to be in the definition of the index under consideration. The bug was that these two conditions weren't necessarily evaluated for the same index attribute.

  Author: Peter Geoghegan
  Discussion: CAM3SWZR4uug=WvmGk7UgsqHn2MkEzy9YU-+8jKGO4JPhesyeWg@mail.gmail.com
  Backpatch: 9.5, where ON CONFLICT was introduced
* Fix flattening of nested grouping sets. (Andres Freund, 2015-07-26)

  Previously nested grouping set specifications accidentally weren't flattened, but instead contained the nested specification as an element in the outer list. Fix this by, as actually documented in comments, concatenating the nested set specification into the outer one. Also add tests to prevent this from breaking again.

  Author: Andrew Gierth, with tests from Jeevan Chalke
  Reported-By: Jeevan Chalke
  Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
  Backpatch: 9.5, where grouping sets were introduced
* Allow pushing down clauses from HAVING to WHERE when grouping sets are used. (Andres Freund, 2015-07-26)

  Previously we disallowed pushing down quals to WHERE in the presence of grouping sets. That's overly restrictive.

  We now instead copy quals to WHERE if applicable, leaving the one in HAVING in place. That's because, at that stage of the planning process, it's nontrivial to determine whether it's safe to remove the one in HAVING.

  Author: Andrew Gierth
  Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
  Backpatch: 9.5, where grouping sets were introduced. This isn't exactly a bugfix, but it seems better to keep the branches in sync at this point.
* Recognize GROUPING() as an aggregate expression. (Andres Freund, 2015-07-26)

  Previously GROUPING() was not recognized as an aggregate expression, erroneously allowing the planner to move it from HAVING to WHERE.

  Author: Jeevan Chalke
  Reviewed-By: Andrew Gierth
  Discussion: CAM2+6=WG9omG5rFOMAYBweJxmpTaapvVp5pCeMrE6BfpCwr4Og@mail.gmail.com
  Backpatch: 9.5, where grouping sets were introduced
* Build column mapping for grouping sets in all required cases. (Andres Freund, 2015-07-26)

  The previous coding frequently failed to fail because, for one, it's unusual to have rollup clauses with one column, and for another, sometimes the wrong mapping didn't cause obvious problems.

  Author: Jeevan Chalke
  Reviewed-By: Andrew Gierth
  Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
  Backpatch: 9.5, where grouping sets were introduced
* Dodge portability issue (apparent compiler bug) in new tablesample code. (Tom Lane, 2015-07-25)

  Some of the older OS X critters in the buildfarm are failing regression, with symptoms showing that a request for 100% sampling in BERNOULLI or SYSTEM methods actually gets only around 50% of the table. gdb revealed that the computation of the "cutoff" number was producing 0x7FFFFFFF rather than the expected 0x100000000. Inspecting the assembly code, it looks like gcc is trying to use lrint() instead of rint() and then fumbling the conversion from long double to uint64.

  This seems like a clear compiler bug, but assigning the intermediate result into a plain double variable works around it, so let's just do that. (Another idea would be to give up one bit of hash width so that we don't need to use a uint64 cutoff, but let's see if this is enough.)
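  A small, self-contained sketch of the workaround's shape. The exact cutoff expression here is invented for the demo (the real computation lives in the tablesample code); the point is routing the rounded value through a named double before the uint64 conversion:

      #include <math.h>
      #include <stdint.h>
      #include <stdio.h>

      int
      main(void)
      {
          double percent = 100.0;

          /* Assigning rint()'s result to a plain double variable keeps the
           * compiler from substituting lrint() and then fumbling the
           * long double to uint64 conversion. */
          double cut = rint(((double) UINT32_MAX + 1.0) * percent / 100.0);
          uint64_t cutoff = (uint64_t) cut;

          printf("cutoff = %llu (expect 4294967296)\n",
                 (unsigned long long) cutoff);
          return 0;
      }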
* Redesign tablesample method API, and do extensive code review. (Tom Lane, 2015-07-25)

  The original implementation of TABLESAMPLE modeled the tablesample method API on index access methods, which wasn't a good choice because, without specialized DDL commands, there's no way to build an extension that can implement a TSM. (Raw inserts into system catalogs are not an acceptable thing to do, because we can't undo them during DROP EXTENSION, nor will pg_upgrade behave sanely.) Instead adopt an API more like procedural language handlers or foreign data wrappers, wherein the only SQL-level support object needed is a single handler function identified by having a special return type. This lets us get rid of the supporting catalog altogether, so that no custom DDL support is needed for the feature.

  Adjust the API so that it can support non-constant tablesample arguments (the original coding assumed we could evaluate the argument expressions at ExecInitSampleScan time, which is undesirable even if it weren't outright unsafe), and discourage sampling methods from looking at invisible tuples. Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable within and across queries, as required by the SQL standard, and deal more honestly with methods that can't support that requirement.

  Make a full code-review pass over the tablesample additions, and fix assorted bugs, omissions, infelicities, and cosmetic issues (such as failure to put the added code stanzas in a consistent ordering). Improve EXPLAIN's output of tablesample plans, too.

  Back-patch to 9.5 so that we don't have to support the original API in production.
* Make RLS work with UPDATE ... WHERE CURRENT OF (Joe Conway, 2015-07-24)

  UPDATE ... WHERE CURRENT OF would not work in conjunction with RLS. Arrange to allow the CURRENT OF expression to be pushed down. Issue noted by Peter Geoghegan. Patch by Dean Rasheed. Back-patch to 9.5 where RLS was introduced.
* Fix treatment of nulls in jsonb_agg and jsonb_object_agg (Andrew Dunstan, 2015-07-24)

  The wrong is_null flag was being passed to datum_to_json. Also, null object key values are not permitted, and this was not being checked for. Add regression tests covering these cases, and also add those tests to the json set, even though it was doing the right thing.

  Fixes bug #13514, initially diagnosed by Tom Lane.
* Fix bug around assignment expressions containing indirections. (Andres Freund, 2015-07-24)

  Handling of assigned-to expressions with indirection (e.g. set f1[1] = 3) was broken for ON CONFLICT DO UPDATE. The problem was that ParseState was consulted to determine if an INSERT-appropriate or UPDATE-appropriate behavior should be used when transforming expressions with indirections. When the wrong path was taken, the old row was substituted with NULL, leading to wrong results.

  To fix, remove p_is_update and only use p_is_insert to decide how to transform the assignment expression, and unset p_is_insert while parsing the ON CONFLICT statement. This isn't particularly pretty, but it's not any worse than before.

  Author: Peter Geoghegan, slightly edited by me
  Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
  Backpatch: 9.5, where the feature was introduced
* Fix off-by-one error in calculating subtrans/multixact truncation point. (Heikki Linnakangas, 2015-07-23)

  If there were no subtransactions (or multixacts) active, we would calculate the oldestxid == next xid. That's correct, but if next XID happens to be on the next pg_subtrans (pg_multixact) page, the page does not exist yet, and SimpleLruTruncate will produce an "apparent wraparound" warning. The warning is harmless in this case, but looks very alarming to users.

  Backpatch to all supported versions. Patch and analysis by Thomas Munro.
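  A toy illustration of the boundary condition. The page geometry and the "step back one XID" remedy shown here are assumptions made for the demo, not the committed code; the point is how an XID equal to "next" can map to a page that does not exist yet:

      #include <stdio.h>

      #define XIDS_PER_PAGE 2048   /* made-up stand-in for SLRU page geometry */

      int
      main(void)
      {
          /* next XID is the first slot of a page that hasn't been created */
          unsigned next_xid = 4 * XIDS_PER_PAGE;
          unsigned oldest = next_xid;          /* nothing active */

          /* truncating up to oldest's page targets the nonexistent page... */
          printf("cutoff page (buggy): %u\n", oldest / XIDS_PER_PAGE);

          /* ...while stepping back one XID keeps it on an existing page */
          printf("cutoff page (fixed): %u\n", (oldest - 1) / XIDS_PER_PAGE);
          return 0;
      }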
* Fix add_rte_to_flat_rtable() for recent feature additions. (Tom Lane, 2015-07-21)

  The TABLESAMPLE and row security patches each overlooked this function, though their errors of omission were opposite: RLS failed to zero out the securityQuals field, leading to wasteful copying of useless expression trees in finished plans, while TABLESAMPLE neglected to add a comment saying that it intentionally *isn't* deleting the tablesample subtree. There probably should be a similar comment about ctename, too.

  Back-patch as appropriate.
* Fix some oversights in BRIN patch. (Tom Lane, 2015-07-21)

  Remove HeapScanDescData.rs_initblock, which wasn't being used for anything in the final version of the patch.

  Fix IndexBuildHeapScan so that it supports syncscan again; the patch broke synchronous scanning for index builds by forcing rs_startblk to zero even when the caller did not care about that and had asked for syncscan.

  Add some commentary and usage defenses to heap_setscanlimits().

  Fix heapam so that asking for rs_numblocks == 0 does what you would reasonably expect. As coded it amounted to requesting a whole-table scan, because those "--x <= 0" tests on an unsigned variable would behave surprisingly.
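  The unsigned-comparison trap is easy to reproduce in isolation; a minimal demo (variable names invented for the example):

      #include <stdio.h>

      int
      main(void)
      {
          unsigned int blocks_left = 0;   /* caller asked to scan zero blocks */

          /* With an unsigned variable, decrementing 0 wraps to UINT_MAX, so
           * "--x <= 0" is false and a scan loop guarded this way keeps going:
           * the zero-block request turns into a whole-table scan. */
          if (--blocks_left <= 0)
              printf("loop would stop as expected\n");
          else
              printf("wrapped to %u; loop keeps running\n", blocks_left);
          return 0;
      }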
* Fix omission of OCLASS_TRANSFORM in object_classes[] (Alvaro Herrera, 2015-07-21)

  This was forgotten in cac76582053e (and its fixup ad89a5d115).

  Since it seems way too easy to miss this, this commit also introduces a mechanism to enforce that the array is consistent with the enum.

  Problem reported independently by Robert Haas and Jaimin Pan. Patches proposed by Jaimin Pan, Jim Nasby, Michael Paquier and myself, though I didn't use any of these and instead went with a cleaner approach suggested by Tom Lane.

  Backpatch to 9.5.

  Discussion:
  https://www.postgresql.org/message-id/CA+Tgmoa6SgDaxW_n_7SEhwBAc=mniYga+obUj5fmw4rU9_mLvA@mail.gmail.com
  https://www.postgresql.org/message-id/29788.1437411581@sss.pgh.pa.us
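  One common shape for such an enforcement mechanism is a compile-time length check. The sketch below uses C11 static_assert with invented enum and array contents, so treat it as an illustration of the idea rather than the committed code:

      #include <assert.h>

      typedef enum ObjectClass
      {
          OCLASS_CLASS,
          OCLASS_PROC,
          OCLASS_TRANSFORM,              /* the entry that had been missed */
          LAST_OCLASS = OCLASS_TRANSFORM
      } ObjectClass;

      static const unsigned object_classes[] = {
          1259,                          /* pg_class */
          1255,                          /* pg_proc */
          3576,                          /* pg_transform */
      };

      /* Fails to compile as soon as someone extends the enum but not the array. */
      static_assert(sizeof(object_classes) / sizeof(object_classes[0])
                    == LAST_OCLASS + 1,
                    "object_classes[] is out of sync with ObjectClass");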
* Sanity-check that a page zeroed by redo routine is marked with WILL_INIT. (Heikki Linnakangas, 2015-07-20)

  There was already a sanity-check in the other direction: if a page was marked with WILL_INIT, it had to be initialized by the redo routine. It's not strictly necessary for correctness that a page is marked with WILL_INIT if it's going to be initialized at redo, but it's a missed optimization if nothing else.

  Fix a few instances of this issue in SP-GiST, where a block in a WAL record was not marked with WILL_INIT, but was in fact always initialized at redo. We were creating a full-page image of the page unnecessarily in those cases.

  Backpatch to 9.5, where the new WILL_INIT flag was added.
* Don't handle PUBLIC/NONE separately (Alvaro Herrera, 2015-07-20)

  Since those role specifiers are checked in the grammar, there's no need for the old checks to remain in place after 31eae6028ec. Remove them.

  Backpatch to 9.5.

  Noted and patch by Jeevan Chalke
* Improve BRIN documentation somewhat (Alvaro Herrera, 2015-07-20)

  This removes some info about support procedures being used, which was obsoleted by commit db5f98ab4f, and adds some more documentation on how to create new opclasses using the Minmax infrastructure. (Hopefully we can get something similar for Inclusion as well.)

  In passing, fix some obsolete mentions of "mmtuples" in source code comments.

  Backpatch to 9.5, where BRIN was introduced.
* Remove dead code. (Andrew Dunstan, 2015-07-19)

  Defect noticed by Coverity.
* Make WaitLatchOrSocket's timeout detection more robust. (Tom Lane, 2015-07-18)

  In the previous coding, timeout would be noticed and reported only when poll() or select() returned zero (or the equivalent behavior on Windows). Ordinarily that should work well enough, but it seems conceivable that we could get into a state where poll() always returns a nonzero value --- for example, if it is noticing a condition on one of the file descriptors that we do not think is reason to exit the loop. If that happened, we'd be in a busy-wait loop that would fail to terminate even when the timeout expires.

  We can make this more robust at essentially no cost, by deciding to exit of our own accord if we compute a zero or negative time-remaining-to-wait. Previously the code noted this but just clamped the time-remaining to zero, expecting that we'd detect timeout on the next loop iteration.

  Back-patch to 9.2. While 9.1 had a version of WaitLatchOrSocket, it was primitive compared to later versions, and did not guarantee reliable detection of timeouts anyway. (Essentially, this is a refinement of commit 3e7fdcffd6f77187, which was back-patched only as far as 9.2.)
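  A compilable miniature of the pattern, watching stdin with poll(); all names and the two-second timeout are invented for the demo:

      #include <poll.h>
      #include <stdio.h>
      #include <time.h>

      /* Milliseconds elapsed since *start on the monotonic clock. */
      static long
      elapsed_ms(const struct timespec *start)
      {
          struct timespec now;

          clock_gettime(CLOCK_MONOTONIC, &now);
          return (now.tv_sec - start->tv_sec) * 1000L +
                 (now.tv_nsec - start->tv_nsec) / 1000000L;
      }

      int
      main(void)
      {
          struct pollfd pfd = { .fd = 0, .events = POLLIN };  /* stdin */
          const long timeout_ms = 2000;
          struct timespec start;

          clock_gettime(CLOCK_MONOTONIC, &start);
          for (;;)
          {
              /* Recompute the time remaining on every iteration... */
              long cur_timeout = timeout_ms - elapsed_ms(&start);

              /* ...and leave on our own initiative once it is gone, rather
               * than clamping to zero and trusting poll() to report it. */
              if (cur_timeout <= 0)
              {
                  puts("timeout");
                  break;
              }
              if (poll(&pfd, 1, (int) cur_timeout) > 0 &&
                  (pfd.revents & POLLIN))
              {
                  puts("readable");
                  break;
              }
              /* Other wakeups just loop and recompute the remaining time. */
          }
          return 0;
      }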
* Support JSON negative array subscripts everywhere (Andrew Dunstan, 2015-07-17)

  Previously, there was an inconsistency across json/jsonb operators that operate on datums containing JSON arrays -- only some operators supported negative array count-from-the-end subscripting. Specifically, only a new-to-9.5 jsonb deletion operator had support (the new "jsonb - integer" operator). This inconsistency seemed likely to be counter-intuitive to users.

  To fix, allow all places where the user can supply an integer subscript to accept a negative subscript value, including path-orientated operators and functions, as well as other extraction operators. This will need to be called out as an incompatibility in the 9.5 release notes, since it's possible that users are relying on certain established extraction operators changed here yielding NULL in the event of a negative subscript.

  For the json type, this requires adding a way of cheaply getting the total JSON array element count ahead of time when parsing arrays with a negative subscript involved, necessitating an ad-hoc lex and parse. This is followed by a "conversion" from a negative subscript to its equivalent positive-wise value using the count. From there on, it's as if a positive-wise value was originally provided.

  Note that there is still a minor inconsistency here across jsonb deletion operators. Unlike the aforementioned new "-" deletion operator that accepts an integer on its right hand side, the new "#-" path-orientated deletion variant does not throw an error when it appears that an array subscript (input that could be recognized as an integer literal) is being used on an object, which is wrong-headed. The reason for not being stricter is that it could be the case that an object pair happens to have a key value that looks like an integer; in general, these two possibilities are impossible to differentiate with rhs path text[] argument elements. However, we still don't allow the "#-" path-orientated deletion operator to perform array-style subscripting. Rather, we just return the original left operand value in the event of a negative subscript (which seems analogous to how the established "jsonb/json #> text[]" path-orientated operator may yield NULL in the event of an invalid subscript).

  In passing, make SetArrayPath() stricter about not accepting cases where there are trailing non-numeric garbage bytes rather than a clean NUL byte. This means, for example, that strings like "10e10" are now not accepted as an array subscript of 10 by some new-to-9.5 path-orientated jsonb operators (e.g. the new #- operator).

  Finally, remove dead code for jsonb subscript deletion; arguably, this should have been done in commit b81c7b409.

  Peter Geoghegan and Andrew Dunstan
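  The conversion step is simple once the element count is in hand; a hedged sketch with an invented helper name:

      #include <stdio.h>

      /* Map a possibly negative, count-from-the-end subscript onto the
       * equivalent positive index; returns -1 when out of range. */
      static int
      normalize_subscript(int idx, int nelems)
      {
          if (idx < 0)
              idx += nelems;       /* e.g. -1 becomes nelems - 1 */
          return (idx < 0 || idx >= nelems) ? -1 : idx;
      }

      int
      main(void)
      {
          printf("%d\n", normalize_subscript(-1, 5));   /* 4 */
          printf("%d\n", normalize_subscript(-6, 5));   /* -1: out of range */
          return 0;
      }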
* Fix a low-probability crash in our qsort implementation. (Tom Lane, 2015-07-16)

  It's standard for quicksort implementations, after having partitioned the input into two subgroups, to recurse to process the smaller partition and then handle the larger partition by iterating. This method guarantees that no more than log2(N) levels of recursion can be needed. However, Bentley and McIlroy argued that checking to see which partition is smaller isn't worth the cycles, and so their code doesn't do that but just always recurses on the left partition. In most cases that's fine; but with worst-case input we might need O(N) levels of recursion, and that means that qsort could be driven to stack overflow.

  Such an overflow seems to be the only explanation for today's report from Yiqing Jin of a SIGSEGV in med3_tuple while creating an index of a couple billion entries with a very large maintenance_work_mem setting. Therefore, let's spend the few additional cycles and lines of code needed to choose the smaller partition for recursion.

  Also, fix up the qsort code so that it properly uses size_t not int for some intermediate values representing numbers of items. This would only be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg) or tuples (in qsort_tuple), which I believe would never happen with any caller in the current core code --- but perhaps it could happen with call sites in third-party modules? In any case, this is trouble waiting to happen, and the corrected code is probably if anything shorter and faster than before, since it removes sign-extension steps that had to happen when converting between int and size_t.

  In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's not necessary to preserve the value of "r" across them, and prettify the output of gen_qsort_tuple.pl a little.

  Back-patch to all supported branches. The odds of hitting this issue are probably higher in 9.4 and up than before, due to the new ability to allocate sort workspaces exceeding 1GB, but there's no good reason to believe that it's impossible to crash older branches this way.
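  A self-contained sketch of the recurse-on-smaller, iterate-on-larger pattern described above (using a plain Lomuto partition, not PostgreSQL's actual median-of-three code):

      #include <stddef.h>
      #include <stdio.h>

      /* Lomuto partition around the last element; returns the pivot's final
       * position. Everything left of it is smaller, everything right is not. */
      static size_t
      partition(int *a, size_t n)
      {
          int pivot = a[n - 1];
          size_t i = 0;

          for (size_t j = 0; j + 1 < n; j++)
          {
              if (a[j] < pivot)
              {
                  int t = a[i]; a[i] = a[j]; a[j] = t;
                  i++;
              }
          }
          int t = a[i]; a[i] = a[n - 1]; a[n - 1] = t;
          return i;
      }

      /* Recurse only into the smaller partition and iterate on the larger
       * one, bounding the recursion depth by log2(n). */
      static void
      qsort_sketch(int *a, size_t n)
      {
          while (n > 1)
          {
              size_t p = partition(a, n);
              size_t left = p;            /* elements before the pivot */
              size_t right = n - p - 1;   /* elements after the pivot */

              if (left < right)
              {
                  qsort_sketch(a, left);
                  a += p + 1;             /* iterate on the larger side */
                  n = right;
              }
              else
              {
                  qsort_sketch(a + p + 1, right);
                  n = left;
              }
          }
      }

      int
      main(void)
      {
          int v[] = {5, 3, 8, 1, 9, 2, 7};
          size_t n = sizeof v / sizeof v[0];

          qsort_sketch(v, n);
          for (size_t i = 0; i < n; i++)
              printf("%d ", v[i]);
          putchar('\n');
          return 0;
      }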
* Fix spelling error (Magnus Hagander, 2015-07-16)

  David Rowley
* Fix copy/paste error in comment (Magnus Hagander, 2015-07-16)

  David Christensen
* AIX: Link the postgres executable with -Wl,-brtllib. (Noah Misch, 2015-07-15)

  This allows PostgreSQL modules and their dependencies to have undefined symbols, resolved at runtime. Perl module shared objects rely on that in Perl 5.8.0 and later. This fixes the crash when PL/PerlU loads such modules, as the hstore_plperl test suite does. Module authors can link using -Wl,-G to permit undefined symbols; by default, linking will fail as it has.

  Back-patch to 9.0 (all supported versions).
* Retain comments on indexes and constraints at ALTER TABLE ... TYPE ... (Heikki Linnakangas, 2015-07-14)

  When a column's datatype is changed, ATExecAlterColumnType() rebuilds all the affected indexes and constraints, and the comments from the old indexes/constraints were not carried over.

  To fix, create a synthetic COMMENT ON command in the work queue, to re-add any comments on constraints. For indexes, there's a comment field in IndexStmt that is used.

  This fixes bug #13126, reported by Kirill Simonov. Original patch by Michael Paquier, reviewed by Petr Jelinek and me. This bug is present in all versions, but only backpatch to 9.5. Given how minor the issue is, it doesn't seem worth the work and risk to backpatch further than that.
* Reformat code in ATPostAlterTypeParse. (Heikki Linnakangas, 2015-07-14)

  The code in ATPostAlterTypeParse was very deeply indented, mostly because there were two nested switch-case statements, which add a lot of indentation. Use if-else blocks instead, to make the code less indented and more readable.

  This is in preparation for the next patch that makes some actual changes to the function. These cosmetic parts have been separated to make it easier to see the real changes in the other patch.
* For consistency add a pfree to ON CONFLICT set_plan_refs code. (Andres Freund, 2015-07-12)

  Backpatch to 9.5 where ON CONFLICT was introduced.

  Author: Peter Geoghegan
* Add now-required #include. (Tom Lane, 2015-07-11)

  Fixes compiler warning induced by 808ea8fc7bb259ddd810353719cac66e85a608c8.
* Add assign_expr_collations() to CreatePolicy() and AlterPolicy(). (Joe Conway, 2015-07-11)

  As noted by Noah Misch, CreatePolicy() and AlterPolicy() omit to call assign_expr_collations() on the node trees. Fix the omission and add his test case to the rowsecurity regression test.
* Fix postmaster's handling of a startup-process crash. (Tom Lane, 2015-07-09)

  Ordinarily, a failure (unexpected exit status) of the startup subprocess should be considered fatal, so the postmaster should just close up shop and quit. However, if we sent the startup process a SIGQUIT or SIGKILL signal, the failure is hardly "unexpected", and we should attempt restart; this is necessary for recovery from ordinary backend crashes in hot-standby scenarios.

  I attempted to implement the latter rule with a two-line patch in commit 442231d7f71764b8c628044e7ce2225f9aa43b67, but it now emerges that that patch was a few bricks shy of a load: it failed to distinguish the case of a signaled startup process from the case where the new startup process crashes before reaching database consistency. That resulted in infinitely respawning a new startup process only to have it crash again.

  To handle this properly, we really must track whether we have sent the *current* startup process a kill signal. Rather than add yet another ad-hoc boolean to the postmaster's state, I chose to unify this with the existing RecoveryError flag into an enum tracking the startup process's state. That seems more consistent with the postmaster's general state machine design.

  Back-patch to 9.0, like the previous patch.
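  The gist of the state-tracking change, sketched as an enum whose member names are assumptions for illustration rather than the committed identifiers:

      /* One enum instead of ad-hoc booleans, so the postmaster can tell
       * "we signaled the current startup process" apart from "it crashed
       * on its own before reaching consistency". */
      typedef enum StartupStatusSketch
      {
          STARTUP_NOT_RUNNING,
          STARTUP_RUNNING,
          STARTUP_SIGNALED,   /* we sent SIGQUIT/SIGKILL: a restart is fine */
          STARTUP_CRASHED     /* unexpected exit: close up shop, no respawn */
      } StartupStatusSketch;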
* Make wal_compression PGC_SUSET rather than PGC_USERSET. (Fujii Masao, 2015-07-09)

  When enabling wal_compression, there is a risk of leaking data, similarly to the BREACH and CRIME attacks on SSL, where the compression ratio of a full-page image gives a hint about the existing data of that page. This vulnerability is quite cumbersome to exploit in practice, but doable.

  So this patch makes wal_compression PGC_SUSET, in order to prevent non-superusers from enabling it and exploiting the vulnerability even when the DBA takes the risk seriously and has disabled it in postgresql.conf.

  Back-patch to 9.5 where wal_compression was introduced.
* Revoke support for strxfrm() that write past the specified array length. (Noah Misch, 2015-07-08)

  This formalizes a decision implicit in commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23 and adds clean detection of affected systems. Vendor updates are available for each such known bug.

  Back-patch to 9.5, where the aforementioned commit first appeared.
* Fix logical decoding bug leading to inefficient reopening of files. (Andres Freund, 2015-07-07)

  When spilling transaction data to disk, a simple typo caused the output file to be closed and reopened for every serialized change. That happens not to have a huge impact on Linux, which is why it probably wasn't noticed so far, but on Windows it appears to trigger actual disk writes after every change. Not fun.

  The bug fortunately does not have any impact besides speed. A change could end up being in the wrong segment (last instead of next), but since we read all files to the end, that's just ugly, not really problematic.

  It's not a problem to upgrade, since transaction spill files do not persist across restarts.

  Bug: #13484
  Reported-By: Olivier Gosseaume
  Discussion: 20150703090217.1190.63940@wrigleys.postgresql.org
  Backpatch to 9.4, where logical decoding was added.
* Make RLS related error messages more consistent and compliant. (Joe Conway, 2015-07-06)

  Also updated regression expected output to match. Noted and patch by Daniele Varrazzo.
* Fix misuse of TextDatumGetCString(). (Tom Lane, 2015-07-02)

  "TextDatumGetCString(PG_GETARG_TEXT_P(x))" is formally wrong: a text* is not a Datum. Although this coding will accidentally fail to fail on all known platforms, it risks leaking memory if a detoast step is needed, unlike "TextDatumGetCString(PG_GETARG_DATUM(x))" which is what's used elsewhere.

  Make pg_get_object_address() fall in line with other uses.

  Noted while reviewing two-arg current_setting() patch.
* Use appendStringInfoString/Char et al where appropriate. (Heikki Linnakangas, 2015-07-02)

  Patch by David Rowley. Backpatch to 9.5, as some of the calls were new in 9.5, and keeping the code in sync with master makes future backpatching easier.
* Make sampler_random_fract() actually obey its API contract. (Tom Lane, 2015-07-01)

  This function is documented to return a value in the range (0,1), which is what its predecessor anl_random_fract() did. However, the new version depends on pg_erand48() which returns a value in [0,1). The possibility of returning zero creates hazards of division by zero or trying to compute log(0) at some call sites, and it might well break third-party modules using anl_random_fract() too. So let's change it to never return zero.

  Spotted by Coverity.

  Michael Paquier, cosmetically adjusted by me
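  The usual way to narrow [0,1) to (0,1) is a rejection loop; a small sketch using POSIX drand48() as a stand-in for pg_erand48():

      #include <stdio.h>
      #include <stdlib.h>

      /* Return a uniform value strictly inside (0,1): reject 0.0 so that
       * callers may safely divide by the result or take its log. */
      static double
      random_fract_open(void)
      {
          double r;

          do
              r = drand48();        /* uniform on [0,1) */
          while (r <= 0.0);
          return r;
      }

      int
      main(void)
      {
          srand48(42);
          printf("%f\n", random_fract_open());
          return 0;
      }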
* Make XLogFileCopy() look the same as in 9.4. (Fujii Masao, 2015-07-01)

  XLogFileCopy() was changed heavily in commit de76884. However it was partially reverted in commit 7abc685 and most of those changes to XLogFileCopy() were no longer needed. Then commit 7cbee7c removed that unnecessary code, but XLogFileCopy() looked different in master and 9.4 though the contents are almost the same.

  This patch makes XLogFileCopy() look the same in master and back-branches, which makes back-patching easier, per discussion on pgsql-hackers. Back-patch to 9.5.

  Discussion: 55760844.7090703@iki.fi

  Michael Paquier
* Remove useless check for NULL subexpression. (Tom Lane, 2015-06-30)

  Coverity rightly gripes that it's silly to have a test here when the adjacent ExecEvalExpr() would choke on a NULL expression pointer.

  Petr Jelinek
* Don't call PageGetSpecialPointer() on page until it's been initialized. (Heikki Linnakangas, 2015-06-30)

  After calling XLogInitBufferForRedo(), the page might be all-zeros if it was not in page cache already. btree_xlog_unlink_page initialized the page correctly, but it called PageGetSpecialPointer before initializing it, which would lead to a corrupt page at WAL replay, if the unlinked page is not in page cache.

  Backpatch to 9.4, the bug came with the rewrite of B-tree page deletion.
* In bttext_abbrev_convert, move pfree to the right place. (Robert Haas, 2015-06-29)

  Without this, we might access memory that's already been freed, or leak memory if in the C locale.

  Peter Geoghegan
* Initialize GIN metapage correctly when replaying metapage-update WAL record. (Heikki Linnakangas, 2015-06-30)

  I broke this with my WAL format refactoring patch. Before that, the metapage was read from disk, and modified in-place regardless of the LSN. That was always a bit silly, as there's no need to read the old page version from disk when we're overwriting it anyway. So that was changed in 9.5, but I failed to add a GinInitPage call to initialize the page headers correctly.

  Usually you wouldn't notice, because the metapage is already in the page cache and is not zeroed. One way to reproduce this is to perform a VACUUM on an already-vacuumed table (so that the vacuum has no real work to do), immediately after a checkpoint, and then perform an immediate shutdown. After recovery, the page headers of the metapage will be incorrectly all-zeroes.

  Reported by Jeff Janes
* Code + docs review for escaping of option values (commit 11a020eb6). (Tom Lane, 2015-06-29)

  Avoid memory leak from incorrect choice of how to free a StringInfo (resetStringInfo doesn't do it). Now that pg_split_opts doesn't scribble on the optstr, mark that as "const" for clarity.

  Attach the commentary in protocol.sgml to the right place, and add documentation about the user-visible effects of this change on postgres' -o option and libpq's PGOPTIONS option.
* Translation updates (Peter Eisentraut, 2015-06-28)

  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: fb7e72f46cfafa1b5bfe4564d9686d63a1e6383f
* Run the C portions of guc-file.l through pgindent. (Tom Lane, 2015-06-28)

  Yeah, I know, pretty anal-retentive of me. But we oughta find some way to automate this for the .y and .l files.
* Improve design and implementation of pg_file_settings view. (Tom Lane, 2015-06-28)

  As first committed, this view reported on the file contents as they were at the last SIGHUP event. That's not as useful as reporting on the current contents, and what's more, it didn't work right on Windows unless the current session had serviced at least one SIGHUP. Therefore, arrange to re-read the files when pg_show_all_settings() is called. This requires only minor refactoring so that we can pass changeVal = false to set_config_option() so that it won't actually apply any changes locally.

  In addition, add error reporting so that errors that would prevent the configuration files from being loaded, or would prevent individual settings from being applied, are visible directly in the view. This makes the view usable for pre-testing whether edits made in the config files will have the desired effect, before one actually issues a SIGHUP. I also added an "applied" column so that it's easy to identify entries that are superseded by later entries; this was the main use-case for the original design, but it seemed unnecessarily hard to use for that.

  Also fix a 9.4.1 regression that allowed multiple entries for a PGC_POSTMASTER variable to cause bogus complaints in the postmaster log. (The issue here was that commit bf007a27acd7b2fb unintentionally reverted 3e3f65973a3c94a6, which suppressed any duplicate entries within ParseConfigFp. However, since the original coding of the pg_file_settings view depended on such suppression *not* happening, we couldn't have fixed this issue now without first doing something with pg_file_settings. Now we suppress duplicates by marking them "ignored" within ProcessConfigFileInternal, which doesn't hide them in the view.)

  Lesser changes include:

  Drive the view directly off the ConfigVariable list, instead of making a basically-equivalent second copy of the data. There's no longer any need to hang onto the data permanently, anyway.

  Convert show_all_file_settings() to do its work in one call and return a tuplestore; this avoids risks associated with assuming that the GUC state will hold still over the course of query execution. (I think there were probably latent bugs here, though you might need something like a cursor on the view to expose them.)

  Arrange to run SIGHUP processing in a short-lived memory context, to forestall process-lifespan memory leaks. (There is one known leak in this code, in ProcessConfigDirectory; it seems minor enough to not be worth back-patching a specific fix for.)

  Remove mistaken assignment to ConfigFileLineno that caused line counting after an include_dir directive to be completely wrong.

  Add missed failure check in AlterSystemSetConfigFile(). We don't really expect ParseConfigFp() to fail, but that's not an excuse for not checking.