path: root/src
...
* Avoid pin scan for replay of XLOG_BTREE_VACUUM (Simon Riggs, 2016-01-09)

    Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed.

    This commit skips the “pin scan” that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here.

    The current commit still performs the pin scan for toast indexes, though this can also be avoided if we recheck scans on toast indexes. A later patch will address this.

    No tests included. Manual tests using an additional patch to view WAL records and their timing have shown that the change in WAL records and their handling has successfully reduced replication delay.

* Revert "Blind attempt at a Cygwin fix"Alvaro Herrera2016-01-08
| | | | | | This reverts commit e9282e953205a2f3125fc8d1052bc01cb77cd2a3, which blew up in a pretty spectacular way. Re-introduce the original code while we search for a real fix.
* Fix typo in comment (Magnus Hagander, 2016-01-08)

    Tatsuro Yamada

* Remove redundant include of TestLib (Magnus Hagander, 2016-01-08)

    Kyotaro HORIGUCHI

* Marginal cleanup of GROUPING SETS code in grouping_planner(). (Tom Lane, 2016-01-07)

    Improve comments and make it a shade less messy. I think we might want to move all of this somewhere else later, but it needs to be more readable first.

    In passing, re-pgindent the file, affecting some recently-added comments concerning parallel query planning.

* Delay creation of subplan tlist until after create_plan(). (Tom Lane, 2016-01-07)

    Once upon a time it was necessary for grouping_planner() to determine the tlist it wanted from the scan/join plan subtree before it called query_planner(), because query_planner() would actually make a Plan using that. But we refactored things a long time ago to delay construction of the Plan tree till later, so there's no need to build that tlist until (and indeed unless) we're ready to plaster it onto the Plan. The only thing query_planner() cares about is what Vars are going to be needed for the tlist, and it can perfectly well get that by looking at the real tlist rather than some masticated version.

    Well, actually, there is one minor glitch in that argument, which is that make_subplanTargetList also adds Vars appearing only in HAVING to the tlist it produces. So now we have to account for HAVING explicitly in build_base_rel_tlists. But that just adds a few lines of code, and I doubt it moves the needle much on processing time; we might be doing pull_var_clause() twice on the havingQual, but before we had it scanning dummy tlist entries instead.

    This is a very small down payment on rationalizing grouping_planner enough so it can be refactored.

* Fix order of arguments to va_start() (Alvaro Herrera, 2016-01-07)

* Fix unobvious interaction between -X switch and subdirectory creation. (Tom Lane, 2016-01-07)

    Turns out the only reason initdb -X worked is that pg_mkdir_p won't whine if you point it at something that's a symlink to a directory. Otherwise, the attempt to create pg_xlog/ just like all the other subdirectories would have failed. Let's be a little more explicit about what's happening.

    Oversight in my patch for bug #13853 (mea culpa for not testing -X ...)

* Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA. (Tom Lane, 2016-01-07)

    When we're creating subdirectories of PGDATA during initdb, we know darn well that the parent directory exists (or should exist) and that the new subdirectory doesn't (or shouldn't). There is therefore no need to use anything more complicated than mkdir(). Using pg_mkdir_p() just opens us up to unexpected failure modes, such as the one exhibited in bug #13853 from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there, but it is clear that we didn't need to be trying to create parent directories in the first place. We're not even saving any code, as proven by the fact that this patch nets out at minus five lines.

    Since this is a response to a field bug report, back-patch to all branches.

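To illustrate the approach this entry argues for, here is a minimal standalone sketch (not the actual initdb code; the function and directory names are hypothetical): a plain mkdir() call for a subdirectory whose parent is known to exist, reporting exactly what went wrong instead of silently trying to create missing parents.

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Create one subdirectory; the parent is assumed to exist already. */
static int
make_subdirectory(const char *path)
{
    if (mkdir(path, S_IRWXU) < 0)
    {
        fprintf(stderr, "could not create directory \"%s\": %s\n",
                path, strerror(errno));
        return -1;
    }
    return 0;
}

int
main(void)
{
    /* "pg_xlog" is just an illustrative subdirectory name here */
    return make_subdirectory("pg_xlog") == 0 ? 0 : 1;
}
```
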
* pgstat: add WAL receiver status view & SRF (Alvaro Herrera, 2016-01-07)

    This new view provides insight into the state of a running WAL receiver in a hot standby node. The information returned includes the PID of the WAL receiver process, its status (stopped, starting, streaming, etc.), start LSN and TLI, last received LSN and TLI, timestamp of last message send and receipt, latest end-of-WAL LSN and time, and the name of the slot (if any). Access to the detailed data is only granted to superusers; others only get the PID.

    Author: Michael Paquier
    Reviewer: Haribabu Kommi

* Remove vestigial CHECK_FOR_INTERRUPTS call. (Tom Lane, 2016-01-07)

    Commit e710b65c inserted code in md5_crypt_verify to disable and later re-enable interrupts, with a CHECK_FOR_INTERRUPTS call as part of the second step, to process any interrupts that had been held off. Commit 6647248e removed the interrupt disable/re-enable code, but left behind the CHECK_FOR_INTERRUPTS, even though this is now an entirely random, pointless place for one. md5_crypt_verify doesn't run long enough to need such a check, and if it did, this would still be the wrong place to put one.

* Provide more detail in postmaster log for password authentication failures. (Tom Lane, 2016-01-07)

    We tell people to examine the postmaster log if they're unsure why they are getting auth failures, but actually only a few relatively-uncommon failure cases were given their own log detail messages in commit 64e43c59b817a78d. Expand on that so that every failure case detected within md5_crypt_verify gets a specific log detail message. This should cover pretty much every ordinary password auth failure cause.

    So far I've not noticed user demand for a similar level of auth detail for the other auth methods, but sooner or later somebody might want to work on them. This is not that patch, though.

* Windows: Make pg_ctl reliably detect service status (Alvaro Herrera, 2016-01-07)

    pg_ctl is using isatty() to verify whether the process is running in a terminal, and if not it sends its output to Windows' Event Log ... which does the wrong thing when the output has been redirected to a pipe, as reported in bug #13592.

    To fix, make pg_ctl use the code we already have to detect service-ness: in the master branch, move src/backend/port/win32/security.c to src/port (with suitable tweaks so that it runs properly in backend and frontend environments); pg_ctl already has access to pgport so it Just Works. In older branches, that's likely to cause trouble, so instead duplicate the required code in pg_ctl.c.

    Author: Michael Paquier
    Bug report and diagnosis: Egon Kocjan
    Backpatch: all supported branches

* In initdb's post-bootstrap phase, drop temp tables explicitly. (Tom Lane, 2016-01-06)

    Although these temp tables will get removed from template1 at the end of the standalone-backend run, that's too late to keep them from getting copied into the template0 and postgres databases, now that we use only a single backend run for the whole sequence. While no real harm is done by the extra copies (since they'd be deleted on first use of the temp schema), it's still unsightly, and it would mean some wasted cycles for every database creation for the life of the installation.

    Oversight in commit c4a8812cf64b1426. Noticed by Amit Langote.

* Comment typo fix. (Tom Lane, 2016-01-06)

    Per Amit Langote.

* Add scale(numeric) (Alvaro Herrera, 2016-01-05)

    Author: Marko Tiikkaja

* Remove some ancient and unmaintained encoding-conversion test cruft. (Tom Lane, 2016-01-05)

    In commit 921191912c48a68d I claimed that we weren't testing encoding conversion functions, but further poking around reveals that we did have an equivalent though hard-wired set of tests in conversion.sql. AFAICS there is no advantage to doing it like that as compared to letting the catalog contents drive the test, so let the opr_sanity addition stand and remove the now-redundant tests in conversion.sql.

    Also, remove some infrastructure in src/backend/utils/mb/conversion_procs for building conversion.sql's list of tests. That was unmaintained, and had not corresponded to the actual contents of conversion.sql since 2007 or perhaps even further back.

* Sort $(wildcard) output where needed for reproducible build output. (Tom Lane, 2016-01-05)

    The order of inclusion of .o files makes a difference in linker output; not a functional difference, but still a bitwise difference, which annoys some packagers who would like reproducible builds.

    Report and patch by Christoph Berg

* Make pg_receivexlog silent with 9.3 and older servers (Alvaro Herrera, 2016-01-05)

    A pointless and confusing error message is shown to the user when attempting to identify a 9.3 or older remote server with a 9.5/9.6 pg_receivexlog, because the return signature of IDENTIFY_SYSTEM was changed in 9.4. There's no good reason for the warning message, so shuffle code around to keep it quiet. (pg_recvlogical is also affected by this commit, but since it obviously cannot work with 9.3 that doesn't actually matter much.) Backpatch to 9.5.

    Reported by Marco Nenciarini, who also wrote the initial patch. Further tweaked by Robert Haas and Fujii Masao; reviewed by Michael Paquier and Craig Ringer.

* In opr_sanity regression test, check for unexpected uses of cstring. (Tom Lane, 2016-01-05)

    In light of commit ea0d494dae0d3d6f, it seems like a good idea to add a regression test that will complain about random functions taking or returning cstring. Only I/O support functions and encoding conversion functions should be declared that way.

    While at it, add some checks that encoding conversion functions are declared properly. Since pg_conversion isn't populated manually, it's not quite as necessary to check its contents as it is for catalogs like pg_proc; but one thing we definitely have not tested in the past is whether the identified conproc for a conversion actually does that conversion vs. some other one.

* Make the to_reg*() functions accept text not cstring. (Tom Lane, 2016-01-05)

    Using cstring as the input type was a poor decision, because that's not really a full-fledged type. In particular, it lacks implicit coercions from text or varchar, meaning that usages like to_regproc('foo'||'bar') wouldn't work; basically the only case that did work without explicit casting was a simple literal constant argument.

    The lack of field complaints about this suggests that hardly anyone is using these functions, so hopefully fixing it won't cause much of a compatibility problem. They've only been there since 9.4, anyway.

    Petr Korobeinikov

* Make pg_shseclabel available in early backend startup (Alvaro Herrera, 2016-01-05)

    While the in-core authentication mechanism doesn't need to access pg_shseclabel at all, it's reasonable to think that an authentication hook will want to look at the label for the role logging in, or for rows in other catalogs used during the authentication phase of startup.

    Catalog version bumped, because this changes the "is nailed" status for pg_shseclabel.

    Author: Adam Brightwell

* Convert psql's tab completion for backslash commands to the new style. (Tom Lane, 2016-01-05)

    This requires adding some more infrastructure to handle both case-sensitive and case-insensitive matching, as well as the ability to match a prefix of a previous word. So it ends up being about a wash line-count-wise, but it's just as big a readability win here as in the SQL tab completion rules.

    Michael Paquier, some adjustments by me

* In psql's tab completion, change most TailMatches patterns to Matches. (Tom Lane, 2016-01-04)

    In the refactoring in commit d37b816dc9e8f976c8913296781e08cbd45c5af1, we mostly kept to the original design whereby only the last few words on the line were matched to identify a completable pattern. However, after commit d854118c8df8c413d069f7e88bb01b9e18e4c8ed, there's really no reason to do it like that: where it's sensible, we can use patterns that expect to match the entire input line.

    And mostly, it's sensible. Matching the entire line greatly reduces the odds of a false match that leads to offering irrelevant completions. Moreover (though I've not tried to measure this), it should make tab completion faster since many of the patterns will be discarded after a single integer comparison that finds that the wrong number of words appear on the line.

    There are certain identifiable places where we still need to use TailMatches because the statement in question is allowed to appear embedded in a larger statement. These are just a small minority of the existing patterns, though, so the benefit of switching where possible is large.

    It's possible that this patch has removed some within-line matching behaviors that are in fact desirable, but we can put those back when we get complaints. Most of the removed behaviors are certainly silly.

    Michael Paquier, with some further adjustments by me

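To make the reasoning above concrete, here is a hedged standalone sketch (not psql's actual Matches/TailMatches macros; all names here are made up) contrasting whole-line matching, which can reject a pattern with a single word-count comparison, with tail matching, which only looks at the trailing words.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>

/* Does the pattern match the whole word list, word for word? */
static bool
matches_whole_line(const char **words, int nwords,
                   const char **pattern, int npat)
{
    if (nwords != npat)         /* one integer comparison discards most patterns */
        return false;
    for (int i = 0; i < npat; i++)
    {
        if (strcmp(pattern[i], "ANY") != 0 &&   /* "ANY" acts as a wildcard here */
            strcasecmp(words[i], pattern[i]) != 0)
            return false;
    }
    return true;
}

/* Does the pattern match just the last npat words of the line? */
static bool
matches_tail(const char **words, int nwords,
             const char **pattern, int npat)
{
    if (nwords < npat)
        return false;
    return matches_whole_line(words + (nwords - npat), npat, pattern, npat);
}

int
main(void)
{
    const char *line[] = {"ALTER", "TABLE", "foo", "RENAME"};
    const char *pat[]  = {"TABLE", "ANY", "RENAME"};

    printf("whole line: %d\n", matches_whole_line(line, 4, pat, 3)); /* 0: word counts differ */
    printf("tail:       %d\n", matches_tail(line, 4, pat, 3));       /* 1: trailing words match */
    return 0;
}
```
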
* Adjust behavior of row_security GUC to match the docs. (Tom Lane, 2016-01-04)

    Some time back we agreed that row_security=off should not be a way to bypass RLS entirely, but only a way to get an error if it was being applied. However, the code failed to act that way for table owners. Per discussion, this is a must-fix bug for 9.5.0.

    Adjust the logic in rls.c to behave as expected; also, modify the error message to be more consistent with the new interpretation. The regression tests need minor corrections as well. Also update the comments about row_security in ddl.sgml to be correct. (The official description of the GUC in config.sgml is already correct.)

    I failed to resist the temptation to do some other very minor cleanup as well, such as getting rid of a duplicate extern declaration.

* Fix typo in comment. (Robert Haas, 2016-01-04)

    Masahiko Sawada

* Fix regrole and regnamespace output functions to do quoting, too. (Tom Lane, 2016-01-04)

    We discussed this but somehow failed to implement it...

* Fix regrole and regnamespace types to honor quoting like other reg* types. (Tom Lane, 2016-01-04)

    Aside from any consistency arguments, this is logically necessary because the I/O functions for these types also handle numeric OID values. Without a quoting rule it is impossible to distinguish numeric OIDs from role or namespace names that happen to contain only digits.

    Also change the to_regrole and to_regnamespace functions to dequote their arguments. While not logically essential, this seems like a good idea since the other to_reg* functions do it. Anyone who really wants raw lookup of an uninterpreted name can fall back on the time-honored solution of (SELECT oid FROM pg_namespace WHERE nspname = whatever).

    Report and patch by Jim Nasby, reviewed by Michael Paquier

* Fix bogus lock release in RemovePolicyById and RemoveRoleFromObjectPolicy. (Tom Lane, 2016-01-03)

    Can't release the AccessExclusiveLock on the target table until commit. Otherwise there is a race condition whereby other backends might service our cache invalidation signals before they can actually see the updated catalog rows.

    Just to add insult to injury, RemovePolicyById was closing the rel (with incorrect lock drop) and then passing the now-dangling rel pointer to CacheInvalidateRelcache. Probably the reason this doesn't fall over on CLOBBER_CACHE buildfarm members is that some outer level of the DROP logic is still holding the rel open ... but it'd have bit us on the arse eventually, no doubt.

* Guard against null arguments in binary_upgrade_create_empty_extension(). (Tom Lane, 2016-01-03)

    The CHECK_IS_BINARY_UPGRADE macro is not sufficient security protection if we're going to dereference pass-by-reference arguments before it. But in any case we really need to explicitly check PG_ARGISNULL for all the arguments of a non-strict function, not only the ones we expect null values for.

    Oversight in commits 30982be4e5019684e1772dd9170aaa53f5a8e894 and f92fc4c95ddcc25978354a8248d3df22269201bc. Found by Andreas Seltenreich. (The other usages in pg_upgrade_support.c seem safe.)

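As background, here is a minimal extension-style sketch of the pattern this entry describes; the function name and message are hypothetical and this is not the pg_upgrade_support.c code. The point is simply that a non-strict SQL-callable C function must test PG_ARGISNULL on every argument before dereferencing anything passed by reference.

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(demo_nonstrict_func);   /* hypothetical function name */

Datum
demo_nonstrict_func(PG_FUNCTION_ARGS)
{
    text   *extname;

    /* check for NULL before touching any pass-by-reference argument */
    if (PG_ARGISNULL(0))
        ereport(ERROR,
                (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                 errmsg("extension name cannot be null")));

    extname = PG_GETARG_TEXT_PP(0);
    elog(NOTICE, "got extension name \"%s\"", text_to_cstring(extname));

    PG_RETURN_VOID();
}
```
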
* Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition. (Tom Lane, 2016-01-03)

    pgwin32_recv() has treated a non-error return of zero bytes from WSARecv() as being a reason to block ever since the current implementation was introduced in commit a4c40f140d23cefb. However, so far as one can tell from Microsoft's documentation, that is just wrong: what it means is graceful connection closure (in stream protocols) or receipt of a zero-length message (in message protocols), and neither case should result in blocking here.

    The only reason the code worked at all was that control then fell into the retry loop, which did *not* treat zero bytes specially, so we'd get out after only wasting some cycles. But as of 9.5 we do not normally reach the retry loop and so the bug is exposed, as reported by Shay Rojansky and diagnosed by Andres Freund.

    Remove the unnecessary test on the byte count, and rearrange the code in the retry loop so that it looks identical to the initial sequence.

    Back-patch to 9.5. The code is wrong all the way back, AFAICS, but since it's relatively harmless in earlier branches we'll leave it alone.

* Teach pg_dump to quote reloption values safely. (Tom Lane, 2016-01-02)

    Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected the fact that several code paths in pg_dump were printing reloptions values that had not gotten massaged by ruleutils. Apply essentially the same quoting logic in those places, too.

* Fix overly-strict assertions in spgtextproc.c. (Tom Lane, 2016-01-02)

    spg_text_inner_consistent is capable of reconstructing an empty string to pass down to the next index level; this happens if we have an empty string coming in, no prefix, and a dummy node label. (In practice, what is needed to trigger that is insertion of a whole bunch of empty-string values.) Then, we will arrive at the next level with in->level == 0 and a non-NULL (but zero length) in->reconstructedValue, which is valid but the Assert tests weren't expecting it.

    Per report from Andreas Seltenreich. This has no impact in non-Assert builds, so should not be a problem in production, but back-patch to all affected branches anyway.

    In passing, remove a couple of useless variable initializations and shorten the code by not duplicating DatumGetPointer() calls.

* Make copyright.pl cope with nonstandard case choices in copyright notices. (Tom Lane, 2016-01-02)

    The need for this is shown by the files it missed in Bruce's recent run. I fixed it so that it will actually adjust the case when needed.

    In passing, also make it skip .po files, since those will just get overwritten anyway from the translation repository.

* Update copyright for 2016 (Tom Lane, 2016-01-02)

    On closer inspection, the reason copyright.pl was missing files is that it is looking for 'Copyright (c)' and they had 'Copyright (C)'. Fix that, and update a couple more that grepping for that revealed.

* Update copyright for 2016 (Tom Lane, 2016-01-02)

    Manually fix some copyright lines missed by the automated script.

* Update copyright for 2016 (Bruce Momjian, 2016-01-02)

    Backpatch certain files through 9.1

* Cover heap_page_prune_opt()'s cleanup lock tactic in README. (Noah Misch, 2016-01-01)

    Jeff Janes, reviewed by Jim Nasby.

* Teach flatten_reloptions() to quote option values safely. (Tom Lane, 2016-01-01)

    flatten_reloptions() supposed that it didn't really need to do anything beyond inserting commas between reloption array elements. However, in principle the value of a reloption could be nearly anything, since the grammar allows a quoted string there. Any restrictions on it would come from validity checking appropriate to the particular option, if any. A reloption value that isn't a simple identifier or number could thus lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. We've gotten away with not worrying about this so far with the core-supported reloptions, but extensions might allow reloption values that cause trouble, as in bug #13840 from Kouhei Sutou.

    To fix, split the reloption array elements explicitly, and then convert any value that doesn't look like a safe identifier to a string literal. (The details of the quoting rule could be debated, but this way is safe and requires little code.) While we're at it, also quote reloption names if they're not safe identifiers; that may not be a likely problem in the field, but we might as well try to be bulletproof here.

    It's been like this for a long time, so back-patch to all supported branches.

    Kouhei Sutou, adjusted some by me

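A minimal standalone sketch of the quoting idea described above (an illustration under stated assumptions, not the actual flatten_reloptions() code): emit a value as-is only when it looks like a safe unquoted token, otherwise wrap it in single quotes and double any embedded quotes.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

/* Does the value look like a simple identifier or number? (assumed rule) */
static bool
value_is_safe_unquoted(const char *value)
{
    if (*value == '\0')
        return false;
    for (const char *p = value; *p; p++)
    {
        if (!(islower((unsigned char) *p) || isdigit((unsigned char) *p) ||
              *p == '_' || *p == '.' || *p == '-'))
            return false;
    }
    return true;
}

/* Print the value, quoting it as a string literal when necessary. */
static void
append_reloption_value(FILE *out, const char *value)
{
    if (value_is_safe_unquoted(value))
    {
        fputs(value, out);
        return;
    }
    fputc('\'', out);
    for (const char *p = value; *p; p++)
    {
        if (*p == '\'')
            fputc('\'', out);   /* double embedded quote characters */
        fputc(*p, out);
    }
    fputc('\'', out);
}

int
main(void)
{
    const char *samples[] = {"90", "simple_value", "needs 'quoting'"};

    for (int i = 0; i < 3; i++)
    {
        append_reloption_value(stdout, samples[i]);
        putchar('\n');
    }
    return 0;
}
```
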
* Add some more defenses against silly estimates to gincostestimate(). (Tom Lane, 2016-01-01)

    A report from Andy Colson showed that gincostestimate() was not being nearly paranoid enough about whether to believe the statistics it finds in the index metapage. The problem is that the metapage stats (other than the pending-pages count) are only updated by VACUUM, and in the worst case could still reflect the index's original empty state even when it has grown to many entries. We attempted to deal with that by scaling up the stats to match the current index size, but if nEntries is zero then scaling it up still gives zero. Moreover, the proportion of pages that are entry pages vs. data pages vs. pending pages is unlikely to be estimated very well by scaling if the index is now orders of magnitude larger than before.

    We can improve matters by expanding the use of the rule-of-thumb estimates I introduced in commit 7fb008c5ee59b040: if the index has grown by more than a cutoff amount (here set at 4X growth) since VACUUM, then use the rule-of-thumb numbers instead of scaling. This might not be exactly right but it seems much less likely to produce insane estimates. I also improved both the scaling estimate and the rule-of-thumb estimate to account for numPendingPages, since it's reasonable to expect that that is accurate in any case, and certainly pages that are in the pending list are not either entry or data pages.

    As a somewhat separate issue, adjust the estimation equations that are concerned with extra fetches for partial-match searches. These equations suppose that a fraction partialEntries / numEntries of the entry and data pages will be visited as a consequence of a partial-match search. Now, it's physically impossible for that fraction to exceed one, but our estimate of partialEntries is mostly bunk, and our estimate of numEntries isn't exactly gospel either, so we could arrive at a silly value. In the example presented by Andy we were coming out with a value of 100, leading to insane cost estimates. Clamp the fraction to one to avoid that.

    Like the previous patch, back-patch to all supported branches; this problem can be demonstrated in one form or another in all of them.

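To make the two defenses above easier to follow, here is a hedged standalone sketch with made-up structures and numbers; it illustrates the idea (4X growth cutoff, then clamping the partial-match fraction) and is not the actual gincostestimate() code.

```c
#include <stdio.h>

typedef struct
{
    double numTotalPages;   /* stats as recorded at the last VACUUM */
    double numEntryPages;
    double numDataPages;
    double numEntries;
} GinStatsSketch;

int
main(void)
{
    GinStatsSketch stats = {100.0, 60.0, 35.0, 5000.0};
    double  curPages = 1000.0;        /* index is now 10x larger than at VACUUM */
    double  partialEntries = 500000.0;
    double  partialFraction;

    if (curPages > 4.0 * stats.numTotalPages)
    {
        /* grew too much since VACUUM: fall back to rule-of-thumb splits */
        stats.numEntryPages = curPages * 0.90;
        stats.numDataPages  = curPages * 0.10;
        stats.numEntries    = curPages * 100.0;   /* illustrative guess, not the real constant */
    }
    else
    {
        /* otherwise, scale the recorded stats up to the current size */
        double  scale = curPages / stats.numTotalPages;

        stats.numEntryPages *= scale;
        stats.numDataPages  *= scale;
        stats.numEntries    *= scale;
    }

    /* the fraction of pages visited by a partial match can never exceed one */
    partialFraction = partialEntries / stats.numEntries;
    if (partialFraction > 1.0)
        partialFraction = 1.0;

    printf("entry pages = %.0f, data pages = %.0f, partial-match fraction = %.2f\n",
           stats.numEntryPages, stats.numDataPages, partialFraction);
    return 0;
}
```
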
* Fix comments about WAL rule "write xlog before data" versus pg_multixact. (Noah Misch, 2016-01-01)

    Recovery does not achieve its goal of zeroing all pg_multixact entries whose accompanying WAL records never reached disk. Remove that claim and justify its expendability. Detail the need for TrimMultiXact(), which has little in common with the TrimCLOG() rationale. Merge two tightly-related comments. Stop presenting pg_multixact as specific to heap_lock_tuple(); PostgreSQL 9.3 extended its use to heap_update().

    Noticed while investigating a report from Andres Freund.

* Fix ALTER OPERATOR to update dependencies properly. (Tom Lane, 2015-12-31)

    Fix an oversight in commit 321eed5f0f7563a0: replacing an operator's selectivity functions needs to result in a corresponding update in pg_depend. We have a function that can handle that, but it was not called by AlterOperator().

    To fix this without enlarging pg_operator.h's #include list beyond what clients can safely include, split off the function definitions into a new file pg_operator_fn.h, similarly to what we've done for some other catalog header files. It's not entirely clear whether any client-side code needs to include pg_operator.h, but it seems prudent to assume that there is some such code somewhere.

* Dept of second thoughts: the !scan_all exit mustn't increase scanned_pages. (Tom Lane, 2015-12-30)

    In the extreme edge case where contended pages are the only ones that escape being scanned, the previous commit would have allowed us to think that relfrozenxid could be advanced, which is exactly wrong.

* Avoid useless truncation attempts during VACUUM. (Tom Lane, 2015-12-30)

    VACUUM can skip heap pages altogether when there's a run of consecutive pages that are all-visible according to the visibility map. This causes it to not update its nonempty_pages count, just as if those pages were empty, which means that at the end we will think they are candidates for deletion. Thus, we may take the table's AccessExclusive lock only to find that no pages are really truncatable. This usually causes no real problems on a master server, thanks to the lock being acquired only conditionally; but on hot-standby servers, the same lock must be acquired unconditionally which can result in unnecessary query cancellations.

    To improve matters, force examination of the table's last page whenever we reach there with a nonempty_pages count that would allow a truncation attempt. If it's not empty, we'll advance nonempty_pages and thereby prevent the truncation attempt. If we are unable to acquire cleanup lock on that page, there's no need to force it, unless we're doing an anti-wraparound vacuum. We can just check for tuples with a shared buffer lock and then give up. (When we are doing an anti-wraparound vacuum, and decide it's okay to skip the page because it contains no freezable tuples, this patch still improves matters because nonempty_pages is properly updated, which it was not before.)

    Since only the last page is special-cased in this way, we might attempt a truncation that will release many fewer pages than the normal heuristic would suggest; at worst, only one page would be truncated. But that seems all right, because the situation won't repeat during the next vacuum. The real problem with the old logic is that the useless truncation attempt happens every time we vacuum, so long as the state of the last few dozen pages doesn't change.

    This is a longstanding deficiency, but since the consequences aren't very severe in most scenarios, I'm not going to risk a back-patch.

    Jeff Janes and Tom Lane

* Add some comments about division of labor between rewriter and planner. (Tom Lane, 2015-12-29)

    The rationale for the way targetlist processing is done wasn't clearly stated anywhere, and I for one had forgotten some of the details. Having just painfully re-learned them, add some breadcrumbs for the next person.

* Put back one copyObject() in rewriteTargetView(). (Tom Lane, 2015-12-29)

    Commit 6f8cb1e23485bd6d tried to centralize rewriteTargetView's copying of a target view's Query struct. However, it ignored the fact that the jointree->quals field was used twice. This only accidentally failed to fail immediately because the same ChangeVarNodes mutation is applied in both cases, so that we end up with logically identical expression trees for both uses (and, as the code stands, the second ChangeVarNodes call actually does nothing). However, we end up linking *physically* identical expression trees into both an RTE's securityQuals list and the WithCheckOption list. That's pretty dangerous, mainly because prepsecurity.c is utterly cavalier about further munging such structures without copying them first.

    There may be no live bug in HEAD as a consequence of the fact that we apply preprocess_expression in between here and prepsecurity.c, and that will make a copy of the tree anyway. Or it may just be that the regression tests happen to not trip over it. (I noticed this only because things fell over pretty badly when I tried to relocate the planner's call of expand_security_quals to before expression preprocessing.) In any case it's very fragile because if anyone tried to make the securityQuals and WithCheckOption trees diverge before we reach preprocess_expression, it would not work. The fact that the current code will preprocess securityQuals and WithCheckOptions lists at completely different times in different query levels does nothing to increase my trust that that can't happen.

    In view of the fact that 9.5.0 is almost upon us and the aforesaid commit has seen exactly zero field testing, the prudent course is to make an extra copy of the quals so that the behavior is not different from what has been in the field during beta.

* Rename (new|old)estCommitTs to (new|old)estCommitTsXid (Joe Conway, 2015-12-28)

    The variables newestCommitTs and oldestCommitTs sound as if they are timestamps, but in fact they are the transaction Ids that correspond to the newest and oldest timestamps rather than the actual timestamps. Rename these variables to reflect that they are actually xids: to wit newestCommitTsXid and oldestCommitTsXid respectively. Also modify related code in a similar fashion, particularly the user-facing output emitted by pg_controldata and pg_resetxlog.

    Complaint and patch by me, review by Tom Lane and Alvaro Herrera. Backpatch to 9.5 where these variables were first introduced.

* Fix omission of -X (--no-psqlrc) in some psql invocations. (Tom Lane, 2015-12-28)

    As of commit d5563d7df, psql -c no longer implies -X, but not all of our regression testing scripts had gotten that memo. To ensure consistency of results across different developers, make sure that *all* invocations of psql in all scripts in our tree use -X, even where this is not what previously happened.

    Michael Paquier and Tom Lane

* Fix translation domain in pg_basebackup (Alvaro Herrera, 2015-12-28)

    For some reason, we've been overlooking the fact that pg_receivexlog and pg_recvlogical are using wrong translation domains all along, so their output hasn't ever been translated. The right domain is pg_basebackup, not their own executable names.

    Noticed by Ioseph Kim, who's been working on the Korean translation.

    Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.

* Include typmod when complaining about inherited column type mismatches. (Tom Lane, 2015-12-26)

    MergeAttributes() rejects cases where columns to be merged have the same type but different typmod, which is correct; but the error message it printed didn't show either typmod, which is unhelpful. Changing this requires using format_type_with_typemod() in place of TypeNameToString(), which will have some minor side effects on the way some type names are printed, but on balance this is an improvement: the old code sometimes printed one type according to one set of rules and the other type according to the other set, which could be confusing in its own way.

    Oddly, there were no regression test cases covering any of this behavior, so add some.

    Complaint and fix by Amit Langote
