...
* Remove wrong CloseHandle() call.  (Noah Misch, 2016-12-03)

  In accordance with its own documentation, invoke CloseHandle() only when
  directed in the documentation for the function that furnished the handle.
  GetModuleHandle() does not so direct.  We have been issuing this call only
  in the rare event that a CRT DLL contains no "_putenv" symbol, so lack of
  bug reports is uninformative.  Back-patch to 9.2 (all supported versions).

  Christian Ullrich, reviewed by Michael Paquier.
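  As an illustration of the rule (a hedged sketch, not the committed code;
  the file name and symbols are examples only): GetModuleHandle() does not
  take a reference on the module, so its result must not be passed to
  CloseHandle(), unlike handles from functions such as CreateFile() whose
  documentation does direct callers to close them.

      #include <windows.h>
      #include <stdio.h>

      int
      main(void)
      {
          /* No reference taken; do NOT call CloseHandle() on this. */
          HMODULE crt = GetModuleHandleA("msvcrt.dll");

          if (crt != NULL)
              printf("_putenv at %p\n",
                     (void *) GetProcAddress(crt, "_putenv"));

          /* Contrast: CreateFile() handles are documented to need closing. */
          HANDLE f = CreateFileA("example.txt", GENERIC_READ, 0, NULL,
                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

          if (f != INVALID_HANDLE_VALUE)
              CloseHandle(f);
          return 0;
      }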
* Refine win32env.c cosmetics.  (Noah Misch, 2016-12-03)

  Replace use of plain 0 as a null pointer constant.  In comments, update
  terminology and lessen redundancy.  Back-patch to 9.2 (all supported
  versions) for the convenience of back-patching the next two commits.

  Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
  Michael Paquier.
* Permit dump/reload of not-too-large >1GB tuples  (Alvaro Herrera, 2016-12-02)

  Our documentation states that our maximum field size is 1 GB, and that our
  maximum row size is 1.6 TB.  However, while this might be attainable in
  theory with enough contortions, it is not workable in practice; for
  starters, pg_dump fails to dump tables containing rows larger than 1 GB,
  even if individual columns are well below the limit; and even if one does
  manage to manufacture a dump file containing a row that large, the server
  refuses to load it anyway.

  This commit enables dumping and reloading of such tuples, provided two
  conditions are met:

  1. no single column is larger than 1 GB (in output size -- for bytea this
     includes the formatting overhead)

  2. the whole row is not larger than 2 GB

  There are three related changes to enable this:

  a. StringInfo's API now has two additional functions that allow creating a
     string that grows beyond the typical 1GB limit (a "long" string).  ABI
     compatibility is maintained.  We still limit these strings to 2 GB,
     though, for reasons explained below.

  b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to
     emit rows longer than 1GB.

  c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for
     the input tuple, which means that large tuples are accepted on input.
     Note that at this point we do not apply any further limit to the input
     tuple size.

  The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
  length words to describe each row; and because the documentation is
  ambiguous on its signedness and libpq does consider it signed, we cannot
  use the highest-order bit.  Additionally, the StringInfo API uses "int"
  (which is 4 bytes wide on most platforms) in many places, so we'd need to
  change that API too in order to improve, which has lots of fallout.

  Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended,
  a necessary piece of infrastructure.  We could apply this to 9.4 with very
  minimal additional effort, but any further than that would require
  backpatching "huge" allocations too.

  This is the largest set of changes we could find that can be back-patched
  without breaking compatibility with existing systems.  Fixing a bigger set
  of problems (for example, dumping tuples bigger than 2GB, or dumping
  fields bigger than 1GB) would require changing the FE/BE protocol and/or
  changing the StringInfo API in an ABI-incompatible way, neither of which
  would be back-patchable.

  Authors: Daniel Vérité, Álvaro Herrera
  Reviewed by: Tomas Vondra
  Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
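  A hedged sketch of how the "long" StringInfo variant is meant to be used
  (assuming the two added constructors are makeLongStringInfo() and
  initLongStringInfo(), per the patch; the surrounding code is illustrative
  only):

      #include "postgres.h"
      #include "lib/stringinfo.h"

      static void
      emit_wide_copy_row(void)
      {
          /* may grow past the usual 1 GB ceiling, up to 2 GB */
          StringInfo  buf = makeLongStringInfo();

          /* COPY builds each output row into such a buffer before sending */
          appendStringInfoString(buf, "col1_data\t");
          appendStringInfoString(buf, "col2_data\n");

          /* ... hand buf->data / buf->len to the FE/BE layer ... */

          pfree(buf->data);
          pfree(buf);
      }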
* Doc: improve description of trim() and related functions.  (Tom Lane, 2016-11-30)

  Per bug #14441 from Mark Pether, the documentation could be misread,
  mainly because some of the examples failed to show what happens with a
  multicharacter "characters to trim" string.  Also, while the text
  description in most of these entries was fairly clear that the
  "characters" argument is a set of characters not a substring to match,
  some of them used variant wording that was a bit less clear.  trim()
  itself suffered from both deficiencies and was thus pretty
  misinterpretable.

  Also fix failure to explain which of LEADING/TRAILING/BOTH is the default.

  Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
* Fix bogus handling of JOIN_UNIQUE_OUTER/INNER cases for parallel joins.  (Tom Lane, 2016-11-29)

  consider_parallel_nestloop passed the wrong jointype down to its
  subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and
  it thought that it could pass paths other than
  innerrel->cheapest_total_path to create_unique_path, which
  create_unique_path is not on board with.  These bugs would lead to
  assertion failures or other errors, suggesting that this code path hasn't
  been tested much.

  hash_inner_and_outer's code for parallel join effectively treated both
  JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
  different reasons :-(), leading to incorrect plans that treated a semijoin
  as if it were a plain join.

  Michael Day submitted a test case demonstrating that hash_inner_and_outer
  failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
  review.

  Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
* Fix incorrect variable type in set_rel_consider_parallel().  (Tom Lane, 2016-11-29)

  func_parallel() returns char, not Oid.  Harmless, but still wrong.

  Amit Langote
* Clarify pg_dump -b documentation  (Stephen Frost, 2016-11-29)

  The documentation around the -b/--blobs option to pg_dump seemed to imply
  that it might be possible to add blobs to a "schema-only" dump or similar.
  Clarify that blobs are data and therefore will only be included in dumps
  where data is being included, even when -b is used to request blobs be
  included.

  The -b option has been around since before 9.2, so back-patch to all
  supported branches.

  Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
* Correct psql documentation example  (Stephen Frost, 2016-11-29)

  An example in the psql documentation showed a field name different from
  what the command actually produces.

  Pointed out by Fabien COELHO

  Back-patch to 9.6 where the example was added.

  Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre
* Fix get_relation_info name typo'ed in a comment  (Alvaro Herrera, 2016-11-28)

  Plus add a missing comment about this in get_relation_info itself.

  Author: Amit Langote
  Discussion: https://postgr.es/m/e46c0569-0449-afa0-e2fe-f3776e4b3fd5@lab.ntt.co.jp
* Fix busted tab-completion pattern for ALTER TABLE t ALTER c DROP ...  (Tom Lane, 2016-11-28)

  Evidently a thinko in commit 9b181b036.

  Kyotaro Horiguchi
* Mention server start requirement for ssl parameters  (Magnus Hagander, 2016-11-27)

  The documentation for three ssl-related parameters failed to specify that
  they can only be changed at server start; fix that.

  Michael Paquier
* Fix test about ignoring extension dependencies during extension scripts.  (Tom Lane, 2016-11-26)

  Commit 08dd23cec introduced an exception to the rule that extension member
  objects can only be dropped as part of dropping the whole extension,
  intending to allow such drops while running the extension's own creation
  or update scripts.  However, the exception was only applied at the
  outermost recursion level, because it was modeled on a pre-existing check
  to ignore dependencies on objects listed in pendingObjects.

  Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some
  cases we can reach an extension member object by recursion from another
  one.  (The bug concerns the serial-sequence case; I'm not sure if there
  are other cases, but there might well be.)  To fix, revert 08dd23cec's
  changes to findDependentObjects() and instead apply the creating_extension
  exception regardless of stack level.

  Having seen this example, I'm a bit suspicious that the pendingObjects
  logic is also wrong and such cases should likewise be allowed at any
  recursion level.  However, changing that would interact in subtle ways
  with the recursion logic (at least it would need to be moved to after the
  recursing-from check).  Given that the code's been like that a long time,
  I'll refrain from touching it without a clear example showing it's wrong.

  Back-patch to all active branches.  In HEAD and 9.6, where suitable test
  infrastructure exists, add a regression test case based on the bug report.

  Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
  Discussion: <13224.1480177514@sss.pgh.pa.us>
* Bring some clarity to the defaults for the xxx_flush_after parameters.  (Tom Lane, 2016-11-25)

  Instead of confusingly stating platform-dependent defaults for these
  parameters in the comments in postgresql.conf.sample (with the main entry
  being a lie on Linux), teach initdb to install the correct
  platform-dependent value in postgresql.conf, similarly to the way we
  handle other platform-dependent defaults.  This won't do anything for
  existing 9.6 installations, but since it's effectively only a
  documentation improvement, that seems OK.

  Since this requires initdb to have access to the default values, move the
  #define's for those to pg_config_manual.h; the original placement in
  bufmgr.h is unworkable because that file can't be included by frontend
  programs.

  Adjust the default value for wal_writer_flush_after so that it is 1MB
  regardless of XLOG_BLCKSZ, conforming to what is stated in both the SGML
  docs and postgresql.conf.  (We could alternatively make it scale with
  XLOG_BLCKSZ, but I'm not sure I see the point.)

  Copy-edit related SGML documentation.

  Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.

  Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
* Mark a query's topmost Paths parallel-unsafe if they will have initPlans.  (Tom Lane, 2016-11-25)

  Andreas Seltenreich found another case where we were being too optimistic
  about allowing a plan to be considered parallelizable despite it
  containing initPlans.  It seems like the real issue here is that if we
  know we are going to tack initPlans onto the topmost Plan node for a
  subquery, we had better mark that subquery's result Paths as
  not-parallel-safe.  That fixes this problem and allows reversion of a
  kluge (added in commit 7b67a0a49 and extended in f24cf960d) to not trust
  the parallel_safe flag at top level.

  Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>
* Check for pending trigger events on far end when dropping an FK constraint.  (Tom Lane, 2016-11-25)

  When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
  we refuse the drop if there are any pending trigger events on the named
  table; this ensures that we won't remove the pg_trigger row that will be
  consulted by those events.  But we should make the same check for the
  referenced relation, else we might remove a due-to-be-referenced
  pg_trigger row for that relation too, resulting in "could not find
  trigger NNN" or "relation NNN has no triggers" errors at commit.  Per bug
  #14431 from Benjie Gillam.  Back-patch to all supported branches.

  Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
* Fix typo in comment  (Magnus Hagander, 2016-11-25)

  Thomas Munro
* Fix commit_ts for FrozenXid and BootstrapXid  (Alvaro Herrera, 2016-11-24)

  Previously, requesting commit timestamp for transactions
  FrozenTransactionId and BootstrapTransactionId resulted in an error.  But
  since those values can validly appear in committed tuples' Xmin, this
  behavior is unhelpful and error prone: each caller would have to
  special-case those values before requesting timestamp data for an Xid.  We
  already have a perfectly good interface for returning "the Xid you
  requested is too old for us to have commit TS data for it", so let's use
  that instead.

  Backpatch to 9.5, where commit timestamps appeared.

  Author: Craig Ringer
  Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
* Make sure ALTER TABLE preserves index tablespaces.  (Tom Lane, 2016-11-23)

  When rebuilding an existing index, ALTER TABLE correctly kept the physical
  file in the same tablespace, but it messed up the pg_class entry if the
  index had been in the database's default tablespace and
  "default_tablespace" was set to some non-default tablespace.  This led to
  an inaccessible index.

  Fix by fixing pg_get_indexdef_string() to always include a tablespace
  clause, whether or not the index is in the default tablespace.  The
  previous behavior was installed in commit 537e92e41, and I think it just
  wasn't thought through very clearly; certainly the possible effect of
  default_tablespace wasn't considered.  There's some risk in changing the
  behavior of this function, but there are no other call sites in the core
  code.  Even if it's being used by some third party extension, it's fairly
  hard to envision a usage that is okay with a tablespace clause being
  appended some of the time but can't handle it being appended all the time.

  Back-patch to all supported versions.

  Code fix by me, investigation and test cases by Michael Paquier.

  Discussion: <1479294998857-5930602.post@n3.nabble.com>
* Doc: in back branches, don't call it a row constructor if it isn't really.  (Tom Lane, 2016-11-22)

  Before commit 906bfcad7, we were not actually processing the righthand
  side of a multiple-column assignment in UPDATE as a row constructor: it
  was just a parenthesized list of expressions.  Call it that rather than
  risking confusion by people who would expect the documented behaviors of
  row constructors to apply.  Back-patch to 9.5; before that, the text
  correctly described the construct as a "list of independent expressions".

  Discussion: <16288.1479610770@sss.pgh.pa.us>
* Doc: improve documentation about composite-value usage.  (Tom Lane, 2016-11-22)

  Create a section specifically for the syntactic rules around whole-row
  variable usage, such as expansion of "foo.*".  This was previously
  documented only haphazardly, with some critical info buried in unexpected
  places like xfunc-sql-composite-functions.  Per repeated questions in
  different mailing lists.

  Discussion: <16288.1479610770@sss.pgh.pa.us>
* Doc: add a section in Part II concerning RETURNING.  (Tom Lane, 2016-11-22)

  There are assorted references to RETURNING in Part II, but nothing that
  would qualify as an explanation of the feature, which seems like an
  oversight considering how useful it is.  Add something.

  Noted while looking for a place to point a cross-reference to ...
* Make contrib/test_decoding regression tests safe for CZ locale.  (Tom Lane, 2016-11-21)

  A little COLLATE "C" goes a long way.

  Pavel Stehule, per suggestion from Craig Ringer

  Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
* Fix PGLC_localeconv() to handle errors better.  (Tom Lane, 2016-11-21)

  The code was intentionally not very careful about leaking strdup'd
  strings in case of an error.  That was forgivable probably, but it also
  failed to notice strdup() failures, which could lead to subsequent
  null-pointer-dereference crashes, since many callers unsurprisingly
  didn't check for null pointers in the struct lconv fields.  An even worse
  problem is that it could throw error while we were setlocale'd to a non-C
  locale, causing unwanted behavior in subsequent libc calls.

  Rewrite to ensure that we cannot throw elog(ERROR) until after we've
  restored the previous locale settings, or at least attempted to.  (I'm
  sorely tempted to make restore failure be a FATAL error, but will refrain
  for the moment.)  Having done that, it's not much more work to ensure
  that we clean up strdup'd storage on the way out, too.

  This code is substantially the same in all supported branches, so
  back-patch all the way.

  Michael Paquier and Tom Lane

  Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
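  A minimal sketch of the error-handling pattern described (my own
  illustration, not the committed code): note strdup() failures as they
  happen, but defer any elog(ERROR) until the previous locale has been
  restored.

      #include "postgres.h"

      #include <locale.h>
      #include <string.h>

      void
      load_monetary_fields_sketch(struct lconv *dst, const char *locname)
      {
          bool        failed = false;
          char       *save = strdup(setlocale(LC_MONETARY, NULL));

          if (save == NULL)
              elog(ERROR, "out of memory");   /* safe: locale unchanged */

          setlocale(LC_MONETARY, locname);
          dst->currency_symbol = strdup(localeconv()->currency_symbol);
          if (dst->currency_symbol == NULL)
              failed = true;                  /* remember; don't throw yet */

          /* restore FIRST, then complain */
          setlocale(LC_MONETARY, save);
          free(save);

          if (failed)
              elog(ERROR, "out of memory");
      }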
* Fix test for subplans in force-parallel mode.  (Tom Lane, 2016-11-21)

  We mustn't force parallel mode if the query has any subplans, since
  ExecSerializePlan doesn't transmit them to workers.  Testing
  top_plan->initPlan is inadequate because (1) there might be initPlans
  attached to lower plan nodes, and (2) non-initPlan subplans don't work
  either.  There's certainly room for improvement in those restrictions,
  but for the moment that's what we've got.

  Amit Kapila, per report from Andreas Seltenreich

  Discussion: <8737im6pmh.fsf@credativ.de>
* Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.  (Tom Lane, 2016-11-20)

  Because we use transformTargetList() for UPDATE as well as SELECT tlists,
  the code accidentally tried to expand a "*" reference into several
  columns.  This is nonsensical, because the UPDATE syntax provides exactly
  one target column to put the value into.  The immediate result was that
  transformUpdateTargetList() got confused and reported "UPDATE target
  count mismatch --- internal error".  It seems better to treat such a
  reference as a plain whole-row variable, as it would be in other
  contexts.  (This could produce useful results when the target column is
  of composite type.)

  Fix by tweaking transformTargetList() to perform *-expansion only
  conditionally, depending on its exprKind parameter.

  Back-patch to 9.3.  The problem exists further back, but a fix would be
  much more invasive before that, because transformTargetList() wasn't told
  what kind of list it was working on.  Doesn't seem worth the trouble
  given the lack of field reports.  (I only noticed it because I was
  checking the code while trying to improve the documentation about how we
  handle "foo.*".)

  Discussion: <4308.1479595330@sss.pgh.pa.us>
* Code review for GUC serialization/deserialization code.  (Tom Lane, 2016-11-19)

  The serialization code dumped core for a string-valued GUC whose value is
  NULL, which is a legal state.  The infrastructure isn't capable of
  transmitting that state exactly, but fortunately, transmitting an empty
  string instead should be close enough (compare, eg, commit e45e990e4).

  The code potentially underestimated the space required to format a
  real-valued variable, both because it made an unwarranted assumption that
  %g output would never be longer than %e output, and because it didn't
  count right even for %e format.  In practice this would pretty much
  always be masked by overestimates for other variables, but it's still
  wrong.

  Also fix boundary-case error in read_gucstate, incorrect handling of the
  case where guc_sourcefile is non-NULL but zero length (not clear that can
  happen, but if it did, this code would get totally confused), and
  confusingly useless check for a NULL result from read_gucstate.

  Andreas Seltenreich discovered the core dump; other issues noted while
  reading nearby code.  Back-patch to 9.5 where this code was introduced.

  Michael Paquier and Tom Lane

  Discussion: <871sy78wno.fsf@credativ.de>
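  The real-valued sizing hazard can be avoided entirely by measuring the
  formatted length instead of guessing it; a generic C sketch (not the
  committed fix):

      #include <stdio.h>
      #include <stdlib.h>

      char *
      format_double_checked(double v)
      {
          int     need = snprintf(NULL, 0, "%g", v);  /* exact length (C99) */
          char   *buf = malloc(need + 1);

          if (buf != NULL)
              snprintf(buf, need + 1, "%g", v);
          return buf;
      }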
* Improve pg_dump/pg_restore --create --if-exists logic.  (Tom Lane, 2016-11-17)

  Teach it not to complain if the dropStmt attached to an archive entry is
  actually spelled CREATE OR REPLACE VIEW, since that will happen due to an
  upcoming bug fix.

  Also, if it doesn't recognize a dropStmt, have it print a WARNING and
  then emit the dropStmt unmodified.  That seems like a much saner behavior
  than Assert'ing or dumping core due to a null-pointer dereference, which
  is what would happen before :-(.

  Back-patch to 9.4 where this option was introduced.

  Discussion: <19092.1479325184@sss.pgh.pa.us>
* Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases  (Alvaro Herrera, 2016-11-17)

  Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
  require complex interlocking that matched the requirements on the master.
  This required an O(N) operation that became a significant problem with
  large indexes, causing replication delays of seconds or in some cases
  minutes while the XLOG_BTREE_VACUUM was replayed.

  This commit skips the "pin scan" that was previously required, by
  observing in detail when and how it is safe to do so, with full
  documentation.  The pin scan is skipped only in replay; the VACUUM code
  path on master is not touched here.

  No tests included.  Manual tests using an additional patch to view WAL
  records and their timing have shown the change in WAL records and their
  handling has successfully reduced replication delay.

  This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
  by Simon Riggs, to branches 9.4 and 9.5.  No further backpatch is
  possible because this depends on catalog scans being MVCC.  I (Álvaro)
  additionally fixed a slight problem in the README, which explains why
  this touches the 9.6 and master branches.
* Allow DOS-style line endings in ~/.pgpass files.  (Tom Lane, 2016-11-15)

  On Windows, libc will mask \r\n line endings for us, since we read the
  password file in text mode.  But that doesn't happen on Unix.  People who
  share password files across both systems might have \r\n line endings in
  a file they use on Unix, so as a convenience, ignore trailing \r.

  Per gripe from Josh Berkus.

  In passing, put the existing check for empty line somewhere where it's
  actually useful, ie after stripping the newline not before.

  Vik Fearing, adjusted a bit by me

  Discussion: <0de37763-5843-b2cc-855e-5d0e5df25807@agliodbs.com>
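  A minimal sketch of the trimming order described (illustrative only):
  strip the trailing newline, then any trailing carriage return, and only
  afterwards test for an empty line.

      #include <stdio.h>
      #include <string.h>
      #include <stdbool.h>

      static bool
      read_pgpass_line(FILE *fp, char *buf, size_t bufsize)
      {
          size_t  len;

          if (fgets(buf, (int) bufsize, fp) == NULL)
              return false;

          len = strlen(buf);
          if (len > 0 && buf[len - 1] == '\n')
              buf[--len] = '\0';
          if (len > 0 && buf[len - 1] == '\r')   /* tolerate DOS endings */
              buf[--len] = '\0';

          return len > 0;   /* empty-line check after stripping, not before */
      }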
* Account for catalog snapshot in PGXACT->xmin updates.  (Tom Lane, 2016-11-15)

  The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting
  for whether MyPgXact->xmin could be cleared or advanced.  In normal
  transactions this was masked by the fact that the transaction snapshot
  would be older, but during backend startup and certain utility commands
  it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had
  been cleared, meaning that recently-deleted rows could be pruned even
  though this snapshot could still see them, causing unexpected catalog
  lookup failures.  This effect appears to be the explanation for a recent
  failure on buildfarm member piculet.

  To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever
  it is valid.

  In the previous logic, it was possible for the CatalogSnapshot to remain
  valid across waits for client input, but with this change that would mean
  it delays advance of global xmin in cases where it did not before.  To
  avoid possibly causing new table-bloat problems with clients that sit
  idle for long intervals, add code to invalidate the CatalogSnapshot
  before waiting for client input.  (When the backend is busy, it's
  unlikely that the CatalogSnapshot would be the oldest snap for very long,
  so we don't worry about forcing early invalidation of it otherwise.)

  In passing, remove the CatalogSnapshotStale flag in favor of using
  "CatalogSnapshot != NULL" to represent validity, as we do for the other
  special snapshots in snapmgr.c.  And improve some obsolete comments.

  No regression test because I don't know a deterministic way to cause this
  failure.  But the stress test shown in the original discussion provokes
  "cache lookup failed for relation 1255" within a few dozen seconds for me.

  Back-patch to 9.4 where MVCC catalog scans were introduced.  (Note: it's
  quite easy to produce similar failures with the same test case in
  branches before 9.4.  But MVCC catalog scans were supposed to fix that.)

  Discussion: <16447.1478818294@sss.pgh.pa.us>
* Fix typo in comment  (Magnus Hagander, 2016-11-14)

  The function was renamed in 908e23473, but the comment never learned
  about it.
* Fix duplication in ALTER MATERIALIZED VIEW synopsis  (Alvaro Herrera, 2016-11-14)

  Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis
  of ALTER MATERIALIZED VIEW as a possible "action" when it added a
  separate line for it in the main command listing, but failed to.  Repair.

  Backpatch to 9.4, like the aforementioned commit.
* Doc: remove obsolete example.  (Tom Lane, 2016-11-13)

  The documentation for ts_headline() recommends using a sub-select to
  avoid extra evaluations of ts_headline() in a query with ORDER BY+LIMIT.
  Since commit 9118d03a8 this contortionism is unnecessary, so remove the
  recommendation.

  Noted by Oleg Bartunov.

  Discussion: <CAF4Au4w6rrH_j1bvVhzpOsRiHCog7sGJ3LSX0tY8ZdwhHT88LQ@mail.gmail.com>
* Doc: fix data types of FuncCallContext's call_cntr and max_calls fields.  (Tom Lane, 2016-11-11)

  Commit 23a27b039 widened these from uint32 to uint64, but I overlooked
  that the documentation explicitly showed them as uint32.

  Per report from Vicky Vergara.

  Report: <20161111135422.8761.36733@wrigleys.postgresql.org>
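  For context, a minimal set-returning-function sketch using those fields
  (the function itself is hypothetical; the SRF macros and the uint64 field
  types come from funcapi.h):

      #include "postgres.h"
      #include "funcapi.h"

      PG_FUNCTION_INFO_V1(countup_sketch);

      Datum
      countup_sketch(PG_FUNCTION_ARGS)
      {
          FuncCallContext *funcctx;

          if (SRF_IS_FIRSTCALL())
          {
              funcctx = SRF_FIRSTCALL_INIT();
              funcctx->max_calls = 3;     /* uint64 since commit 23a27b039 */
          }

          funcctx = SRF_PERCALL_SETUP();

          if (funcctx->call_cntr < funcctx->max_calls)   /* also uint64 */
              SRF_RETURN_NEXT(funcctx,
                              Int64GetDatum((int64) funcctx->call_cntr));

          SRF_RETURN_DONE(funcctx);
      }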
* Re-allow user_catalog_table option for materialized views.  (Tom Lane, 2016-11-10)

  The reloptions stuff allows this option to be set on a matview.  While
  it's questionable whether that is useful or was really intended, it does
  work, and we shouldn't change that in minor releases.  Commit e3e66d8a9
  disabled the option since I didn't realize that it was possible for it to
  be set on a matview.  Tweak the test to re-allow it.

  Discussion: <19749.1478711862@sss.pgh.pa.us>
* Fix partial aggregation for the case of a degenerate GROUP BY clause.  (Tom Lane, 2016-11-10)

  The plan generated for sorted partial aggregation with "GROUP BY
  constant" included a Sort node with no sort keys, which the executor does
  not like.

  Per report from Steve Randall.  I'd add a regression test case if I could
  think of a compact one, but it doesn't seem worth expending lots of
  cycles on.

  Report: <CABVd52UAdGXpg_rCk46egpNKYdXOzCjuJ1zG26E2xBe_8bj+Fg@mail.gmail.com>
* Fix typo  (Magnus Hagander, 2016-11-08)
* Change qr/foo$/m to qr/foo\n/m, for Perl 5.8.8.  (Noah Misch, 2016-11-07)

  In each case, absence of a trailing newline would itself constitute a
  PostgreSQL bug.  Therefore, this slightly enhances the changed tests.
  This works around a bug that last appeared in Perl 5.8.8, fixing
  src/test/modules/test_pg_dump when run against that version.  Commit
  e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
  subsequent addition of test_pg_dump introduced affected code.  As that
  commit had shown, slight increases in pattern complexity can suppress
  the bug.  This commit edits qr/foo$/m patterns too complex to encounter
  the bug today, for style consistency and robustness against unrelated
  pattern changes.  Back-patch to 9.6, where test_pg_dump was introduced.

  As of this writing, a fresh MSYS installation includes an affected Perl
  5.8.8.  The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
  that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
  Linux 4.4 is affected.
* Band-aid fix for incorrect use of view options as StdRdOptions.  (Tom Lane, 2016-11-07)

  We really ought to make StdRdOptions and the other decoded forms of
  reloptions self-identifying, but for the moment, assume that only plain
  relations could possibly be user_catalog_tables.  Fixes problem with
  bogus "ON CONFLICT is not supported on table ... used as a catalog
  table" error when target is a view with cascade option.

  Discussion: <26681.1477940227@sss.pgh.pa.us>
* pg_rewind, pg_upgrade: Fix translation markers  (Peter Eisentraut, 2016-11-07)

  In pg_log_v(), we need to translate the fmt before processing, not the
  formatted message afterwards.
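  A hedged sketch of the pattern (illustrative, not the committed code):
  the format string must go through gettext before formatting, because the
  fully formatted message will never match a catalog entry.

      #include <stdarg.h>
      #include <stdio.h>
      #include <libintl.h>

      #define _(x) gettext(x)

      static void
      pg_log_v_sketch(const char *fmt, va_list ap)
      {
          char    message[4096];

          /* translate the fmt first ... */
          vsnprintf(message, sizeof(message), _(fmt), ap);

          /* ... not vsnprintf() first and _(message) afterwards */
          fputs(message, stderr);
      }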
* Fix handling of symlinked pg_stat_tmp and pg_replslot  (Magnus Hagander, 2016-11-07)

  This was already fixed in HEAD as part of 6ad8ac60 but was not
  backpatched.  Also change the way pg_xlog is handled to be the same as
  the other directories.

  Patch from me with pg_xlog addition from Michael Paquier, test updates
  from David Steele.
* Rationalize and document pltcl's handling of magic ".tupno" array element.  (Tom Lane, 2016-11-06)

  For a very long time, pltcl's spi_exec and spi_execp commands have had a
  behavior of storing the current row number as an element of output
  arrays, but this was never documented.  Fix that.

  For an equally long time, pltcl_trigger_handler had a behavior of
  silently ignoring ".tupno" as an output column name, evidently so that
  the result of spi_exec could be used directly as a trigger result tuple.
  Not sure how useful that really is, but in any case it's bad that it
  would break attempts to use ".tupno" as an actual column name.  We can
  fix it by not checking for ".tupno" until after we check for a column
  name match.  This comports with the effective behavior of spi_exec[p]
  that ".tupno" is only magic when you don't have an actual column named
  that.

  In passing, wordsmith the description of returning modified tuples from
  a pltcl trigger.

  Noted while working on Jim Nasby's patch to support composite results
  from pltcl.  The inability to return trigger tuples using ".tupno" as a
  column name is a bug, so back-patch to all supported branches.
* Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.  (Tom Lane, 2016-11-06)

  We must do this in case the expression evaluation results in calling
  another plpgsql function (or, really, anything using SPI).  I missed the
  need for this when I converted exec_cast_value() from doing a simple
  InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.  There
  is a SPI_push_conditional in InputFunctionCall(), so that there was no
  bug before that.

  Per bug #14414 from Marcos Castedo.  Add a regression test based on his
  example, which was that a plpgsql function in a domain check constraint
  didn't work when assigning to a domain-type variable within plpgsql.

  Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
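  A sketch of the pattern being restored (SPI_push/SPI_pop are the real
  pre-v10 SPI calls; the evaluation step is simplified and the signature
  assumed is the 9.6-era one):

      #include "postgres.h"
      #include "executor/executor.h"
      #include "executor/spi.h"

      static Datum
      eval_cast_expr_sketch(ExprState *expr, ExprContext *econtext,
                            bool *isnull)
      {
          Datum   result;

          SPI_push();     /* let any SPI use inside evaluation nest safely */
          result = ExecEvalExpr(expr, econtext, isnull, NULL);
          SPI_pop();

          return result;
      }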
* More zic cleanup.  (Tom Lane, 2016-11-06)

  The workaround the IANA guys chose to get rid of the clang warning we'd
  silenced in commit 23ed2ba81 turns out not to satisfy Coverity.  Go back
  to the previous solution, ie, remove the useless comparison to SIZE_MAX.
  (In principle, there could be machines out there where it's not useless
  because ptrdiff_t is wider than size_t.  But the whole thing is pretty
  academic anyway, as we could never approach this limit for any sane
  estimate of the amount of data that zic will ever be asked to work
  with.)

  Also, s/lineno/lineno_t/g, because if we accept their decision to start
  using "lineno" as a typedef, it is going to have very unpleasant
  consequences in our next pgindent run.  Noted that while fooling with
  pltcl yesterday.
* Remove duplicate macro definition.  (Tom Lane, 2016-11-05)

  Seems to be a copy-and-pasteo.  Odd that we heard no reports of compiler
  warnings about it.

  Thomas Munro
* Back-patch portability fixes for contrib/pageinspect/ginfuncs.c.  (Tom Lane, 2016-11-04)

  Back-patch commits 84ad68d64 and 367b99bbb.
* Sync our copy of the timezone library with IANA tzcode master.  (Tom Lane, 2016-11-04)

  This patch absorbs some unreleased fixes for symlink manipulation bugs
  introduced in tzcode 2016g.  Ordinarily I'd wait around for a released
  version, but in this case it seems like we could do with extra testing,
  in particular checking whether it works in EDB's VMware build
  environment.  This corresponds to commit
  aec59156abbf8472ba201b6c7ca2592f9c10e077 in https://github.com/eggert/tz.

  Per a report from Sandeep Thakkar, building in an environment where hard
  links are not supported in the timezone data installation directory
  failed, because upstream code refactoring had broken the case of
  symlinking from an existing symlink.  Further experimentation also
  showed that the symlinks were sometimes made incorrectly, with too many
  or too few "../"'s in the symlink contents.

  Back-patch of commit 1f87181e12beb067d21b79493393edcff14c190b.

  Report: <CANFyU94_p6mqRQc2i26PFp5QAOQGB++AjGX=FO8LDpXw0GSTjw@mail.gmail.com>
  Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
* Don't make FK-based selectivity estimates in inheritance situations.  (Tom Lane, 2016-11-02)

  The foreign-key-aware logic for estimation of join sizes (added in
  commit 100340e2d) blindly tried to apply the concept to rels that are
  actually parents of inheritance trees.  This is just plain wrong so far
  as the referenced relation is concerned, since the inheritance scan may
  well produce lots of rows that are not participating in the constraint.
  It's wrong for the referencing relation too, for the same reason;
  although on that end we could conceivably detect whether all members of
  the inheritance tree have equivalent FK constraints pointing to the same
  referenced rel, and then proceed more or less as we do now.  But pending
  somebody writing code to do that, we must disable this, because it's
  producing completely silly estimates when there's an FK linking the
  heads of inheritance trees.

  Per bug #14404 from Clinton Adams.  Back-patch to 9.6 where the new
  estimation logic came in.

  Report: <20161028200412.15987.96482@wrigleys.postgresql.org>
* Don't convert Consts into Vars during setrefs.c processing.  (Tom Lane, 2016-11-02)

  While converting expressions in an upper-level plan node so that they
  reference Vars and expressions provided by the input plan node(s), don't
  convert plain Const items, even if there happens to be a matching Const
  in the input.  It's silly to do so because a Var is more expensive to
  execute than a Const.  Moreover, converting can fool ExecCheckPlanOutput's
  check that an insert or update query inserts nulls into dropped columns,
  leading to "query provides a value for a dropped column" errors during
  INSERT or UPDATE on a table with a dropped column.  We could solve this
  by making that check more complicated, but I don't see the point; this
  fix should save a marginal number of cycles, and it also makes for less
  messy EXPLAIN output, as shown by the ensuing regression test result
  changes.

  Per report from Pavel Hanák.  I have not incorporated a test case based
  on that example, as there doesn't seem to be a simple way of checking
  this in isolation without making a bunch of assumptions about other
  planner and SQL-function behavior.

  Back-patch to 9.6.  This setrefs.c behavior exists much further back,
  but there is not currently reason to think that it causes problems
  before 9.6.

  Discussion: <83shraampf.fsf@is-it.eu>
* Fix portability bug in gin_page_opaque_info().  (Tom Lane, 2016-11-02)

  Somebody apparently thought that "if Int32GetDatum is good,
  Int64GetDatum must be better".  Per buildfarm failures now that Peter
  has added some regression tests here.