path: root/src
* Sanitize newlines in object names in "pg_restore -l" output. (Tom Lane, 2017-03-10)

  Commits 89e0bac86 et al replaced newlines with spaces in object names printed
  in SQL comments, but we neglected to consider that the same names are also
  printed by "pg_restore -l", and a newline would render the output unparseable
  by "pg_restore -L". Apply the same replacement in "-l" output. Since
  "pg_restore -L" doesn't actually examine any object names, only the dump ID
  field that starts each line, this is enough to fix things for its purposes.

  The previous fix was treated as a security issue, and we might have done that
  here as well, except that the issue was reported publicly to start with.
  Anyway it's hard to see how this could be exploited for SQL injection;
  "pg_restore -L" doesn't do much with the file except parse it for leading
  integers.

  Per bug #14587 from Milos Urbanek. Back-patch to all supported versions.

  Discussion: https://postgr.es/m/20170310155318.1425.30483@wrigleys.postgresql.org

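  The replacement itself is a one-pass scrub; a minimal standalone sketch of
  the idea (names are illustrative, not the actual pg_dump code):

      #include <stdio.h>

      /* Replace newlines and carriage returns in an object name with
       * spaces, so each "pg_restore -l" entry stays on one parseable
       * line.  (Illustrative sketch, not the pg_dump implementation.) */
      static void
      sanitize_name(char *name)
      {
          char *p;

          for (p = name; *p; p++)
          {
              if (*p == '\n' || *p == '\r')
                  *p = ' ';
          }
      }

      int
      main(void)
      {
          char name[] = "evil\nname";

          sanitize_name(name);
          printf("%s\n", name);   /* prints "evil name" */
          return 0;
      }
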
* Fix a potential double-free in ecpg. (Michael Meskes, 2017-03-10)

* Fix timestamptz regression test to still work with latest IANA zone data. (Tom Lane, 2017-03-09)

  The IANA timezone crew continues to chip away at their project of removing
  timezone abbreviations that have no real-world currency from their database.
  The tzdata2017a update removes all such abbreviations for South American
  zones, as well as much of the Pacific. This breaks some test cases in
  timestamptz.sql that were expecting America/Santiago and America/Caracas to
  have non-numeric abbreviations.

  The test cases involving America/Santiago seem to have selected that zone
  more or less at random, so just replace it with America/New_York, which is of
  similar longitude. The cases involving America/Caracas are harder since they
  were chosen to test a time-varying zone abbreviation around a point where it
  changed meaning in the backwards direction. Fortunately, Europe/Moscow has a
  similar case in 2014, and the MSK/MSD abbreviations are well enough attested
  that IANA seems unlikely to decide to remove them from the database in
  future.

  With these changes, this regression test should pass when using any IANA zone
  database from 2015 or later. One could wish that there were a few years more
  daylight on how out-of-date your zone database can be ... but really the
  --with-system-tzdata option is only meant for use on platforms where the zone
  database is kept up-to-date pretty faithfully, so I do not think this is a
  big objection.

  Discussion: https://postgr.es/m/6749.1489087470@sss.pgh.pa.us

* Use doubly-linked block lists in aset.c to reduce large-chunk overhead. (Tom Lane, 2017-03-08)

  Large chunks (those too large for any palloc freelist) are managed as
  separate blocks. Formerly, realloc'ing or pfree'ing such a chunk required
  O(N) time in a context with N blocks, since we had to traipse down the
  singly-linked block list to locate the block's predecessor before we could
  fix the list links. This can result in O(N^2) runtime in situations where
  large numbers of such chunks are manipulated within one context. Cases like
  that were not foreseen in the original design of aset.c, and indeed didn't
  arise until fairly recently. But such problems can now occur in
  reorderbuffer.c and in hash joining, both of which make repeated large
  requests without scaling up their request size as they do so, and which will
  free their requests in not-necessarily-LIFO order.

  To fix, change the block list from singly-linked to doubly-linked. This adds
  another 4 or 8 bytes to ALLOC_BLOCKHDRSZ, but that doesn't seem like
  unacceptable overhead, since aset.c's blocks are normally 8K or more, and
  never less than 1K in current practice.

  In passing, get rid of some redundant AllocChunkGetPointer() calls in
  AllocSetRealloc (the compiler might be smart enough to optimize these away
  anyway, but no need to assume that) and improve AllocSetCheck's checking of
  block header fields.

  Back-patch to 9.4 where reorderbuffer.c appeared. We could take this further
  back, but currently there's no evidence that it would be useful.

  Discussion: https://postgr.es/m/CAMkU=1x1hvue1XYrZoWk_omG0Ja5nBvTdvgrOeVkkeqs71CV8g@mail.gmail.com

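  The win comes from O(1) unlinking: with a back-pointer in each block header,
  freeing a large chunk no longer scans for the block's predecessor. A
  standalone sketch of the idea (struct and field names are illustrative, not
  the actual aset.c definitions):

      #include <stddef.h>

      typedef struct Block
      {
          struct Block *prev;     /* back-pointer added by the fix */
          struct Block *next;
      } Block;

      /* Unlink 'blk' in O(1); no walk to find the predecessor. */
      static void
      unlink_block(Block **head, Block *blk)
      {
          if (blk->prev)
              blk->prev->next = blk->next;
          else
              *head = blk->next;          /* blk was the list head */
          if (blk->next)
              blk->next->prev = blk->prev;
      }

      int
      main(void)
      {
          Block a = {0}, b = {0};
          Block *head = &a;

          a.next = &b;
          b.prev = &a;
          unlink_block(&head, &b);        /* constant time, any list length */
          return (head == &a && a.next == NULL) ? 0 : 1;
      }
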
* pg_xlogdump: Remove extra newline in error message (Peter Eisentraut, 2017-03-08)

  fatal_error() already prints out a trailing newline.

* Fix pgbench's failure to honor the documented long-form option "--builtin". (Tom Lane, 2017-03-07)

  Not only did it not accept --builtin as a synonym for -b, but what it did
  accept as a synonym was --tpc-b (huh?), which it got even further wrong by
  marking as no_argument, so that if you did try that you got a core dump. I
  suppose this is leftover from some early design for the new switches added by
  commit 8bea3d221, but it's still pretty sloppy work.

  Per bug #14580 from Stepan Pesternikov. Back-patch to 9.6 where the error was
  introduced.

  Report: https://postgr.es/m/20170307123347.25054.73207@wrigleys.postgresql.org

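  The bug class here is a getopt_long() table whose long-form entry doesn't
  agree with its short option. A standalone sketch of a correct pairing
  (illustrative, not pgbench's actual option table):

      #include <getopt.h>
      #include <stdio.h>

      int
      main(int argc, char **argv)
      {
          /* "builtin" takes an argument, matching the short form "-b:" --
           * marking it no_argument would leave optarg NULL and invite a
           * crash in code that expects a script name. */
          static const struct option long_options[] = {
              {"builtin", required_argument, NULL, 'b'},
              {NULL, 0, NULL, 0}
          };
          int c;

          while ((c = getopt_long(argc, argv, "b:", long_options, NULL)) != -1)
          {
              if (c == 'b')
                  printf("builtin script: %s\n", optarg);
          }
          return 0;
      }
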
* pg_dump: Properly handle public schema ACLs with --clean (Stephen Frost, 2017-03-06)

  pg_dump has always handled the public schema in a special way when it comes
  to the "--clean" option: in "normal" mode we do not drop or recreate the
  public schema, but in "--clean" mode we drop and recreate it, and it is
  recreated with the normal schema-default privileges of "nothing". This is
  unlike how the public schema starts life, which is to have CREATE and USAGE
  GRANT'd to the PUBLIC role, and that is what is recorded in pg_init_privs.

  Due to this, in "--clean" mode, pg_dump would mistakenly only dump out the
  set of privileges required to go from the initdb-time privileges on the
  public schema to whatever the current-state privileges are. If the privileges
  were not changed from initdb time, then no privileges would be dumped out for
  the public schema, but with the schema being dropped and recreated, the
  result was that the public schema would have no ACLs on it instead of what it
  should have, which is the initdb-time privileges.

  Practically speaking, this meant that pg_dump in --clean mode, dumping a
  database where the ACLs on the public schema were not changed from the
  default, would, upon restore, result in a public schema with *no* privileges
  GRANT'd, not matching the state of the existing database (where the
  initdb-time privileges would have been CREATE and USAGE to the PUBLIC role
  for the public schema).

  To fix, adjust the query in getNamespaces() to ignore the pg_init_privs entry
  for the public schema when running in "--clean" mode, meaning that the
  privileges for the public schema are dumped, correctly, as if going from a
  newly-created schema to the current state (which is, indeed, what will happen
  during the restore thanks to the DROP/CREATE). Only the public schema is
  handled in this special way by pg_dump; no other initdb-time objects are
  dropped/recreated in --clean mode.

  Back-patch to 9.6 where the bug was introduced.

  Discussion: https://postgr.es/m/3534542.o3cNaKiDID%40techfox

* Repair incorrect pg_dump labeling for some comments and security labels. (Tom Lane, 2017-03-06)

  We attached no schema label to comments for procedural languages, casts,
  transforms, operator classes, operator families, or text search objects. The
  first three categories of objects don't really have schemas, but pg_dump
  treats them as if they do, and it seems like the TocEntry fields for their
  comments had better match the TocEntry fields for the parent objects. (As an
  example of a possible hazard, the type names in a CAST will be formatted with
  the assumption of a particular search_path, so failing to ensure that this
  same path is active for the COMMENT ON command could lead to an error or to
  attaching the comment to the wrong cast.) In the last six cases, this was a
  flat-out error --- possibly mine to begin with, but it was a long time ago.

  The security label for a procedural language was likewise not correctly
  labeled as to schema, and both the comment and security label for a
  procedural language were not correctly labeled as to owner.

  In simple cases the restore would accidentally work correctly anyway, since
  these comments and security labels would normally get emitted right after the
  owning object, and so the search path and active user would be correct
  anyhow. But it could fail in corner cases; for example a schema-selective
  restore would omit comments it should include.

  Giuseppe Broccolo noted the oversight, and proposed the correct fix, for text
  search dictionary objects; I found the rest by cross-checking other
  dumpComment() calls. These oversights are ancient, so back-patch all the way.

  Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com

* pg_upgrade: Fix large object COMMENTS, SECURITY LABELS (Stephen Frost, 2017-03-06)

  When performing a pg_upgrade, we copy the files behind pg_largeobject and
  pg_largeobject_metadata, allowing us to avoid having to dump out and reload
  the actual data for large objects and their ACLs.

  Unfortunately, that isn't all of the information which can be associated with
  large objects. Currently, we also support COMMENTs and SECURITY LABELs with
  large objects, and these were being silently dropped during a pg_upgrade, as
  pg_dump would skip everything having to do with a large object and pg_upgrade
  only copied the tables mentioned to the new cluster.

  As the file copies happen after the catalog dump and reload, we can't simply
  include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode output; we
  also have to include the actual large object definition. With the definition,
  comments, and security labels in the pg_dump output and the file copies
  performed by pg_upgrade, all of the data and metadata associated with large
  objects is able to be successfully pulled forward across a pg_upgrade.

  In 9.6 and master, we can simply adjust the dump bitmask to indicate which
  components we don't want. In 9.5 and earlier, we have to put explicit checks
  into dumpBlob() and dumpBlobs() to not include the ACL or the data when in
  binary-upgrade mode.

  Adjustments were made to the privileges regression test to allow another test
  (large_object.sql) to be added, which explicitly leaves a large object with a
  comment in place to provide coverage of that case with pg_upgrade.

  Back-patch to all supported branches.

  Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net

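  In 9.6, the bitmask adjustment amounts to clearing the data and ACL
  components for blobs in binary-upgrade mode; a hedged sketch (flag and field
  names follow pg_dump's conventions, but the exact expression is
  illustrative):

      /* Keep the blob's definition, comment, and security label in the
       * dump, but not its data or ACL -- those travel via pg_upgrade's
       * file copies of pg_largeobject and pg_largeobject_metadata.
       * (Hedged sketch, not necessarily the literal commit.) */
      if (dopt->binary_upgrade)
          binfo->dobj.dump &= ~(DUMP_COMPONENT_DATA | DUMP_COMPONENT_ACL);
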
* Avoid dangling pointer to relation name in RLS code path in DoCopy(). (Tom Lane, 2017-03-06)

  With RLS active, "COPY tab TO ..." failed under -DRELCACHE_FORCE_RELEASE, and
  would sometimes fail without that, because it used the relation name directly
  from the relcache as part of the parsetree it's building. That becomes a
  potentially-dangling pointer as soon as the relcache entry is closed, a bit
  further down. Typical symptom if the relcache entry chanced to get cleared
  would be a "relation does not exist" error with a garbage relation name, or
  possibly a core dump; but if you were really truly unlucky, the COPY might
  copy from the wrong table.

  Per report from Andrew Dunstan that regression tests fail with
  -DRELCACHE_FORCE_RELEASE. The core tests now pass for me (but have not tried
  "make check-world" yet).

  Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com

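  The standard cure for this class of bug is to copy data out of the relcache
  entry before closing it; a hedged sketch of the pattern (not the literal
  DoCopy() change):

      /* Copy the name into palloc'd storage owned by the current memory
       * context, so it outlives the relcache entry.  (Hedged sketch of
       * the pattern, not the actual DoCopy() code.) */
      relname = pstrdup(RelationGetRelationName(rel));
      heap_close(rel, NoLock);
      /* 'relname' remains valid here; the relcache pointer would not be. */
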
* In rebuild_relation(), don't access an already-closed relcache entry. (Tom Lane, 2017-03-04)

  This reliably fails with -DRELCACHE_FORCE_RELEASE, as reported by Andrew
  Dunstan, and could sometimes fail in normal operation, resulting in a wrong
  persistence value being used for the transient table. It's not immediately
  clear to me what effects that might have beyond the risk of a crash while
  accessing OldHeap->rd_rel->relpersistence, but it's probably not good.

  Bug introduced by commit f41872d0c, and made substantially worse by commit
  85b506bbf, which added a second such access significantly later than the
  heap_close. I doubt the first reference could fail in a production scenario,
  but the second one definitely could.

  Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com

* Handle unaligned SerializeSnapshot() buffer. (Noah Misch, 2017-03-02)

  Likewise in RestoreSnapshot(). Do so by copying between the user buffer and a
  stack buffer of known alignment. Back-patch to 9.6, where this last applies
  cleanly.

  In master, the select_parallel test dies with SIGBUS on "Oracle Solaris 10
  1/13 s10s_u11wos_24a SPARC", building 32-bit with gcc 4.9.2. In 9.6 and 9.5,
  the buffers in question happen to be sufficiently-aligned, and this change is
  mere insurance against future 9.6 changes or extension code compromising
  that.

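  On strict-alignment hardware (SPARC here), casting an arbitrary byte pointer
  to a struct pointer and dereferencing it can raise SIGBUS; copying through a
  stack variable of known alignment is the portable pattern. A standalone
  sketch (types are illustrative, not the actual snapshot layout):

      #include <stdint.h>
      #include <string.h>

      typedef struct
      {
          uint64_t xmin;
          uint32_t count;
      } Header;                       /* stands in for the serialized form */

      /* Read a Header from 'buf', which may not be suitably aligned. */
      static Header
      read_header(const char *buf)
      {
          Header  h;                  /* stack variable: known alignment */

          memcpy(&h, buf, sizeof(h)); /* byte copy, no pointer cast */
          return h;
      }

      int
      main(void)
      {
          char    buf[1 + sizeof(Header)];
          Header  h = {42, 7};

          memcpy(buf + 1, &h, sizeof(h));      /* deliberately misaligned */
          Header  out = read_header(buf + 1);  /* safe on strict-alignment CPUs */
          return (out.count == 7) ? 0 : 1;
      }
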
* Fix timeouts in PostgresNode::psql (Peter Eisentraut, 2017-03-01)

  Newer Perl or IPC::Run versions default to appending the filename to string
  exceptions, e.g. the exception "psql timed out" is thrown as "psql timed out
  at /usr/share/perl5/vendor_perl/IPC/Run.pm line 2961". To handle this, match
  exceptions with !~ rather than ne.

  From: Craig Ringer <craig@2ndquadrant.com>
  Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>

* Fix incorrect variable datatype (Magnus Hagander, 2017-02-28)

  Both datatypes map to the same underlying one, which is why it still worked,
  but we should use the correct type.

  Author: Kyotaro HORIGUCHI

* Fix unportable definition of BSWAP64() macro. (Tom Lane, 2017-02-24)

  We have a portable way of writing uint64 constants, but whoever wrote this
  macro didn't know about it.

  While at it, fix unsafe under-parenthesization of arguments. That might be
  moot, because there are already good reasons not to use the macro on anything
  more complicated than a simple variable, but it's still poor practice.

  Per buildfarm warnings.

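  A standalone sketch of what a portable, fully parenthesized version looks
  like (written with stdint.h's UINT64_C; PostgreSQL spells the same idea
  UINT64CONST):

      #include <stdint.h>
      #include <stdio.h>

      /* Byte-swap a 64-bit value; every use of the argument is
       * parenthesized, and every mask is an explicit 64-bit constant. */
      #define BSWAP64(x) ((((x) << 56) & UINT64_C(0xff00000000000000)) | \
                          (((x) << 40) & UINT64_C(0x00ff000000000000)) | \
                          (((x) << 24) & UINT64_C(0x0000ff0000000000)) | \
                          (((x) << 8)  & UINT64_C(0x000000ff00000000)) | \
                          (((x) >> 8)  & UINT64_C(0x00000000ff000000)) | \
                          (((x) >> 24) & UINT64_C(0x0000000000ff0000)) | \
                          (((x) >> 40) & UINT64_C(0x000000000000ff00)) | \
                          (((x) >> 56) & UINT64_C(0x00000000000000ff)))

      int
      main(void)
      {
          uint64_t v = UINT64_C(0x0102030405060708);

          /* prints 0807060504030201 */
          printf("%016llx\n", (unsigned long long) BSWAP64(v));
          return 0;
      }
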
* Make walsender always initialize the buffers. (Fujii Masao, 2017-02-22)

  Walsender uses local buffers for each outgoing and incoming message.
  Previously, when creating a replication slot, walsender forgot to initialize
  one of them, which could cause a segmentation fault. To fix this issue, this
  commit changes walsender so that it always initializes them before executing
  the requested replication command.

  Back-patch to 9.4 where replication slots were introduced.

  Problem report and initial patch by Stas Kelvich, modified by me.

  Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru

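  A hedged sketch of the fix's shape (the buffer names follow walsender's style
  but are illustrative here, not necessarily the exact variables touched):

      /* (Re)initialize every message buffer before executing the
       * replication command, so none is used uninitialized.
       * (Hedged sketch, not the literal walsender.c diff.) */
      initStringInfo(&output_message);
      initStringInfo(&reply_message);
      initStringInfo(&tmpbuf);
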
* Fix sloppy handling of corner-case errors in fd.c. (Tom Lane, 2017-02-21)

  Several places in fd.c had badly-thought-through handling of error returns
  from lseek() and close(). The fact that those would seldom fail on valid FDs
  is probably the reason we've not noticed this up to now; but if they did
  fail, we'd get quite confused.

  LruDelete and LruInsert actually just Assert'd that lseek never fails, which
  is pretty awful on its face. In LruDelete, we indeed can't throw an error,
  because that's likely to get called during error abort and so throwing an
  error would probably just lead to an infinite loop. But by the same token,
  throwing an error from the close() right after that was ill-advised, not to
  mention that it would've left the LRU state corrupted since we'd already
  unlinked the VFD from the list. I also noticed that really, most of the time,
  we should know the current seek position and it shouldn't be necessary to do
  an lseek here at all. As patched, if we don't have a seek position and an
  lseek attempt doesn't give us one, we'll close the file but then subsequent
  re-open attempts will fail (except in the somewhat-unlikely case that a
  FileSeek(SEEK_SET) call comes between and allows us to re-establish a known
  target seek position). This isn't great but it won't result in any state
  corruption.

  Meanwhile, having an Assert instead of an honest test in LruInsert is really
  dangerous: if that lseek failed, a subsequent read or write would read or
  write from the start of the file, not where the caller expected, leading to
  data corruption.

  In both LruDelete and FileClose, if close() fails, just LOG that and mark the
  VFD closed anyway. Possibly leaking an FD is preferable to getting into an
  infinite loop or corrupting the VFD list. Besides, as far as I can tell from
  the POSIX spec, it's unspecified whether or not the file has been closed, so
  treating it as still open could be the wrong thing anyhow.

  I also fixed a number of other places that were being sloppy about behaving
  correctly when the seekPos is unknown.

  Also, I changed FileSeek to return -1 with EINVAL for the cases where it
  detects a bad offset, rather than throwing a hard elog(ERROR). It seemed
  pretty inconsistent that some bad-offset cases would get a failure return
  while others got elog(ERROR). It was missing an offset validity check for the
  SEEK_CUR case on a closed file, too.

  Back-patch to all supported branches, since all this code is fundamentally
  identical in all of them.

  Discussion: https://postgr.es/m/2982.1487617365@sss.pgh.pa.us

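  The LruDelete/FileClose part reduces to "log, don't throw, and mark closed
  regardless"; a hedged sketch of the pattern in PostgreSQL idiom (not the
  literal fd.c diff):

      /* Close the kernel FD; on failure just LOG -- don't throw -- and
       * treat the VFD as closed either way, since POSIX leaves the FD's
       * state after a failed close() unspecified.  (Hedged sketch.) */
      if (close(vfdP->fd) != 0)
          elog(LOG, "could not close file \"%s\": %m", vfdP->fileName);
      vfdP->fd = VFD_CLOSED;
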
* Make src/interfaces/libpq/test clean up after itself. (Tom Lane, 2017-02-19)

  It failed to remove a .o file during "make clean", and it lacked a .gitignore
  file entirely.

* Adjust PL/Tcl regression test to dodge a possible bug or zone dependency. (Tom Lane, 2017-02-19)

  One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish time
  zone setting. It's not clear if this is an old Tcl bug or something odd about
  the zone data, but in any case that test is meant to see if the Tcl [clock]
  command works at all, not what its corner-case behaviors are. Therefore we
  have no need to test exactly which week a Sunday midnight is considered to
  fall into. Probe the following Tuesday instead.

  Discussion: https://postgr.es/m/797.1487517822@sss.pgh.pa.us

* Fix help message for pg_basebackup -R (Magnus Hagander, 2017-02-18)

  The recovery.conf file that's generated is specifically for replication, and
  not needed (or wanted) for regular backup restore, so indicate that in the
  message.

* Document usage of COPT environment variable for adjusting configure flags. (Tom Lane, 2017-02-17)

  Also add to the existing rather half-baked description of PROFILE, which does
  exactly the same thing, but I think people use it differently.

  Discussion: https://postgr.es/m/16461.1487361849@sss.pgh.pa.us

* pg_dump: Fix typo in query (Peter Eisentraut, 2017-02-17)

  This could lead to incorrect dumping of language privileges in some cases,
  which is probably a rare situation.

* Formatting and docs corrections for logical decoding output plugins. (Tom Lane, 2017-02-15)

  Make the typedefs for output plugins consistent with project style; they were
  previously not even consistent with each other as to layout or inclusion of
  parameter names. Make the documentation look the same, and fix errors therein
  (missing and misdescribed parameters).

  Back-patch because of the documentation bugs.

* Make sure that hash join's bulk-tuple-transfer loops are interruptible. (Tom Lane, 2017-02-15)

  The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
  ExecHashRemoveNextSkewBucket() are all capable of iterating over many tuples
  without ever doing a CHECK_FOR_INTERRUPTS, so that the backend might fail to
  respond to SIGINT or SIGTERM for an unreasonably long time. Fix that.

  In the case of ExecHashJoinNewBatch(), it seems useful to put the added
  CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather than directly in
  the loop, because that will also ensure that both principal code paths
  through ExecHashJoinOuterGetTuple() will do a CHECK_FOR_INTERRUPTS, which
  seems like a good idea to avoid surprises.

  Back-patch to all supported branches.

  Tom Lane and Thomas Munro

  Discussion: https://postgr.es/m/6044.1487121720@sss.pgh.pa.us

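  The fix is the standard pattern for any executor loop that can spin over many
  tuples (hedged sketch, not the literal nodeHashjoin.c code; the helper names
  are illustrative):

      for (;;)
      {
          MinimalTuple tuple;

          /* Let SIGINT/SIGTERM interrupt a long batch transfer. */
          CHECK_FOR_INTERRUPTS();

          tuple = read_next_saved_tuple(file);   /* illustrative helper */
          if (tuple == NULL)
              break;
          process_tuple(tuple);                  /* illustrative helper */
      }
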
* Fix tab completion for "ALTER SYSTEM SET variable ...". (Tom Lane, 2017-02-15)

  It wouldn't complete "TO" after the variable name, which is certainly minor
  enough. But since we do complete "TO" after "SET variable ...", and since
  this case used to work pre-9.6, I think this is a bug.

  Also, fix the query used to collect the variable names; whoever last touched
  it evidently didn't understand how the pieces are supposed to fit together.
  It accidentally worked anyway, because readline ignores irrelevant
  completions, but it was randomly unlike the ones around it, and could be a
  source of actual bugs if someone copied it as a prototype for another query.

* Fix YA unwanted behavioral difference with operator_precedence_warning. (Tom Lane, 2017-02-15)

  Jeff Janes noted that the error cursor position shown for some errors would
  vary when operator_precedence_warning is turned on. We'd prefer that option
  to have no undocumented effects, so this isn't desirable. To fix, make sure
  that an AEXPR_PAREN node has the same exprLocation as its child node.

  (Note: it would be a little cheaper to use @2 here instead of an exprLocation
  call, but there are cases where that wouldn't produce the identical answer,
  so don't do it like that.)

  Back-patch to 9.5 where this feature was introduced.

  Discussion: https://postgr.es/m/CAMkU=1ykK+VhhcQ4Ky8KBo9FoaUJH3f3rDQB8TkTXi-ZsBRUkQ@mail.gmail.com

* Ignore tablespace ACLs when ignoring schema ACLs. (Noah Misch, 2017-02-12)

  The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and CREATE
  INDEX to refit existing indexes for the new column type. Since this CREATE
  INDEX is an implementation detail of an index alteration, the ensuing
  DefineIndex() should skip ACL checks specific to index creation. It already
  skips the namespace ACL check. Make it skip the tablespace ACL check, too.

  Back-patch to 9.2 (all supported versions).

  Reviewed by Tom Lane.

* Blind try to fix portability issue in commit 8f93bd851 et al. (Tom Lane, 2017-02-09)

  The S/390 members of the buildfarm are showing failures indicating that
  they're having trouble with the rint() calls I added yesterday. There's no
  good reason for that, and I wonder if it is a compiler bug similar to the one
  we worked around in d9476b838. Try to fix it using the same method as before,
  namely to store the result of rint() back into a "double" variable rather
  than immediately converting to int64. (This isn't entirely waving a dead
  chicken, since on machines with wider-than-double float registers, the extra
  store forces a width conversion. I don't know if S/390 is like that, but it
  seems worth trying.)

  In passing, merge duplicate ereport() calls in float8_timestamptz().

  Per buildfarm.

* Fix roundoff problems in float8_timestamptz() and make_interval(). (Tom Lane, 2017-02-08)

  When converting a float value to integer microseconds, we should be careful
  to round the value to the nearest integer, typically with rint(); simply
  assigning to an int64 variable will truncate, causing apparently off-by-one
  values in cases that should work. Most places in the datetime code got this
  right, but not these two.

  float8_timestamptz() is new as of commit e511d878f (9.6). Previous versions
  effectively depended on interval_mul() to do roundoff correctly, which it
  does, so this fixes an accuracy regression in 9.6.

  The problem in make_interval() dates to its introduction in 9.4. Aside from
  being careful to round not truncate, let's incorporate the hours and minutes
  inputs into the result with exact integer arithmetic, rather than risk
  introducing roundoff error where there need not have been any.

  float8_timestamptz() problem reported by Erik Nordström, though this is not
  his proposed patch. make_interval() problem found by me.

  Discussion: https://postgr.es/m/CAHuQZDS76jTYk3LydPbKpNfw9KbACmD=49dC4BrzHcfPv6yA1A@mail.gmail.com

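  A standalone illustration of the truncation hazard that rint() avoids (the
  sample value is contrived to sit just below an integral number of
  microseconds, as floating-point arithmetic can produce):

      #include <math.h>
      #include <stdint.h>
      #include <stdio.h>

      int
      main(void)
      {
          /* Arithmetic can land just below the intended integer. */
          double  us = 2999999.9999999995;

          int64_t truncated = (int64_t) us;       /* 2999999: off by one */
          int64_t rounded   = (int64_t) rint(us); /* 3000000: correct */

          printf("truncated=%lld rounded=%lld\n",
                 (long long) truncated, (long long) rounded);
          return 0;
      }
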
* Stamp 9.6.2. (tag: REL9_6_2) (Tom Lane, 2017-02-06)

* Avoid returning stale attribute bitmaps in RelationGetIndexAttrBitmap(). (Tom Lane, 2017-02-06)

  The problem with the original coding here is that we might receive (and
  clear) a relcache invalidation signal for the target relation down inside one
  of the index_open calls we're doing. Since the target is open, we would not
  drop the relcache entry, just reset its rd_indexvalid and rd_indexlist
  fields. But RelationGetIndexAttrBitmap() kept going, and would eventually
  cache and return potentially-obsolete attribute bitmaps.

  The case where this matters is where the inval signal was from a CREATE INDEX
  CONCURRENTLY telling us about a new index on a formerly-unindexed column. (In
  all other cases, the lock we hold on the target rel should prevent any
  concurrent change in index state.)

  Even just returning the stale attribute bitmap is not such a problem, because
  it shouldn't matter during the transaction in which we receive the signal.
  What hurts is caching the stale data, because it can survive into later
  transactions, breaking CREATE INDEX CONCURRENTLY's expectation that later
  transactions will not create new broken HOT chains. The upshot is that
  there's a window for building corrupted indexes during CREATE INDEX
  CONCURRENTLY.

  This patch fixes the problem by rechecking that the set of index OIDs is
  still the same at the end of RelationGetIndexAttrBitmap() as it was at the
  start. If not, we loop back and try again. That's a little more than is
  strictly necessary to fix the bug --- in principle, we could return the stale
  data but not cache it --- but it seems like a bad idea on general principles
  for relcache to return data it knows is stale.

  There might be more hazards of the same ilk, or there might be a better way
  to fix this one, but this patch definitely improves matters and seems
  unlikely to make anything worse. So let's push it into today's releases even
  as we continue to study the problem.

  Pavan Deolasee and myself

  Discussion: https://postgr.es/m/CABOikdM2MUq9cyZJi1KyLmmkCereyGp5JQ4fuwKoyKEde_mzkQ@mail.gmail.com

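  The retry has roughly this shape (hedged sketch in relcache style; the
  committed code compares somewhat more state than just the OID list):

      restart:
          indexoidlist = RelationGetIndexList(relation);

          /* ... open each index, accumulate attribute bitmaps ... */

          newindexoidlist = RelationGetIndexList(relation);
          if (!equal(indexoidlist, newindexoidlist))
          {
              /* Invalidation changed the index set under us; try again. */
              list_free(indexoidlist);
              list_free(newindexoidlist);
              goto restart;
          }
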
* Translation updates (Peter Eisentraut, 2017-02-06)

  Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
  Source-Git-Hash: 7a27441a7432f1a9d12f2b1b517497c73ee5d20d

* Add missing newline to error messages (Peter Eisentraut, 2017-02-06)

  Also improve the message style a bit while we're here.

* Fix typos in comments. (Heikki Linnakangas, 2017-02-06)

  Backpatch to all supported versions, where applicable, to make backpatching
  of future fixes go more smoothly.

  Josh Soref

  Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com

* Fix placement of initPlans when forcibly materializing a subplan. (Tom Lane, 2017-02-02)

  If we forcibly place a Material node atop a finished subplan, we need to move
  any initPlans attached to the subplan up to the Material node, in order to
  keep SS_finalize_plan() happy. I'd figured this out in commit 7b67a0a49 for
  the case of materializing a cursor plan, but out of an abundance of caution,
  I put the initPlan movement hack at the call site for that case, rather than
  inside materialize_finished_plan(). That was the wrong thing, because it
  turns out to also be necessary for the only other caller of
  materialize_finished_plan(), ie subselect.c.

  We lacked any test cases that exposed the mistake, but bug #14524 from Wei
  Congrui shows that it's possible to get an initPlan reference into the top
  tlist in that case too, and then SS_finalize_plan() complains. Hence, move
  the hack into materialize_finished_plan().

  In HEAD, also relocate some recently-added tests in subselect.sql, which I'd
  unthinkingly dropped into the middle of a sequence of related tests.

  Report: https://postgr.es/m/20170202060020.1400.89021@wrigleys.postgresql.org

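  The hoist itself is a two-assignment move (hedged sketch; this mirrors what
  materialize_finished_plan() needs to do, not necessarily the literal diff):

      /* Move any initPlans up from the subplan to the new Material node,
       * so SS_finalize_plan() sees them at the level that owns them. */
      matplan->initPlan = subplan->initPlan;
      subplan->initPlan = NIL;
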
* Add KOI8-U map files to Makefile. (Heikki Linnakangas, 2017-02-02)

  These were left out by mistake back when support for KOI8-U encoding was
  added.

  Extracted from Kyotaro Horiguchi's larger patch.

* Don't count background workers against a user's connection limit. (Andrew Dunstan, 2017-02-01)

  Doing so doesn't seem to be within the purpose of the per user connection
  limits, and has particularly unfortunate effects in conjunction with parallel
  queries.

  Backpatch to 9.6 where parallel queries were introduced.

  David Rowley, reviewed by Robert Haas and Albe Laurenz.

* pg_dump: Fix handling of ALTER DEFAULT PRIVILEGES (Stephen Frost, 2017-01-31)

  In commit 23f34fa, we changed how ACLs were handled to use the new
  pg_init_privs catalog and to dump out the ACL commands as REVOKE+GRANT
  combinations instead of trying to REVOKE all rights always and then GRANT
  back just the ones which were in place.

  Unfortunately, the DEFAULT PRIVILEGES system didn't quite get the correct
  treatment with this change and ended up (incorrectly) only including positive
  GRANTs instead of both the REVOKEs and GRANTs necessary to preserve the
  correct privileges. There are only a couple cases where such REVOKEs are
  possible because, generally speaking, there's few rights which exist on
  objects by default to be revoked.

  Examples of REVOKEs which weren't being correctly preserved are when
  privileges are REVOKE'd from the creator/owner, like so:

      ALTER DEFAULT PRIVILEGES FOR ROLE myrole
          REVOKE SELECT ON TABLES FROM myrole;

  or when other default privileges are being revoked, such as EXECUTE rights
  granted to public for functions:

      ALTER DEFAULT PRIVILEGES FOR ROLE myrole
          REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;

  Fix this by correctly working out what the correct REVOKE statements are (if
  any) and dump them out, just as we do for everything else.

  Noticed while developing additional regression tests for pg_dump, which will
  be landing shortly.

  Back-patch to 9.6 where the bug was introduced.

* test_pg_dump: perltidy cleanup (Stephen Frost, 2017-01-31)

  As pointed out by Alvaro, we actually use perltidy on the perl scripts in the
  source tree, so go back to the results of a perltidy run for the test_pg_dump
  TAP script.

  To make it look slightly less tragic, I changed most of the independent
  arguments into long-form single arguments (eg: -f file.sql changed to be
  --file=file.sql) to avoid having them confusingly split across lines due to
  perltidy.

  Back-patch to 9.6, as the last patch was.

* Update time zone data files to tzdata release 2016j. (Tom Lane, 2017-01-30)

  DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new
  zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for
  Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace
  invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but as in
  the past, we'll keep accepting "TOT" for input.

* Handle ALTER EXTENSION ADD/DROP with pg_init_privs (Stephen Frost, 2017-01-29)

  In commit 6c268df, pg_init_privs was added to track the initial privileges of
  catalog objects and extensions. Unfortunately, that commit didn't include
  understanding of ALTER EXTENSION ADD/DROP, which allows the objects
  associated with an extension to be changed after the initial CREATE EXTENSION
  script has been run. The result of this meant that ACLs for objects added
  through ALTER EXTENSION ADD were not recorded into pg_init_privs and we would
  end up including those ACLs in pg_dump when we shouldn't have.

  This commit corrects that by making sure to have pg_init_privs updated when
  ALTER EXTENSION ADD/DROP is run, recording the permissions as they are at
  ALTER EXTENSION ADD time, and removing any if/when ALTER EXTENSION DROP is
  called.

  This issue was pointed out by Moshe Jacobson as commentary on bug #14456
  (which was actually a bug about versions prior to 9.6 not handling custom
  ACLs on extensions correctly, an issue now addressed with pg_init_privs in
  9.6).

  Back-patch to 9.6 where pg_init_privs was introduced.

* test_pg_dump TAP test whitespace cleanup (Stephen Frost, 2017-01-29)

  The formatting of the perl hashes used in the TAP tests for test_pg_dump was
  rather horribly inconsistent and made it more difficult than it really should
  have been to add new tests or adjust what tests are for what runs, etc.
  Reformat to clean that all up.

  Whitespace-only changes.

* Orthography fixes for new castNode() macro. (Tom Lane, 2017-01-27)

  Clean up hastily-composed comment. Normalize whitespace.

  Erik Rijkers and myself

* Check interrupts during hot standby waits (Simon Riggs, 2017-01-27)

* Add castNode(type, ptr) for safe casting between NodeTag based types. (Andres Freund, 2017-01-26)

  The new macro allows casting from one NodeTag based type to another, while
  asserting that the conversion is valid. This replaces the common pattern of
  doing a cast and an Assert(IsA(ptr, type)) close by.

  As this seems likely to be used pervasively, we decided to back-patch this
  change, the addition of this macro. Otherwise backpatched fixes are more
  likely not to work on back-branches.

  On branches before 9.6, where we do not yet rely on inline functions being
  available, the type assertion is only performed if PG_USE_INLINE support is
  detected. The cast obviously is performed regardless.

  For the benefit of verifying the macro compiles in the back-branches, this
  commit contains a single use of the new macro. On master, a somewhat larger
  conversion will be committed separately.

  Author: Peter Eisentraut and Andres Freund
  Reviewed-By: Tom Lane
  Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
  Backpatch: 9.2-

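  The macro's shape is roughly this (hedged sketch; the committed version also
  covers old branches without reliable inline-function support):

      /* Cast a node pointer to the named type, asserting the tag matches. */
      static inline Node *
      castNodeImpl(NodeTag type, void *ptr)
      {
          Assert(ptr == NULL || nodeTag(ptr) == type);
          return (Node *) ptr;
      }

      #define castNode(_type_, nodeptr) \
          ((_type_ *) castNodeImpl(T_##_type_, nodeptr))

  Typical use replaces the two-step "Assert(IsA(ptr, SelectStmt)); stmt =
  (SelectStmt *) ptr;" with the single "stmt = castNode(SelectStmt, ptr);".
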
* Remove test for COMMENT ON DATABASE (Alvaro Herrera, 2017-01-26)

  Our current DDL only allows a database name to be specified in COMMENT ON
  DATABASE, which Andrew Dunstan reports to make this test fail on the
  buildfarm. Remove the line until we gain a DDL command that allows the
  current database to be operated on without having to specify it by name.

  Backpatch to 9.5, where these tests appeared.

  Discussion: https://postgr.es/m/e6084b89-07a7-7e57-51ee-d7b8fc9ec864@2ndQuadrant.com

* Reset hot standby xmin after restart (Simon Riggs, 2017-01-26)

  Hot_standby_feedback could be reset by reload and worked correctly, but if
  the server was restarted rather than reloaded the xmin was not reset. Force
  reset always if hot_standby_feedback is enabled at startup.

  Ants Aasma, Craig Ringer

  Reported-by: Ants Aasma

* Ensure that a tsquery like '!foo' matches empty tsvectors. (Tom Lane, 2017-01-26)

  !foo means "the tsvector does not contain foo", and therefore it should match
  an empty tsvector. ts_match_vq() overenthusiastically supposed that an empty
  tsvector could never match any query, so it forcibly returned FALSE, the
  wrong answer. Remove the premature optimization.

  Our behavior on this point was inconsistent, because while seqscans and GIST
  index searches both failed to match empty tsvectors, GIN index searches would
  find them, since GIN scans don't rely on ts_match_vq(). That makes this
  certainly a bug, not a debatable definition disagreement, so back-patch to
  all supported branches.

  Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.

  Discussion: https://postgr.es/m/20170126025524.1434.97828@wrigleys.postgresql.org

* Fix comments in StrategyNotifyBgWriter(). (Tatsuo Ishii, 2017-01-24)

  The interface for the function was changed in
  d72731a70450b5e7084991b9caa15cb58a2820df but the comments of the function
  were not updated.

  Patch by Yugo Nagata.

* Avoid useless respawning of the autovacuum launcher at high speed. (Robert Haas, 2017-01-20)

  When (1) autovacuum = off and (2) there's at least one database with an XID
  age greater than autovacuum_freeze_max_age and (3) all tables in that
  database that need vacuuming are already being processed by a worker and (4)
  the autovacuum launcher is started, a kind of infinite loop occurs. The
  launcher starts a worker and immediately exits. The worker, finding no work
  to do, immediately starts the launcher, supposedly so that the next database
  can be processed. But because datfrozenxid for that database hasn't been
  advanced yet, the new worker gets put right back into the same database as
  the old one, where it once again starts the launcher and exits. High-speed
  ping pong ensues.

  There are several possible ways to break the cycle; this seems like the
  safest one.

  Amit Khandekar (code) and Robert Haas (comments), reviewed by Álvaro Herrera.

  Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com