path: root/src/backend
* Remove the last vestige of server-side autocommit. (Tom Lane, 2014-11-05)

  Long ago we briefly had an "autocommit" GUC that turned server-side autocommit on and off. That behavior was removed in 7.4 after concluding that it broke far too much client-side logic, and making clients cope with both behaviors was impractical. But the GUC variable was left behind, so as not to break any client code that might be trying to read its value. Enough time has now passed that we should remove the GUC completely. Whatever vestigial backwards-compatibility benefit it had is outweighed by the risk of confusion for newbies who assume it ought to do something, as per a recent complaint from Wolfgang Wilhelm.

  In passing, adjust what seemed to me a rather confusing documentation reference to libpq's autocommit behavior. libpq as such knows nothing about autocommit, so psql is probably what was meant.
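  For context, a hedged before/after sketch (the error text shown is just the standard message for an unknown GUC):

      -- Before this commit (7.4 through 9.4): a leftover that reads as "on"
      -- but does nothing.
      SHOW autocommit;

      -- After this commit:
      -- ERROR:  unrecognized configuration parameter "autocommit"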
* Fix thinko in commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d. (Robert Haas, 2014-11-05)

  Obviously, every translation unit should not be declaring this separately. It needs to be PGDLLIMPORT as well, to avoid breaking third-party code that uses any of the functions that the commit mentioned above changed to macros.
* Make CREATE TYPE print warnings if a datatype's I/O functions are volatile. (Tom Lane, 2014-11-05)

  This is a followup to commit 43ac12c6e6e397fd9142ed908447eba32d3785b2, which added regression tests checking that I/O functions of built-in types are not marked volatile. Complaining in CREATE TYPE should push developers of add-on types to fix any misdeclared functions in their types. It's just a warning, not an error, to avoid creating upgrade problems for what might be just cosmetic mis-markings.

  Aside from adding the warning code, fix a number of types that were sloppily created in the regression tests.
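  A minimal sketch of the new behavior, using a hypothetical type and the regression tests' trick of reusing the text I/O functions; the mistake being warned about is the VOLATILE marking:

      CREATE TYPE widget;  -- shell type

      CREATE FUNCTION widget_in(cstring) RETURNS widget
          AS 'textin' LANGUAGE internal VOLATILE;
      CREATE FUNCTION widget_out(widget) RETURNS cstring
          AS 'textout' LANGUAGE internal VOLATILE;

      CREATE TYPE widget (INPUT = widget_in, OUTPUT = widget_out,
                          INTERNALLENGTH = variable);
      -- expected: WARNINGs that the type's input/output functions
      -- should not be volatile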
* Drop no-longer-needed buffers during ALTER DATABASE SET TABLESPACE. (Tom Lane, 2014-11-04)

  The previous coding assumed that we could just let buffers for the database's old tablespace age out of the buffer arena naturally. The folly of that is exposed by bug #11867 from Marc Munro: the user could later move the database back to its original tablespace, after which any still-surviving buffers would match lookups again and appear to contain valid data. But they'd be missing any changes applied while the database was in the new tablespace.

  This has been broken since ALTER SET TABLESPACE was introduced, so back-patch to all supported branches.
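  The failure scenario needed nothing more exotic than moving a database out and back (database and tablespace names hypothetical):

      ALTER DATABASE appdb SET TABLESPACE fast_ssd;
      -- ... data is modified while the database lives in fast_ssd ...
      ALTER DATABASE appdb SET TABLESPACE pg_default;
      -- Before the fix, buffers surviving from the original location could
      -- match lookups again here, presenting stale pages that lack the
      -- interim changes.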
* Switch to CRC-32C in WAL and other places. (Heikki Linnakangas, 2014-11-04)

  The old algorithm was found to not be the usual CRC-32 algorithm, used by Ethernet et al. We were using a non-reflected lookup table with code meant for a reflected lookup table. That's a strange combination that AFAICS does not correspond to any bit-wise CRC calculation, which makes it difficult to reason about its properties. Although it has worked well in practice, it seems safer to use a well-known algorithm.

  Since we're changing the algorithm anyway, we might as well choose a different polynomial. The Castagnoli polynomial has better error-correcting properties than the traditional CRC-32 polynomial, even if we had implemented it correctly. Another reason for picking that is that some new CPUs have hardware support for calculating CRC-32C, but not CRC-32, let alone our strange variant of it. This patch doesn't add any support for such hardware, but a future patch could now do that.

  The old algorithm is kept around for tsquery and pg_trgm, which use the values in indexes that need to remain compatible so that pg_upgrade works. While we're at it, share the old lookup table for CRC-32 calculation between hstore, ltree and core. They all use the same table, so might as well.
* Support frontend-backend protocol communication using a shm_mq. (Robert Haas, 2014-10-31)

  A background worker can use pq_redirect_to_shm_mq() to redirect protocol messages that would normally be sent to the frontend into a shm_mq, so that another process may read them. The receiving process may use pq_parse_errornotice() to parse an ErrorResponse or NoticeResponse from the background worker and, if it wishes, ThrowErrorData() to propagate the error (with or without further modification).

  Patch by me. Review by Andres Freund.
* Extend dsm API with a new function dsm_unpin_mapping. (Robert Haas, 2014-10-30)

  This reassociates a dynamic shared memory handle previously passed to dsm_pin_mapping with the current resource owner, so that it will be cleaned up at the end of the current query.

  Patch by me. Review of the function name by Andres Freund, Amit Kapila, Jim Nasby, Petr Jelinek, and Álvaro Herrera.
* Test IsInTransactionChain, not IsTransactionBlock, in vac_update_relstats. (Tom Lane, 2014-10-30)

  As noted by Noah Misch, my initial cut at fixing bug #11638 didn't cover all cases where ANALYZE might be invoked in an unsafe context. We need to test the result of IsInTransactionChain, not IsTransactionBlock; that is notationally a pain because IsInTransactionChain requires an isTopLevel flag, which would have to be passed down through several levels of callers. I chose to pass in_outer_xact (ie, the result of IsInTransactionChain) rather than isTopLevel per se, as that seemed marginally more apropos for the intermediate functions to know about.
* "Pin", rather than "keep", dynamic shared memory mappings and segments. (Robert Haas, 2014-10-30)

  Nobody seemed concerned about this naming when it originally went in, but there's a pending patch that implements the opposite of dsm_keep_mapping, and the term "unkeep" was judged unpalatable. "unpin" has existing precedent in the PostgreSQL code base, and the English language, so use this terminology instead.

  Per discussion, back-patch to 9.4.
* Avoid corrupting tables when ANALYZE inside a transaction is rolled back. (Tom Lane, 2014-10-29)

  VACUUM and ANALYZE update the target table's pg_class row in-place, that is nontransactionally. This is OK, more or less, for the statistical columns, which are mostly nontransactional anyhow. It's not so OK for the DDL hint flags (relhasindex etc), which might get changed in response to transactional changes that could still be rolled back. This isn't a problem for VACUUM, since it can't be run inside a transaction block nor in parallel with DDL on the table. However, we allow ANALYZE inside a transaction block, so if the transaction had earlier removed the last index, rule, or trigger from the table, and then we roll back the transaction after ANALYZE, the table would be left in a corrupted state with the hint flags not set though they should be.

  To fix, suppress the hint-flag updates if we are InTransactionBlock(). This is safe enough because it's always OK to postpone hint maintenance some more; the worst-case consequence is a few extra searches of pg_index et al. There was discussion of instead using a transactional update, but that would change the behavior in ways that are not all desirable: in most scenarios we're better off keeping ANALYZE's statistical values even if the ANALYZE itself rolls back. In any case we probably don't want to change this behavior in back branches.

  Per bug #11638 from Casey Shobe. This has been broken for a good long time, so back-patch to all supported branches.

  Tom Lane and Michael Paquier, initial diagnosis by Andres Freund
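  A minimal sketch of the hazard (table and index names hypothetical):

      BEGIN;
      DROP INDEX t_idx;   -- last index gone: a transactional catalog change
      ANALYZE t;          -- before the fix: cleared relhasindex in place,
                          -- nontransactionally
      ROLLBACK;           -- the index is back, but the in-place hint-flag
                          -- update was not rolled back with it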
* Avoid setup work for invalidation messages at start-of-(sub)xact. (Robert Haas, 2014-10-29)

  Instead of initializing a new TransInvalidationInfo for every transaction or subtransaction, we can just do it for those transactions or subtransactions that actually need to queue invalidation messages. That also avoids needing to free those entries at the end of a transaction or subtransaction that does not generate any invalidation messages, which is by far the common case.

  Patch by me. Review by Simon Riggs and Andres Freund.
* Remove obsolete commentary. (Tom Lane, 2014-10-28)

  Since we got rid of non-MVCC catalog scans, the fourth reason given for using a non-transactional update in index_update_stats() is obsolete. The other three are still good, so we're not going to change the code, but fix the comment.
* Remove unnecessary assignment. (Heikki Linnakangas, 2014-10-28)

  Reported by MauMau.
* Fix two bugs in tsquery @> operator. (Heikki Linnakangas, 2014-10-27)

  1. The comparison for matching terms used only the CRC to decide if there's a match. Two different terms with the same CRC gave a match.

  2. It assumed that if the second operand has more terms than the first, it's never a match. That assumption is bogus, because there can be duplicate terms in either operand.

  Rewrite the implementation in a way that doesn't have those bugs.

  Backpatch to all supported versions.
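  Bug 2 is easy to picture: because of duplicate terms, the right-hand operand can legitimately have more terms than the left. A hedged example of a query the old code would wrongly reject:

      -- All of the right's terms (cat, cat, dog) appear on the left, so this
      -- should be true, even though the right has more terms overall.
      SELECT 'cat & dog'::tsquery @> 'cat & cat & dog'::tsquery;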
* Improve planning of btree index scans using ScalarArrayOpExpr quals. (Tom Lane, 2014-10-26)

  Since we taught btree to handle ScalarArrayOpExpr quals natively (commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1), the planner has always included ScalarArrayOpExpr quals in index conditions if possible. However, if the qual is for a non-first index column, this could result in an inferior plan because we can no longer take advantage of index ordering (cf. commit 807a40c551dd30c8dd5a0b3bd82f5bbb1e7fd285). It can be better to omit the ScalarArrayOpExpr qual from the index condition and let it be done as a filter, so that the output doesn't need to get sorted. Indeed, this is true for the query introduced as a test case by the latter commit.

  To fix, restructure get_index_paths and build_index_paths so that we consider paths both with and without ScalarArrayOpExpr quals in non-first index columns. Redesign the API of build_index_paths so that it reports what it found, saving useless second or third calls.

  Report and patch by Andrew Gierth (though rather heavily modified by me). Back-patch to 9.2 where this code was introduced, since the issue can result in significant performance regressions compared to plans produced by 9.1 and earlier.
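  The shape of the affected query, with hypothetical names, assuming a btree index on (a, b):

      -- Pushing the IN list into the index condition destroys the scan's
      -- ordering and forces an explicit sort; applying it as a filter
      -- instead preserves index order.  The planner now costs both options.
      EXPLAIN SELECT * FROM t
      WHERE a = 1 AND b IN (10, 20, 30)
      ORDER BY a, b
      LIMIT 5;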
* Fix off-by-one error in 2781b4bea7db357be59f9a5fd73ca1eb12ff5a79. (Robert Haas, 2014-10-24)

  Spotted by Tom Lane.
* Update README.tuplock (Alvaro Herrera, 2014-10-23)

  This file was documenting an older version of patch 0ac5ad5134; update it to match what was really committed.

  Author: Florian Pflug
* Improve ispell dictionary's defenses against bad affix files. (Tom Lane, 2014-10-23)

  Don't crash if an ispell dictionary definition contains flags but not any compound affixes. (This isn't a security issue since only superusers can install affix files, but still it's a bad thing.)

  Also, be more careful about detecting whether an affix-file FLAG command is old-format (ispell) or new-format (myspell/hunspell). And change the error message about mixed old-format and new-format commands into something intelligible.

  Per bug #11770 from Emre Hasegeli. Back-patch to all supported branches.
* Perform less setup work for AFTER triggers at transaction start. (Robert Haas, 2014-10-23)

  Testing reveals that the memory allocation we do at transaction start has small but measurable overhead on simple transactions. To cut down on that overhead, defer some of that work to the point when AFTER triggers are first used, thus avoiding it altogether if they never are.

  Patch by me. Review by Andres Freund.
* Add a function to get the authenticated user ID. (Robert Haas, 2014-10-23)

  Previously, this was not exposed outside of miscinit.c. It is needed for the pending pg_background patch, and will also be needed for parallelism. Without it, there's no way for a background worker to re-create the exact authentication environment that was present in the process that started it, which could lead to security exposures.
* Prevent the already-archived WAL file from being archived again. (Fujii Masao, 2014-10-23)

  Previously, archive recovery always created a .ready file for the last WAL file of the old timeline at the end of recovery, even when that file had been restored from the archive and already had a .done file. That is, the WAL file could end up with both .ready and .done files, causing the already-archived file to be archived again.

  This commit prevents archive recovery from creating a .ready file for the last WAL file if it has a .done file, so that it is not archived again.

  This bug was added when the cascading replication feature was introduced, i.e., commit 5286105800c7d5902f98f32e11b209c471c0c69c. So, back-patch to 9.2, where cascading replication was added.

  Reviewed by Michael Paquier
* Reduce pg_class_aclcheck calls to the minimum necessary (Peter Eisentraut, 2014-10-22)

  In a couple of code paths, pg_class_aclcheck is called in succession with multiple different modes set. This patch combines those modes into a single call of the function, slightly reducing the overhead of permission checking.

  Author: Michael Paquier <michael@otacoo.com>
  Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
* Update comment. (Heikki Linnakangas, 2014-10-22)

  The _bt_tuplecompare() function mentioned in the comment hasn't existed for a long time.

  Peter Geoghegan
* Allow input format xxxx-xxxx-xxxx for macaddr type (Peter Eisentraut, 2014-10-21)

  Author: Herwin Weststrate <herwin@quarantainenet.nl>
  Reviewed-by: Ali Akbar <the.apaan@gmail.com>
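  For illustration, the newly accepted form parses to the same value as the traditional colon-separated one:

      SELECT '0800-2b01-0203'::macaddr;     -- new xxxx-xxxx-xxxx form
      SELECT '08:00:2b:01:02:03'::macaddr;  -- equivalent traditional form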
* Don't duplicate log_checkpoint messages for both restartpoints and checkpoints. (Andres Freund, 2014-10-21)

  The duplication originated in cdd46c765, where restartpoints were introduced. In LogCheckpointStart's case the duplication actually led to the compiler's format-string checking being ineffective, because the format string wasn't constant.

  Arguably these messages shouldn't be elog(), but ereport() style messages. That'd even allow translating the messages... But as there are more mistakes of that kind in the surrounding code, it seems better to change that separately.
* Renumber CHECKPOINT_* flags. (Andres Freund, 2014-10-21)

  Commit 7dbb6069382 added a new CHECKPOINT_FLUSH_ALL flag. As that commit needed to be backpatched, I didn't change the numeric values of the existing flags, since that could lead to nasty problems if any external code issued checkpoints. That's not a concern on master, so renumber them there.

  Also add a comment about CHECKPOINT_FLUSH_ALL above CreateCheckPoint().
* Flush unlogged table's buffers when copying or moving databases. (Andres Freund, 2014-10-20)

  CREATE DATABASE and ALTER DATABASE .. SET TABLESPACE copy the source database directory on the filesystem level. To ensure the on-disk state is consistent they block out users of the affected database and force a checkpoint to flush out all data to disk. Unfortunately, up to now, that checkpoint didn't flush out dirty buffers from unlogged relations.

  That bug means there could be leftover dirty buffers in either the template database or the database in its old location, leading to problems when accessing relations in an inconsistent state, and to possible problems during shutdown in the SET TABLESPACE case because buffers belonging to files that don't exist anymore are flushed.

  This was reported in bug #10675 by Maxim Boguk.

  Fix by Pavan Deolasee, modified somewhat by me. Reviewed by MauMau and Fujii Masao. Backpatch to 9.1 where unlogged tables were introduced.
* Fix mishandling of FieldSelect-on-whole-row-Var in nested lateral queries. (Tom Lane, 2014-10-20)

  If an inline-able SQL function taking a composite argument is used in a LATERAL subselect, and the composite argument is a lateral reference, the planner could fail with "variable not found in subplan target list", as seen in bug #11703 from Karl Bartel. (The outer function call used in the bug report and in the committed regression test is not really necessary to provoke the bug --- you can get it if you manually expand the outer function into "LATERAL (SELECT inner_function(outer_relation))", too.)

  The cause of this is that we generate the reltargetlist for the referenced relation before doing eval_const_expressions() on the lateral sub-select's expressions (cf find_lateral_references()), so what's scheduled to be emitted by the referenced relation is a whole-row Var, not the simplified single-column Var produced by optimizing the function's FieldSelect on the whole-row Var. Then setrefs.c fails to match up that lateral reference to what's available from the outer scan.

  Preserving the FieldSelect optimization in such cases would require either major planner restructuring (to recursively do expression simplification on sub-selects much earlier) or some amazingly ugly kluge to change the reltargetlist of a possibly-already-planned relation. It seems better just to skip the optimization when the Var is from an upper query level; the case is not so common that it's likely anyone will notice a few wasted cycles.

  AFAICT this problem only occurs for uplevel LATERAL references, so back-patch to 9.3 where LATERAL was added.
* Fix typos. (Robert Haas, 2014-10-20)

  David Rowley
* Fix typos. (Robert Haas, 2014-10-20)

  Etsuro Fujita
* Allow setting effective_io_concurrency even on unsupported systems (Peter Eisentraut, 2014-10-18)

  This matches the behavior of other parameters that are unsupported on some systems (e.g., ssl). Also document the default value.
* Shorten warning about hash creation (Bruce Momjian, 2014-10-18)

  Also document that PITR is affected.
* interval: tighten precision specification (Bruce Momjian, 2014-10-18)

  Interval precision can only be specified after the "interval" keyword if no units are specified. Previously we incorrectly checked the units to see if the precision was legal, causing confusion.

  Report by Alvaro Herrera
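  A hedged illustration of the rule (exact error wording aside):

      SELECT '1.999999 sec'::interval(3);         -- OK: no units given
      SELECT '1.999999 sec'::interval second(3);  -- OK: precision attaches
                                                  -- to the unit
      -- SELECT '1.999999 sec'::interval(3) second;  -- rejected: precision
      --                                              -- may not precede units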
* Avoid core dump in _outPathInfo() for Path without a parent RelOptInfo. (Tom Lane, 2014-10-17)

  Nearly all Paths have parents, but a ResultPath representing an empty FROM clause does not. Avoid a core dump in such cases. I believe this is only a hazard for debugging usage, not for production, else we'd have heard about it before. Nonetheless, back-patch to 9.1 where the troublesome code was introduced.

  Noted while poking at bug #11703.
* Support timezone abbreviations that sometimes change. (Tom Lane, 2014-10-16)

  Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

  To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed.

  The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

  While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

  This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

  This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

  In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.
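  Sketched against the Russian change this commit anticipates (MSK moved from UTC+4 to UTC+3 on 2014-10-26), a dynamic abbreviation resolves to whichever offset was in force at the timestamp in question:

      SELECT '2014-10-25 12:00 MSK'::timestamptz;  -- interpreted as UTC+4
      SELECT '2014-10-27 12:00 MSK'::timestamptz;  -- interpreted as UTC+3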
* Print planning time only in EXPLAIN ANALYZE, not plain EXPLAIN. (Tom Lane, 2014-10-15)

  We've gotten enough push-back on that change to make it clear that it wasn't an especially good idea to do it like that. Revert plain EXPLAIN to its previous behavior, but keep the extra output in EXPLAIN ANALYZE. Per discussion.

  Internally, I set this up as a separate flag ExplainState.summary that controls printing of planning time and execution time. For now it's just copied from the ANALYZE option, but we could consider exposing it to users.
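  The resulting behavior, in brief:

      EXPLAIN SELECT 1;          -- no "Planning time" line any more
      EXPLAIN ANALYZE SELECT 1;  -- still prints "Planning time" and
                                 -- "Execution time"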
* Fix deadlock with LWLockAcquireWithVar and LWLockWaitForVar. (Heikki Linnakangas, 2014-10-14)

  LWLockRelease should release all backends waiting with LWLockWaitForVar, even when another backend has already been woken up to acquire the lock, i.e. when releaseOK is false. LWLockWaitForVar can return as soon as the protected value changes, even if the other backend will acquire the lock. Fix that by resetting releaseOK to true in LWLockWaitForVar, whenever adding itself to the wait queue.

  This should fix the bug reported by MauMau, where the system occasionally hangs when there is a lot of concurrent WAL activity and a checkpoint. Backpatch to 9.4, where this code was added.
* C comments: adjust execTuples.c for new structure (Bruce Momjian, 2014-10-13)

  Report by Peter Geoghegan
* Consistently use NULL for invalid GUC unit strings (Bruce Momjian, 2014-10-13)

  Patch by Euler Taveira
* Increase number of hash join buckets for underestimate. (Kevin Grittner, 2014-10-13)

  If we expect batching at the very beginning, we size nbuckets for "full work_mem" (see how many tuples we can get into work_mem, while not breaking the NTUP_PER_BUCKET threshold).

  If we expect to be fine without batching, we start with the 'right' nbuckets and track the optimal nbuckets as we go (without actually resizing the hash table). Once we hit work_mem (considering the optimal nbuckets value), we keep the value.

  At the end of the first batch, we check whether (nbuckets != nbuckets_optimal) and resize the hash table if needed. Also, we keep this value for all batches (it's OK because it assumes full work_mem, and it makes the batchno evaluation trivial). So the resize happens only once.

  There could be cases where it would improve performance to allow the NTUP_PER_BUCKET threshold to be exceeded to keep everything in one batch rather than spilling to a second batch, but attempts to generate such a case have so far been unsuccessful; that issue may be addressed with a follow-on patch after further investigation.

  Tomas Vondra with minor format and comment cleanup by me

  Reviewed by Robert Haas, Heikki Linnakangas, and Kevin Grittner
* Message improvements (Peter Eisentraut, 2014-10-12)
* Fix bogus optimization in JSONB containment tests. (Tom Lane, 2014-10-11)

  When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries.

  The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects not only top-level ones.

  Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.
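  A quick illustration of why the pair-count shortcut is valid for objects but not for arrays:

      SELECT '{"a": 1}'::jsonb @> '{"a": 1, "b": 2}'::jsonb;
      -- false: object keys are de-duplicated, so fewer pairs on the left
      -- settles it

      SELECT '[1, 2]'::jsonb @> '[1, 2, 1]'::jsonb;
      -- true: arrays may repeat elements, so element counts prove nothing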
* Split builtins.h to a new header ruleutils.h (Alvaro Herrera, 2014-10-08)

  The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

  Reviewed by Andres Freund and Michael Paquier.
* Extend shm_mq API with new functions shm_mq_sendv, shm_mq_set_handle. (Robert Haas, 2014-10-08)

  shm_mq_sendv sends a message to the queue assembled from multiple locations. This is expected to be used by forthcoming patches to allow frontend/backend protocol messages to be sent via shm_mq, but might be useful for other purposes as well.

  shm_mq_set_handle associates a BackgroundWorkerHandle with an already-existing shm_mq_handle. This solves a timing problem when creating a shm_mq to communicate with a newly-launched background worker: if you attach to the queue first, and the background worker fails to start, you might block forever trying to do I/O on the queue; but if you start the background worker first, but then die before attaching to the queue, the background worker might block forever trying to do I/O on the queue. This lets you attach before starting the worker (so that the worker is protected) and then associate the BackgroundWorkerHandle later (so that you are also protected).

  Patch by me, reviewed by Stephen Frost.
* Implement SKIP LOCKED for row-level locks (Alvaro Herrera, 2014-10-07)

  This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

  Catalog version bumped because this patch changes the representation of stored rules.

  Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

  Author: Thomas Munro
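  The queue pattern this enables, with hypothetical table and column names:

      -- Each worker claims one pending job, skipping rows other workers
      -- have already locked instead of blocking on them.
      SELECT id, payload
        FROM jobs
       WHERE status = 'pending'
       ORDER BY id
       LIMIT 1
         FOR UPDATE SKIP LOCKED;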
* Fix typo in elog message. (Robert Haas, 2014-10-07)
* Translation updates (Peter Eisentraut, 2014-10-05)
* Eliminate one background-worker-related flag variable. (Robert Haas, 2014-10-04)

  Teach sigusr1_handler() to use the same test for whether a worker might need to be started as ServerLoop(). Aside from being perhaps a bit simpler, this prevents a potentially-unbounded delay when starting a background worker. On some platforms, select() doesn't return when interrupted by a signal, but is instead restarted, including a reset of the timeout to the originally-requested value. If signals arrive often enough, but no connection requests arrive, sigusr1_handler() will be executed repeatedly, but the body of ServerLoop() won't be reached. This change ensures that, even in that case, background workers will eventually get launched.

  This is far from a perfect fix; really, we need select() to return control to ServerLoop() after an interrupt, either via the self-pipe trick or some other mechanism. But that's going to require more work and discussion, so let's do this for now to at least mitigate the damage.

  Per investigation of test_shm_mq failures on buildfarm member anole.
* Update time zone data files to tzdata release 2014h. (Tom Lane, 2014-10-04)

  Most zones in the Russian Federation are subtracting one or two hours as of 2014-10-26. Update the meanings of the abbreviations IRKT, KRAT, MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.

  The IANA timezone database has adopted abbreviations of the form AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into our "Default" timezone abbreviation set. The "Australia" abbreviation set now contains only CST, EAST, EST, SAST, SAT, WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South Africa Standard Time in the "Default" abbreviation set.

  Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi), and use WSST/WSDT for western Samoa. Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk), and numerous corrections for historical time zone data.
* Fix CreatePolicy, pg_dump -v; psql and doc updates (Stephen Frost, 2014-10-03)

  Peter G pointed out that valgrind was, rightfully, complaining about CreatePolicy() ending up copying beyond the end of the parsed policy name. Name is a fixed-size type and we need to use namein (through DirectFunctionCall1()) to flush out the entire array before we pass it down to heap_form_tuple.

  Michael Paquier pointed out that pg_dump --verbose was missing a newline and Fabrízio de Royes Mello further pointed out that the schema was also missing from the messages, so fix those also.

  Also, based on an off-list comment from Kevin, rework the psql \d output to facilitate copy/pasting into a new CREATE or ALTER POLICY command.

  Lastly, improve the pg_policies view and update the documentation for it, along with a few other minor doc corrections based on an off-list discussion with Adam Brightwell.