path: root/src
Commit message · Author · Age
* Support timezone abbreviations that sometimes change. (Tom Lane, 2014-10-16)

  Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

  To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed.

  The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

  While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

  This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

  This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

  In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.
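  As a rough illustration of what "dynamic" resolution means here (a sketch of my own, not taken from the commit), the same abbreviation can map to different offsets depending on the date it is attached to:

      -- Assuming the Default abbreviation set, where MSK is tied to Europe/Moscow:
      SET timezone_abbreviations = 'Default';
      SELECT '2012-06-01 12:00 MSK'::timestamptz;  -- resolved as UTC+4, the offset in force in 2012
      SELECT '2015-06-01 12:00 MSK'::timestamptz;  -- resolved as UTC+3, after the October 2014 change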
* Print planning time only in EXPLAIN ANALYZE, not plain EXPLAIN. (Tom Lane, 2014-10-15)

  We've gotten enough push-back on that change to make it clear that it wasn't an especially good idea to do it like that. Revert plain EXPLAIN to its previous behavior, but keep the extra output in EXPLAIN ANALYZE. Per discussion.

  Internally, I set this up as a separate flag ExplainState.summary that controls printing of planning time and execution time. For now it's just copied from the ANALYZE option, but we could consider exposing it to users.
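  A quick way to see the resulting behavior (example queries are mine, not from the commit):

      EXPLAIN SELECT * FROM pg_class;          -- plan only; no "Planning time" line
      EXPLAIN ANALYZE SELECT * FROM pg_class;  -- still prints "Planning time" and "Execution time"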
* Blind attempt at fixing Win32 pg_dump issues (Alvaro Herrera, 2014-10-14)

  Per buildfarm failures
* pg_dump: Reduce use of global variables (Alvaro Herrera, 2014-10-14)

  Most pg_dump.c global variables, which were passed down individually to dumping routines, are now grouped as members of the new DumpOptions struct, which is used as a local variable and passed down into routines that need it. This helps future development efforts; in particular it is said to enable a mode in which a parallel pg_dump run can output multiple streams, and have them restored in parallel.

  Also take the opportunity to clean up the pg_dump header files somewhat, to avoid circularity.

  Author: Joachim Wieland, revised by Álvaro Herrera
  Reviewed by Peter Eisentraut
* Fix deadlock with LWLockAcquireWithVar and LWLockWaitForVar. (Heikki Linnakangas, 2014-10-14)

  LWLockRelease should release all backends waiting with LWLockWaitForVar, even when another backend has already been woken up to acquire the lock, i.e. when releaseOK is false. LWLockWaitForVar can return as soon as the protected value changes, even if the other backend will acquire the lock. Fix that by resetting releaseOK to true in LWLockWaitForVar, whenever adding itself to the wait queue.

  This should fix the bug reported by MauMau, where the system occasionally hangs when there is a lot of concurrent WAL activity and a checkpoint. Backpatch to 9.4, where this code was added.
* psql: Fix \? output alignment (Peter Eisentraut, 2014-10-13)

  This was inadvertently changed in commit c64e68fd.
* C comments: adjust execTuples.c for new structure (Bruce Momjian, 2014-10-13)

  Report by Peter Geoghegan
* Consistently use NULL for invalid GUC unit strings (Bruce Momjian, 2014-10-13)

  Patch by Euler Taveira
* Increase number of hash join buckets for underestimate. (Kevin Grittner, 2014-10-13)

  If we expect batching at the very beginning, we size nbuckets for "full work_mem" (see how many tuples we can get into work_mem, while not breaking the NTUP_PER_BUCKET threshold).

  If we expect to be fine without batching, we start with the 'right' nbuckets and track the optimal nbuckets as we go (without actually resizing the hash table). Once we hit work_mem (considering the optimal nbuckets value), we keep the value.

  At the end of the first batch, we check whether (nbuckets != nbuckets_optimal) and resize the hash table if needed. Also, we keep this value for all batches (it's OK because it assumes full work_mem, and it makes the batchno evaluation trivial). So the resize happens only once.

  There could be cases where it would improve performance to allow the NTUP_PER_BUCKET threshold to be exceeded to keep everything in one batch rather than spilling to a second batch, but attempts to generate such a case have so far been unsuccessful; that issue may be addressed with a follow-on patch after further investigation.

  Tomas Vondra with minor format and comment cleanup by me
  Reviewed by Robert Haas, Heikki Linnakangas, and Kevin Grittner
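  For what it's worth, the chosen bucket and batch counts appear in the Hash node of EXPLAIN ANALYZE output, so an invented query like the following (table names are hypothetical) is one way to observe the effect on an underestimated join:

      EXPLAIN ANALYZE
      SELECT *
      FROM orders o
      JOIN customers c ON c.id = o.customer_id;
      -- the Hash node reports "Buckets: ... Batches: ... Memory Usage: ..."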
* Fix quoting in the add_to_path Makefile macro. (Noah Misch, 2014-10-12)

  The previous quoting caused "make -C src/bin check" to ignore, rather than add to, any LD_LIBRARY_PATH content from the environment. Back-patch to 9.4, where the macro was introduced.
* pg_ctl: Cast DWORD values to avoid -Wformat warnings. (Noah Misch, 2014-10-12)

  This affects pg_ctl alone, because pg_ctl takes the exceptional step of calling Windows API functions in a Cygwin build.
* Suppress dead, unportable src/port/crypt.c code. (Noah Misch, 2014-10-12)

  This file used __int64, which is specific to native Windows, rather than int64. Suppress the long-unused union field of this type. Noticed on Cygwin x86_64 with -lcrypt not installed. Back-patch to 9.0 (all supported versions).
* pg_recvlogical: Improve --help output (Peter Eisentraut, 2014-10-12)

  List the actions first, as they are the most important options. Group the other options more sensibly, consistent with the man page. Correct a few typographical errors, clarify some things.

  Also update the pg_receivexlog --help output to make it a bit more consistent with that of pg_recvlogical.
* Message improvements (Peter Eisentraut, 2014-10-12)
* regression: adjust polygon diagrams to not use tabs (Bruce Momjian, 2014-10-11)

  Also, small diagram adjustments

  Patch by Emre Hasegeli
* Fix bogus optimization in JSONB containment tests. (Tom Lane, 2014-10-11)

  When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries. The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects not only top-level ones.

  Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.
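  A small example of why the shortcut is valid for objects but not arrays (values invented, not the reported test cases):

      SELECT '[1,2,3]'::jsonb @> '[1,1,1,1]'::jsonb;       -- true: duplicates in the contained array are fine
      SELECT '{"a":1}'::jsonb @> '{"a":1,"b":2}'::jsonb;   -- false: object keys are de-duplicated, so fewer pairs really does mean "cannot contain"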
* Split builtins.h to a new header ruleutils.h (Alvaro Herrera, 2014-10-08)

  The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

  Reviewed by Andres Freund and Michael Paquier.
* Extend shm_mq API with new functions shm_mq_sendv, shm_mq_set_handle. (Robert Haas, 2014-10-08)

  shm_mq_sendv sends a message to the queue assembled from multiple locations. This is expected to be used by forthcoming patches to allow frontend/backend protocol messages to be sent via shm_mq, but might be useful for other purposes as well.

  shm_mq_set_handle associates a BackgroundWorkerHandle with an already-existing shm_mq_handle. This solves a timing problem when creating a shm_mq to communicate with a newly-launched background worker: if you attach to the queue first, and the background worker fails to start, you might block forever trying to do I/O on the queue; but if you start the background worker first, but then die before attaching to the queue, the background worker might block forever trying to do I/O on the queue. This lets you attach before starting the worker (so that the worker is protected) and then associate the BackgroundWorkerHandle later (so that you are also protected).

  Patch by me, reviewed by Stephen Frost.
* Implement SKIP LOCKED for row-level locks (Alvaro Herrera, 2014-10-07)

  This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

  Catalog version bumped because this patch changes the representation of stored rules.

  Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

  Author: Thomas Munro
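  A sketch of the queue-like usage mentioned above (table and column names are invented):

      BEGIN;
      SELECT id, payload
      FROM job_queue
      WHERE status = 'pending'
      ORDER BY id
      LIMIT 1
      FOR UPDATE SKIP LOCKED;   -- rows locked by other workers are skipped instead of blocking
      -- ... process the job, then mark it done ...
      COMMIT;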
* Fix typo in elog message. (Robert Haas, 2014-10-07)
* Fix array overrun in ecpg's version of ParseDateTime(). (Tom Lane, 2014-10-06)

  The code wrote a value into the caller's field[] array before checking to see if there was room, which of course is backwards. Per report from Michael Paquier.

  I fixed the equivalent bug in the backend's version of this code way back in 630684d3a130bb93, but failed to think about ecpg's copy. Fortunately this doesn't look like it would be exploitable for anything worse than a core dump: an external attacker would have no control over the single word that gets written.
* Clean up Create/DropReplicationSlot query buffer (Stephen Frost, 2014-10-06)

  CreateReplicationSlot() and DropReplicationSlot() were not cleaning up the query buffer in some cases (mostly error conditions), which meant a small leak. Not generally an issue as the error case would result in an immediate exit, but not difficult to fix either and reduces the number of false positives from code analyzers.

  In passing, also add appropriate PQclear() calls to RunIdentifySystem().

  Pointed out by Coverity.
* Add support for managing physical replication slots to pg_receivexlog. (Andres Freund, 2014-10-06)

  pg_receivexlog already has the capability to use a replication slot to reserve WAL on the upstream node. But the used slot currently has to be created via SQL.

  To allow using slots directly, without involving SQL, add --create-slot and --drop-slot actions, analogous to the logical slot manipulation support in pg_recvlogical.

  Author: Michael Paquier
  Discussion: CABUevEx+zrOHZOQg+dPapNPFRJdsk59b=TSVf30Z71GnFXhQaw@mail.gmail.com
* Rename pg_recvlogical's --create/--drop to --create-slot/--drop-slot. (Andres Freund, 2014-10-06)

  A future patch (9.5 only) adds slot management to pg_receivexlog. The verbs create/drop don't seem descriptive enough there. It seems better to rename pg_recvlogical's commands now, in beta, than live with the inconsistency forever.

  The old form (e.g. --drop) will still be accepted by virtue of most getopt_long() options accepting abbreviations for long commands.

  Backpatch to 9.4 where pg_recvlogical was introduced.

  Author: Michael Paquier and Andres Freund
  Discussion: CAB7nPqQtt79U6FmhwvgqJmNyWcVCbbV-nS72j_jyPEopERg9rg@mail.gmail.com
* Translation updates (Peter Eisentraut, 2014-10-05)
* Eliminate one background-worker-related flag variable. (Robert Haas, 2014-10-04)

  Teach sigusr1_handler() to use the same test for whether a worker might need to be started as ServerLoop(). Aside from being perhaps a bit simpler, this prevents a potentially-unbounded delay when starting a background worker. On some platforms, select() doesn't return when interrupted by a signal, but is instead restarted, including a reset of the timeout to the originally-requested value. If signals arrive often enough, but no connection requests arrive, sigusr1_handler() will be executed repeatedly, but the body of ServerLoop() won't be reached. This change ensures that, even in that case, background workers will eventually get launched.

  This is far from a perfect fix; really, we need select() to return control to ServerLoop() after an interrupt, either via the self-pipe trick or some other mechanism. But that's going to require more work and discussion, so let's do this for now to at least mitigate the damage.

  Per investigation of test_shm_mq failures on buildfarm member anole.
* Update time zone data files to tzdata release 2014h. (Tom Lane, 2014-10-04)

  Most zones in the Russian Federation are subtracting one or two hours as of 2014-10-26. Update the meanings of the abbreviations IRKT, KRAT, MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.

  The IANA timezone database has adopted abbreviations of the form AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into our "Default" timezone abbreviation set. The "Australia" abbreviation set now contains only CST, EAST, EST, SAST, SAT, WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South Africa Standard Time in the "Default" abbreviation set.

  Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi), and use WSST/WSDT for western Samoa.

  Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk), and numerous corrections for historical time zone data.
* Update time zone abbreviations lists. (Tom Lane, 2014-10-03)

  This updates known_abbrevs.txt to be what it should have been already, were my -P patch not broken; and updates some tznames/ entries that missed getting any love in previous timezone data updates because zic failed to flag the change of abbreviation.

  The non-cosmetic updates:

  * Remove references to "ADT" as "Arabia Daylight Time", an abbreviation that's been out of use since 2007; therefore, claiming there is a conflict with "Atlantic Daylight Time" doesn't seem especially helpful. (We have left obsolete entries in the files when they didn't conflict with anything, but that seems like a different situation.)

  * Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, FJST (Fiji); we didn't even have them on the proper side of the date line. (Seems to have been aboriginal errors in our tznames data; there's no evidence anything actually changed recently.)

  * FKST (Falkland Islands Summer Time) is now used all year round, so don't mark it as a DST abbreviation.

  * Update SAKT (Sakhalin) to mean GMT+11 not GMT+10.

  In cosmetic changes, I fixed a bunch of wrong (or at least obsolete) claims about abbreviations not being present in the zic files, and tried to be consistent about how obsolete abbreviations are labeled.

  Note the underlying timezone/data files are still at release 2014e; this is just trying to get us in sync with what those files actually say before we go to the next update.
* Fix CreatePolicy, pg_dump -v; psql and doc updates (Stephen Frost, 2014-10-03)

  Peter G pointed out that valgrind was, rightfully, complaining about CreatePolicy() ending up copying beyond the end of the parsed policy name. Name is a fixed-size type and we need to use namein (through DirectFunctionCall1()) to flush out the entire array before we pass it down to heap_form_tuple.

  Michael Paquier pointed out that pg_dump --verbose was missing a newline and Fabrízio de Royes Mello further pointed out that the schema was also missing from the messages, so fix those also.

  Also, based on an off-list comment from Kevin, rework the psql \d output to facilitate copy/pasting into a new CREATE or ALTER POLICY command.

  Lastly, improve the pg_policies view and update the documentation for it, along with a few other minor doc corrections based on an off-list discussion with Adam Brightwell.
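  For context, the policies in question are created with statements along these lines (an invented example, not taken from the patch):

      ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
      CREATE POLICY account_owner ON accounts
          FOR SELECT
          USING (owner = current_user);
      -- \d accounts then lists the policy in a form meant to be easy to copy
      -- back into a CREATE or ALTER POLICY command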
* Fix bogus logic for zic -P option. (Tom Lane, 2014-10-03)

  The quick hack I added to zic to dump out currently-in-use timezone abbreviations turns out to have a nasty bug: within each zone, it was printing the last "struct ttinfo" to be *defined*, not necessarily the last one in use. This was mainly a problem in zones that had changed the meaning of their zone abbreviation (to another GMT offset value) and later changed it back.

  As a result of this error, we'd missed out updating the tznames/ files for some jurisdictions that have changed their zone abbreviations since the tznames/ files were originally created. I'll address the missing data updates in a separate commit.
* Don't balance vacuum cost delay when per-table settings are in effect (Alvaro Herrera, 2014-10-03)

  When there are cost-delay-related storage options set for a table, trying to make that table participate in the autovacuum cost-limit balancing algorithm produces undesirable results: instead of using the configured values, the global values are always used, as illustrated by Mark Kirkwood in http://www.postgresql.org/message-id/52FACF15.8020507@catalyst.net.nz

  Since the mechanism is already complicated, just disable it for those cases rather than trying to make it cope. There are undesirable side-effects from this too, namely that the total I/O impact on the system will be higher whenever such tables are vacuumed. However, this is seen as less harmful than slowing down vacuum, because that would cause bloat to accumulate. Anyway, in the new system it is possible to tweak options to get the precise behavior one wants, whereas with the previous system one was simply hosed.

  This has been broken forever, so backpatch to all supported branches. This might affect systems where cost_limit and cost_delay have been set for individual tables.
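  The per-table settings in question are the autovacuum storage parameters, for example (table name invented):

      ALTER TABLE busy_table SET (
          autovacuum_vacuum_cost_delay = 5,     -- milliseconds
          autovacuum_vacuum_cost_limit = 1000
      );
      -- with these set, autovacuum now uses exactly these values for busy_table
      -- instead of folding the table into the global cost-limit balancing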
* Fix typos in comments. (Robert Haas, 2014-10-03)

  Etsuro Fujita
* Still another typo fix for 0709b7ee72e4bc71ad07b7120acd117265ab51d0. (Robert Haas, 2014-10-03)

  Buildfarm member anole caught this one.
* Check for GiST index tuples that don't fit on a page. (Heikki Linnakangas, 2014-10-03)

  The page splitting code would go into infinite recursion if you try to insert an index tuple that doesn't fit even on an empty page. Per analysis and suggested fix by Andrew Gierth. Fixes bug #11555, reported by Bryan Seitz (analysis happened over IRC).

  Backpatch to all supported versions.
* Increase the number of buffer mapping partitions to 128. (Robert Haas, 2014-10-02)

  Testing by Amit Kapila, Andres Freund, and myself, with and without other patches that also aim to improve scalability, seems to indicate that this change is a significant win over the current value and over smaller values such as 64. It's not clear how high we can push this value before it starts to have negative side-effects elsewhere, but going this far looks OK.
* Install all headers for the new atomics API. (Andres Freund, 2014-10-02)

  Previously, by mistake, only atomics.h was installed.

  Kohei KaiGai
* Fix some more problems with nested append relations. (Tom Lane, 2014-10-01)

  As of commit a87c72915 (which later got backpatched as far as 9.1), we're explicitly supporting the notion that append relations can be nested; this can occur when UNION ALL constructs are nested, or when a UNION ALL contains a table with inheritance children.

  Bug #11457 from Nelson Page, as well as an earlier report from Elvis Pranskevichus, showed that there were still nasty bugs associated with such cases: in particular the EquivalenceClass mechanism could try to generate "join" clauses connecting an appendrel child to some grandparent appendrel, which would result in assertion failures or bogus plans.

  Upon investigation I concluded that all current callers of find_childrel_appendrelinfo() need to be fixed to explicitly consider multiple levels of parent appendrels. The most complex fix was in processing of "broken" EquivalenceClasses, which are ECs for which we have been unable to generate all the derived equality clauses we would like to because of missing cross-type equality operators in the underlying btree operator family. That code path is more or less entirely untested by the regression tests to date, because no standard opfamilies have such holes in them. So I wrote a new regression test script to try to exercise it a bit, which turned out to be quite a worthwhile activity as it exposed existing bugs in all supported branches.

  The present patch is essentially the same as far back as 9.2, which is where parameterized paths were introduced. In 9.0 and 9.1, we only need to back-patch a small fragment of commit 5b7b5518d, which fixes failure to propagate out the original WHERE clauses when a broken EC contains constant members. (The regression test case results show that these older branches are noticeably stupider than 9.2+ in terms of the quality of the plans generated; but we don't really care about plan quality in such cases, only that the plan not be outright wrong. A more invasive fix in the older branches would not be a good idea anyway from a plan-stability standpoint.)
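  An invented example of the nested-appendrel query shape (not the reporters' queries): a UNION ALL nested inside another UNION ALL, as described above.

      SELECT a FROM (
          SELECT a FROM t1
          UNION ALL
          SELECT a FROM t2
      ) sub
      UNION ALL
      SELECT a FROM t3
      ORDER BY a;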
* Refactor replication connection code of various pg_basebackup utilities. (Andres Freund, 2014-10-01)

  Move some more of the code that manages replication connection commands to streamutil.c. A later patch will introduce replication slot support in pg_receivexlog, and this avoids duplicating the relevant code between pg_receivexlog and pg_recvlogical.

  Author: Michael Paquier, with some editing by me.
* pg_recvlogical.c code review. (Andres Freund, 2014-10-01)

  Several comments still referred to 'initiating', 'freeing', 'stopping' replication slots. These were terms used during different phases of the development of logical decoding, but are no longer accurate.

  Also rename StreamLog() to StreamLogicalLog() and add 'void' to the prototype.

  Author: Michael Paquier, with some editing by me.

  Backpatch to 9.4 where pg_recvlogical was introduced.
* Remove num_xloginsert_locks GUC, replace with a #define (Heikki Linnakangas, 2014-10-01)

  I left the GUC in place for the beta period, so that people could experiment with different values. No-one's come up with any data that a different value would be better under some circumstances, so rather than try to document to users what the GUC does, let's just hard-code the current value, 8.
* Block signals while computing the sleep time in postmaster's main loop. (Andres Freund, 2014-10-01)

  DetermineSleepTime() was previously called without blocked signals. That's not good, because it allows signal handlers to interrupt its workings.

  DetermineSleepTime() was added in 9.3 with the addition of background workers (da07a1e856511), where it only read from BackgroundWorkerList. Since 9.4, where dynamic background workers were added (7f7485a0cde), the list is also manipulated in DetermineSleepTime(). That's bad because the list now can be persistently corrupted if modified by both a signal handler and DetermineSleepTime().

  This was discovered during the investigation of hangs on buildfarm member anole. It's unclear whether this bug is the source of these hangs or not, but it's worth fixing either way. I have confirmed that it can cause crashes.

  It luckily looks like this only can cause problems when bgworkers are actively used.

  Discussion: 20140929193733.GB14400@awork2.anarazel.de

  Backpatch to 9.3 where background workers were introduced.
* Improve documentation about binary/textual output mode for output plugins. (Andres Freund, 2014-10-01)

  Also improve the related error message, as it contributed to the confusion.

  Discussion: CAB7nPqQrqFzjqCjxu4GZzTrD9kpj6HMn9G5aOOMwt1WZ8NfqeA@mail.gmail.com, CAB7nPqQXc_+g95zWnqaa=mVQ4d3BVRs6T41frcEYi2ocUrR3+A@mail.gmail.com

  Per discussion between Michael Paquier, Robert Haas and Andres Freund

  Backpatch to 9.4 where logical decoding was introduced.
* Rename CACHE_LINE_SIZE to PG_CACHE_LINE_SIZE. (Andres Freund, 2014-10-01)

  As noted in http://bugs.debian.org/763098 there is a conflict between postgres' definition of CACHE_LINE_SIZE and the definition by various *bsd platforms. It's debatable who has the right to define such a name, but postgres' use was only introduced in 375d8526f290 (9.4), so it seems like a good idea to rename it.

  Discussion: 20140930195756.GC27407@msg.df7cb.de

  Per complaint of Christoph Berg in the above email, although he's not the original bug reporter.

  Backpatch to 9.4 where the define was introduced.
* Fix pg_dump's --if-exists for large objects (Alvaro Herrera, 2014-09-30)

  This was born broken in 9067310cc5dd590e36c2c3219dbf3961d7c9f8cb.

  Per trouble report from Joachim Wieland.

  Pavel Stěhule and Álvaro Herrera
* Also revert e3ec0728, JSON regression tests (Stephen Frost, 2014-09-29)

  Managed to forget to update the other JSON regression test output, again. Revert the commit which fixed it before.

  Per buildfarm.
* Revert 95d737ff to add 'ignore_nulls' (Stephen Frost, 2014-09-29)

  Per discussion, revert the commit which added 'ignore_nulls' to row_to_json. This capability would be better added as an independent function rather than being bolted on to row_to_json. Additionally, the implementation didn't address complex JSON objects, and so was incomplete anyway.

  Pointed out by Tom and discussed with Andrew and Robert.
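  After the revert, row_to_json keeps emitting NULL fields as before; for instance:

      SELECT row_to_json(t) FROM (SELECT 1 AS a, NULL::int AS b) t;
      -- => {"a":1,"b":null}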
* Change JSONB's on-disk format for improved performance. (Tom Lane, 2014-09-29)

  The original design used an array of offsets into the variable-length portion of a JSONB container. However, such an array is basically uncompressible by simple compression techniques such as TOAST's LZ compressor. That's bad enough, but because the offset array is at the front, it tended to trigger the give-up-after-1KB heuristic in the TOAST code, so that the entire JSONB object was stored uncompressed; which was the root cause of bug #11109 from Larry White.

  To fix without losing the ability to extract a random array element in O(1) time, change this scheme so that most of the JEntry array elements hold lengths rather than offsets. With data that's compressible at all, there tend to be fewer distinct element lengths, so that there is scope for compression of the JEntry array. Every N'th entry is still an offset. To determine the length or offset of any specific element, we might have to examine up to N preceding JEntrys, but that's still O(1) so far as the total container size is concerned. Testing shows that this cost is negligible compared to other costs of accessing a JSONB field, and that the method does largely fix the incompressible-data problem.

  While at it, rearrange the order of elements in a JSONB object so that it's "all the keys, then all the values" not alternating keys and values. This doesn't really make much difference right at the moment, but it will allow providing a fast path for extracting individual object fields from large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the existing optimization for substring extraction from large EXTERNAL text values.

  Bump catversion to denote the incompatibility in on-disk format. We will need to fix pg_upgrade to disallow upgrading jsonb data stored with 9.4 betas 1 and 2.

  Heikki Linnakangas and Tom Lane
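  One rough way to check the effect on your own data (column and table names below are invented): pg_column_size() reports the stored, possibly compressed, datum size, so large repetitive jsonb documents should come out noticeably smaller once TOAST compression can get traction on the new format.

      SELECT pg_column_size(doc) AS stored_bytes,
             octet_length(doc::text) AS text_bytes
      FROM documents
      LIMIT 5;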
* Fix relcache for policies, and doc updates (Stephen Frost, 2014-09-26)

  Andres pointed out that there was an extra ';' in equalPolicies, which made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was insufficient (it didn't always catch the issue, just most of the time). Thanks to that, a different issue was discovered, specifically in equalRSDescs. This change corrects equalRSDescs to return 'true' once all policies have been confirmed logically identical. After stepping through both functions to ensure correct behavior, I ran this for about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests with no failures.

  In addition, correct a few typos in the documentation which were pointed out by Thom Brown (thanks!) and improve the policy documentation further by adding a fleshed-out usage example based on a unix passwd file.

  Lastly, clean up a few comments in the regression tests and pg_dump.h.
* Fix identify_locking_dependencies for schema-only dumps. (Robert Haas, 2014-09-26)

  Without this fix, parallel restore of a schema-only dump can deadlock, because when the dump is schema-only, the dependency will still be pointing at the TABLE item rather than the TABLE DATA item.

  Robert Haas and Tom Lane
* Further atomic ops portability improvements and bug fixes. (Andres Freund, 2014-09-26)

  * Don't play tricks for a more efficient pg_atomic_clear_flag() in the generic gcc implementation. The old version was broken on gcc < 4.7 on !x86 platforms. Per buildfarm member chipmunk.

  * Make usage of __atomic() fences depend on HAVE_GCC__ATOMIC_INT32_CAS instead of HAVE_GCC__ATOMIC_INT64_CAS - there are platforms with 32bit support that don't support 64bit atomics.

  * Blindly fix two superfluous #endif in generic-xlc.h

  * Check for --disable-atomics on platforms other than x86.