path: root/src/backend/utils/adt

* Fix typos in comments. (Heikki Linnakangas, 2014-05-21)

* Update obsolete comment. (Tom Lane, 2014-05-19)

  Peter Geoghegan

* Make sure chr(int) can't create invalid UTF8 sequences. (Tom Lane, 2014-05-16)

  Several years ago we changed chr(int) so that if the database encoding is UTF8, it would interpret its argument as a Unicode code point and expand it into the appropriate multibyte sequence. However, we weren't sufficiently careful about checking the validity of the input. According to RFC3629, UTF8 disallows code points above U+10FFFF (note that the predecessor standard RFC2279 was more liberal). Also, both versions of the UTF8 spec agree that Unicode surrogate-pair codes should never appear in UTF8. Because our encoding validity checks follow RFC3629, our failure to enforce these restrictions in chr() means it could be used to produce text strings that will be rejected when the database is dumped and reloaded.

  To ensure consistency with the input functions, let's actually apply pg_utf8_islegal() to the proposed output of chr().

  Per discussion, this seems like too much of a behavioral change to back-patch, but it's not too late to squeeze it into 9.4.

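  The two range rules are easy to state in code. A minimal sketch of the checks RFC3629 implies, with a hypothetical helper name (the committed fix instead validates the encoded bytes with pg_utf8_islegal()):

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical helper: is this code point storable as UTF8 per RFC3629? */
      static bool
      codepoint_is_legal_utf8(uint32_t cp)
      {
          if (cp > 0x10FFFF)
              return false;       /* beyond the Unicode code space */
          if (cp >= 0xD800 && cp <= 0xDFFF)
              return false;       /* UTF-16 surrogate range, banned in UTF8 */
          return true;
      }
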
* Handle duplicate XIDs in txid_snapshot. (Heikki Linnakangas, 2014-05-15)

  The proc array can contain duplicate XIDs when a transaction is just being prepared for two-phase commit. To cope, remove any duplicates in txid_current_snapshot(). Also ignore duplicates in the input functions, so that if e.g. you have an old pg_dump file that already contains duplicates, it will be accepted.

  Report and fix by Jan Wieck. Backpatch to all supported versions.

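  The cope-with-duplicates step amounts to a sort-and-unique pass over the XID array. A standalone sketch (the backend uses its own TransactionId typedef and comparison rules; plain integer order stands in for them here):

      #include <stdint.h>
      #include <stdlib.h>

      typedef uint32_t TransactionId;     /* stand-in for the backend typedef */

      static int
      xid_cmp(const void *a, const void *b)
      {
          TransactionId xa = *(const TransactionId *) a;
          TransactionId xb = *(const TransactionId *) b;

          return (xa > xb) - (xa < xb);
      }

      /* Sort the array and squeeze out adjacent duplicates; returns new count. */
      static int
      sort_and_unique_xids(TransactionId *xids, int n)
      {
          int     i, j;

          if (n <= 1)
              return n;
          qsort(xids, n, sizeof(TransactionId), xid_cmp);
          for (i = 1, j = 0; i < n; i++)
          {
              if (xids[i] != xids[j])
                  xids[++j] = xids[i];
          }
          return j + 1;
      }
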
* Fix harmless access to uninitialized memory. (Heikki Linnakangas, 2014-05-13)

  When cache invalidations arrive while ri_LoadConstraintInfo() is busy filling a new cache entry, InvalidateConstraintCacheCallBack() compares the not-yet-initialized oidHashValue field with the to-be-invalidated hash value. To fix, check whether the entry is already marked as invalid.

  Andres Freund

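  In outline, the fix is to test a validity flag before touching any other field of the entry; the names below are schematic, not the actual backend structures:

      #include <stdint.h>

      /*
       * Schematic version of the callback: entries still being filled in are
       * skipped, so their uninitialized hash field is never read.
       */
      typedef struct ConstraintCacheEntryLike
      {
          int         valid;          /* fully initialized? */
          uint32_t    oidHashValue;   /* set only once valid */
      } ConstraintCacheEntryLike;

      static void
      invalidate_entry(ConstraintCacheEntryLike *entry, uint32_t hashvalue)
      {
          if (!entry->valid)
              return;                 /* not yet initialized; nothing to do */
          if (hashvalue == 0 || entry->oidHashValue == hashvalue)
              entry->valid = 0;       /* mark stale */
      }
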
* Rename jsonb_hash_ops to jsonb_path_ops. (Tom Lane, 2014-05-11)

  There's no longer much pressure to switch the default GIN opclass for jsonb, but there was still some unhappiness with the name "jsonb_hash_ops", since hashing is no longer a distinguishing property of that opclass, and anyway it seems like a relatively minor detail. At the suggestion of Heikki Linnakangas, we'll use "jsonb_path_ops" instead; that captures the important characteristic that each index entry depends on the entire path from the document root to the indexed value.

  Also add a user-facing explanation of the implementation properties of these two opclasses.

* Fix broken allocation logic in recently-rewritten jsonb_util.c. (Tom Lane, 2014-05-09)

  reserveFromBuffer() failed to consider the possibility that it needs to more-than-double the current buffer size. Beyond that, it seems likely that we'd someday need to worry about integer overflow of the buffer length variable. Rather than reinvent the logic that's already been debugged in stringinfo.c, let's go back to using that logic. We can still have the same targeted API, but we'll rely on stringinfo.c to manage reallocation.

  Per report from Alexander Korotkov.

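  The stringinfo.c rule being borrowed is simple: keep doubling until the request fits, so a single large reservation can more-than-double the buffer. A self-contained sketch, without stringinfo.c's overflow guards or error handling:

      #include <stdlib.h>

      typedef struct GrowBuf
      {
          char   *data;
          int     len;        /* bytes used */
          int     maxlen;     /* bytes allocated */
      } GrowBuf;

      /* Make room for "needed" more bytes, doubling as many times as it takes. */
      static void
      reserve_bytes(GrowBuf *buf, int needed)
      {
          int     newlen;

          if (buf->len + needed <= buf->maxlen)
              return;
          newlen = (buf->maxlen > 0) ? 2 * buf->maxlen : 64;
          while (buf->len + needed > newlen)
              newlen = 2 * newlen;    /* real code also checks for overflow */
          buf->data = realloc(buf->data, newlen);     /* error check omitted */
          buf->maxlen = newlen;
      }
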
* Get rid of bogus dependency on typcategory in to_json() and friends. (Tom Lane, 2014-05-09)

  These functions were relying on typcategory to identify arrays and composites, which is not reliable and not the normal way to do it. Using typcategory to identify boolean, numeric types, and json itself is also pretty questionable, though the code in those cases didn't seem to be at risk of anything worse than wrong output. Instead, use the standard lsyscache functions to identify arrays and composites, and rely on a direct check of the type OID for the other cases.

  In HEAD, also be sure to look through domains so that a domain is treated the same as its base type for conversions to JSON. However, this is a small behavioral change; given the lack of field complaints, we won't back-patch it.

  In passing, refactor so that there's only one copy of the code that decides which conversion strategy to apply, not multiple copies that could (and have) gotten out of sync.

* Teach add_json() that jsonb is of TYPCATEGORY_JSON. (Tom Lane, 2014-05-09)

  This code really needs to be refactored so that there aren't so many copies that can diverge. Not to mention that this whole approach is probably wrong. But for the moment I'll just stick my finger in the dike.

  Per report from Michael Paquier.

* More jsonb cleanup. (Heikki Linnakangas, 2014-05-09)

  Fix the JSONB_MAX_ELEMS and JSONB_MAX_PAIRS macros to use JB_CMASK in the calculation. JENTRY_POSMASK happens to have the same value at the moment, but that's just coincidental.

  Refactor the jsonb iterator functions, for readability.

  Get rid of the JENTRY_ISFIRST flag. Whenever we handle JEntrys, we have access to the whole array and have enough context information to know which entry is the first. This frees up one bit in the JEntry header for future use. While we're at it, shuffle the JEntry bits so that boolean true and false go together, for aesthetic reasons.

  Bump catalog version as this changes the on-disk format slightly.

* Improve key representation for GIN jsonb_ops, and fix existence-search bug. (Tom Lane, 2014-05-09)

  Change the key representation so that values that would exceed 127 bytes are hashed into short strings, and so that the original JSON datatype of each value is recorded in the index. The hashing rule eliminates the major objection to having this opclass be the default for jsonb, namely that it could fail for plausible input data (due to GIN's restrictions on maximum key length). Preserving datatype information doesn't really buy us much right now, but it requires no extra space compared to the previous way, and it might be useful later.

  Also, change the consistency-checking functions to request recheck for exists (jsonb ? text) and related operators. The original analysis that this is an exactly checkable query was incorrect, since the index does not preserve information about whether a key appears at top level in the indexed JSON object. Add a test case demonstrating the problem.

  Make some other, mostly cosmetic improvements to the code in jsonb_gin.c as well.

  catversion bump due to on-disk data format change in jsonb_ops indexes.

* Minor cleanup of jsonb_util.c (Heikki Linnakangas, 2014-05-09)

  Move the functions around to group related functions together. Remove the binequal argument from lengthCompareJsonbStringValue, moving that responsibility to lengthCompareJsonbPair. Fix a typo in a comment.

* Avoid some pnstrdup()s when constructing jsonb (Heikki Linnakangas, 2014-05-09)

  This speeds up text-to-jsonb parsing and hstore-to-jsonb conversions somewhat.

* Clean up jsonb code. (Heikki Linnakangas, 2014-05-07)

  The main target of this cleanup is the convertJsonb() function, but I also touched a lot of other things that I spotted in the process.

  The new convertToJsonb() function uses an output buffer that's resized on demand, so the code to estimate the size of the JsonbValue is removed.

  The on-disk format was not changed, even though I refactored the structs used to handle it. The term "superheader" is replaced with "container".

  The jsonb_exists_any and jsonb_exists_all functions no longer sort the input array. That was a premature optimization, the idea being that if there are duplicates in the input array, you only need to check them once. Also, sorting the array saves some effort in the binary search used to find a key within an object. But there were drawbacks too: the sorting and deduplicating obviously isn't free, and in the typical case there are no duplicates to remove, and the gain in the binary search was minimal. Remove all that, which makes the code simpler too.

  This includes a bug-fix; the total length of the elements in a jsonb array or object mustn't exceed 2^28. That is now checked.

* Fix some more confusion between uint32 and Datum. (Tom Lane, 2014-05-06)

* hash_any returns Datum, not uint32 (and definitely not "int"). (Tom Lane, 2014-05-06)

  The coding in JsonbHashScalarValue might have accidentally failed to fail given current representational choices, but the key word there would be "accidental". Insert the appropriate datatype conversion macro. And use the right conversion macro for hash_numeric's result, too.

  In passing, make the code a bit cleaner and less repetitive by factoring out the xor step from the switch.

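  Inside the backend (with postgres.h and access/hash.h in scope) the corrected pattern looks roughly like the fragment below; keyptr and keylen are placeholders, and the rotate-and-xor combining step is the part the commit factors out of the switch:

      /* hash_any() returns Datum; extract the uint32 with the proper macro. */
      uint32      tmp = DatumGetUInt32(hash_any((const unsigned char *) keyptr,
                                                keylen));

      /* combine into the running hash: rotate left one bit, then xor */
      *hash = (*hash << 1) | (*hash >> 31);
      *hash ^= tmp;
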
* With ecpg exclusion removed, re-run pgindent for 9.4 (Bruce Momjian, 2014-05-06)

  Report by Tom Lane

* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)

  This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.

* Fix yet another corner case in dumping rules/views with USING clauses. (Tom Lane, 2014-05-01)

  ruleutils.c tries to cope with additions/deletions/renamings of columns in tables referenced by views, by means of adding machine-generated aliases to the printed form of a view when needed to preserve the original semantics. A recent blog post by Marko Tiikkaja pointed out a case I'd missed though: if one input of a join with USING is itself a join, there is nothing to stop the user from adding a column of the same name as the USING column to whichever side of the sub-join didn't provide the USING column. And then there'll be an error when the view is re-parsed, since now the sub-join exposes two columns matching the USING specification. We were catching a lot of related cases, but not this one, so add some logic to cope with it.

  Back-patch to 9.3, which is the first release that makes any serious attempt to cope with such cases (cf commit 2ffa740be and follow-ons).

* Fix failure to detoast fields in composite elements of structured types. (Tom Lane, 2014-05-01)

  If we have an array of records stored on disk, the individual record fields cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are only prepared to deal with TOAST pointers appearing in top-level fields of a stored row. The same applies for ranges over composite types, nested composites, etc. However, the existing code only took care of expanding sub-field TOAST pointers for the case of nested composites, not for other structured types containing composites. For example, given a command such as

      UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...

  where x is a direct reference to a field of an on-disk tuple, if that field is long enough to be toasted out-of-line then the TOAST pointer would be inserted as-is into the array column. If the source record for x is later deleted, the array field value would become a dangling pointer, leading to errors along the line of "missing chunk number 0 for toast value ..." when the value is referenced. A reproducible test case for this was provided by Jan Pecek, but it seems likely that some of the "missing chunk number" reports we've heard in the past were caused by similar issues.

  Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to produce a self-contained Datum value if the Datum is of composite type. Seen in this light, the problem is not just confined to arrays and ranges, but could also affect some other places where detoasting is done in that way, for example form_index_tuple(). I tried teaching the array code to apply toast_flatten_tuple_attribute() along with PG_DETOAST_DATUM() when the array element type is composite, but this was messy and imposed extra cache lookup costs whether or not any TOAST pointers were present, indeed sometimes when the array element type isn't even composite (since sometimes it takes a typcache lookup to find that out). The idea of extending that approach to all the places that currently use PG_DETOAST_DATUM() wasn't attractive at all.

  This patch instead solves the problem by decreeing that composite Datum values must not contain any out-of-line TOAST pointers in the first place; that is, we expand out-of-line fields at the point of constructing a composite Datum, not at the point where we're about to insert it into a larger tuple. This rule is applied only to true composite Datums, not to tuples that are being passed around the system as tuples, so it's not as invasive as it might sound at first. With this approach, the amount of code that has to be touched for a full solution is greatly reduced, and added cache lookup costs are avoided except when there actually is a TOAST pointer that needs to be inlined.

  The main drawback of this approach is that we might sometimes dereference a TOAST pointer that will never actually be used by the query, imposing a rather large cost that wasn't there before. On the other side of the coin, if the field value is used multiple times then we'll come out ahead by avoiding repeat detoastings. Experimentation suggests that common SQL coding patterns are unaffected either way, though. Applications that are very negatively affected could be advised to modify their code to not fetch columns they won't be using.

  In future, we might consider reverting this solution in favor of detoasting only at the point where data is about to be stored to disk, using some method that can drill down into multiple levels of nested structured types. That will require defining new APIs for structured types, though, so it doesn't seem feasible as a back-patchable fix.

  Note that this patch changes HeapTupleGetDatum() from a macro to a function call; this means that any third-party code using that macro will not get protection against creating TOAST-pointer-containing Datums until it's recompiled. The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER(). It seems likely that this is not a big problem in practice: most of the tuple-returning functions in core and contrib produce outputs that could not possibly be toasted anyway, and the same probably holds for third-party extensions.

  This bug has existed since TOAST was invented, so back-patch to all supported branches.

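  A schematic of the new rule, flattening at Datum-construction time. toast_flatten_tuple_to_datum() is the entry point this commit introduces; the surrounding names are simplified for illustration and the usual backend headers are assumed:

      /*
       * Build a composite Datum, expanding any out-of-line TOAST pointers
       * first so the result is self-contained (schematic, not backend code).
       */
      Datum
      make_composite_datum(HeapTupleHeader td, uint32 len, TupleDesc tupdesc)
      {
          if (HeapTupleHeaderHasExternal(td))
              return toast_flatten_tuple_to_datum(td, len, tupdesc);
          return PointerGetDatum(td);
      }
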
* Rationalize common/relpath.[hc]. (Tom Lane, 2014-04-30)

  Commit a73018392636ce832b09b5c31f6ad1f18a4643ea created rather a mess by putting dependencies on backend-only include files into include/common. We really shouldn't do that. To clean it up:

  * Move TABLESPACE_VERSION_DIRECTORY back to its longtime home in catalog/catalog.h. We won't consider this symbol part of the FE/BE API.

  * Push enum ForkNumber from relfilenode.h into relpath.h. We'll consider relpath.h as the source of truth for fork numbers, since relpath.c was already partially serving that function, and anyway relfilenode.h was kind of a random place for that enum.

  * So, relfilenode.h now includes relpath.h rather than vice-versa. This direction of dependency is fine. (That allows most, but not quite all, of the existing explicit #includes of relpath.h to go away again.)

  * Push forkname_to_number from catalog.c to relpath.c, just to centralize fork number stuff a bit better.

  * Push GetDatabasePath from catalog.c to relpath.c; it was rather odd that the previous commit didn't keep this together with relpath().

  * To avoid needing relfilenode.h in common/, redefine the underlying function (now called GetRelationPath) as taking separate OID arguments, and make the APIs using RelFileNode or RelFileNodeBackend into macro wrappers. (The macros have a potential multiple-eval risk, but none of the existing call sites have an issue with that; one of them had such a risk already anyway. A sketch of the hazard appears after this entry.)

  * Fix failure to follow the directions when the "init" fork type was added; specifically, the errhint in forkname_to_number wasn't updated, and neither was the SGML documentation for pg_relation_size().

  * Fix the tablespace-path-too-long check in CreateTableSpace() to account for the fork-name component of maximum-length pathnames. This requires putting FORKNAMECHARS into a header file, but it was rather useless (and actually unreferenced) where it was.

  The last couple of items are potentially back-patchable bug fixes, if anyone is sufficiently excited about them; but personally I'm not.

  Per a gripe from Christoph Berg about how include/common wasn't self-contained.

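  The multiple-eval caveat, illustrated with a made-up wrapper (the identifiers here are not the real relpath.h definitions):

      /* A macro wrapper that names its argument three times evaluates it
       * three times.  Fine for plain variables, not for side effects. */
      #define NODE_PATH(node) \
          build_path((node).spcNode, (node).dbNode, (node).relNode)

      /* OK:    NODE_PATH(rnode)        -- a plain variable, evaluated benignly */
      /* Risky: NODE_PATH(*nodeptr++)   -- the increment would run three times  */
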
* Check for interrupts and stack overflow during rule/view dumps. (Tom Lane, 2014-04-30)

  Since ruleutils.c recurses, it could be driven to stack overflow by deeply nested constructs. Very large queries might also take long enough to deparse that a check for interrupts seems like a good idea. Stick appropriate tests into a couple of key places.

  Noted by Greg Stark. Back-patch to all supported branches.

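  The two guards in question, roughly as they appear at recursion entry points throughout the backend (both come from miscadmin.h):

      check_stack_depth();        /* error out before the stack actually overflows */
      CHECK_FOR_INTERRUPTS();     /* honor query cancel / backend shutdown promptly */
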
* Reduce indentation/parenthesization of set operations in rule/view dumps. (Tom Lane, 2014-04-30)

  A query such as "SELECT x UNION SELECT y UNION SELECT z UNION ..." produces a left-deep nested parse tree, which we formerly showed in its full nested glory and with all the possible parentheses. This does little for readability, though, and long UNION lists resulting in excessive indentation are common. Instead, let's omit parentheses and indent all the subqueries at the same level in such cases.

  This patch skips indentation/parenthesization whenever the lefthand input of a SetOperationStmt is another SetOperationStmt of the same kind and ALL/DISTINCT property. We could teach the code the exact syntactic precedence of set operations and thereby avoid parenthesization in some more cases, but it's not clear that that'd be a readability win: it seems better to parenthesize if the set operation changes. (As an example, if there's one UNION in a long list of UNION ALL, it now stands out like a sore thumb, which seems like a good thing.)

  Back-patch to 9.3. This completes our response to a complaint from Greg Stark that since commit 62e666400d there's a performance problem in pg_dump for views containing long UNION sequences (or other types of deeply nested constructs). The previous commit 0601cb54dac14d979d726ab2ebeda251ae36e857 handles the general problem, but this one makes the specific case of UNION lists look a lot nicer.

* Limit overall indentation in rule/view dumps. (Tom Lane, 2014-04-30)

  Continuing to indent no matter how deeply nested we get doesn't really do anything for readability; what's worse, it results in O(N^2) total whitespace, which can become a performance and memory-consumption issue.

  To address this, once we get past 40 characters of indentation, reduce the indentation step distance 4x, and also limit the maximum indentation by reducing it modulo 40. This latter choice is a bit weird at first glance, but it seems to preserve readability better than a simple cap would do.

  Back-patch to 9.3, because since commit 62e666400d the performance issue is a hazard for pg_dump.

  Greg Stark and Tom Lane

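  One plausible reading of that arithmetic, as a self-contained sketch (the real ruleutils.c code is organized differently; this only illustrates the damping described above):

      #define INDENT_LIMIT 40

      /* Map a nominal indentation depth (in columns) to the printed amount. */
      static int
      effective_indent(int raw)
      {
          if (raw > INDENT_LIMIT)
          {
              raw = INDENT_LIMIT + (raw - INDENT_LIMIT) / 4;    /* 4x smaller steps */
              raw = INDENT_LIMIT + (raw - INDENT_LIMIT) % INDENT_LIMIT;   /* wrap */
          }
          return raw;
      }
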
* Fix indentation of JOIN clauses in rule/view dumps. (Tom Lane, 2014-04-30)

  The code attempted to outdent JOIN clauses further left than the parent FROM keyword, which was odd in any case, and led to inconsistent formatting since in simple cases the clauses couldn't be moved any further left than that. And it left a permanent decrement of the indentation level, causing subsequent lines to be much further left than they should be (again, this couldn't be seen in simple cases for lack of indentation to give up).

  After a little experimentation I chose to make it indent JOIN keywords two spaces from the parent FROM, which is one space more than the join's lefthand input in cases where that appears on a different line from FROM.

  Back-patch to 9.3. This is a purely cosmetic change, and the bug is quite old, so that may seem arbitrary; but we are going to be making some other changes to the indentation behavior in both HEAD and 9.3, so it seems reasonable to include this in 9.3 too. I committed this one first because its effects are more visible in the regression test results as they currently stand than they will be later.

* Add to_regprocedure() and to_regoperator(). (Robert Haas, 2014-04-16)

  These are natural complements to the functions added by commit 0886fc6a5c75b294544263ea979b9cf6195407d9, but they weren't included in the original patch for some reason. Add them.

  Patch by me, per a complaint by Tom Lane. Review by Tatsuo Ishii.

* Provide moving-aggregate support for boolean aggregates. (Tom Lane, 2014-04-13)

  David Rowley and Florian Pflug, reviewed by Dean Rasheed

* Provide moving-aggregate support for a bunch of numerical aggregates. (Tom Lane, 2014-04-12)

  First installment of the promised moving-aggregate support in built-in aggregates: count(), sum(), avg(), stddev() and variance() for assorted datatypes, though not for float4/float8.

  In passing, remove a 2001-vintage kluge in interval_accum(): interval array elements have been properly aligned since around 2003, but nobody remembered to take out this workaround. Also, fix a thinko in the opr_sanity tests for moving-aggregate catalog entries.

  David Rowley and Florian Pflug, reviewed by Dean Rasheed

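  What "moving-aggregate support" means mechanically: alongside the forward transition function, an aggregate gains an inverse transition that removes the oldest input, so a sliding window frame can advance in O(1) instead of being recomputed from scratch. A toy illustration of the idea (not the catalog machinery):

      #include <stdio.h>

      typedef struct AvgState
      {
          double      sum;
          long        count;
      } AvgState;

      static void
      avg_forward(AvgState *s, double v)      /* a value enters the frame */
      {
          s->sum += v;
          s->count++;
      }

      static void
      avg_inverse(AvgState *s, double v)      /* a value leaves the frame */
      {
          s->sum -= v;
          s->count--;
      }

      int
      main(void)
      {
          AvgState    s = {0.0, 0};

          avg_forward(&s, 1.0);
          avg_forward(&s, 2.0);
          avg_forward(&s, 3.0);               /* frame is {1,2,3} */
          avg_inverse(&s, 1.0);               /* frame slides to {2,3} */
          printf("avg = %g\n", s.sum / s.count);      /* prints 2.5 */
          return 0;
      }
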
* Add an in-core GiST index opclass for inet/cidr types. (Tom Lane, 2014-04-08)

  This operator class can accelerate subnet/supernet tests as well as btree-equivalent ordered comparisons. It also handles a new network operator inet && inet (overlaps, a/k/a "is supernet or subnet of"), which is expected to be useful in exclusion constraints.

  Ideally this opclass would be the default for GiST with inet/cidr data, but we can't mark it that way until we figure out how to do a more or less graceful transition from the current situation, in which the really-completely-bogus inet/cidr opclasses in contrib/btree_gist are marked as default. Having the opclass in core and not default is better than not having it at all, though.

  While at it, add new documentation sections to allow us to officially document GiST/GIN/SP-GiST opclasses, something there was never a clear place to do before. I filled these in with some simple tables listing the existing opclasses and the operators they support, but there's certainly scope to put more information there.

  Emre Hasegeli, reviewed by Andreas Karlsson, further hacking by me

* Add new to_reg* functions for error-free OID lookups. (Robert Haas, 2014-04-08)

  These functions won't throw an error if the object doesn't exist, or if (for functions and operators) there's more than one matching object.

  Yugo Nagata and Nozomi Anzai, reviewed by Amit Khandekar, Marti Raudsepp, Amit Kapila, and me.

* Reduce lock levels of some ALTER TABLE cmds (Simon Riggs, 2014-04-06)

      VALIDATE CONSTRAINT
      CLUSTER ON
      SET WITHOUT CLUSTER
      ALTER COLUMN SET STATISTICS
      ALTER COLUMN SET ()
      ALTER COLUMN RESET ()

  All other sub-commands use AccessExclusiveLock.

  Simon Riggs and Noah Misch. Reviews by Robert Haas and Andres Freund.

* Fix non-equivalence of VARIADIC and non-VARIADIC function call formats. (Tom Lane, 2014-04-03)

  For variadic functions (other than VARIADIC ANY), the syntaxes foo(x,y,...) and foo(VARIADIC ARRAY[x,y,...]) should be considered equivalent, since the former is converted to the latter at parse time. They have indeed been equivalent, in all releases before 9.3. However, commit 75b39e790 made an ill-considered decision to record which syntax had been used in FuncExpr nodes, and then to make equal() test that in checking node equality --- which caused the syntaxes to not be seen as equivalent by the planner. This is the underlying cause of bug #9817 from Dmitry Ryabov.

  It might seem that a quick fix would be to make equal() disregard FuncExpr.funcvariadic, but the same commit made that untenable, because the field actually *is* semantically significant for some VARIADIC ANY functions. This patch instead adopts the approach of redefining funcvariadic (and aggvariadic, in HEAD) as meaning that the last argument is a variadic array, whether it got that way by parser intervention or was supplied explicitly by the user. Therefore the value will always be true for non-ANY variadic functions, restoring the principle of equivalence. (However, the planner will continue to consider use of VARIADIC as a meaningful difference for VARIADIC ANY functions, even though some such functions might disregard it.)

  In HEAD, this change lets us simplify the decompilation logic in ruleutils.c, since the funcvariadic/aggvariadic flag tells directly whether to print VARIADIC. However, in 9.3 we have to continue to cope with existing stored rules/views that might contain the previous definition. Fortunately, this just means no change in ruleutils.c, since its existing behavior effectively ignores funcvariadic for all cases other than VARIADIC ANY functions.

  In HEAD, bump catversion to reflect the fact that FuncExpr.funcvariadic changed meanings; this is sort of pro forma, since I don't believe any built-in views are affected.

  Unfortunately, this patch doesn't magically fix everything for affected 9.3 users. After installing 9.3.5, they might need to recreate their rules/views/indexes containing variadic function calls in order to get everything consistent with the new definition. As in the cited bug, the symptom of a problem would be failure to use a nominally matching index that has a variadic function call in its definition. We'll need to mention this in the 9.3.5 release notes.

* Code review for commit d26888bc4d1e539a82f21382b0000fe5bbf889d9. (Tom Lane, 2014-04-03)

  Mostly, copy-edit the comments; but also fix it to not reject domains over arrays.

* De-anonymize the union in JsonbValue. (Tom Lane, 2014-04-02)

  Needed for strict C89 compliance.

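  The shape of the change, with the member types heavily simplified: C89 has no anonymous unions, so the union gets a name and all accesses go through it:

      /* Before (anonymous union, a C11-ism):  v.string.len
       * After (C89-clean named member):       v.val.string.len */
      typedef struct JsonbValueLike
      {
          int         type;
          union
          {
              double  numeric;
              struct
              {
                  int     len;
                  char   *ptr;
              }       string;
          }           val;        /* the union now has a name */
      } JsonbValueLike;
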
* Fix bugs in manipulation of PgBackendStatus.st_clienthostname. (Tom Lane, 2014-04-01)

  Initialization of this field was not being done according to the st_changecount protocol (it has to be done within the changecount increment range, not outside). And the test to see if the value should be reported as null was wrong. Noted while perusing uses of Port.remote_hostname.

  This was wrong from the introduction of this code (commit 4a25bc145), so back-patch to 9.1.

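  The st_changecount protocol in outline: the writer makes the counter odd while modifying the struct and even again afterward, and a reader retries until it sees the same even value before and after taking its copy. A schematic fragment (memory-barrier details omitted; local_copy and hostname are placeholders):

      /* writer: all initialization must happen inside the increment range */
      beentry->st_changecount++;              /* now odd: update in progress */
      strcpy(beentry->st_clienthostname, hostname);
      /* ... set any other fields here, still inside the range ... */
      beentry->st_changecount++;              /* even again: update complete */

      /* reader: loop until a clean snapshot is seen */
      for (;;)
      {
          int         before = beentry->st_changecount;

          memcpy(&local_copy, beentry, sizeof(local_copy));
          if (before == beentry->st_changecount && (before & 1) == 0)
              break;                          /* consistent copy obtained */
      }
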
* Rename GinLogicValue to GinTernaryValue. (Heikki Linnakangas, 2014-03-31)

  It's more descriptive. Also, get rid of the enum, and use #defines instead, per Greg Stark's suggestion.

* Silence compiler warnings in new jsonb code. (Heikki Linnakangas, 2014-03-27)

  Amit Kapila.

* Fix uninitialized variables in json's populate_record_worker(). (Andrew Dunstan, 2014-03-26)

  Peter Geoghegan.

* Cleanup around json_to_record/json_to_recordset (Andrew Dunstan, 2014-03-26)

  Set function parameter names and defaults. Add jsonb versions (which the code already provided for, so the actual new code is trivial). Add jsonb regression tests and docs.

  Bump catalog version (which I apparently forgot to do when jsonb was committed).

* Fix 'recheck' flag in tsquery's GIN tri-consistent function. (Heikki Linnakangas, 2014-03-26)

  It needs to be initialized, like in the boolean gin_tsquery_consistent version.

  Peter Geoghegan.

* Tidy up the populate/to_record{set} code for json a bit. (Andrew Dunstan, 2014-03-25)

  In the process, fix a small bug.

* Fix ts_rank_cd() to ignore stripped lexemes (Bruce Momjian, 2014-03-24)

  Previously, stripped lexemes got a default location and could be considered if mixed with non-stripped lexemes.

  BACKWARD INCOMPATIBILITY CHANGE

* Introduce jsonb, a structured format for storing json. (Andrew Dunstan, 2014-03-23)

  The new format accepts exactly the same data as the json type. However, it is stored in a format that does not require reparsing the original text in order to process it, making it much more suitable for indexing and other operations. Insignificant whitespace is discarded, and the order of object keys is not preserved. Neither are duplicate object keys kept - the later value for a given key is the only one stored.

  The new type has all the functions and operators that the json type has, with the exception of the json generation functions (to_json, json_agg etc.), and with identical semantics. In addition, there are operator classes for hash and btree indexing, and two classes for GIN indexing, that have no equivalent in the json type.

  This feature grew out of previous work by Oleg Bartunov and Teodor Sigaev, which was intended to provide similar facilities to a nested hstore type, but which in the end proved to have some significant compatibility issues.

  Authors: Oleg Bartunov, Teodor Sigaev, Peter Geoghegan and Andrew Dunstan.
  Review: Andres Freund

* Fix typos in comments. (Fujii Masao, 2014-03-17)

  Thom Brown

* C comments: remove odd blank lines after #ifdef WIN32 lines (Bruce Momjian, 2014-03-13)

* Items on GIN data pages are no longer always 6 bytes; update gincostestimate. (Heikki Linnakangas, 2014-03-12)

  Also improve the comments a bit.

* Allow opclasses to provide tri-valued GIN consistent functions. (Heikki Linnakangas, 2014-03-12)

  With the GIN "fast scan" feature, GIN can skip items without fetching all the keys for them, if it can prove that they don't match regardless of those keys. So far, it has done the proving by calling the boolean consistent function with all combinations of TRUE/FALSE for the unfetched keys, but since that requires O(2^n) calls for n unfetched keys, it becomes infeasible with more than a few of them. We can avoid calling consistent with all the combinations if we can tell the operator class implementation directly which keys are unknown.

  This commit includes a triConsistent function for the built-in array and tsvector opclasses.

  Alexander Korotkov, with some changes by me.

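  The ternary idea in miniature: a third value, MAYBE, lets the opclass report that the fetched keys alone cannot decide the match. The value definitions mirror the ones this work adds to gin.h; the AND-style consistent check below is a simplified sketch:

      typedef char GinTernaryValue;

      #define GIN_FALSE   0
      #define GIN_TRUE    1
      #define GIN_MAYBE   2

      /* AND semantics over keys whose status may be unknown. */
      static GinTernaryValue
      tri_consistent_and(const GinTernaryValue *keys, int nkeys)
      {
          GinTernaryValue res = GIN_TRUE;
          int         i;

          for (i = 0; i < nkeys; i++)
          {
              if (keys[i] == GIN_FALSE)
                  return GIN_FALSE;   /* one definite miss decides the item */
              if (keys[i] == GIN_MAYBE)
                  res = GIN_MAYBE;    /* cannot conclude a definite match */
          }
          return res;
      }
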
* Avoid memcpy() with same source and destination address. (Heikki Linnakangas, 2014-03-07)

  The behavior of that is undefined, although unlikely to lead to problems in practice.

  Found by running regression tests with Valgrind.

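  The safe variants, for reference; either guard the call or switch to memmove(), which is defined for overlapping regions:

      #include <string.h>

      /* guard: skip the no-op copy instead of invoking undefined behavior */
      if (dst != src)
          memcpy(dst, src, len);

      /* or simply use memmove(), which permits any overlap */
      memmove(dst, src, len);
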
* Avoid getting more than AccessShareLock when deparsing a query. (Tom Lane, 2014-03-06)

  In make_ruledef and get_query_def, we have long used AcquireRewriteLocks to ensure that the querytree we are about to deparse is up-to-date and the schemas of the underlying relations aren't changing. However, that function thinks the query is about to be executed, so it acquires locks that are stronger than necessary for the purpose of deparsing. Thus for example, if pg_dump asks to deparse a rule that includes "INSERT INTO t", we'd acquire RowExclusiveLock on t. That results in interference with concurrent transactions that might for example ask for ShareLock on t. Since pg_dump is documented as being purely read-only, this is unexpected. (Worse, it used to actually be read-only; this behavior dates back only to 8.1, cf commit ba4200246.)

  Fix this by adding a parameter to AcquireRewriteLocks to tell it whether we want the "real" execution locks or only AccessShareLock.

  Report, diagnosis, and patch by Dean Rasheed. Back-patch to all supported branches.

* isdigit() needs an unsigned char argument. (Heikki Linnakangas, 2014-03-06)

  Per the C standard, the routine should be passed an int, with a value that's representable as an unsigned char or EOF. Passing a signed char is wrong, because a negative value is not representable as an unsigned char. Unfortunately no compiler warns about that.

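  The idiomatic fix, for the record: cast through unsigned char at the call site so a high-bit byte cannot arrive as a negative int:

      #include <ctype.h>

      /* wrong: *p is (possibly signed) char; a byte >= 0x80 becomes negative */
      while (isdigit(*p))
          p++;

      /* right: the value is representable as unsigned char, as required */
      while (isdigit((unsigned char) *p))
          p++;
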