| Commit message | Author | Age |
|
HeapTupleHeaderGetCmax() asserts that it is only used if the tuple has
been updated by the current transaction. That check is correct and
sensible but requires allocating memory if xmax is a multixact. When
wal_level is set to logical, cmax needs to be included in a WAL record
that is generated inside a critical section, which can trigger the
assertion added in 4a170ee9e.
Reported-By: Steve Singer
|
Initialize padding bytes in SharedInvalidationMessage structs.
Otherwise the sinvaladt.c ring buffer, which is accessed by multiple
processes, will cause spurious valgrind warnings about undefined memory
being used. That's because valgrind remembers the undefined bytes from
the last local process's store, not realizing that another process has
written since, filling the previously uninitialized bytes.
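A minimal standalone sketch of the pattern, with a made-up struct
standing in for SharedInvalidationMessage: zeroing the whole struct
before filling it makes the padding bytes defined, so copying it into a
shared buffer no longer trips valgrind.

#include <string.h>

/* Hypothetical message with an explicit padding gap; the real
 * SharedInvalidationMessage layouts differ. */
typedef struct ExampleMessage
{
    signed char  id;       /* message type */
    char         pad[3];   /* bytes a compiler would otherwise leave undefined */
    unsigned int dbId;     /* payload */
} ExampleMessage;

static ExampleMessage ring[64];     /* stand-in for a shared ring buffer */

static void
put_message(int slot, signed char id, unsigned int dbId)
{
    ExampleMessage msg;

    /* Zero the whole struct first so the padding bytes are defined;
     * without this, tools like valgrind flag the copy below as a store
     * of uninitialized memory. */
    memset(&msg, 0, sizeof(msg));
    msg.id = id;
    msg.dbId = dbId;

    ring[slot % 64] = msg;          /* copies the padding bytes too */
}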
|
Peter Geoghegan
|
That's what I get for not fully retesting the final version of the patch.
The replace_allowed cross-check needs an additional special case for
bootstrapping.
|
RelationCacheInsert() ignored the possibility that hash_search(HASH_ENTER)
might find a hashtable entry already present for the same OID. However,
that can in fact occur during recursive relcache load scenarios. When it
did happen, we overwrote the pointer to the pre-existing Relation, causing
a session-lifespan leakage of that entire structure. As far as is known,
the pre-existing Relation would always have reference count zero by the
time we arrive back at the outer insertion, so add code that deletes the
pre-existing Relation if so. If by some chance its refcount is positive,
elog a WARNING and allow the pre-existing Relation to be leaked as before.
Also, AttrDefaultFetch() was sloppy about leaking the cstring form of the
pg_attrdef.adbin value it's copying into the relcache structure. This is
only a query-lifespan leakage, and normally not very significant, but it
adds up during CLOBBER_CACHE testing.
These bugs are of very ancient vintage, but I'll refrain from back-patching
since there's no evidence that these leaks amount to anything in ordinary
usage.
|
Per testing with a compiler that whines about this.
|
Several years ago we changed chr(int) so that if the database encoding is
UTF8, it would interpret its argument as a Unicode code point and expand it
into the appropriate multibyte sequence. However, we weren't sufficiently
careful about checking validity of the input. According to RFC3629, UTF8
disallows code points above U+10FFFF (note that the predecessor standard
RFC2279 was more liberal). Also, both versions of the UTF8 spec agree
that Unicode surrogate-pair codes should never appear in UTF8. Because
our encoding validity checks follow RFC3629, our failure to enforce these
restrictions in chr() means it could be used to produce text strings that
will be rejected when the database is dumped and reloaded. To ensure
consistency with the input functions, let's actually apply
pg_utf8_islegal() to the proposed output of chr().
Per discussion, this seems like too much of a behavioral change to
back-patch, but it's not too late to squeeze it into 9.4.
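A standalone sketch of the RFC 3629 rules involved; pg_utf8_islegal()
is the real check named above, while the helper names here are
illustrative.

#include <stdbool.h>
#include <stdint.h>

/* RFC 3629: code points above U+10FFFF and the surrogate range
 * U+D800..U+DFFF are not legal in UTF-8.  This is the check chr()
 * needs before emitting a multibyte sequence. */
static bool
codepoint_is_legal_utf8(uint32_t cp)
{
    if (cp > 0x10FFFF)
        return false;               /* beyond the Unicode range */
    if (cp >= 0xD800 && cp <= 0xDFFF)
        return false;               /* surrogate-pair range */
    return true;
}

/* Encode a pre-validated code point; returns the byte length. */
static int
encode_utf8(uint32_t cp, unsigned char *out)
{
    if (cp < 0x80)
    {
        out[0] = (unsigned char) cp;
        return 1;
    }
    if (cp < 0x800)
    {
        out[0] = (unsigned char) (0xC0 | (cp >> 6));
        out[1] = (unsigned char) (0x80 | (cp & 0x3F));
        return 2;
    }
    if (cp < 0x10000)
    {
        out[0] = (unsigned char) (0xE0 | (cp >> 12));
        out[1] = (unsigned char) (0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char) (0x80 | (cp & 0x3F));
        return 3;
    }
    out[0] = (unsigned char) (0xF0 | (cp >> 18));
    out[1] = (unsigned char) (0x80 | ((cp >> 12) & 0x3F));
    out[2] = (unsigned char) (0x80 | ((cp >> 6) & 0x3F));
    out[3] = (unsigned char) (0x80 | (cp & 0x3F));
    return 4;
}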
|
The proc array can contain duplicate XIDs, when a transaction is just being
prepared for two-phase commit. To cope, remove any duplicates in
txid_current_snapshot(). Also ignore duplicates in the input functions, so
that if e.g. you have an old pg_dump file that already contains duplicates,
it will be accepted.
Report and fix by Jan Wieck. Backpatch to all supported versions.
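A standalone sketch of the approach, with uint32_t standing in for
TransactionId; the function names are illustrative.

#include <stdint.h>
#include <stdlib.h>

static int
xid_cmp(const void *a, const void *b)
{
    uint32_t xa = *(const uint32_t *) a;
    uint32_t xb = *(const uint32_t *) b;

    return (xa > xb) - (xa < xb);
}

/* Sort the XID array, then squeeze out adjacent duplicates in place.
 * Returns the new, duplicate-free length. */
static size_t
sort_and_unique_xids(uint32_t *xids, size_t n)
{
    size_t  i, out;

    if (n < 2)
        return n;
    qsort(xids, n, sizeof(uint32_t), xid_cmp);
    out = 1;
    for (i = 1; i < n; i++)
    {
        if (xids[i] != xids[out - 1])
            xids[out++] = xids[i];
    }
    return out;
}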
|
rd_replidindex should be managed the same as rd_oidindex, and rd_keyattr
and rd_idattr should be managed like rd_indexattr. Omissions in this area
meant that the bitmapsets computed for rd_keyattr and rd_idattr would be
leaked during any relcache flush, resulting in a slow but permanent leak in
CacheMemoryContext. There was also a tiny probability of relcache entry
corruption if we ran out of memory at just the wrong point in
RelationGetIndexAttrBitmap. Otherwise, the fields were not zeroed where
expected, which would not bother the code any AFAICS but could greatly
confuse anyone examining the relcache entry while debugging.
Also, create an API function RelationGetReplicaIndex rather than letting
non-relcache code be intimate with the mechanisms underlying caching of
that value (we won't even mention the memory leak there).
Also, fix a relcache flush hazard identified by Andres Freund:
RelationGetIndexAttrBitmap must not assume that rd_replidindex stays valid
across index_open.
The aspects of this involving rd_keyattr date back to 9.3, so back-patch
those changes.
|
When cache invalidations arrive while ri_LoadConstraintInfo() is busy
filling a new cache entry, InvalidateConstraintCacheCallBack() compares
the - not yet initialized - oidHashValue field with the to-be-invalidated
hash value. To fix, check whether the entry is already marked as invalid.
Andres Freund
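A standalone sketch of the race and the fix; the struct and function
names are illustrative, not the actual ri_triggers.c code.

#include <stdbool.h>
#include <stdint.h>

typedef struct ConstraintCacheEntry
{
    bool        valid;          /* fully built?  set last, by the filler */
    uint32_t    oidHashValue;   /* undefined until valid is set */
} ConstraintCacheEntry;

static void
invalidate_callback(ConstraintCacheEntry *entry, uint32_t hashvalue)
{
    /*
     * An entry still being filled has an uninitialized oidHashValue, so
     * comparing against it would read garbage.  Checking the valid flag
     * first (the fix) makes the comparison safe to skip: an invalid
     * entry needs no further invalidation anyway.
     */
    if (!entry->valid)
        return;
    if (entry->oidHashValue == hashvalue)
        entry->valid = false;   /* mark for rebuild */
}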
|
The original coding for ALTER SYSTEM made a fundamentally bogus assumption
that postgresql.auto.conf could be sought relative to the main config file
if we hadn't yet determined the value of data_directory. This fails for
common arrangements with the config file elsewhere, as reported by
Christoph Berg.
The simplest fix is to not try to read postgresql.auto.conf until after
SelectConfigFiles has chosen (and locked down) the data_directory setting.
Because of the logic in ProcessConfigFile for handling resetting of GUCs
that've been removed from the config file, we cannot easily read the main
and auto config files separately; so this patch adopts a brute force
approach of reading the main config file twice during postmaster startup.
That's a tad ugly, but the actual time cost is likely to be negligible,
and there's no time for a more invasive redesign before beta.
With this patch, any attempt to set data_directory via ALTER SYSTEM
will be silently ignored. It would probably be better to throw an
error, but that can be dealt with later. This bug, however, would
prevent any testing of ALTER SYSTEM by a significant fraction of the
userbase, so it seems important to get it fixed before beta.
|
There's no longer much pressure to switch the default GIN opclass for
jsonb, but there was still some unhappiness with the name "jsonb_hash_ops",
since hashing is no longer a distinguishing property of that opclass,
and anyway it seems like a relatively minor detail. At the suggestion of
Heikki Linnakangas, we'll use "jsonb_path_ops" instead; that captures the
important characteristic that each index entry depends on the entire path
from the document root to the indexed value.
Also add a user-facing explanation of the implementation properties of
these two opclasses.
|
reserveFromBuffer() failed to consider the possibility that it needs to
more-than-double the current buffer size. Beyond that, it seems likely
that we'd someday need to worry about integer overflow of the buffer
length variable. Rather than reinvent the logic that's already been
debugged in stringinfo.c, let's go back to using that logic. We can
still have the same targeted API, but we'll rely on stringinfo.c to
manage reallocation.
Per report from Alexander Korotkov.
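A standalone sketch of the stringinfo.c-style growth policy being
adopted; MAX_ALLOC stands in for PostgreSQL's MaxAllocSize and the
Buffer struct is illustrative.

#include <stdlib.h>

#define MAX_ALLOC 0x3FFFFFFF

typedef struct Buffer
{
    char   *data;
    int     len;        /* bytes used */
    int     maxlen;     /* bytes allocated */
} Buffer;

static int
reserve_from_buffer(Buffer *buf, int needed)
{
    int offset = buf->len;

    /* Guard against overflow of the length variable first. */
    if (needed < 0 || needed > MAX_ALLOC - buf->len)
        abort();                        /* the real code reports an error */

    if (buf->len + needed > buf->maxlen)
    {
        int newlen = (buf->maxlen > 0) ? 2 * buf->maxlen : 64;

        /* One call may need to more-than-double the buffer. */
        while (buf->len + needed > newlen)
            newlen = 2 * newlen;
        if (newlen > MAX_ALLOC)
            newlen = MAX_ALLOC;
        buf->data = realloc(buf->data, newlen);
        if (buf->data == NULL)
            abort();
        buf->maxlen = newlen;
    }
    buf->len += needed;
    return offset;                      /* caller writes at data + offset */
}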
|
These functions were relying on typcategory to identify arrays and
composites, which is not reliable and not the normal way to do it.
Using typcategory to identify boolean, numeric types, and json itself is
also pretty questionable, though the code in those cases didn't seem to be
at risk of anything worse than wrong output. Instead, use the standard
lsyscache functions to identify arrays and composites, and rely on a direct
check of the type OID for the other cases.
In HEAD, also be sure to look through domains so that a domain is treated
the same as its base type for conversions to JSON. However, this is a
small behavioral change; given the lack of field complaints, we won't
back-patch it.
In passing, refactor so that there's only one copy of the code that decides
which conversion strategy to apply, not multiple copies that could (and
have) gotten out of sync.
|
This code really needs to be refactored so that there aren't so many copies
that can diverge. Not to mention that this whole approach is probably
wrong. But for the moment I'll just stick my finger in the dike.
Per report from Michael Paquier.
|
Fix JSONB_MAX_ELEMS and JSONB_MAX_PAIRS macros to use JB_CMASK in the
calculation. JENTRY_POSMASK happens to have the same value at the moment,
but that's just coincidental.
Refactor jsonb iterator functions, for readability.
Get rid of the JENTRY_ISFIRST flag. Whenever we handle JEntrys, we have
access to the whole array and have enough context information to know
which entry is the first. This frees up one bit in the JEntry header for
future use. While we're at it, shuffle the JEntry bits so that boolean
true and false go together, for aesthetic reasons.
Bump catalog version as this changes the on-disk format slightly.
|
Change the key representation so that values that would exceed 127 bytes
are hashed into short strings, and so that the original JSON datatype of
each value is recorded in the index. The hashing rule eliminates the major
objection to having this opclass be the default for jsonb, namely that it
could fail for plausible input data (due to GIN's restrictions on maximum
key length). Preserving datatype information doesn't really buy us much
right now, but it requires no extra space compared to the previous way,
and it might be useful later.
Also, change the consistency-checking functions to request recheck for
exists (jsonb ? text) and related operators. The original analysis that
this is an exactly checkable query was incorrect, since the index does
not preserve information about whether a key appears at top level in
the indexed JSON object. Add a test case demonstrating the problem.
Make some other, mostly cosmetic improvements to the code in jsonb_gin.c
as well.
catversion bump due to on-disk data format change in jsonb_ops indexes.
|
Move the functions around to group related functions together. Remove
binequal argument from lengthCompareJsonbStringValue, moving that
responsibility to lengthCompareJsonbPair. Fix typo in comment.
|
This speeds up text to jsonb parsing and hstore to jsonb conversions
somewhat.
|
Per discussion, the old value of 128MB is ridiculously small on modern
machines; in fact, it's not even any larger than the default value of
shared_buffers, which it certainly should be. Increase to 4GB, which
is unlikely to be any worse than the old default for anyone, and should
be noticeably better for most. Eventually we might have an autotuning
scheme for this setting, but the recent attempt crashed and burned,
so for now just do this.
|
This reverts commit ee1e5662d8d8330726eaef7d3110cb7add24d058, as well as
a remarkably large number of followup commits, which were mostly concerned
with the fact that the implementation didn't work terribly well. It still
doesn't: we probably need some rather basic work in the GUC infrastructure
if we want to fully support GUCs whose default varies depending on the
value of another GUC. Meanwhile, it also emerged that there wasn't really
consensus in favor of the definition the patch tried to implement (ie,
effective_cache_size should default to 4 times shared_buffers). So whack
it all back to where it was. In a followup commit, I'll do what was
recently agreed to, which is to simply change the default to a higher
value.
|
The main target of this cleanup is the convertJsonb() function, but I also
touched a lot of other things that I spotted in the process.
The new convertToJsonb() function uses an output buffer that's resized on
demand, so the code to estimate the size of the JsonbValue is removed.
The on-disk format was not changed, even though I refactored the structs
used to handle it. The term "superheader" is replaced with "container".
The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
array. That was a premature optimization, the idea being that if there are
duplicates in the input array, you only need to check them once. Also,
sorting the array saves some effort in the binary search used to find a key
within an object. But there were drawbacks too: the sorting and
deduplicating obviously isn't free, and in the typical case there are no
duplicates to remove, and the gain in the binary search was minimal. Remove
all that, which makes the code simpler too.
This includes a bug-fix; the total length of the elements in a jsonb array
or object mustn't exceed 2^28. That is now checked.
|
The coding in JsonbHashScalarValue might have accidentally failed to fail
given current representational choices, but the key word there would be
"accidental". Insert the appropriate datatype conversion macro. And
use the right conversion macro for hash_numeric's result, too.
In passing make the code a bit cleaner and less repetitive by factoring
out the xor step from the switch.
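A sketch of the factored-out combining step: rotate the running hash
left one bit, then xor in the hash of the current scalar. The helper
shown is illustrative; the per-datatype hashing that feeds tmp is where
the proper Datum conversion macros were needed.

#include <stdint.h>

/* Combine a scalar's hash into the running hash: rotate left one bit,
 * then xor.  Illustrative analog of the step factored out of the switch. */
static void
hash_combine(uint32_t *hash, uint32_t tmp)
{
    *hash = (*hash << 1) | (*hash >> 31);   /* rotate left 1 bit */
    *hash ^= tmp;                           /* mix in this scalar's hash */
}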
|
Report by Tom Lane
|
This includes removing tabs after periods in C comments, which was
applied to back branches, so this change should not affect backpatching.
|
ruleutils.c tries to cope with additions/deletions/renamings of columns in
tables referenced by views, by means of adding machine-generated aliases to
the printed form of a view when needed to preserve the original semantics.
A recent blog post by Marko Tiikkaja pointed out a case I'd missed though:
if one input of a join with USING is itself a join, there is nothing to
stop the user from adding a column of the same name as the USING column to
whichever side of the sub-join didn't provide the USING column. And then
there'll be an error when the view is re-parsed, since now the sub-join
exposes two columns matching the USING specification. We were catching a
lot of related cases, but not this one, so add some logic to cope with it.
Back-patch to 9.3, which is the first release that makes any serious
attempt to cope with such cases (cf commit 2ffa740be and follow-ons).
|
If we have an array of records stored on disk, the individual record fields
cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are
only prepared to deal with TOAST pointers appearing in top-level fields of
a stored row. The same applies for ranges over composite types, nested
composites, etc. However, the existing code only took care of expanding
sub-field TOAST pointers for the case of nested composites, not for other
structured types containing composites. For example, given a command such
as
UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...
where x is a direct reference to a field of an on-disk tuple, if that field
is long enough to be toasted out-of-line then the TOAST pointer would be
inserted as-is into the array column. If the source record for x is later
deleted, the array field value would become a dangling pointer, leading
to errors along the line of "missing chunk number 0 for toast value ..."
when the value is referenced. A reproducible test case for this was
provided by Jan Pecek, but it seems likely that some of the "missing chunk
number" reports we've heard in the past were caused by similar issues.
Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to
produce a self-contained Datum value if the Datum is of composite type.
Seen in this light, the problem is not just confined to arrays and ranges,
but could also affect some other places where detoasting is done in that
way, for example form_index_tuple().
I tried teaching the array code to apply toast_flatten_tuple_attribute()
along with PG_DETOAST_DATUM() when the array element type is composite,
but this was messy and imposed extra cache lookup costs whether or not any
TOAST pointers were present, indeed sometimes when the array element type
isn't even composite (since sometimes it takes a typcache lookup to find
that out). The idea of extending that approach to all the places that
currently use PG_DETOAST_DATUM() wasn't attractive at all.
This patch instead solves the problem by decreeing that composite Datum
values must not contain any out-of-line TOAST pointers in the first place;
that is, we expand out-of-line fields at the point of constructing a
composite Datum, not at the point where we're about to insert it into a
larger tuple. This rule is applied only to true composite Datums, not
to tuples that are being passed around the system as tuples, so it's not
as invasive as it might sound at first. With this approach, the amount
of code that has to be touched for a full solution is greatly reduced,
and added cache lookup costs are avoided except when there actually is
a TOAST pointer that needs to be inlined.
The main drawback of this approach is that we might sometimes dereference
a TOAST pointer that will never actually be used by the query, imposing a
rather large cost that wasn't there before. On the other side of the coin,
if the field value is used multiple times then we'll come out ahead by
avoiding repeat detoastings. Experimentation suggests that common SQL
coding patterns are unaffected either way, though. Applications that are
very negatively affected could be advised to modify their code to not fetch
columns they won't be using.
In future, we might consider reverting this solution in favor of detoasting
only at the point where data is about to be stored to disk, using some
method that can drill down into multiple levels of nested structured types.
That will require defining new APIs for structured types, though, so it
doesn't seem feasible as a back-patchable fix.
Note that this patch changes HeapTupleGetDatum() from a macro to a function
call; this means that any third-party code using that macro will not get
protection against creating TOAST-pointer-containing Datums until it's
recompiled. The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER().
It seems likely that this is not a big problem in practice: most of the
tuple-returning functions in core and contrib produce outputs that could
not possibly be toasted anyway, and the same probably holds for third-party
extensions.
This bug has existed since TOAST was invented, so back-patch to all
supported branches.
|
Commit a73018392636ce832b09b5c31f6ad1f18a4643ea created rather a mess by
putting dependencies on backend-only include files into include/common.
We really shouldn't do that. To clean it up:
* Move TABLESPACE_VERSION_DIRECTORY back to its longtime home in
catalog/catalog.h. We won't consider this symbol part of the FE/BE API.
* Push enum ForkNumber from relfilenode.h into relpath.h. We'll consider
relpath.h as the source of truth for fork numbers, since relpath.c was
already partially serving that function, and anyway relfilenode.h was
kind of a random place for that enum.
* So, relfilenode.h now includes relpath.h rather than vice-versa. This
direction of dependency is fine. (That allows most, but not quite all,
of the existing explicit #includes of relpath.h to go away again.)
* Push forkname_to_number from catalog.c to relpath.c, just to centralize
fork number stuff a bit better.
* Push GetDatabasePath from catalog.c to relpath.c; it was rather odd
that the previous commit didn't keep this together with relpath().
* To avoid needing relfilenode.h in common/, redefine the underlying
function (now called GetRelationPath) as taking separate OID arguments,
and make the APIs using RelFileNode or RelFileNodeBackend into macro
wrappers. (The macros have a potential multiple-eval risk, illustrated
in the sketch after this message, but none of the existing call sites
have an issue with that; one of them had such a risk already anyway.)
* Fix failure to follow the directions when "init" fork type was added;
specifically, the errhint in forkname_to_number wasn't updated, and neither
was the SGML documentation for pg_relation_size().
* Fix tablespace-path-too-long check in CreateTableSpace() to account for
fork-name component of maximum-length pathnames. This requires putting
FORKNAMECHARS into a header file, but it was rather useless (and
actually unreferenced) where it was.
The last couple of items are potentially back-patchable bug fixes,
if anyone is sufficiently excited about them; but personally I'm not.
Per a gripe from Christoph Berg about how include/common wasn't
self-contained.
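A self-contained illustration of that multiple-evaluation hazard;
GETPATH and the Node struct are made up, not the real relpath()
wrappers.

#include <stdio.h>

/* A macro wrapper that expands its argument twice. */
#define GETPATH(node)  real_get_path((node).dbOid, (node).relOid)

typedef struct Node { int dbOid; int relOid; } Node;

static const char *
real_get_path(int dbOid, int relOid)
{
    static char buf[64];

    snprintf(buf, sizeof(buf), "base/%d/%d", dbOid, relOid);
    return buf;
}

static Node
next_node(void)
{
    static Node n;

    n.dbOid++;                  /* side effect! */
    n.relOid++;
    return n;
}

int
main(void)
{
    /* GETPATH(next_node()) expands next_node() twice, so the side
     * effect happens twice and dbOid/relOid may come from different
     * calls -- the "multiple-eval risk" in question. */
    puts(GETPATH(next_node()));
    return 0;
}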
|
Since ruleutils.c recurses, it could be driven to stack overflow by
deeply nested constructs. Very large queries might also take long
enough to deparse that a check for interrupts seems like a good idea.
Stick appropriate tests into a couple of key places.
Noted by Greg Stark. Back-patch to all supported branches.
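A standalone analog of the guards being added; PostgreSQL itself uses
check_stack_depth() and CHECK_FOR_INTERRUPTS(), while the limit, node
type, and cancel flag here are illustrative.

#include <stdio.h>
#include <stdlib.h>

#define MAX_DEPTH 1000

typedef struct Expr { struct Expr *child; } Expr;

static volatile int cancel_pending;     /* would be set by a signal handler */

static void
deparse_expr(const Expr *e, int depth)
{
    /* Depth guard: fail cleanly instead of overflowing the stack. */
    if (depth > MAX_DEPTH)
    {
        fprintf(stderr, "expression too deeply nested\n");
        exit(1);
    }
    /* Cancel point: long deparses stay interruptible. */
    if (cancel_pending)
    {
        fprintf(stderr, "canceling statement\n");
        exit(1);
    }
    if (e->child)
        deparse_expr(e->child, depth + 1);
}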
|
A query such as "SELECT x UNION SELECT y UNION SELECT z UNION ..."
produces a left-deep nested parse tree, which we formerly showed in its
full nested glory and with all the possible parentheses. This does little
for readability, though, and long UNION lists resulting in excessive
indentation are common. Instead, let's omit parentheses and indent all
the subqueries at the same level in such cases.
This patch skips indentation/parenthesization whenever the lefthand input
of a SetOperationStmt is another SetOperationStmt of the same kind and
ALL/DISTINCT property. We could teach the code the exact syntactic
precedence of set operations and thereby avoid parenthesization in some
more cases, but it's not clear that that'd be a readability win: it seems
better to parenthesize if the set operation changes. (As an example,
if there's one UNION in a long list of UNION ALL, it now stands out like
a sore thumb, which seems like a good thing.)
Back-patch to 9.3. This completes our response to a complaint from Greg
Stark that since commit 62e666400d there's a performance problem in pg_dump
for views containing long UNION sequences (or other types of deeply nested
constructs). The previous commit 0601cb54dac14d979d726ab2ebeda251ae36e857
handles the general problem, but this one makes the specific case of UNION
lists look a lot nicer.
|
Continuing to indent no matter how deeply nested we get doesn't really
do anything for readability; what's worse, it results in O(N^2) total
whitespace, which can become a performance and memory-consumption issue.
To address this, once we get past 40 characters of indentation, reduce
the indentation step distance 4x, and also limit the maximum indentation
by reducing it modulo 40. This latter choice is a bit weird at first
glance, but it seems to preserve readability better than a simple cap
would do.
Back-patch to 9.3, because since commit 62e666400d the performance issue
is a hazard for pg_dump.
Greg Stark and Tom Lane
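A sketch of the indentation rule as described; the constants follow the
text above, and the real logic lives in ruleutils.c.

/* Compute an effective indentation that grows normally up to 40
 * columns, then by steps 4x smaller, and finally wraps modulo 40
 * rather than clamping -- keeping distinct levels distinguishable
 * while bounding total whitespace. */
static int
effective_indent(int nest_level)
{
    const int step = 4;             /* nominal per-level indent */
    const int limit = 40;
    int       raw = nest_level * step;
    int       indent;

    if (raw <= limit)
        indent = raw;
    else
        indent = limit + (raw - limit) / 4;         /* 4x smaller steps */

    if (indent > limit)
        indent = limit + (indent - limit) % limit;  /* wrap, not clamp */
    return indent;
}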
|
The code attempted to outdent JOIN clauses further left than the parent
FROM keyword, which was odd in any case, and led to inconsistent formatting
since in simple cases the clauses couldn't be moved any further left than
that. And it left a permanent decrement of the indentation level, causing
subsequent lines to be much further left than they should be (again, this
couldn't be seen in simple cases for lack of indentation to give up).
After a little experimentation I chose to make it indent JOIN keywords
two spaces from the parent FROM, which is one space more than the join's
lefthand input in cases where that appears on a different line from FROM.
Back-patch to 9.3. This is a purely cosmetic change, and the bug is quite
old, so that may seem arbitrary; but we are going to be making some other
changes to the indentation behavior in both HEAD and 9.3, so it seems
reasonable to include this in 9.3 too. I committed this one first because
its effects are more visible in the regression test results as they
currently stand than they will be later.
|
Some ancient comments claimed that fn_nargs could be -1 to indicate a
variable number of input arguments; but this was never implemented, and
is at variance with what we ultimately did with "variadic" functions.
Update the comments.
|
These are natural complements to the functions added by commit
0886fc6a5c75b294544263ea979b9cf6195407d9, but they weren't included
in the original patch for some reason. Add them.
Patch by me, per a complaint by Tom Lane. Review by Tatsuo
Ishii.
|
Repositioning the tuplestore seek pointer in window_gettupleslot() turns
out to be a very significant expense when the window frame is sizable and
the frame end can move. To fix, introduce a tuplestore function for
skipping an arbitrary number of tuples in one call, parallel to the one we
introduced for tuplesort objects in commit 8d65da1f. This reduces the cost
of window_gettupleslot() to O(1) if the tuplestore has not spilled to disk.
As in the previous commit, I didn't try to do any real optimization of
tuplestore_skiptuples for the case where the tuplestore has spilled to
disk. There is probably no practical way to get the cost to less than O(N)
anyway, but perhaps someone can think of something later.
Also fix PersistHoldablePortal() to make use of this API now that we have
it.
Based on a suggestion by Dean Rasheed, though this turns out not to look
much like his patch.
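A standalone analog of what such a skip API buys in the in-memory case:
repositioning becomes index arithmetic instead of N fetch calls. The
struct is illustrative, not the real tuplestore interface.

#include <stdbool.h>
#include <stddef.h>

typedef struct MemStore
{
    void  **tuples;
    size_t  ntuples;
    size_t  pos;            /* current read position */
} MemStore;

/* Skip ntoskip tuples in one call; O(1) while everything is in memory,
 * versus O(N) repeated "get next tuple" calls. */
static bool
store_skiptuples(MemStore *st, size_t ntoskip, bool forward)
{
    if (forward)
    {
        if (st->ntuples - st->pos < ntoskip)
            return false;           /* would run off the end */
        st->pos += ntoskip;
    }
    else
    {
        if (st->pos < ntoskip)
            return false;
        st->pos -= ntoskip;
    }
    return true;
}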
|
David Rowley and Florian Pflug, reviewed by Dean Rasheed
|
First installment of the promised moving-aggregate support in built-in
aggregates: count(), sum(), avg(), stddev() and variance() for
assorted datatypes, though not for float4/float8.
In passing, remove a 2001-vintage kluge in interval_accum(): interval
array elements have been properly aligned since around 2003, but
nobody remembered to take out this workaround. Also, fix a thinko
in the opr_sanity tests for moving-aggregate catalog entries.
David Rowley and Florian Pflug, reviewed by Dean Rasheed
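A sketch of the moving-aggregate idea in its simplest case, a moving
sum: an inverse transition "subtracts" a row that has left the window
frame, so advancing the frame head needn't recompute from scratch. The
state and names are illustrative, not the catalog entries added here.

#include <stdint.h>

typedef struct SumState
{
    int64_t sum;
    int64_t count;      /* rows currently in the frame */
} SumState;

/* Forward transition: a row enters the frame. */
static void
sum_sfunc(SumState *state, int32_t newval)
{
    state->sum += newval;
    state->count++;
}

/* Inverse transition: a row leaves the frame. */
static void
sum_invfunc(SumState *state, int32_t oldval)
{
    state->sum -= oldval;
    state->count--;     /* count of zero means an empty frame */
}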
|
And explain why.
Per report from Pavel Stehule
|
This operator class can accelerate subnet/supernet tests as well as
btree-equivalent ordered comparisons. It also handles a new network
operator inet && inet (overlaps, a/k/a "is supernet or subnet of"),
which is expected to be useful in exclusion constraints.
Ideally this opclass would be the default for GiST with inet/cidr data,
but we can't mark it that way until we figure out how to do a more or
less graceful transition from the current situation, in which the
really-completely-bogus inet/cidr opclasses in contrib/btree_gist are
marked as default. Having the opclass in core and not default is better
than not having it at all, though.
While at it, add new documentation sections to allow us to officially
document GiST/GIN/SP-GiST opclasses, something there was never a clear
place to do before. I filled these in with some simple tables listing
the existing opclasses and the operators they support, but there's
certainly scope to put more information there.
Emre Hasegeli, reviewed by Andreas Karlsson, further hacking by me
|
These functions won't throw an error if the object doesn't exist,
or if (for functions and operators) there's more than one matching
object.
Yugo Nagata and Nozomi Anzai, reviewed by Amit Khandekar, Marti
Raudsepp, Amit Kapila, and me.
|
VALIDATE CONSTRAINT
CLUSTER ON
SET WITHOUT CLUSTER
ALTER COLUMN SET STATISTICS
ALTER COLUMN SET ()
ALTER COLUMN RESET ()
All other sub-commands use AccessExclusiveLock
Simon Riggs and Noah Misch
Reviews by Robert Haas and Andres Freund
|
EXEC_BACKEND builds (i.e., Windows) failed to absorb values of PGC_BACKEND
parameters if they'd been changed post-startup via the config file. This
for example prevented log_connections from working if it were turned on
post-startup. The mechanism for handling this case has always been a bit
of a kluge, and it wasn't revisited when we implemented EXEC_BACKEND.
While in a normal forking environment new backends will inherit the
postmaster's value of such settings, EXEC_BACKEND backends have to read
the settings from the CONFIG_EXEC_PARAMS file, and they were mistakenly
rejecting them. So this case has always been broken in the Windows port;
back-patch to all supported branches.
Amit Kapila
|
This is needed because Windows services may get started with a different
current directory than where pg_ctl is executed. We want relative -D
paths to be interpreted relative to pg_ctl's CWD, similarly to what
happens on other platforms.
In support of this, move the backend's make_absolute_path() function
into src/port/path.c (where it probably should have been long since)
and get rid of the rather inferior version in pg_regress.
Kumar Rajeev Rastogi, reviewed by MauMau
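A standalone sketch of what make_absolute_path() has to do: leave
absolute paths alone and prefix relative ones with the current working
directory. Windows drive-letter handling and the real error reporting
in src/port/path.c are omitted.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static char *
make_absolute_path_sketch(const char *path)
{
    char    cwd[PATH_MAX];
    char   *result;

    if (path[0] == '/')
        return strdup(path);            /* already absolute */

    if (getcwd(cwd, sizeof(cwd)) == NULL)
        return NULL;                    /* real code reports the errno */

    result = malloc(strlen(cwd) + 1 + strlen(path) + 1);
    if (result != NULL)
        sprintf(result, "%s/%s", cwd, path);
    return result;
}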
|
This caught a bunch of cases doing that already, which I just fixed in
the previous commit. This is the assertion itself.
Per Tom Lane's idea.
|
For variadic functions (other than VARIADIC ANY), the syntaxes foo(x,y,...)
and foo(VARIADIC ARRAY[x,y,...]) should be considered equivalent, since the
former is converted to the latter at parse time. They have indeed been
equivalent, in all releases before 9.3. However, commit 75b39e790 made an
ill-considered decision to record which syntax had been used in FuncExpr
nodes, and then to make equal() test that in checking node equality ---
which caused the syntaxes to not be seen as equivalent by the planner.
This is the underlying cause of bug #9817 from Dmitry Ryabov.
It might seem that a quick fix would be to make equal() disregard
FuncExpr.funcvariadic, but the same commit made that untenable, because
the field actually *is* semantically significant for some VARIADIC ANY
functions. This patch instead adopts the approach of redefining
funcvariadic (and aggvariadic, in HEAD) as meaning that the last argument
is a variadic array, whether it got that way by parser intervention or was
supplied explicitly by the user. Therefore the value will always be true
for non-ANY variadic functions, restoring the principle of equivalence.
(However, the planner will continue to consider use of VARIADIC as a
meaningful difference for VARIADIC ANY functions, even though some such
functions might disregard it.)
In HEAD, this change lets us simplify the decompilation logic in
ruleutils.c, since the funcvariadic/aggvariadic flag tells directly whether
to print VARIADIC. However, in 9.3 we have to continue to cope with
existing stored rules/views that might contain the previous definition.
Fortunately, this just means no change in ruleutils.c, since its existing
behavior effectively ignores funcvariadic for all cases other than VARIADIC
ANY functions.
In HEAD, bump catversion to reflect the fact that FuncExpr.funcvariadic
changed meanings; this is sort of pro forma, since I don't believe any
built-in views are affected.
Unfortunately, this patch doesn't magically fix everything for affected
9.3 users. After installing 9.3.5, they might need to recreate their
rules/views/indexes containing variadic function calls in order to get
everything consistent with the new definition. As in the cited bug,
the symptom of a problem would be failure to use a nominally matching
index that has a variadic function call in its definition. We'll need
to mention this in the 9.3.5 release notes.
|
Mostly, copy-edit the comments; but also fix it to not reject domains over
arrays.
|
Needed for strict C89 compliance.
|