This requires adjusting the API for syscache callback functions: they now
get a hash value, not a TID, to identify the target tuple. Most of them
weren't paying any attention to that argument anyway, but plancache did
require a small amount of fixing.
Also, improve performance a trifle by avoiding sending duplicate inval
messages when a heap_update isn't changing the catcache lookup columns.
Shigeru Hanada, with fairly minor editing by me.
This requires a new shared catalog, pg_shseclabel.
Along the way, fix the security_label regression tests so that they
don't monkey with the labels of any pre-existing objects. This is
unlikely to matter in practice, since only the label for the "dummy"
provider was being manipulated. But this way still seems cleaner.
KaiGai Kohei, with fairly extensive hacking by me.
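For illustration only (not from the original commit; the role and label strings are invented, and "dummy" is the test provider mentioned above), labels can now be attached to shared objects:
SECURITY LABEL FOR dummy ON DATABASE postgres IS 'classified';
SECURITY LABEL FOR dummy ON ROLE alice IS 'unclassified';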
In the previous coding, we would look up a relation in RangeVarGetRelid,
lock the resulting OID, and then AcceptInvalidationMessages(). While
this was sufficient to ensure that we noticed any changes to the
relation definition before building the relcache entry, it didn't
handle the possibility that the name we looked up no longer referenced
the same OID. This was particularly problematic in the case where a
table had been dropped and recreated: we'd latch on to the entry for
the old relation and fail later on. Now, we acquire the relation lock
inside RangeVarGetRelid, and retry the name lookup if we notice that
invalidation messages have been processed meanwhile. Many operations
that would previously have failed with an error in the presence of
concurrent DDL will now succeed.
There is a good deal of work remaining to be done here: many callers
of RangeVarGetRelid still pass NoLock for one reason or another. In
addition, nothing in this patch guards against the possibility that
the meaning of an unqualified name might change due to the creation
of a relation in a schema earlier in the user's search path than the
one where it was previously found. Furthermore, there's nothing at
all here to guard against similar race conditions for non-relations.
For all that, it's a start.
Noah Misch and Robert Haas
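A sketch of the kind of sequence that should now succeed (illustrative only; the table name is invented):
-- Session 1:
BEGIN;
DROP TABLE widgets;
CREATE TABLE widgets (id int);
-- Session 2, concurrently: blocks on session 1's lock during the name lookup
SELECT * FROM widgets;
-- Session 1:
COMMIT;
-- Session 2 now retries the lookup, finds the new widgets table, and proceeds,
-- instead of failing on the dropped relation's OID.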
get_op_btree_interpretation assumed this in order to save some duplication
of code, but it's not true in general anymore because we added <> support
to btree_gist. (We still assume it for btree opclasses, though.)
Also, essentially the same logic was baked into predtest.c. Get rid of
that duplication by generalizing get_op_btree_interpretation so that it
can be used by predtest.c.
Per bug report from Denis de Bernardy and investigation by Jeff Davis,
though I didn't use Jeff's patch exactly as-is.
Back-patch to 9.1; we do not support this usage before that.
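For context, an illustrative (not from the commit) btree_gist usage in which <> appears with no btree opfamily involved:
CREATE EXTENSION btree_gist;
CREATE TABLE zoo (
    cage   int,
    animal text,
    EXCLUDE USING gist (cage WITH =, animal WITH <>)
);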
This lets us stop including rel.h into execnodes.h, which is a widely
used header.
Unlike the relistemp field which it replaced, relpersistence must be
set correctly quite early during the table creation process, as we
rely on it quite early on for a number of purposes, including security
checks. Normally, this is set based on whether the user enters CREATE
TABLE, CREATE UNLOGGED TABLE, or CREATE TEMPORARY TABLE, but a
relation may also be made implicitly temporary by creating it in
pg_temp. This patch fixes the handling of that case, and also
disables creation of unlogged tables in temporary tablespace (such
tables indeed skip WAL-logging, but we reject an explicit
specification) and creation of relations in the temporary schemas of
other sessions (which is not very sensible, and didn't work right
anyway).
Report by Amit Khandekar.
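An illustrative sketch of the cases touched here (table names invented):
CREATE TABLE pg_temp.scratch (x int);            -- implicitly a temporary table
CREATE UNLOGGED TABLE pg_temp.scratch2 (x int);  -- explicit UNLOGGED is now rejected here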
This means that they can initially be added to a large existing table
without checking its initial contents, but new tuples must comply with
them; a separate pass invoked by ALTER TABLE / VALIDATE can verify
existing data and ensure it complies with the constraint, at which point
it is marked validated and becomes a normal part of the table ecosystem.
A non-validated CHECK constraint is ignored in the planner for
constraint_exclusion purposes; when validated, cached plans are
recomputed so that partitioning starts working right away.
This patch also enables domains to have unvalidated CHECK constraints
attached to them as well by way of ALTER DOMAIN / ADD CONSTRAINT / NOT
VALID, which can later be validated with ALTER DOMAIN / VALIDATE
CONSTRAINT.
Thanks to Thom Brown, Dean Rasheed and Jaime Casanova for the various
reviews, and Robert Haas for documentation wording improvement
suggestions.
This patch was sponsored by Enova Financial.
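For example (object and constraint names invented), the new syntax looks like:
ALTER TABLE measurements
    ADD CONSTRAINT value_positive CHECK (value > 0) NOT VALID;
ALTER TABLE measurements VALIDATE CONSTRAINT value_positive;
CREATE DOMAIN us_zip AS text;
ALTER DOMAIN us_zip ADD CONSTRAINT zip_len CHECK (length(VALUE) = 5) NOT VALID;
ALTER DOMAIN us_zip VALIDATE CONSTRAINT zip_len;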
Extracted from a patch by Bernd Helmle
Since the names try_relation_openrv() and try_heap_openrv() don't seem
quite appropriate, rename the functions to relation_openrv_extended()
and heap_openrv_extended(). This is also more general, if we have a
future need for additional parameters that are of interest to only a
few callers.
This is infrastructure for a forthcoming patch to allow
get_object_address() to take a missing_ok argument as well.
Patch by me, review by Noah Misch.
This is just like serial and bigserial, except it generates an int2
column rather than int4 or int8.
Mike Pultz, reviewed by Brar Piening and Josh Kupershmidt
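For example (table name invented):
CREATE TABLE items (id smallserial PRIMARY KEY, label text);
-- roughly equivalent to: id smallint NOT NULL DEFAULT nextval('items_id_seq'),
-- with the sequence owned by the column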
Initially, we use this only to eliminate calls to the varchar()
function in cases where the length is not being reduced and, therefore,
the function call is equivalent to a RelabelType operation. The most
significant effect of this is that we can avoid a table rewrite when
changing a varchar(X) column to a varchar(Y) column, where Y > X.
Noah Misch, reviewed by me and Alexey Klyukin
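For example, assuming a column previously declared varchar(50) (names invented):
ALTER TABLE customers ALTER COLUMN name TYPE varchar(100);
-- no table rewrite is needed, since every varchar(50) value is already a valid varchar(100)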
The initial commit of the ALTER TABLE ADD FOREIGN KEY NOT VALID feature
failed to support labeling such constraints as deferrable. The best fix
for this seems to be to fold NOT VALID into ConstraintAttributeSpec.
That's a bit more general than the documented syntax, but it allows
better-targeted syntax error messages.
In addition, do some mostly-but-not-entirely-cosmetic code review for
the whole NOT VALID patch.
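So a declaration along these lines (names invented) should now be accepted:
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk FOREIGN KEY (customer_id)
        REFERENCES customers (id)
        DEFERRABLE INITIALLY DEFERRED NOT VALID;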
The variable became obsolete in commit
68739ba856c52e6721d6cffec21f1bf0327a9a7b, but only gcc 4.6 shows the
warning.
This use-case was broken in commit 529cb267a6843a6a8190c86b75d091771d99d6a9
of 2010-10-21, in which I commented "For the moment, we just forbid such
matching. We might later wish to insert an automatic downcast to the
underlying array type, but such a change should also change matching of
domains to ANYELEMENT for consistency". We still lack consensus about what
to do with ANYELEMENT; but not matching ANYARRAY is a clear loss of
functionality compared to prior releases, so let's go ahead and make that
happen. Per complaint from Regina Obe and extensive subsequent discussion.
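For example (domain name invented), a domain over an array type once again matches anyarray parameters:
CREATE DOMAIN int_list AS integer[];
SELECT array_length('{1,2,3}'::int_list, 1);   -- array_length(anyarray, integer) matches again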
We were trying to make that strictly an internal implementation detail,
but it turns out that it's exposed anyway when dumping a view defined
like
CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;
This comes out as
CREATE VIEW ... ORDER BY "*VALUES*".column1;
which fails to parse when reloading the dump.
Hacking ruleutils.c to suppress the column qualification looks like it'd
be a risky business, so instead promote the RTE alias to full-fledged
usability.
Per bug #6049 from Dylan Adams. Back-patch to all supported branches.
The existence of a btree opclass accepting composite types caused us to
assume that every composite type is sortable. This isn't true of course;
we need to check if the column types are all sortable. There was logic
for this for the case of array comparison (ie, check that the element
type is sortable), but we missed the point for rowtypes. Per Teodor's
report of an ANALYZE failure for an unsortable composite type.
Rather than just add some more ad-hoc logic for this, I moved knowledge of
the issue into typcache.c. The typcache will now report array_eq, record_cmp,
and friends as usable operators only if the array or composite type
will work with those functions.
Unfortunately we don't have enough info to do this for anonymous RECORD
types; in that case, just assume it will work, and take the runtime failure
as before if it doesn't.
This patch might be a candidate for back-patching at some point, but
given the lack of complaints from the field, I'd rather just test it in
HEAD for now.
Note: most of the places touched in this patch will need further work
when we get around to supporting hashing of record types.
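The failing case was of this general shape (names invented; point has no default btree opclass):
CREATE TYPE labeled_point AS (label text, loc point);
CREATE TABLE places (lp labeled_point);
ANALYZE places;   -- used to fail trying to sort lp values; ordering stats are now just skipped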
Use ColLabel in place of ColId, so that reserved words are accepted as if
they were not reserved. Also, remove BCONST and XCONST, which were never
documented as allowed. Allowing those would expose to users an implementation
detail, namely the format in which the lexer outputs such constants, which
seems unwise to expose.
No documentation change needed, since this just makes the code act more
like you'd expect from reading the CREATE TRIGGER man page.
Per complaint from Szymon Guz and subsequent discussion.
These were labeled with precedences just to avoid attaching explicit
precedences to the productions in which they were the last terminal symbol.
Since a terminal symbol precedence marking can affect many other things
too, it seems like better practice to attach precedence labels to the
productions, and not mark the terminal symbols.
Ideally we'd also remove the precedence attached to NULL_P, but it turns
out that we are actually depending on that having a precedence higher than
POSTFIXOP, else we get a shift/reduce conflict for postfix operators in
b_expr. (Which more or less proves my point about these markings having a
high risk of unexpected consequences.) For the moment, move NULL_P into
the set of keywords grouped with IDENT, so that at least it will act
similarly to non-keywords; and document the interaction.
Per bug #5988, reported by Marko Tiikkaja, and further analyzed by Tom
Lane, the previous coding was broken in several respects: even if the
target table already existed, a subsequent CREATE TABLE IF NOT EXISTS
might try to add additional constraints or sequences-for-serial
specified in the new CREATE TABLE statement.
In passing, this also fixes a minor information leak: it's no longer
possible to figure out whether a schema to which you don't have CREATE
access contains a sequence named like "x_y_seq" by attempting to create a
table in that schema called "x" with a serial column called "y".
Some more refactoring of this code in the future might be warranted,
but that will need to wait for a later major release.
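For example (names invented), if events already exists:
CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, payload text);
-- now a no-op (with a NOTICE); it no longer tries to create the serial sequence
-- or attach new constraints to the pre-existing table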
Instead, foreign tables are treated just like views: permissions can
be granted using GRANT privilege ON [TABLE] foreign_table_name TO role,
and revoked similarly. GRANT/REVOKE .. FOREIGN TABLE is no longer
supported, just as we don't support GRANT/REVOKE .. VIEW. The set of
accepted permissions for foreign tables is now identical to the set for
regular tables and views.
Per report from Thom Brown, and subsequent discussion.
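For example (names invented):
GRANT SELECT ON my_foreign_table TO report_role;
REVOKE SELECT ON my_foreign_table FROM report_role;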
This patch is almost entirely cosmetic --- mostly cleaning up a lot of
neglected comments, and fixing code layout problems in places where the
patch made lines too long and then pgindent did weird things with that.
I did find a bug-of-omission in equalTupleDescs().
This syntax allows a standalone table to be made into a typed table,
or a typed table to be made standalone. This is possibly a mildly
useful feature in its own right, but the real motivation for this
change is that we need it to make pg_upgrade work with typed tables.
This doesn't actually fix that problem, but it's necessary
infrastructure.
Noah Misch
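For example (names invented; the table's columns must match the composite type exactly):
CREATE TYPE employee AS (id int, name text);
CREATE TABLE employees (id int, name text);
ALTER TABLE employees OF employee;   -- turn the standalone table into a typed table
ALTER TABLE employees NOT OF;        -- make it standalone again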
Per spec we ought to apply select_common_collation() across the expressions
in each column of the VALUES table. The original coding was just taking
the first row and assuming it was representative.
This patch adds a field to struct RangeTblEntry to carry the resolved
collations, so initdb is forced for changes in stored rule representation.
This also ensures that we take a relation lock on the composite type when
creating a typed table, which is necessary to prevent the composite type
and the typed table from getting out of step in the face of concurrent
DDL.
Noah Misch, with some changes.
If the referencing and referenced columns have different collations,
the parser will be unable to resolve which collation to use unless it's
helped out in this way. The effects are sometimes masked, if we end up
using a non-collation-sensitive plan; but if we do use a mergejoin
we'll see a failure, as recently noted by Robert Haas.
The SQL spec states that the referenced column's collation should be used
to resolve RI checks, so that's what we do. Note however that we currently
don't append a COLLATE clause when writing a query that examines only the
referencing column. If we ever support collations that have varying
notions of equality, that will have to be changed. For the moment, though,
it's preferable to leave it off so that we can use a normal index on the
referencing column.
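A sketch of the situation in question (names invented):
CREATE TABLE pk (id text COLLATE "C" PRIMARY KEY);
CREATE TABLE fk (ref text COLLATE "POSIX" REFERENCES pk (id));
-- RI checks now compare using the referenced column's collation, i.e. "C"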
This warning is new in gcc 4.6 and part of -Wall. This patch cleans
up most of the noise, but there are still some warnings that are
trickier to remove.
Remove crude hack that tried to propagate collation through a
function-returning-record, ie, from the function's arguments to individual
fields selected from its result record. That is just plain inconsistent,
because the function result is composite and cannot have a collation;
and there's no hope of making this kind of action-at-a-distance work
consistently. Adjust regression test cases that expected this to happen.
Meanwhile, the behavior of casting to a domain with a declared collation
stays the same as it was, since that seemed to be the consensus.
All the other fields of the constant are being extracted from the syscache
entry we already have, so handle collation similarly. (There don't seem
to be any other uses for the new function at the moment.)
Previous patches took care of assorted places that call transformExpr from
outside the main parser, but I overlooked the fact that some places use
transformWhereClause as a shortcut for transformExpr + coerce_to_boolean.
In particular this broke collation-sensitive index WHERE clauses, as per
report from Thom Brown. Trigger WHEN and rule WHERE clauses too.
I'm not forcing initdb for this fix, but any affected indexes, triggers,
or rules will need to be dropped and recreated.
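An example of an affected construct (names invented), a collation-sensitive partial-index predicate:
CREATE TABLE words (w text);
CREATE INDEX words_high_idx ON words (w) WHERE w > 'mmm' COLLATE "C";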
This mostly involves making it work with the objectaddress.c framework,
which does most of the heavy lifting. In that vein, change
GetForeignDataWrapperOidByName to get_foreign_data_wrapper_oid and
GetForeignServerOidByName to get_foreign_server_oid, to match the
pattern we use for other object types.
Robert Haas and Shigeru Hanada
Mostly cosmetic, though I did find that generateClonedIndexStmt failed
to clone the index's collations.
I'm not sure these have any non-cosmetic implications, but I'm not sure
they don't, either. In particular, ensure the CaseTestExpr generated
by transformAssignmentIndirection to represent the base target column
carries the correct collation, because parse_collate.c won't fix that.
Tweak lsyscache.c API so that we can get the appropriate collation
without an extra syscache lookup.
In nearly all cases, the caller already knows the correct collation, and
in a number of places, the value the caller has handy is more correct than
the default for the type would be. (In particular, this patch makes it
significantly less likely that eval_const_expressions will result in
changing the exposed collation of an expression.) So an internal lookup
is both expensive and wrong.
Ensure that parameter symbols receive collation from the function's
resolved input collation, and fix inlining to behave properly.
BTW, this commit lays about 90% of the infrastructure needed to support
use of argument names in SQL functions. Parsing of parameters is now
done via the parser-hook infrastructure ... we'd just need to supply
a column-ref hook ...
Bug report from Alvaro Herrera
This restores a parse error that was thrown (though only in the ORDER BY
case) by the original collation patch. I had removed it in my recent
revisions because it was thrown at a place where collations now haven't
been computed yet; but I thought of another way to handle it.
Throwing the error at parse time, rather than leaving it to be done at
runtime, is good because a syntax error pointer is helpful for localizing
the problem. We can reasonably assume that the comparison function for a
collatable datatype will complain if it doesn't have a collation to use.
Now the planner might choose to implement GROUP or DISTINCT via hashing,
in which case no runtime error would actually occur, but it seems better
to throw error consistently rather than let the error depend on what the
planner chooses to do. Another possible objection is that the user might
specify a nondefault sort operator that doesn't care about collation
... but that's surely an uncommon usage, and it wouldn't hurt him to throw
in a COLLATE clause anyway. This change also makes the ORDER BY/GROUP
BY/DISTINCT case more consistent with the UNION/INTERSECT/EXCEPT case,
which was already coded to throw this error even though the same objections
could be raised there.
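An illustration of the case that now draws a parse-time error (names invented; the two columns have conflicting implicit collations):
CREATE TABLE t (a text COLLATE "C", b text COLLATE "POSIX");
SELECT a || b FROM t ORDER BY a || b;            -- error: collation cannot be determined
SELECT (a || b) COLLATE "C" FROM t ORDER BY 1;   -- an explicit COLLATE resolves it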
All expression nodes now have an explicit output-collation field, unless
they are known to only return a noncollatable data type (such as boolean
or record). Also, nodes that can invoke collation-aware functions store
a separate field that is the collation value to pass to the function.
This avoids confusion that arises when a function has collatable inputs
and noncollatable output type, or vice versa.
Also, replace the parser's on-the-fly collation assignment method with
a post-pass over the completed expression tree. This allows us to use
a more complex (and hopefully more nearly spec-compliant) assignment
rule without paying for it in extra storage in every expression node.
Fix assorted bugs in the planner's handling of collations by making
collation one of the defining properties of an EquivalenceClass and
by converting CollateExprs into discardable RelabelType nodes during
expression preprocessing.
This patch causes unknown-type Consts to be coerced to the resolved output
type of the set operation at parse time. Formerly such Consts were left
alone until late in the planning stage. The disadvantage of that approach
is that it disables some optimizations, because the planner sees the set-op
leaf query as having different output column types than the overall set-op.
We saw an example of that in a recent performance gripe from Claudio
Freire.
Fixing such a Const requires scribbling on the leaf query in
transformSetOperationTree, but that should be all right since if the leaf
query's semantics depended on that output column, it would already have
resolved the unknown to something else.
Most of the bulk of this patch is a simple adjustment of
transformSetOperationTree's API so that upper levels can get at the
TargetEntry containing a Const to be replaced: it now returns a list of
TargetEntries, instead of just the bare expressions.
Review by Tom Lane.
CollateClause is now used only in raw grammar output, and CollateExpr after
parse analysis. This is for clarity and to avoid carrying collation names
in post-analysis parse trees: that's both wasteful and possibly misleading,
since the collation's name could be changed while the parsetree still
exists.
Also, clean up assorted infelicities and omissions in processing of the
node type.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
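Illustrative examples of the now-documented forms (names invented):
ALTER TABLE documents ALTER COLUMN title TYPE text COLLATE "C";
SELECT title::text COLLATE "POSIX" FROM documents ORDER BY 1;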
We really need an automated check for this ... and did VALIDATE really
need to become a keyword at all, rather than picking some other syntax
using existing keywords?
Remove the unconditional superuser permissions check in CREATE EXTENSION,
and instead define a "superuser" extension property, which when false
(not the default) skips the superuser permissions check. In this case
the calling user only needs enough permissions to execute the commands
in the extension's installation script. The superuser property is also
enforced in the same way for ALTER EXTENSION UPDATE cases.
In other ALTER EXTENSION cases and DROP EXTENSION, test ownership of
the extension rather than superuserness. ALTER EXTENSION ADD/DROP needs
to insist on ownership of the target object as well; to do that without
duplicating code, refactor comment.c's big switch for permissions checks
into a separate function in objectaddress.c.
I also removed the superuserness checks in pg_available_extensions and
related functions; there's no strong reason why everybody shouldn't
be able to see that info.
Also invent an IF NOT EXISTS variant of CREATE EXTENSION, and use that
in pg_dump, so that dumps won't fail for installed-by-default extensions.
We don't have any of those yet, but we will soon.
This is all per discussion of wrapping the standard procedural languages
into extensions. I'll make those changes in a separate commit; this is
just putting the core infrastructure in place.
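For example (extension name invented; the superuser property is set in the extension's control file):
-- in myext.control:  superuser = false
CREATE EXTENSION IF NOT EXISTS myext;   -- needs only the privileges required by the install script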
This patch implements data-modifying WITH queries according to the
semantics that the updates all happen with the same command counter value,
and in an unspecified order. Therefore one WITH clause can't see the
effects of another, nor can the outer query see the effects other than
through the RETURNING values. And attempts to do conflicting updates will
have unpredictable results. We'll need to document all that.
This commit just fixes the code; documentation updates are waiting on
author.
Marko Tiikkaja and Hitoshi Harada
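For example (table names invented):
WITH archived AS (
    DELETE FROM tasks WHERE done RETURNING *
)
INSERT INTO tasks_archive SELECT * FROM archived;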
The recent additions for FDW support required checking foreign-table-ness
in several places in the parse/plan chain. While it's not clear whether
that would really result in a noticeable slowdown, it seems best to avoid
any performance risk by keeping a copy of the relation's relkind in
RangeTblEntry. That might have some other uses later, anyway.
Per discussion.