| Commit message | Author | Age |

Itagaki Takahiro, with slight modifications.

This code was just plain wrong: what you got was not a line through the
given point but a line almost indistinguishable from the Y-axis, although
not truly vertical. The only caller that tries to use this function with
m == DBL_MAX is dist_ps_internal for the case where the lseg is horizontal;
it would end up producing the distance from the given point to the place
where the lseg's line crosses the Y-axis. That function is used by other
operators too, so there are several operators that could compute wrong
distances from a line segment to something else. Per bug #5745 from
jindiax.
Back-patch to all supported branches.
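
A minimal SQL sketch of the kind of query that goes through the affected code
path; the coordinates are invented for illustration only:

    -- Distance from a point to a horizontal line segment uses the
    -- perpendicular-through-the-point logic described above.
    SELECT point '(5,5)' <-> lseg '[(0,0),(10,0)]';   -- expected result: 5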
We failed to record any dependency on the underlying table for an index
declared like "create index i on t (foo(t.*))". This would create trouble
if the table were dropped without previously dropping the index. To fix,
simplify some overly-cute code in index_create(), accepting the possibility
that sometimes the whole-table dependency will be redundant. Also document
this hazard in dependency.c. Per report from Kevin Grittner.
In passing, prevent a core dump in pg_get_indexdef() if the index's table
can't be found. I came across this while experimenting with Kevin's
example. Not sure it's a real issue when the catalogs aren't corrupt, but
might as well be cautious.
Back-patch to all supported versions.
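
A minimal sketch of the scenario, reusing the table and function names quoted
in the message:

    CREATE TABLE t (a int, b text);
    CREATE FUNCTION foo(t) RETURNS text
        AS 'SELECT $1.b' LANGUAGE sql IMMUTABLE;
    CREATE INDEX i ON t (foo(t.*));
    -- With the whole-table dependency recorded, dropping the table also
    -- gets rid of the index cleanly.
    DROP TABLE t;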
The core of this patch is hash_array() and associated typcache
infrastructure, which works just about exactly like the existing support
for array comparison.
In addition I did some work to ensure that the planner won't think that an
array type is hashable unless its element type is hashable, and similarly
for sorting. This includes adding a datatype parameter to op_hashjoinable
and op_mergejoinable, and adding an explicit "hashable" flag to
SortGroupClause. The lack of a cross-check on the element type was a
pre-existing bug in mergejoin support --- but it didn't matter so much
before, because if you couldn't sort the element type there wasn't any good
alternative to failing anyhow. Now that we have the alternative of hashing
the array type, there are cases where we can avoid a failure by being picky
at the planner stage, so it's time to be picky.
The issue of exactly how to combine the per-element hash values to produce
an array hash is still open for discussion, but the rest of this is pretty
solid, so I'll commit it as-is.
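
A hedged sketch of what the new hash support enables at the SQL level; the
table and column names are invented:

    CREATE TABLE ta (tags int[]);
    -- Grouping (and hash joins) on an array column may now use hashing,
    -- provided the element type itself is hashable.
    SET enable_sort = off;   -- only to nudge the planner for the illustration
    EXPLAIN SELECT tags, count(*) FROM ta GROUP BY tags;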
Per C standard, these are semantically the same thing; but saying NULL
when you mean NULL is good for readability.
Marti Raudsepp, per results of INRIA's Coccinelle.

A NestLoopParam's value can only be a Var or Aggref, but this isn't the
case in general for SubPlan parameters, so print_parameter_expr had better
be prepared to cope. Brain fade in my recent patch to print the referenced
expression instead of just printing $N for PARAM_EXEC Params. Per report
from Pavel Stehule.

After much expenditure of effort, we've got this to the point where the
performance penalty is pretty minimal in typical cases.
Andrew Dunstan, reviewed by Brendan Jurd, Dean Rasheed, and Tom Lane

This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
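
A small sketch of the re-checking behavior described above; the domain and
table names are invented:

    CREATE DOMAIN posints AS int[] CHECK (0 < ALL (VALUE));
    CREATE TABLE dt (v posints);
    INSERT INTO dt VALUES (ARRAY[1,2,3]);
    -- The element assignment yields the base array type; storing it back into
    -- the domain column re-casts to the domain, so the CHECK is re-verified:
    UPDATE dt SET v[2] = -1;   -- expected to fail the domain constraint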
The better estimate requires more statistics than we previously stored:
in particular, counts of "entry" versus "data" pages within the index,
as well as knowledge of the number of distinct key values. We collect
this information during initial index build and update it during VACUUM,
storing the info in new fields on the index metapage. No initdb is
required because these fields will read as zeroes in a pre-existing
index, and the new gincostestimate code is coded to behave (reasonably)
sanely if they are zeroes.
Teodor Sigaev, reviewed by Jan Urbanski, Tom Lane, and Itagaki Takahiro.

This is not the hoped-for facility of using INSERT/UPDATE/DELETE inside
a WITH, but rather the other way around. It seems useful in its own
right anyway.
Note: catversion bumped because, although the contents of stored rules
might look compatible, there's actually a subtle semantic change.
A single Query containing a WITH and INSERT...VALUES now represents
writing the WITH before the INSERT, not before the VALUES. While it's
not clear that that matters to anyone, it seems like a good idea to
have it cited in the git history for catversion.h.
Original patch by Marko Tiikkaja, with updating and cleanup by
Hitoshi Harada.
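
A minimal example of the new form, i.e. a WITH clause attached to a
data-modifying statement; the object names are invented:

    WITH new_rows AS (
        SELECT g AS id, 'row ' || g AS note
        FROM generate_series(1, 3) AS g
    )
    INSERT INTO target_table (id, note)
    SELECT id, note FROM new_rows;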
This patch eliminates the former need to sort the output of an Append scan
when an ordered scan of an inheritance tree is wanted. This should be
particularly useful for fast-start cases such as queries with LIMIT.
Original patch by Greg Stark, with further hacking by Hans-Jurgen Schonig,
Robert Haas, and Tom Lane.
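
A hedged illustration with invented table names; depending on statistics, the
plan for the final query may now show a merge-style Append rather than a Sort
placed above a plain Append:

    CREATE TABLE measurements (ts timestamptz, val int);
    CREATE TABLE measurements_2010 (CHECK (ts >= '2010-01-01'))
        INHERITS (measurements);
    CREATE INDEX measurements_ts_idx ON measurements (ts);
    CREATE INDEX measurements_2010_ts_idx ON measurements_2010 (ts);
    EXPLAIN SELECT * FROM measurements ORDER BY ts LIMIT 10;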
to see if a particular privilege has been granted to PUBLIC.
The issue was reported by Jim Nasby.
Patch by Alvaro Herrera, and reviewed by KaiGai Kohei.

This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
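
A minimal sketch of an updatable view built this way; all names are invented:

    CREATE TABLE base (id int PRIMARY KEY, val text);
    CREATE VIEW v AS SELECT id, val FROM base;

    CREATE FUNCTION v_insert() RETURNS trigger AS $$
    BEGIN
        INSERT INTO base VALUES (NEW.id, NEW.val);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER v_ins INSTEAD OF INSERT ON v
        FOR EACH ROW EXECUTE PROCEDURE v_insert();

    INSERT INTO v VALUES (1, 'hello');   -- handled by the trigger, lands in base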
Various places were testing TRIGGER_FIRED_BEFORE() where what they really
meant was !TRIGGER_FIRED_AFTER(), or vice versa. This needs to be cleaned
up because there are about to be more than two possible states.
We might want to note this in the 9.1 release notes as something for
trigger authors to double-check.
For consistency's sake I also changed some places that assumed that
TRIGGER_FIRED_FOR_ROW and TRIGGER_FIRED_FOR_STATEMENT are necessarily
mutually exclusive; that's not in immediate danger of breaking, but
it's still sloppier than it should be.
Extracted from Dean Rasheed's patch for triggers on views. I'm committing
this separately since it's an identifiable separate issue, and is the
only reason for the patch to touch most of these particular files.

This was unintentionally broken in 8.4 while tightening up checking of
ordinary non-Julian date inputs to forbid references to "year zero".
Per bug #5672 from Benjamin Gigot.

transaction snapshots, i.e. a snapshot registered at the beginning of
a transaction. Change variable naming and comments to reflect this reality
in preparation for a future, truly serializable mode, e.g.
Serializable Snapshot Isolation (SSI).
For the moment transaction snapshots are still used to implement
SERIALIZABLE, but hopefully not for too much longer. Patch by Kevin
Grittner and Dan Ports with review and some minor wording changes by me.

Since the code underlying pg_get_expr() is not secure against malformed
input, and can't practically be made so, we need to prevent miscreants
from feeding arbitrary data to it. We can do this securely by declaring
pg_get_expr() to take a new datatype "pg_node_tree" and declaring the
system catalog columns that hold nodeToString output to be of that type.
There is no way at SQL level to create a non-null value of type pg_node_tree.
Since the backend-internal operations that fill those catalog columns
operate below the SQL level, they are oblivious to the datatype relabeling
and don't need any changes.
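
For example, the usual way of reading such a column still works unchanged; a
hedged illustration against pg_attrdef:

    -- pg_attrdef.adbin now has type pg_node_tree; pg_get_expr() deparses it
    -- against the table identified by adrelid.
    SELECT pg_get_expr(adbin, adrelid) AS default_expr
    FROM pg_catalog.pg_attrdef;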
Pavel Stehule, reviewed by me.

array_in discards unquoted leading and trailing whitespace in array values,
while array_out is careful to quote array elements that contain whitespace.
This is problematic when the definition of "whitespace" varies between
locales: array_in could drop characters that were meant to be part of the
value. To avoid that, lock down "whitespace" to mean only the traditional
six ASCII space characters.
This change also works around a bug in OS X and some older BSD systems, in
which isspace() could return true for character fragments in UTF8 locales.
(There may be other places in PG where that bug could cause problems, but
this is the only one complained of so far; see recent report from Steven
Schlansker.)
Back-patch to 9.0, but not further. Given the lack of previous reports
of trouble, changing this behavior in stable branches seems to offer
more risk of breaking applications than reward of avoiding problems.
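
A quick illustration of the rule as it now stands (only plain ASCII spaces
are trimmed):

    SELECT '{ a , b }'::text[];   -- unquoted ASCII spaces are discarded: {a,b}
    SELECT ARRAY['a b'];          -- elements containing whitespace are quoted: {"a b"}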
Update comment about them in pg_get_functiondef.

This allows us to reliably remove all leftover temporary relation
files on cluster startup without reference to system catalogs or WAL;
therefore, we no longer include temporary relations in XLOG_XACT_COMMIT
and XLOG_XACT_ABORT WAL records.
Since these changes require including a backend ID in each
SharedInvalSmgrMsg, the size of the SharedInvalidationMessage.id
field has been reduced from two bytes to one, and the maximum number
of connections has been reduced from INT_MAX / 4 to 2^23-1. It would
be possible to remove these restrictions by increasing the size of
SharedInvalidationMessage by 4 bytes, but right now that doesn't seem
like a good trade-off.
Review by Jaime Casanova and Tom Lane.

functions to the core XML code. Per discussion, the former depends on
XMLOPTION while the others do not. These supersede a version previously
offered by contrib/xml2.
Mike Fowler, reviewed by Pavel Stehule

and the editor's cursor will be initially placed on that line. In \e the
lines are counted with respect to the query buffer, while in \ef they are
counted with line 1 = first line of function body. These choices are useful
for positioning the cursor on the line of a previously-reported error.
To avoid assumptions about what switch the user's editor takes for this
purpose, invent a new psql variable EDITOR_LINENUMBER_SWITCH with (at
present) no default value.
One incompatibility from previous behavior is that "\e 1234" will now
take "1234" as a line number not a file name. There are at least two
ways to select a numerically-named file if you really want to.
Pavel Stehule, reviewed by Jan Urbanski, with further editing by Robert Haas
and Tom Lane
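
A hedged psql sketch; the actual switch value depends on the editor, so the
"+" used here is only an assumption:

    \set EDITOR_LINENUMBER_SWITCH +
    -- Open a function in the editor with the cursor on line 12 of its body:
    \ef my_function 12
    -- A lone number after \e is now a line number in the query buffer,
    -- not a file name:
    \e 1234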
continue to accept an empty array with dimension information. array_send()
can output such arrays.
Per report from Vladimir Shakhov.

better handling of NULL elements within the arrays. The third parameter
is a string that should be used to represent a NULL element, or should
be translated into a NULL element, respectively. If the third parameter
is NULL it behaves the same as the two-parameter form.
There are two incompatible changes in the behavior of the two-parameter form
of string_to_array. First, it will return an empty (zero-element) array
rather than NULL when the input string is of zero length. Second, if the
field separator is NULL, the function splits the string into individual
characters, rather than returning NULL as before. These two changes make
this form fully compatible with the behavior of the new three-parameter form.
Pavel Stehule, reviewed by Brendan Jurd
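
A few hedged examples of the new and changed behavior:

    SELECT string_to_array('a,,c', ',', '');                  -- {a,NULL,c}
    SELECT array_to_string(ARRAY['a', NULL, 'c'], ',', '*');  -- a,*,c
    SELECT string_to_array('', ',');                          -- {} rather than NULL
    SELECT string_to_array('abc', NULL);                      -- {a,b,c}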
it offers support for namespace mapping.
Mike Fowler, reviewed by David Fetter

statistics counts. These numbers are being accumulated but haven't yet been
transmitted to the collector (and won't be, until the transaction ends).
For some purposes, though, it's handy to be able to look at them.
Joel Jacobson, reviewed by Itagaki Takahiro

functionality, while creating an ambiguity in usage with ORDER BY that at
least two people have already gotten seriously confused by. Also, add an
opr_sanity test to check that we don't in future violate the newly minted
policy of not having built-in aggregates with the same name and different
numbers of parameters. Per discussion of a complaint from Thom Brown.

- Rename TSParserGetPrsid to get_ts_parser_oid.
- Rename TSDictionaryGetDictid to get_ts_dict_oid.
- Rename TSTemplateGetTmplid to get_ts_template_oid.
- Rename TSConfigGetCfgid to get_ts_config_oid.
- Rename FindConversionByName to get_conversion_oid.
- Rename GetConstraintName to get_constraint_oid.
- Add new functions get_opclass_oid, get_opfamily_oid, get_rewrite_oid,
get_rewrite_oid_without_relid, get_trigger_oid, and get_cast_oid.
The name of each function matches the corresponding catalog.
Thanks to KaiGai Kohei for the review.

unqualified names.
- Add a missing_ok parameter to get_tablespace_oid.
- Avoid duplicating get_tablespace_oid guts in objectNamesToOids.
- Add a missing_ok parameter to get_database_oid.
- Replace get_roleid and get_role_checked with get_role_oid.
- Add get_namespace_oid, get_language_oid, get_am_oid.
- Refactor existing code to use new interfaces.
Thanks to KaiGai Kohei for the review.

by Mike Fowler, reviewed by Peter Eisentraut

The old computation can sometimes underestimate the necessary space
by 2 bytes; however we're not back-patching this, because this result
isn't used for anything critical. Per discussion with Tom Lane,
make the typmod test in this function match the ones in numeric()
and apply_typmod() exactly.

Review by Brendan Jurd and Tom Lane.

function. This avoids unnecessary overflows and probably gives a more
accurate result as well.
Paul Matthews, reviewed by Andrew Geery

for new pg_dump/pg_dumpall parameters, make a couple of trivial stylistic
adjustments to make the code follow usual project style.

implementation deficiencies. Per discussion of bug #5592, we're not
going to change it, but these things should be documented so that if
anyone ever reimplements type tinterval, they will be more careful.

makeTSQuerySign. The first of these is a live bug, on some platforms,
as per bug #5590 from John Regehr. However the consequences seem limited
because of the relatively narrow scope of use of QTNode.sign. The shift in
makeTSQuerySign is actually safe because TSQS_SIGLEN is unsigned, but it
seems like a good idea to insert an explicit cast rather than depend on that.

tsqueries. CompareTSQ has to have a guard for the case rather than blindly
applying QTNodeCompare to random data past the end of the datums. Also,
change QTNodeCompare to be a little less trusting: use an actual test rather
than just Assert'ing that the input is sane. Problem encountered while
investigating another issue (I saw a core dump in autoanalyze on a table
containing multiple empty tsquery values).
Back-patch to all branches with tsquery support.
In HEAD, also fix some bizarre (though not outright wrong) coding in
tsq_mcontains().

interval input "invalid" was specified together with other fields. Spotted
by Neil Conway with the help of a clang warning. Although this has been
wrong since the interval code was written more than 10 years ago, it doesn't
affect anything beyond which error message you get for a wrong input, so not
worth back-patching very far.

Review by Tom Lane.

Avoid hard-coding lockmode used for many altering DDL commands, allowing easier
future changes of lock levels. Implementation of initial analysis on DDL
sub-commands, so that many lock levels are now at ShareUpdateExclusiveLock or
ShareRowExclusiveLock, allowing certain DDL not to block reads/writes.
First of a number of planned changes in this area; additional docs will be
required when the full project is complete.

I've added a quote_all_identifiers GUC which affects the behavior
of the backend, and a --quote-all-identifiers argument to pg_dump
and pg_dumpall which sets the GUC and also affects the quoting done
internally by those applications.
Design by Tom Lane; review by Alex Hunsaker; in response to bug #5488
filed by Hartmut Goebel.
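
A short sketch of the server-side knob (the view name is invented); pg_dump's
--quote-all-identifiers switch drives the same behavior from the client side:

    SET quote_all_identifiers = on;
    -- Deparsed SQL, as used by pg_dump, now quotes every identifier:
    SELECT pg_get_viewdef('my_view'::regclass);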
Remove bespoke code in DoCopy and RI_Initial_Check, which now instead
call ExecCheckRTPerms with a manufactured RangeTblEntry.
This is intended to make it feasible for an enhanced security provider
to actually make use of ExecutorCheckPerms_hook, but also has the
advantage that RI_Initial_Check can allow use of the fast-path when
column-level but not table-level permissions are present.
KaiGai Kohei. Reviewed (in an earlier version) by Stephen Frost, and by me.
Some further changes to the comments by me.

casting between money and numeric.
Andy Balholm, reviewed by Kevin Grittner
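
Hedged examples of the casts; the exact output formatting of money is
locale-dependent:

    SELECT 1234.56::numeric::money;      -- e.g. $1,234.56
    SELECT '$1,234.56'::money::numeric;  -- 1234.56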
rather than just $N. This brings the display of nestloop-inner-indexscan
plans back to where it's been, and incidentally improves the display of
SubPlan parameters as well. In passing, simplify the EXPLAIN code by
having it deal primarily in the PlanState tree rather than separately
searching Plan and PlanState trees. This is noticeably cleaner for
subplans, and about a wash elsewhere.
One small difference from previous behavior is that EXPLAIN will no longer
qualify local variable references in inner-indexscan plan nodes, since it
no longer sees such nodes as possibly referencing multiple tables. Vars
referenced through PARAM_EXEC Params are still forcibly qualified, though,
so I don't think the display is any more confusing than before. Adjust a
couple of examples in the documentation to match this behavior.

resjunk outputs of subquery tlists, instead of throwing an error. Per bug
#5548 from Daniel Grace.
We might at some point find we ought to back-patch this further than 9.0,
but I think that such Vars can only occur as resjunk members of upper-level
tlists, in which case the problem can't arise because prior versions didn't
print resjunk tlist items in EXPLAIN VERBOSE.