implementation uses an in-memory hash table, so it will run out of memory for
very large recursive results ... but the performance characteristics of a
sort-based implementation would be pretty unpleasant too.

---

supplies an expression that can't be coerced to the target column type.
The code previously attempted to point at the target column name, which
doesn't work at all in an INSERT with omitted column name list, and is
also not remarkably helpful when the problem is buried somewhere in a
long INSERT-multi-VALUES command. Make it point at the failed expression
instead.
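
For illustration (hypothetical table; the exact message text may differ), the
cursor now lands on the expression that failed to coerce:

    CREATE TABLE t (a integer);
    INSERT INTO t VALUES (1), (now());
    -- the error cursor now points at now(), the failed expression,
    -- rather than at the target column name "a"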

---

the column alias names of the RTE referenced by the Var to the RowExpr.
This is needed to allow ruleutils.c to correctly deparse FieldSelect nodes
referencing such a construct. Per my recent bug report.
Adding a field to RowExpr forces initdb (because of stored rules changes)
so this solution is not back-patchable, which is unfortunate because 8.2
and 8.3 have this issue. But it only affects EXPLAIN for some pretty odd
corner cases, so we can probably live without a solution for the back
branches.

---

from a query level above the current ParseState.

---

well as regular tables. Per discussion, this seems necessary to meet the
principle of least astonishment.
In passing, simplify the error messages in warnAutoRange(). Now that we
have parser error position info for these errors, it doesn't seem very
useful to word the error message differently depending on whether we are
inside a sub-select or not.

---

non-recursive term. Per an example from Dickson S. Guedes.

---

recursive CTE that we're still in the process of analyzing. Add a similar guard
to the similar code in expandRecordVariable(), and tweak regression tests to
cover this case. Per report from Dickson S. Guedes.

---

There are some unimplemented aspects: recursive queries must use UNION ALL
(should allow UNION too), and we don't have SEARCH or CYCLE clauses.
These might or might not get done for 8.4, but even without them it's a
pretty useful feature.
There are also a couple of small loose ends and definitional quibbles,
which I'll send a memo about to pgsql-hackers shortly. But let's land
the patch now so we can get on with other development.
Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane
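
A minimal illustration of the new syntax (recursive term combined with
UNION ALL, per the restriction noted above):

    WITH RECURSIVE t(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM t WHERE n < 100
    )
    SELECT sum(n) FROM t;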

---

ctype are now more like encoding, stored in new datcollate and datctype
columns in pg_database.
This is a stripped-down version of Radek Strnad's patch, with further
changes by me.
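
For illustration (assuming the CREATE DATABASE options this exposes), the
settings are now chosen per database and visible in pg_database:

    CREATE DATABASE demo WITH LC_COLLATE = 'C' LC_CTYPE = 'C';
    SELECT datname, datcollate, datctype FROM pg_database;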

---

and the literal syntax INTERVAL 'string' ... SECOND(n), as required by the
SQL standard. Our old syntax put (n) directly after INTERVAL, which was
a mistake, but will still be accepted for backward compatibility as well
as symmetry with the TIMESTAMP cases.
Change intervaltypmodout to show it in the spec's way, too. (This could
potentially affect clients, if there are any that analyze the typmod of an
INTERVAL in any detail.)
Also fix interval input to handle 'min:sec.frac' properly; I had overlooked
this case in my previous patch.
Document the use of the interval fields qualifier, which up to now we had
never mentioned in the docs. (I think the omission was intentional because
it didn't work per spec; but it does now, or at least close enough to be
credible.)
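
An illustrative sketch of the two accepted placements:

    SELECT INTERVAL '1 2:03:04.5678' DAY TO SECOND(2);  -- spec-required position
    SELECT INTERVAL(2) '1 2:03:04.5678';                -- old position, still accepted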

---

a lot closer than it was before). To do this, tweak coerce_type() to pass
through the typmod information when invoking interval_in() on an UNKNOWN
constant; then fix DecodeInterval to pay attention to the typmod when deciding
how to interpret a units-less integer value. I changed one or two other
details as well. I believe the code now reacts as expected by spec for all
the literal syntaxes that are specifically enumerated in the spec. There
are corner cases involving strings that don't exactly match the set of fields
called out by the typmod, for which we might want to tweak the behavior some
more; but I think this is an area of user friendliness rather than spec
compliance. There remain some non-compliant details about the SQL syntax
(as opposed to what's inside the literal string); but at least we'll throw
error rather than silently doing the wrong thing in those cases.
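
For example (illustrative), a units-less integer in the literal is now read
according to the declared fields:

    SELECT INTERVAL '10' MINUTE;   -- interpreted as 10 minutes, per the qualifier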

---

when user-defined functions used in a plan are modified. Also invalidate
plans when schemas, operators, or operator classes are modified; but for these
cases we just invalidate everything rather than tracking exact dependencies,
since these types of objects seldom change in a production database.
Tom Lane; loosely based on a patch by Martin Pihlak.

---

during parsing. Formerly the parser's stack was allocated with malloc
and so wouldn't be reclaimed; this patch makes it use palloc instead,
so that flushing the current context will reclaim the memory. Per
Marko Kreen.

---

There are still some weak spots around JOIN USING and relation alias lists,
but most errors reported within backend/parser/ now have locations.

---

SELECT foo.*) so that it cannot be confused with a quoted identifier "*".
Instead create a separate node type A_Star to represent this notation.
Per pgsql-hackers discussion of 2007-Sep-27.

---

change build system to use only Bison. Simplify build rules, make file names
uniform. Don't build the token table header file where it is not needed.

---

most node types used in expression trees (both before and after parse
analysis). This allows us to place an error cursor in many situations
where we formerly could not, because the information wasn't available
beyond the very first level of parse analysis. There's a fair amount
of work still to be done to persuade individual ereport() calls to actually
include an error location, but this gets the initdb-forcing part of the
work out of the way; and the situation is already markedly better than
before for complaints about unimplementable implicit casts, such as
CASE and UNION constructs with incompatible alternative data types.
Per my proposal of a few days ago.

---

into nodes/nodeFuncs, so as to reduce wanton cross-subsystem #includes inside
the backend. There's probably more that should be done along this line,
but this is a start anyway.

---

subqueries into the same thing you'd have gotten from IN (except always with
unknownEqFalse = true, so as to get the proper semantics for an EXISTS).
I believe this fixes the last case within CVS HEAD in which an EXISTS could
give worse performance than an equivalent IN subquery.
The tricky part of this is that if the upper query probes the EXISTS for only
a few rows, the hashing implementation can actually be worse than the default,
and therefore we need to make a cost-based decision about which way to use.
But at the time when the planner generates plans for subqueries, it doesn't
really know how many times the subquery will be executed. The least invasive
solution seems to be to generate both plans and postpone the choice until
execution. Therefore, in a query that has been optimized this way, EXPLAIN
will show two subplans for the EXISTS, of which only one will actually get
executed.
There is a lot more that could be done based on this infrastructure: in
particular it's interesting to consider switching to the hash plan if we start
out using the non-hashed plan but find a lot more upper rows going by than we
expected. I have therefore left some minor inefficiencies in place, such as
initializing both subplans even though we will currently only use one.
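
A sketch of a query shape this affects (table names hypothetical):

    SELECT * FROM big_tab b
    WHERE EXISTS (SELECT 1 FROM other_tab o WHERE o.x = b.x);
    -- EXPLAIN now shows both a hashed and an unhashed subplan for the EXISTS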

---

but seem like a separate patch since most of the remaining work is on the
executor side.) I took the opportunity to push selection of the grouping
operators for set operations into the parser where it belongs. Otherwise this
is just a small exercise in making prepunion.c consider both alternatives.
As with the recent DISTINCT patch, this means we can UNION on datatypes that
can hash but not sort, and it means that UNION without ORDER BY is no longer
certain to produce sorted output.
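
Illustrative consequence (table names hypothetical): request ordering
explicitly when it matters, since a hashed UNION returns rows in no
particular order:

    SELECT x FROM a
    UNION
    SELECT x FROM b
    ORDER BY x;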

---

as methods for implementing the DISTINCT step. This eliminates the former
performance gap between DISTINCT and GROUP BY, and also makes it possible
to do SELECT DISTINCT on datatypes that only support hashing not sorting.
SELECT DISTINCT ON is still always implemented by sorting; it would take
executor changes to support hashing that, and it's not clear it's worth
the trouble.
This is a release-note-worthy incompatibility from previous PG versions,
since SELECT DISTINCT can no longer be counted on to deliver sorted output
without explicitly saying ORDER BY. (Anyone who can't cope with that
can consider turning off enable_hashagg.)
Several regression test queries needed to have ORDER BY added to preserve
stable output order. I fixed the ones that manifested here, but there
might be some other cases that show up on other platforms.
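
Illustration of the incompatibility and the escape hatch (table name
hypothetical):

    SELECT DISTINCT col FROM tab ORDER BY col;  -- say ORDER BY explicitly now
    SET enable_hashagg = off;                   -- or force the old sort-based plans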

---

sorting. The infrastructure for this was all in place already; it's only
necessary to fix the planner to not assume that sorting is always an available
option.

---

as per my recent proposal:
1. Fold SortClause and GroupClause into a single node type SortGroupClause.
We were already relying on them to be struct-equivalent, so using two node
tags wasn't accomplishing much except to get in the way of comparing items
with equal().
2. Add an "eqop" field to SortGroupClause to carry the associated equality
operator. This is cheap for the parser to get at the same time it's looking
up the sort operator, and storing it eliminates the need for repeated
not-so-cheap lookups during planning. In future this will also let us
represent GROUP/DISTINCT operations on datatypes that have hash opclasses
but no btree opclasses (ie, they have equality but no natural sort order).
The previous representation simply didn't work for that, since its only
indicator of comparison semantics was a sort operator.
3. Add a hasDistinctOn boolean to struct Query to explicitly record whether
the distinctClause came from DISTINCT or DISTINCT ON. This allows removing
some complicated and not 100% bulletproof code that attempted to figure
that out from the distinctClause alone.
This patch doesn't in itself create any new capability, but it's necessary
infrastructure for future attempts to use hash-based grouping for DISTINCT
and UNION/INTERSECT/EXCEPT.

---

to represent DISTINCT or DISTINCT ON. This gets rid of a longstanding
annoyance that a view or rule using SELECT DISTINCT will be dumped out
with an overspecified ORDER BY list, and is one small step along the way
to decoupling DISTINCT and ORDER BY enough so that hash-based implementation
of DISTINCT will be possible. In passing, improve transformDistinctClause
so that it doesn't reject duplicate DISTINCT ON items, as was reported by
Steve Midgley a couple weeks ago.

---

of the STRING type category, thereby opening up the mechanism for user-defined
types. This is mainly for the benefit of citext, though; there aren't likely
to be a lot of types that are all general-purpose character strings.
Per discussion with David Wheeler.

---

with system catalog lookups, as was foreseen to be necessary almost since
their creation. Instead put the information into two new pg_type columns,
typcategory and typispreferred. Add support for setting these when
creating a user-defined base type.
The category column is just a "char" (i.e. a poor man's enum), allowing
a crude form of user extensibility of the category list: just use an
otherwise-unused character. This seems sufficient for foreseen uses,
but we could upgrade to having an actual category catalog someday, if
there proves to be a huge demand for custom type categories.
In this patch I have attempted to hew exactly to the behavior of the
previous hardwired logic, except for introducing new type categories for
arrays, composites, and enums. In particular the default preferred state
for user-defined types remains TRUE. That seems worth revisiting, but it
should be done as a separate patch from introducing the infrastructure.
Likewise, any adjustment of the standard set of categories should be done
separately.
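
A sketch of the new CREATE TYPE options (type and I/O function names
hypothetical):

    CREATE TYPE mytype (
        INPUT     = mytype_in,
        OUTPUT    = mytype_out,
        CATEGORY  = 'S',      -- a standard code, or any otherwise-unused character
        PREFERRED = false
    );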

---

a portal are never NULL, but reliably provide the source text of the query.
It turns out that there was only one place that was really taking a short-cut,
which was the 'EXECUTE' utility statement. That doesn't seem like a
sufficiently critical performance hotspot to justify not offering a guarantee
of validity of the portal source text. Fix it to copy the source text over
from the cached plan. Add Asserts in the places that set up cached plans and
portals to reject null source strings, and simplify a bunch of places that
formerly needed to guard against nulls.
There may be a few places that cons up statements for execution without
having any source text at all; I found one such in ConvertTriggerToFK().
It seems sufficient to inject a phony source string in such a case,
for instance
    ProcessUtility((Node *) atstmt,
                   "(generated ALTER TABLE ADD FOREIGN KEY command)",
                   NULL, false, None_Receiver, NULL);
We should take a second look at the usage of debug_query_string,
particularly the recently added current_query() SQL function.
ITAGAKI Takahiro and Tom Lane

---

(Unlike the original submission, this patch treats TABLE output parameters
as being entirely equivalent to OUT parameters -- tgl)
Pavel Stehule
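
A minimal sketch of the syntax (function and column names hypothetical):

    CREATE FUNCTION squares(n int) RETURNS TABLE (x int, x2 int) AS
    $$ SELECT i, i * i FROM generate_series(1, $1) AS s(i) $$ LANGUAGE sql;

    SELECT * FROM squares(3);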

---

so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
Pavel Stehule
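
A minimal sketch (function name hypothetical):

    CREATE FUNCTION concat_all(sep text, VARIADIC args text[]) RETURNS text AS
    $$ SELECT array_to_string($2, $1) $$ LANGUAGE sql;

    SELECT concat_all(', ', 'a', 'b', 'c');  -- trailing args arrive as one text[]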

---

known to the SQL parser. Dave Page

---

corresponding struct definitions. This allows other headers to avoid including
certain highly-loaded headers such as rel.h and relscan.h, instead using just
relcache.h, heapam.h or genam.h, which are more lightweight and thus cause less
unnecessary dependencies.

---

grammar allows ALTER TABLE/INDEX/SEQUENCE/VIEW interchangeably for all
subforms of those commands, and then we sort out what's really legal
at execution time. This allows the ALTER SEQUENCE/VIEW reference pages
to fully document all the ALTER forms available for sequences and views
respectively, and eliminates a longstanding cause of confusion for users.
The net effect is that the following forms are allowed that weren't before:
ALTER SEQUENCE OWNER TO
ALTER VIEW ALTER COLUMN SET/DROP DEFAULT
ALTER VIEW OWNER TO
ALTER VIEW SET SCHEMA
(There's no actual functionality gain here, but formerly you had to say
ALTER TABLE instead.)
Interestingly, the grammar tables actually get smaller, probably because
there are fewer special cases to keep track of.
I did not disallow using ALTER TABLE for these operations. Perhaps we
should, but there's a backwards-compatibility issue if we do; in fact
it would break existing pg_dump scripts. I did however tighten up
ALTER SEQUENCE and ALTER VIEW to reject non-sequences and non-views
in the new cases as well as a couple of cases where they didn't before.
The patch doesn't change pg_dump to use the new syntaxes, either.

---

Use #define/#ifdef instead of sed to fix include files; this should work on Windows too.

---

sequence to be reset to its original starting value. This requires adding the
original start value to the set of parameters (columns) of a sequence object,
which is a user-visible change with potential compatibility implications;
it also forces initdb.
Also add hopefully-SQL-compatible RESTART/CONTINUE IDENTITY options to
TRUNCATE TABLE. RESTART IDENTITY executes ALTER SEQUENCE RESTART for all
sequences "owned by" any of the truncated relations. CONTINUE IDENTITY is
a no-op option.
Zoltan Boszormenyi
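
Illustration (object names hypothetical):

    ALTER SEQUENCE seq RESTART;           -- back to the sequence's original start value
    TRUNCATE TABLE tab RESTART IDENTITY;  -- resets sequences owned by tab's columns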

---

unnecessary #include lines in it. Also, move some tuple routine prototypes and
macros to htup.h, which allows removal of heapam.h inclusion from some .c
files.
For this to work, a new header file access/sysattr.h needed to be created,
initially containing attribute numbers of system columns, for pg_dump usage.
While at it, make contrib ltree, intarray and hstore header files more
consistent with our header style.

---

began to realize that the input() function isn't used and printed warnings.

---

makeFloatConst, and avoid "manual" construction of A_Const nodes in grammar
productions, in favor of using makeXXXConst subroutines.

---

Brendan Jurd, minor editorialization by me.

---

flags by separate AT_SetNotNull subcommands. That was always ugly and
inefficient, and it's now clear that it was merely a partial workaround
for the bug just identified in ATExecAddColumn. This is just code
beautification, not a bug fix, so no back-patch.
Brendan Jurd, with some trivial additional cleanup by me.

---

ordinary expressions. This probably doesn't catch every single case
where you might get "cache lookup failed for function 0" for use of a
shell operator, but it will catch most. Per bug #4120 from Pedro Gimeno.
This patch incidentally folds make_op_expr() into its sole remaining
caller --- the alternative was to give it yet more arguments, which
didn't seem an improvement.

---

where Datum is 8 bytes wide. Since this will break old-style C functions
(those still using version 0 calling convention) that have arguments or
results of these types, provide a configure option to disable it and retain
the old pass-by-reference behavior. Likewise, provide a configure option
to disable the recently-committed float4 pass-by-value change.
Zoltan Boszormenyi, plus configurability stuff by me.

---

"consistent" functions, and remove pg_amop.opreqcheck, as per recent
discussion. The main immediate benefit of this is that we no longer need
8.3's ugly hack of requiring @@@ rather than @@ to test weight-using tsquery
searches on GIN indexes. In future it should be possible to optimize some
other queries better than is done now, by detecting at runtime whether the
index match is exact or not.
Tom Lane, after an idea of Heikki's, and with some help from Teodor.

---

input functions that include garbage bytes in their results. Provide a
compile-time option RANDOMIZE_ALLOCATED_MEMORY to make palloc fill returned
blocks with variable contents. This option also makes the parser perform
conversions of literal constants twice and compare the results, emitting a
WARNING if they don't match. (This is the code I used to catch the input
function bugs fixed in the previous commit.) For the moment, I've set it
to be activated automatically by --enable-cassert.

---

currently support this because we must be able to build Vars referencing
join columns, and varattno is only 16 bits wide. Perhaps this should be
improved in future, but considering that it never came up before, I'm not
sure the problem is worth much effort. Per bug #4070 from Marcello
Ceschia.
The problem seems largely academic in 8.0 and 7.4, because they have
(different) O(N^2) performance issues with such wide joins, but
back-patch all the way anyway.

---

a make clean...

---

inclusions in src/include/catalog/*.h files. The main idea here is to push
function declarations for src/backend/catalog/*.c files into separate headers,
rather than sticking them into the corresponding catalog definition file as
has been done in the past. This commit only carries out that idea fully for
pg_proc, pg_type and pg_conversion, but that's enough for the moment ---
if pg_list.h ever becomes unsafe for frontend code to include, we'll need
to work a bit more.
Zdenek Kotala