| Commit message | Author | Age |
|
ps_TupFromTlist in plan nodes that make use of it. This was being done
correctly in join nodes and Result nodes but not in any relation-scan nodes.
Bug would lead to bogus results if a set-returning function appeared in the
targetlist of a subquery that could be rescanned after partial execution,
for example a subquery within EXISTS(). Bug has been around forever :-(
... surprising it wasn't reported before.
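A minimal sketch of the sort of fix involved, using stand-in struct and
field names rather than the real executor definitions in execnodes.h:

    #include <stdbool.h>

    /* Stand-in types for illustration; the real executor state structs
     * live in src/include/nodes/execnodes.h. */
    typedef struct PlanStateStub
    {
        bool ps_TupFromTlist;   /* still returning rows from a set-valued
                                 * function in the targetlist? */
    } PlanStateStub;

    typedef struct ScanStateStub
    {
        PlanStateStub ps;
        /* ... per-scan-node fields ... */
    } ScanStateStub;

    /* On rescan, the half-finished set-returning-function state must be
     * forgotten, or the restarted scan resumes mid-set and emits bogus
     * rows -- the same reset the join and Result nodes already did. */
    static void
    ExampleScanReScan(ScanStateStub *node)
    {
        node->ps.ps_TupFromTlist = false;
        /* ... then reset the underlying scan position as before ... */
    }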
|
a multiple (OR'ed) indexscan. It was checking for duplicate
tuple->t_data->t_ctid, when what it should be checking is tuple->t_self.
The trouble situation occurs when a live tuple has t_ctid not pointing to
itself, which can happen if an attempted UPDATE was rolled back. After a
VACUUM, an unrelated tuple could be installed where the failed update tuple
was, leading to one live tuple's t_ctid pointing to an unrelated tuple.
If one of these tuples is fetched by an earlier OR'ed indexscan and the other
by a later indexscan, nodeIndexscan.c would incorrectly ignore the second
tuple. The bug exists in all 7.4.* and 8.0.* versions, but not in earlier
or later branches because this code was only used in those releases. Per
trouble report from Rafael Martinez Guerrero.
|
Also performed an initial run through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
|
returning a NULL pointer (some callers remembered to check the return
value, but some did not -- it is safer to just bail out).
Also, clean up pgstat.c to use elog(ERROR) rather than elog(LOG) followed
by exit().
|
list compatibility API by default. While doing this, I decided to keep
the llast() macro around and introduce llast_int() and llast_oid() variants.
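For illustration, a usage sketch against today's list API (list_make2,
list_make2_oid, and the llast macros all exist in current pg_list.h,
though the 2004 spellings differed in detail):

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "nodes/value.h"

    /* Backend-only sketch: the _int()/_oid() variants return the payload
     * with the right type, so no cast is needed at the call site. */
    static void
    llast_variants_example(void)
    {
        List *ptrs = list_make2(makeString(pstrdup("first")),
                                makeString(pstrdup("second")));
        List *oids = list_make2_oid((Oid) 1259, (Oid) 1247);

        Node *last_ptr = (Node *) llast(ptrs);   /* pointer payload */
        Oid   last_oid = llast_oid(oids);        /* Oid payload */

        (void) last_ptr;
        (void) last_oid;
    }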
|
In the past, we used a 'Lispy' linked list implementation: a "list" was
merely a pointer to the head node of the list. The problem with that
design is that it makes lappend() and length() linear time. This patch
fixes that problem (and others) by maintaining a count of the list
length and a pointer to the tail node along with each head node pointer.
A "list" is now a pointer to a structure containing some meta-data
about the list; the head and tail pointers in that structure refer
to ListCell structures that maintain the actual linked list of nodes.
The function names of the list API have also been changed to, I hope,
be more logically consistent. By default, the old function names are
still available; they will be disabled-by-default once the rest of
the tree has been updated to use the new API names.
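A rough sketch of the resulting layout (simplified: the real ListCell in
src/include/nodes/pg_list.h can hold a pointer, an int, or an Oid, and the
List header carries a NodeTag; PostgreSQL 13 later replaced the cell chain
with an array, but the header idea is the same):

    typedef struct ListCellSketch
    {
        void                  *data;
        struct ListCellSketch *next;
    } ListCellSketch;

    typedef struct ListSketch
    {
        int             length;   /* maintained eagerly, so length() is O(1) */
        ListCellSketch *head;
        ListCellSketch *tail;     /* makes lappend() O(1) instead of O(n) */
    } ListSketch;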
|
the next are handled by ReleaseAndReadBuffer rather than separate
ReleaseBuffer and ReadBuffer calls. This cuts the number of acquisitions
of the BufMgrLock by a factor of 2 (possibly more, if an indexscan happens
to pull successive rows from the same heap page). Unfortunately this
doesn't seem enough to get us out of the recently discussed context-switch
storm problem, but it's surely worth doing anyway.
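The calling pattern, roughly (backend-only sketch; ReleaseAndReadBuffer
with a (Buffer, Relation, BlockNumber) signature is the real bufmgr entry
point, while the wrapper function here is invented for illustration):

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "utils/rel.h"

    static Buffer
    advance_to_block(Relation rel, Buffer curbuf, BlockNumber nextblk)
    {
        /*
         * Old pattern, two trips through the buffer manager:
         *     ReleaseBuffer(curbuf);
         *     return ReadBuffer(rel, nextblk);
         *
         * New pattern, one trip -- and nearly free when curbuf already
         * holds nextblk, as when an indexscan pulls successive rows from
         * the same heap page:
         */
        return ReleaseAndReadBuffer(curbuf, rel, nextblk);
    }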
|
spent initializing it during indexscan startup.
|
Make btree index creation and initial validation of foreign-key constraints
use maintenance_work_mem rather than work_mem as their memory limit.
Add some code to guc.c to allow these variables to be referenced by their
old names in SHOW and SET commands, for backwards compatibility.
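The backwards-compatibility piece amounts to an alias lookup along these
lines (simplified illustration only, not the actual guc.c code):

    #include <string.h>

    static const char *const guc_name_aliases[][2] = {
        {"sort_mem", "work_mem"},
        {"vacuum_mem", "maintenance_work_mem"},
    };

    /* Map a possibly-obsolete variable name onto its current spelling. */
    static const char *
    resolve_guc_name(const char *name)
    {
        size_t n = sizeof(guc_name_aliases) / sizeof(guc_name_aliases[0]);

        for (size_t i = 0; i < n; i++)
        {
            if (strcmp(name, guc_name_aliases[i][0]) == 0)
                return guc_name_aliases[i][1];
        }
        return name;            /* already a current name */
    }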
|
pointer type when it is not necessary to do so.
For future reference, casting NULL to a pointer type is only necessary
when (a) invoking a function AND either (b) the function has no prototype
OR (c) the function is a varargs function.
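A self-contained example of the distinction (standard C, nothing
PostgreSQL-specific):

    #include <stddef.h>
    #include <stdio.h>

    static void
    takes_pointer(const char *p)    /* has a prototype, so (b) does not apply */
    {
        (void) p;
    }

    int
    main(void)
    {
        /* Prototype in scope, fixed argument: the compiler converts NULL
         * to the right pointer type itself, so no cast is needed. */
        takes_pointer(NULL);

        /* Varargs call, case (c): the compiler cannot know the expected
         * type of the trailing argument, so an uncast NULL could be passed
         * with the wrong width where sizeof(int) != sizeof(void *).  Here
         * the cast is genuinely required. */
        printf("%p\n", (void *) NULL);

        return 0;
    }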
|
regular qpqual ('filter condition'), add special-purpose code to
nodeIndexscan.c to recheck them. This ends up being almost no net addition
of code, because the removal of planner code balances out the extra
executor code, but it is significantly more efficient when a lossy
operator is involved in an OR indexscan. The old implementation had
to recheck the entire indexqual in such cases.
|
number-of-buckets that exceeds the size we actually plan to allow
the hash table to grow to. Per trouble report from Sean Shanny.
|
pghackers proposal of 8-Nov. All the existing cross-type comparison
operators (int2/int4/int8 and float4/float8) have appropriate support.
The original proposal of storing the right-hand-side datatype as part of
the primary key for pg_amop and pg_amproc got modified a bit in the event;
it is easier to store zero as the 'default' case and only store a nonzero
when the operator is actually cross-type. Along the way, remove the
long-since-defunct bigbox_ops operator class.
|
Remove the 'strategy map' code, which was a large amount of mechanism
that no longer had any use except reverse-mapping from procedure OID to
strategy number. Passing the strategy number to the index AM in the
first place is simpler and faster.
This is a preliminary step in planned support for cross-datatype index
operations. I'm committing it now since the ScanKeyEntryInitialize()
API change touches quite a lot of files, and I want to commit those
changes before the tree drifts under me.
|
now able to cope with assigning new relfilenode values to nailed-in-cache
indexes, so they can be reindexed using the fully crash-safe method. This
leaves only shared system indexes as special cases. Remove the 'index
deactivation' code, since it provides no useful protection in the shared-
index case. Require reindexing of shared indexes to be done in standalone
mode, but remove other restrictions on REINDEX. -P (IgnoreSystemIndexes)
now prevents using indexes for lookups, but does not disable index updates.
It is therefore safe to allow from PGOPTIONS. Upshot: reindexing system catalogs
can be done without a standalone backend for all cases except
shared catalogs.
|
handling many-way scans: instead of re-evaluating all prior indexscan
quals to see if a tuple has been fetched more than once, use a hash table
indexed by tuple CTID. But fall back to the old way if the hash table
grows to exceed SortMem.
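In outline (a self-contained sketch with stub types and a fixed-size
table, not the executor's actual dynahash-based code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct TidStub          /* stand-in for ItemPointerData */
    {
        uint32_t blockNumber;
        uint16_t offsetNumber;
    } TidStub;

    #define TID_TABLE_SLOTS 8192    /* sketch only: fixed size, no growth */

    typedef struct TidHashSet
    {
        TidStub slots[TID_TABLE_SLOTS];
        bool    used[TID_TABLE_SLOTS];
        int     nentries;
    } TidHashSet;

    /*
     * Return true if this CTID was already fetched by an earlier arm of
     * the OR'ed indexscan; otherwise remember it and return false.  The
     * sketch assumes far fewer than TID_TABLE_SLOTS distinct TIDs; the
     * real code instead falls back to rechecking all prior indexquals
     * once the table would exceed SortMem.
     */
    static bool
    tid_seen_before(TidHashSet *set, const TidStub *tid)
    {
        uint32_t i = (tid->blockNumber * 31u + tid->offsetNumber) % TID_TABLE_SLOTS;

        while (set->used[i])
        {
            if (set->slots[i].blockNumber == tid->blockNumber &&
                set->slots[i].offsetNumber == tid->offsetNumber)
                return true;
            i = (i + 1) % TID_TABLE_SLOTS;      /* linear probing */
        }
        set->slots[i] = *tid;
        set->used[i] = true;
        set->nentries++;
        return false;
    }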
|
rid of the assumption that sizeof(Oid)==sizeof(int). This is one small
step towards someday supporting 8-byte OIDs. For the moment, it doesn't
do much except get rid of a lot of unsightly casts.
|
nodes where it's not really necessary. In many cases where the scan node
is not the topmost plan node (eg, joins, aggregation), it's possible to
just return the table tuple directly instead of generating an intermediate
projection tuple. In preliminary testing, this reduced the CPU time
needed for 'SELECT COUNT(*) FROM foo' by about 10%.
|
ExecAssignResultTypeFromTL().
|
retrieved. This cannot happen in ordinary execution, but it can happen
under EvalPlanQual().
|
a per-query memory context created by CreateExecutorState --- and destroyed
by FreeExecutorState. This provides a final solution to the longstanding
problem of memory leaked by various ExecEndNode calls.
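The resulting calling convention, roughly (backend-only sketch:
CreateExecutorState, FreeExecutorState and es_query_cxt are the real
names, the wrapper function is invented):

    #include "postgres.h"
    #include "executor/executor.h"

    static void
    run_plan_and_release_everything(void)
    {
        EState *estate = CreateExecutorState();

        /*
         * Plan-node state, expression state, tupdescs, etc. are built
         * under estate->es_query_cxt while the query runs ...
         */

        /* ... so one call frees the lot, whatever ExecEndNode forgot. */
        FreeExecutorState(estate);
    }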
|
execution state trees, and ExecEvalExpr takes an expression state tree
not an expression plan tree. The plan tree is now read-only as far as
the executor is concerned. Next step is to begin actually exploiting
this property.
|
so that all executable expression nodes inherit from a common supertype
Expr. This is somewhat of an exercise in code purity rather than any
real functional advance, but getting rid of the extra Oper or Func node
formerly used in each operator or function call should provide at least
a little space and speed improvement.
initdb forced by changes in stored-rules representation.
|
to plan nodes, not vice-versa. All executor state nodes now inherit from
struct PlanState. Copying of plan trees has been simplified by not
storing a list of SubPlans in Plan nodes (eliminating duplicate links).
The executor still needs such a list, but it can build it during
ExecutorStart since it has to scan the plan tree anyway.
No initdb forced since no stored-on-disk structures changed, but you
will need a full recompile because of node-numbering changes.
|
For example, if I run a query that uses an index scan, and call
MemoryContextStats(CurrentMemoryContext) before ExecutorStart() and
after ExecutorEnd() in ProcessQuery(), I am consistently seeing that
the 'after' call shows 256 bytes more used than 'before'...
The problem seems to be in ExecEndIndexScan - it does not release
scanstate, indexstate, indexstate->iss_RelationDescs and
indexstate->iss_ScanDescs...
Dmitry Tkach
|
yesterday's proposal to pghackers. Also remove unnecessary parameters
to heap_beginscan, heap_rescan. I modified pg_proc.h to reflect the
new numbers of parameters for the AM interface routines, but did not
force an initdb because nothing actually looks at those fields.
|
Improve 'pg_internal.init' relcache entry preload mechanism so that it is
safe to use for all system catalogs, and arrange to preload a realistic
set of system-catalog entries instead of only the three nailed-in-cache
indexes that were formerly loaded this way. Fix mechanism for deleting
out-of-date pg_internal.init files: this must be synchronized with transaction
commit, not just done at random times within transactions. Drive it off
relcache invalidation mechanism so that no special-case tests are needed.
Cache additional information in relcache entries for indexes (their pg_index
tuples and index-operator OIDs) to eliminate repeated lookups. Also cache
index opclass info at the per-opclass level to avoid repeated lookups during
relcache load.
Generalize 'systable scan' utilities originally developed by Hiroshi,
move them into genam.c, use in a number of places where there was formerly
ugly code for choosing either heap or index scan. In particular this allows
simplification of the logic that prevents infinite recursion between syscache
and relcache during startup: we can easily switch to heapscans in relcache.c
when and where needed to avoid recursion, so IndexScanOK becomes simpler and
does not need any expensive initialization.
Eliminate useless opening of a heapscan data structure while doing an indexscan
(this saves an mdnblocks call and thus at least one kernel call).
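The generalized utilities survive to this day as the systable_* routines
in genam.c; a usage sketch follows, with header and catalog-macro names as
spelled in recent PostgreSQL sources (details have shifted across versions):

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/htup_details.h"
    #include "access/table.h"
    #include "catalog/pg_class.h"
    #include "utils/fmgroids.h"
    #include "utils/rel.h"

    /* Look up a pg_class row by OID, letting the machinery decide whether
     * to use the index or fall back to a heapscan (e.g. during bootstrap). */
    static void
    examine_pg_class_row(Oid relid)
    {
        Relation    rel;
        SysScanDesc scan;
        ScanKeyData key;
        HeapTuple   tup;

        ScanKeyInit(&key,
                    Anum_pg_class_oid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(relid));

        rel = table_open(RelationRelationId, AccessShareLock);
        scan = systable_beginscan(rel, ClassOidIndexId,
                                  true,     /* indexOK */
                                  NULL,     /* use the catalog snapshot */
                                  1, &key);

        while ((tup = systable_getnext(scan)) != NULL)
        {
            Form_pg_class form = (Form_pg_class) GETSTRUCT(tup);

            (void) form;        /* ... inspect the row ... */
        }

        systable_endscan(scan);
        table_close(rel, AccessShareLock);
    }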
|
inner indexscan (ie, one with runtime keys). ExecIndexReScan must
compute or recompute runtime keys even if we are rescanning in the
EPQ case. TidScan seems to have comparable problems. Per bug
noted by Barry Lind 11-Feb-02.
|
index scan. Problem was that link to outer tuple wasn't being stored
everyplace it needed to be.
|
spacing. Also adds space for one-line comments.
|
tests pass.
|
per previous discussion on pghackers. Most of the duplicate code in
different AMs' ambuild routines has been moved out to a common routine
in index.c; this means that all index types now do the right things about
inserting recently-dead tuples, etc. (I also removed support for EXTEND
INDEX in the ambuild routines, since that's about to go away anyway, and
it cluttered the code a lot.) The retail indextuple deletion routines have
been replaced by a "bulk delete" routine in which the indexscan is inside
the access method. I haven't pushed this change as far as it should go yet,
but it should allow considerable simplification of the internal bookkeeping
for deletions. Also, add flag columns to pg_am to eliminate various
hardcoded tests on AM OIDs, and remove unused pg_am columns.
Fix rtree and gist index types to not attempt to store NULLs; before this,
gist usually crashed, while rtree managed not to crash but computed wacko
bounding boxes for NULL entries (which might have had something to do with
the performance problems we've heard about occasionally).
Add AtEOXact routines to hash, rtree, and gist, all of which have static
state that needs to be reset after an error. We discovered this need long
ago for btree, but missed the other guys.
Oh, one more thing: concurrent VACUUM is now the default.
|
it's hard to keep such massive changes in sync with the tree
so I need to get it in and work from there now).
Jan
|
Rather surprising we hadn't seen bug reports about this ...
|
allocated by plan nodes are not leaked at end of query. This doesn't
really matter for normal queries, but it sure does for queries invoked
repetitively inside SQL functions. Clean up some other grotty code
associated with tupdescs, and fix a few other memory leaks exposed by
tests with simple SQL functions.
|
expression evaluation.
|
for example, an SQL function can be used in a functional index. (I make
no promises about speed, but it'll work ;-).) Clean up and simplify
handling of functions returning sets.
|
right thing with variable-free clauses that contain noncachable functions,
such as 'WHERE random() < 0.5' --- these are evaluated once per
potential output tuple. Expressions that contain only Params are
now candidates to be indexscan quals --- for example, 'var = ($1 + 1)'
can now be indexed. Cope with RelabelType nodes atop potential indexscan
variables --- this oversight prevents 7.0.* from recognizing some
potentially indexscanable situations.
|
memory contexts. Currently, only leaks in expressions executed as
quals or projections are handled. Clean up some old dead cruft in
executor while at it --- unused fields in state nodes, that sort of thing.
|