path: root/src/backend/executor/nodeBitmapHeapscan.c
Commit log (most recent first)
...
* Change seqscan logic so that we check visibility of all tuples on a page when we first read the page, rather than checking them one at a time (Tom Lane, 2005-11-26)
  This allows us to take and release the buffer content lock just once per page, instead of once per tuple. Since it's a shared lock, the contention penalty for holding it longer shouldn't be too bad. We can safely do this only when using an MVCC snapshot; otherwise we cannot assume that visibility won't change over time. Therefore there are now two code paths depending on the snapshot type. I also made the same change in nodeBitmapHeapscan.c, where it can be done unconditionally because we only support MVCC snapshots for bitmap scans anyway. Also make some incidental cleanups in the APIs of these functions. Per a suggestion from Qingqing Zhou.
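  To make the page-at-a-time visibility idea concrete, here is a minimal, self-contained C sketch. It is not the real heapam or nodeBitmapHeapscan code: Scan, load_page, tuple_visible and the placeholder lock calls are invented stand-ins, and the toy MVCC test is far simpler than the real visibility rules. The point it illustrates is that the page lock is taken once while visibility is decided for every tuple, and the per-tuple fetch path then consults the cached answers without re-locking, which is only safe because an MVCC snapshot's verdict cannot change over time.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical, simplified stand-ins for a page of heap tuples and a
     * snapshot; the real PostgreSQL structures and locking calls differ. */
    typedef struct Tuple { int xmin; int xmax; } Tuple;
    typedef struct Page  { int ntuples; Tuple tuples[256]; } Page;
    typedef struct Snapshot { int xmax; } Snapshot;
    typedef struct Scan {
        Page *page;                 /* current page, already pinned */
        bool  visible[256];         /* per-page visibility map, filled once */
        int   next;                 /* next offset to return */
    } Scan;

    static void lock_page_shared(Page *p)   { (void) p; /* placeholder lock   */ }
    static void unlock_page(Page *p)        { (void) p; /* placeholder unlock */ }

    /* Toy MVCC test: a tuple is visible if it was inserted before the
     * snapshot and not yet deleted. */
    static bool tuple_visible(const Tuple *t, const Snapshot *s)
    {
        return t->xmin < s->xmax && t->xmax == 0;
    }

    /* Page-at-a-time scheme: take the content lock once, decide visibility
     * for every tuple on the page, then release the lock. */
    static void load_page(Scan *scan, Page *page, const Snapshot *snap)
    {
        scan->page = page;
        scan->next = 0;
        lock_page_shared(page);
        for (int i = 0; i < page->ntuples; i++)
            scan->visible[i] = tuple_visible(&page->tuples[i], snap);
        unlock_page(page);
    }

    /* Return the next visible tuple on the current page, or NULL when the
     * page is exhausted; no lock is taken here. */
    static Tuple *next_tuple(Scan *scan)
    {
        while (scan->next < scan->page->ntuples)
        {
            int off = scan->next++;
            if (scan->visible[off])
                return &scan->page->tuples[off];
        }
        return NULL;
    }

    int main(void)
    {
        Page page = { .ntuples = 3,
                      .tuples = { {1, 0}, {5, 0}, {2, 7} } };
        Snapshot snap = { .xmax = 4 };
        Scan scan;
        int seen = 0;

        load_page(&scan, &page, &snap);
        while (next_tuple(&scan) != NULL)
            seen++;
        return seen == 1 ? 0 : 1;   /* only the first toy tuple qualifies */
    }

  Running the sketch exits with 0 because only the first of the three toy tuples is visible under the toy snapshot; the other two fail the inserted-before / not-yet-deleted test.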
* Improve ExecStoreTuple to be smarter about replacing the contents of a TupleTableSlot (Tom Lane, 2005-11-25)
  Instead of calling ExecClearTuple, inline the needed operations so that we can avoid redundant steps. In particular, when the old and new tuples are both on the same disk page, avoid releasing and re-acquiring the buffer pin; this saves work in both the bufmgr and ResourceOwner modules. To make this improvement actually useful, partially revert a change I made on 2004-04-21 that caused SeqNext et al. to call ExecClearTuple before ExecStoreTuple. The motivation for that, avoiding grabbing the BufMgrLock separately for releasing the old buffer and grabbing the new one, no longer applies. My profiling says that this saves about 5% of the CPU time for an all-in-memory seqscan.
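  The pin-recycling idea can be shown with a small, self-contained sketch; PinnedBuffer, TupleSlot and slot_store_tuple below are made-up stand-ins, not the real executor API, which works on TupleTableSlot and Buffer. The slot keeps its pin whenever the incoming tuple lives in the same buffer as the previous one, so the unpin/re-pin round trip (and, in the real code, the associated ResourceOwner bookkeeping) is skipped.

    #include <stddef.h>

    /* Hypothetical, simplified stand-ins for a buffer pin count and a
     * tuple slot; the names are illustrative only. */
    typedef struct PinnedBuffer { int pin_count; } PinnedBuffer;
    typedef struct TupleSlot {
        const char   *tuple;   /* current tuple data, if any */
        PinnedBuffer *buffer;  /* buffer the tuple lives in, if any */
    } TupleSlot;

    static void pin_buffer(PinnedBuffer *b)   { if (b) b->pin_count++; }
    static void unpin_buffer(PinnedBuffer *b) { if (b) b->pin_count--; }

    /* Store a new tuple in the slot.  Rather than always dropping the old
     * pin and acquiring the new one, keep the pin when the incoming tuple
     * sits in the same buffer as the previous one. */
    static void slot_store_tuple(TupleSlot *slot, const char *tuple,
                                 PinnedBuffer *buffer)
    {
        if (slot->buffer != buffer)
        {
            /* Different page: swap the pins in the usual way. */
            if (buffer)
                pin_buffer(buffer);
            if (slot->buffer)
                unpin_buffer(slot->buffer);
            slot->buffer = buffer;
        }
        /* Same page: pin count is left untouched. */
        slot->tuple = tuple;
    }

    int main(void)
    {
        PinnedBuffer buf = { .pin_count = 0 };
        TupleSlot slot = { NULL, NULL };

        slot_store_tuple(&slot, "tuple A", &buf);   /* pins the buffer      */
        slot_store_tuple(&slot, "tuple B", &buf);   /* same page: no churn  */
        return buf.pin_count == 1 ? 0 : 1;
    }

  The second store leaves the pin count at 1 rather than cycling it through 0 and back, which is the saving the commit describes for consecutive tuples on one page.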
* Standard pgindent run for 8.1 (Bruce Momjian, 2005-10-15)
* Revise pgstats stuff to fix the problems with not counting accesses generated by bitmap index scans (Tom Lane, 2005-10-06)
  Along the way, simplify and speed up the code for counting sequential and index scans; it was both confusing and inefficient to be taking care of that in the per-tuple loops, IMHO. initdb forced because of internal changes in pg_stat view definitions.
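  The "count scans when the scan starts, not inside the per-tuple loop" point amounts to the trivial restructuring sketched below; the counters are plain integers rather than the real stats-collector messages, and the function names are invented.

    #include <stdio.h>

    /* Hypothetical per-table counters; the real pgstats machinery reports
     * through the statistics collector rather than plain globals. */
    static long scans_started;
    static long tuples_returned;

    /* The scan is counted exactly once, at startup, instead of testing a
     * "is this the first tuple?" condition on every row. */
    static void scan_start(void)
    {
        scans_started++;
    }

    static void scan_emit_tuple(void)
    {
        tuples_returned++;
    }

    int main(void)
    {
        scan_start();                      /* one scan ...            */
        for (int i = 0; i < 100; i++)
            scan_emit_tuple();             /* ... returning 100 tuples */

        printf("scans=%ld tuples=%ld\n", scans_started, tuples_returned);
        return 0;
    }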
* For some reason access/tupmacs.h has been #including utils/memutils.h, which is neither needed by nor related to that header (Tom Lane, 2005-05-06)
  Remove the bogus inclusion and instead include the header in those C files that actually need it. Also fix unnecessary inclusions and bad inclusion order in tsearch2 files.
* Create executor and planner-backend support for decoupled heap and index scans, using in-memory tuple ID bitmaps as the intermediary (Tom Lane, 2005-04-19)
  The planner frontend (path creation and cost estimation) is not there yet, so none of this code can be executed. I have tested it using some hacked planner code that is far too ugly to see the light of day, however. Committing now so that the bulk of the infrastructure changes go in before the tree drifts under me.
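  As an illustration of the decoupled design, here is a self-contained toy: the TIDBitmap below is just a fixed-size boolean array, not the real tidbitmap.c structure, and the block/offset limits and names are arbitrary. The index phase marks qualifying tuple IDs without touching the heap, and the heap phase then walks the bitmap in block order, so each heap page is fetched only once even when several matches fall on it.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS           8
    #define TUPLES_PER_BLOCK  4

    /* Toy in-memory bitmap of tuple IDs (block number, offset). */
    typedef struct TIDBitmap {
        bool marked[NBLOCKS][TUPLES_PER_BLOCK];
    } TIDBitmap;

    static void tbm_add(TIDBitmap *tbm, int block, int offset)
    {
        tbm->marked[block][offset] = true;
    }

    /* Phase 1: the index scan decides which tuple IDs qualify and records
     * them in the bitmap, without touching the heap at all. */
    static void index_phase(TIDBitmap *tbm)
    {
        tbm_add(tbm, 5, 1);   /* pretend these TIDs matched the index quals */
        tbm_add(tbm, 2, 3);
        tbm_add(tbm, 5, 0);
    }

    /* Phase 2: the heap scan walks the bitmap in block order, so each heap
     * page is read once even if several matching tuples live on it. */
    static void heap_phase(const TIDBitmap *tbm)
    {
        for (int block = 0; block < NBLOCKS; block++)
        {
            bool page_fetched = false;
            for (int off = 0; off < TUPLES_PER_BLOCK; off++)
            {
                if (!tbm->marked[block][off])
                    continue;
                if (!page_fetched)
                {
                    printf("fetch heap page %d\n", block);
                    page_fetched = true;
                }
                printf("  return tuple (%d,%d)\n", block, off);
            }
        }
    }

    int main(void)
    {
        TIDBitmap tbm;
        memset(&tbm, 0, sizeof(tbm));

        index_phase(&tbm);
        heap_phase(&tbm);   /* blocks 2 and 5 are each fetched once */
        return 0;
    }

  In the toy run, two TIDs on block 5 produce a single "fetch heap page 5" line; visiting pages in ascending block order is also what lets the real bitmap heap scan approximate sequential I/O.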