| Commit message | Author | Age |
| |
198.68.123.0/27 the same when indexing them.
D'Arcy
| |
{
    Oid         relId;
    Oid         dbId;
    union
    {
        BlockNumber blkno;
        TransactionId xid;
    } objId;
>
> Added:
>   /*
>    * offnum should be part of objId.tupleId above, but would increase
>    * sizeof(LOCKTAG) and so moved here; currently used by userlocks only.
>    */
>   OffsetNumber offnum;
    uint16      lockmethod;     /* needed by userlocks */
} LOCKTAG;
gmake clean required...
User locks are ready for 6.5 release...
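As a rough sketch of how the new fields fit together (hypothetical illustration only, not the actual user-lock code; relid, blkno, offnum and USER_LOCKMETHOD are assumed names here), a tag for a user lock might be filled in like this:

    /* Hypothetical sketch: building a LOCKTAG for a user lock.
     * MyDatabaseId is the backend's database OID; relid/blkno/offnum
     * stand in for whatever identifiers the user lock encodes. */
    LOCKTAG     tag;

    memset(&tag, 0, sizeof(LOCKTAG));   /* clear padding so hash/compare see identical bytes */
    tag.relId = relid;
    tag.dbId = MyDatabaseId;
    tag.objId.blkno = blkno;
    tag.offnum = offnum;                /* the new field, used by userlocks only */
    tag.lockmethod = USER_LOCKMETHOD;   /* assumed name for the user-lock method id */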
| |
Vince.
| |
later.
Vince.
| |
all fields that should be set). Add a MoveToFront primitive to speed up
one of the hotspots in SearchSysCache.
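The move-to-front idea itself is generic; here is a minimal sketch over a plain singly linked list (illustration only, not the actual cache code - Node and move_to_front are made-up names):

    /* After a successful lookup, unlink the hit element and re-insert it
     * at the head of its chain, so frequently used entries stay near the
     * front and are found quickly on the next search. */
    typedef struct Node
    {
        struct Node *next;
        int          key;
    } Node;

    static void
    move_to_front(Node **head, Node *prev, Node *hit)
    {
        if (prev == NULL)
            return;                 /* hit is already at the front */
        prev->next = hit->next;     /* unlink from its current position */
        hit->next = *head;          /* relink at the head of the chain */
        *head = hit;
    }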
| |
memory context at transaction commit or abort.
| |
in an index doesn't have a restriction selectivity estimator.
| |
right circumstances it would leave old and new bucket headers pointing to
the same list of records.
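The shape of the fix is easier to see in a generic bucket split (illustration only, not the actual hash table code - Element, split_bucket and highmask are made-up names): every element of the old chain must be re-linked individually, so the two bucket headers can never end up sharing a chain.

    /* Re-link each element of the old chain onto exactly one of the two
     * buckets, chosen by its hash value; neither header is ever set to
     * point at the other's list. */
    typedef struct Element
    {
        struct Element *next;
        unsigned        hashval;
    } Element;

    static void
    split_bucket(Element **oldBucket, Element **newBucket, unsigned highmask)
    {
        Element    *e = *oldBucket;

        *oldBucket = NULL;
        *newBucket = NULL;
        while (e != NULL)
        {
            Element    *nextElem = e->next;
            Element   **target = (e->hashval & highmask) ? newBucket : oldBucket;

            e->next = *target;
            *target = e;
            e = nextElem;
        }
    }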
| |
(no sense to hold it) or we'll be out of lock entries.
Great thanks to Hiroshi Inoue.
| |
(Curious that gcc doesn't complain about this code...).
| |
is poor coding style. I agree.
| |
file headers, to conform to established Postgres coding style and avoid
warnings from gcc.
| |
after checking for presence of C++ compiler. Odd we hadn't seen any
reports of problems before...
| |
when used with egcs --- now it does.
| |
2. Get rid of locking when updating statistics in vacuum.
3. Use QuerySnapshot in COPY TO and call SetQuerySnapshot
in main tcop loop before FETCH and COPY TO.
| |
few percent speedup in INSERT...
| |
redundant) SearchSysCache searches per table column in an INSERT, which
accounted for a good percentage of the CPU time for INSERT ... VALUES().
Now it only does two searches in the typical case.
| |
cache access routines.
| |
through MAXBACKENDS array entries used to be fine when MAXBACKENDS = 64.
It's not so cool with MAXBACKENDS = 1024 (or more!), especially not in a
frequently-used routine like SIDelExpiredDataEntries. Repair by making
procState array size be the soft MaxBackends limit rather than the hard
limit, and by converting SIGetProcStateLimit() to a macro.
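A minimal sketch of the function-to-macro part (the struct and field names below are assumptions, not the real shared-inval layout): the hot loop ends up reading a struct field directly instead of paying for a call.

    /* Hypothetical sketch only; SISegHdr and numProcStates stand in for
     * whatever the real shared-inval segment uses. */
    typedef struct SISegHdr
    {
        int         numProcStates;  /* sized by the soft MaxBackends limit */
    } SISegHdr;

    /* before: a real function called once per loop iteration
     *   int SIGetProcStateLimit(SISegHdr *segP) { return segP->numProcStates; }
     * after: a macro, so the loop just loads the field */
    #define SIGetProcStateLimit(segP)  ((segP)->numProcStates)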
| |
do the right thing: look for a NOTICE message from the backend before we
close our side of the socket. 6.4 libpq did not reliably print the backend's
hara-kiri message, 'The Postmaster has informed me ...', because it only
did so if connection closure was detected during a read attempt rather than
during a write attempt.
| |
new -x option to skip acl dump.
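Typical invocation just adds the switch (mydb is a placeholder database name):

    pg_dump -x mydb > mydb.sql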
| |
but the Makefile does break non-g++ compilers.
<<mak.patch>>
Andreas
| |
before any tuples are loaded, preserve the default '1000 tuples' table
size estimate.
| |
the backend does. Remove unnecessary limitation on field size in
dumpClasses_dumpData (i.e., the -d or -D case).
| |
inserts. Change some variables to bool to be clearer.
| |
BT_READ/BT_WRITE are BUFFER_LOCK_SHARE/BUFFER_LOCK_EXCLUSIVE now.
Also get rid of #define BT_VERSION_1 - we have used version 1 as the default
for nearly two years now.
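If I read the header correctly, the mapping itself reduces to a pair of defines in the btree header:

    /* btree access modes are now plain aliases for the buffer lock modes */
    #define BT_READ   BUFFER_LOCK_SHARE
    #define BT_WRITE  BUFFER_LOCK_EXCLUSIVE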
| |
LockBuffer is used to acquire read/write access
to index pages. Pages are released before leaving
index internals.
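A typical read access under this convention looks roughly like the following (a sketch of the calling pattern, not a copy of the index code; rel and blkno are assumed to be in scope, error handling omitted):

    Buffer      buf = ReadBuffer(rel, blkno);   /* pin the page */

    LockBuffer(buf, BT_READ);                   /* share-lock while examining it */
    /* ... look at items on the page ... */
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);        /* release the lock ... */
    ReleaseBuffer(buf);                         /* ... and the pin, before leaving index code */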
| |
not be marked inFromCl any longer. Otherwise the planner gets confused
and joins over them where in fact it does not have to.
Adjust hasSubLinks now with a recursive lookup - it could be wrong in
multi-action rules because the parse state isn't reset correctly and all
actions in the rule get marked hasSubLinks if one of them has a sublink.
Jan
| |
Jan
| |
GROUP BY or ORDER BY expressions in INSERT ... SELECT.
| |
aggregate functions, as in
select a, b from foo group by a;
The ungrouped reference to b is not kosher, but formerly we neglected to
check this unless there was an aggregate function somewhere in the query.
|