no held locks. This maintains the invariant that proclocks are present
only for procs that are holding or awaiting a lock; when this is not
true, LockRelease will fail. Per report from Stephen Clouse.
and FreeDir routines modeled on the existing AllocateFile/FreeFile.
Like the latter, these routines will avoid failing on EMFILE/ENFILE
conditions whenever possible, and will prevent leakage of directory
descriptors if an elog() occurs while one is open.
Also, reduce PANIC to ERROR in MoveOfflineLogs() --- this is not
critical code and there is no reason to force a DB restart on failure.
All per recent trouble report from Olivier Hubaut.
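
The AllocateDir/FreeDir pair described above are backend fd.c wrappers; a
minimal sketch of how a caller might use them (scan_wal_directory is a
hypothetical function, and the error handling is illustrative, not the
committed code):

    #include "postgres.h"
    #include "storage/fd.h"
    #include <dirent.h>

    static void
    scan_wal_directory(const char *path)
    {
        DIR        *dir;
        struct dirent *de;

        /* retries after closing cached files on EMFILE/ENFILE, per above */
        dir = AllocateDir(path);
        if (dir == NULL)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not open directory \"%s\": %m", path)));

        while ((de = readdir(dir)) != NULL)
        {
            /* ... examine de->d_name ... */
        }

        /* also released automatically if an elog(ERROR) escapes the loop */
        FreeDir(dir);
    }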
number of openable files and the number already opened. This eliminates
depending on sysconf(_SC_OPEN_MAX), and allows much saner behavior on
platforms where open-file slots are used up by semaphores.
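
A standalone sketch of the counting idea (not PostgreSQL's actual probe
code): rather than trusting sysconf(_SC_OPEN_MAX), simply open descriptors
until the kernel refuses, then release them:

    #include <stdio.h>
    #include <unistd.h>

    #define PROBE_LIMIT 1024            /* arbitrary cap so we don't probe forever */

    int
    main(void)
    {
        int fds[PROBE_LIMIT];
        int n = 0;

        while (n < PROBE_LIMIT)
        {
            int fd = dup(0);            /* duplicate stdin just to consume a slot */

            if (fd < 0)
                break;                  /* EMFILE/ENFILE: this is the practical limit */
            fds[n++] = fd;
        }

        for (int i = 0; i < n; i++)
            close(fds[i]);

        printf("roughly %d more file descriptors were openable\n", n);
        return 0;
    }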
since there is no need to worry about damaged pages when we are going to
overwrite them anyway from the WAL. Per recent discussion.
Ward's report that it can still happen in RC2 forces me to realize that
this is not a can't-happen condition after all, and that the compaction
code had better cope rather than panicking.
Neil Conway.
FSM data has to be both moved down and compressed. Per report from
Dror Matalon.
memory say 'out of shared memory'; some were doing that and some just
said 'out of memory'. Also add a HINT about increasing max_locks_per_transaction
where relevant, per suggestion from Sean Chittenden. (The former change
does not break the strings freeze; the latter does, but I think it's
worth doing anyway.)
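
The resulting report pattern presumably looks roughly like the sketch
below (assuming backend headers; "proclock" stands in for whatever
shared-memory allocation just failed, and the hint wording is paraphrased
from the message rather than quoted from the source):

    if (proclock == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of shared memory"),
                 errhint("You may need to increase max_locks_per_transaction.")));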
when -fstrict-aliasing is turned on, as it is in the latest gcc when you
use -O2.
Andrew Dunstan
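
The commit text does not show the offending code, so here is a generic,
self-contained illustration of the kind of type punning that breaks under
gcc's -fstrict-aliasing, plus the memcpy idiom that stays well-defined:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Undefined behaviour with strict aliasing: reading a float through
     * an incompatible pointer type; gcc -O2 may reorder or drop this. */
    static uint32_t
    bits_bad(float f)
    {
        return *(uint32_t *) &f;
    }

    /* Well-defined alternative: copy the object representation. */
    static uint32_t
    bits_ok(float f)
    {
        uint32_t u;

        memcpy(&u, &f, sizeof u);
        return u;
    }

    int
    main(void)
    {
        printf("%08" PRIx32 " %08" PRIx32 "\n", bits_bad(1.0f), bits_ok(1.0f));
        return 0;
    }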
discussion on pgsql-hackers: in READ COMMITTED mode we just have to force
a QuerySnapshot update in the trigger, but in SERIALIZABLE mode we have
to run the scan under a current snapshot and then complain if any rows
would be updated/deleted that are not visible in the transaction snapshot.
terms, add some clarifications, fix some untranslatable attempts at dynamic
message building.
now able to cope with assigning new relfilenode values to nailed-in-cache
indexes, so they can be reindexed using the fully crash-safe method. This
leaves only shared system indexes as special cases. Remove the 'index
deactivation' code, since it provides no useful protection in the shared-
index case. Require reindexing of shared indexes to be done in standalone
mode, but remove other restrictions on REINDEX. -P (IgnoreSystemIndexes)
now prevents using indexes for lookups, but does not disable index updates.
It is therefore safe to allow from PGOPTIONS. Upshot: reindexing system catalogs
can be done without a standalone backend for all cases except
shared catalogs.
not just MAXALIGN boundaries. This makes a noticeable difference in
the speed of transfers to and from kernel space, at least on recent
Pentiums, and might help other CPUs too. We should look at making
this happen for local buffers and buffile.c too. Patch from Manfred Spraul.
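
A generic sketch of the alignment trick (the 32-byte boundary and the
helper name are illustrative; the real patch aligns the shared buffer
pool, which is allocated once and never freed, so losing track of the raw
pointer does not matter there):

    #include <stdint.h>
    #include <stdlib.h>

    #define BUFFER_ALIGN 32                 /* stronger than MAXALIGN */

    static void *
    alloc_buffer_aligned(size_t size)
    {
        char       *raw = malloc(size + BUFFER_ALIGN);
        uintptr_t   p;

        if (raw == NULL)
            return NULL;
        /* round the start address up to the next BUFFER_ALIGN boundary */
        p = ((uintptr_t) raw + BUFFER_ALIGN - 1) & ~((uintptr_t) (BUFFER_ALIGN - 1));
        return (void *) p;
    }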
pghackers. This fixes the problem recently reported by Markus Kräutner
(hash bucket split corrupts the state of scans being done concurrently),
and I believe it also fixes all the known problems with deadlocks in
hash index operations. Hash indexes are still not really ready for prime
time (since they aren't WAL-logged), but this is a step forward.
No change in behavior, but old code would have failed to detect
overrun of MAX_LOCKMODES.
index pages: when _bt_getbuf asks the FSM for a free index page, it is
possible (and, in some cases, even moderately likely) that the answer
will be the same page that _bt_split is trying to split. _bt_getbuf
already knew that the returned page might not be free, but it wasn't
prepared for the possibility that even trying to lock the page could
be problematic. Fix by doing a conditional rather than unconditional
grab of the page lock.
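
A sketch of the conditional-acquire pattern, assuming the bufmgr
ConditionalLockBuffer primitive; try_recycle_page is a hypothetical
helper, not the literal _bt_getbuf code:

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "utils/rel.h"

    static Buffer
    try_recycle_page(Relation rel, BlockNumber blkno)
    {
        Buffer      buf = ReadBuffer(rel, blkno);   /* page suggested by the FSM */

        if (ConditionalLockBuffer(buf))
            return buf;     /* locked; caller still verifies the page is really free */

        /*
         * Could not get the lock without waiting -- it might even be the page
         * we are in the middle of splitting, so skip this candidate rather
         * than risk deadlocking on it.
         */
        ReleaseBuffer(buf);
        return InvalidBuffer;
    }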
spinlock. Per recent pghackers discussion.
free'd for every transaction or statement, respectively. This patch
puts these data structures into static memory, thus saving a few CPU
cycles and two malloc calls per transaction or (in isolation level
READ COMMITTED) per query.
Manfred Koizar
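
A generic illustration of the idea (made-up types, not the PostgreSQL
structures): one static instance that is reset per transaction replaces a
malloc/free pair per transaction:

    #include <string.h>

    typedef struct TxnState
    {
        int     nestingLevel;
        char    status;
    } TxnState;

    static TxnState CurrentTxn;     /* lives for the whole backend process */

    static void
    start_transaction(void)
    {
        /* reinitialize in place -- no malloc, no free at commit/abort */
        memset(&CurrentTxn, 0, sizeof(CurrentTxn));
        CurrentTxn.status = 'I';    /* in progress */
    }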
least-recently-used strategy from clog.c into slru.c. It doesn't
change any visible behaviour and passes all regression tests plus a
TruncateCLOG test done manually.
Apart from refactoring I made a little change to SlruRecentlyUsed,
formerly ClogRecentlyUsed: it now skips incrementing lru_counts if
slotno is already the LRU slot, thus saving a few CPU cycles. To make
this work, lru_counts are initialised to 1 in SimpleLruInit.
SimpleLru will be used by pg_subtrans (part of the nested transactions
project), so the main purpose of this patch is to avoid future code
duplication.
Manfred Koizar
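
A sketch of the counter scheme the message describes (slot and field
names are placeholders, not the slru.c definitions): every buffer slot
carries a counter, a larger value means "touched longer ago", and the
victim for replacement is the slot with the largest count. The saving is
simply to do nothing when the touched slot is already the most recently
used one:

    #define NUM_SLOTS 8

    /* initialised to 1 at startup, as the message notes */
    static int  lru_count[NUM_SLOTS];

    static void
    slot_recently_used(int slotno)
    {
        if (lru_count[slotno] == 0)
            return;                 /* already the most recently used slot */

        for (int i = 0; i < NUM_SLOTS; i++)
            lru_count[i]++;
        lru_count[slotno] = 0;
    }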
docs that CLIENT/LOG_MIN_MESSAGES now controls debug_* output location.
Doc changes included.
Win32 port is now called 'win32' rather than 'win'
add -lwsock32 on Win32
make gethostname() be only used when kerberos4 is enabled
use /port/getopt.c
new /port/opendir.c routines
disable GUC unix_socket_group on Win32
convert some keywords.c symbols to KEYWORD_P to prevent conflict
create new FCNTL_NONBLOCK macro to turn off socket blocking
create new /include/port.h file that has /port prototypes, move
out of c.h
new /include/port/win32_include dir to hold missing include files
work around ERROR being defined in Win32 includes
detected during buffer dump to be labeled with the buffer location.
For example, if a page LSN is clobbered, we now produce something like
ERROR: XLogFlush: request 2C000000/8468EC8 is not satisfied --- flushed only
to 0/8468EF0
CONTEXT: writing block 0 of relation 428946/566240
whereas before there was no convenient way to find out which page had
been trashed.
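
The CONTEXT line comes from the backend's error-context-callback hook; a
sketch of the pattern (ErrorContextCallback, error_context_stack and
errcontext() are real elog facilities, while WriteInfo and the two
functions are placeholders for illustration):

    #include "postgres.h"

    typedef struct WriteInfo
    {
        unsigned int dbid;
        unsigned int relfilenode;
        unsigned int blockno;
    } WriteInfo;

    static void
    write_error_callback(void *arg)
    {
        WriteInfo  *w = (WriteInfo *) arg;

        errcontext("writing block %u of relation %u/%u",
                   w->blockno, w->dbid, w->relfilenode);
    }

    static void
    flush_one_buffer(WriteInfo *w)
    {
        ErrorContextCallback errcallback;

        /* push our callback for the duration of the risky operation */
        errcallback.callback = write_error_callback;
        errcallback.arg = (void *) w;
        errcallback.previous = error_context_stack;
        error_context_stack = &errcallback;

        /* ... write the page; any elog/ereport here gets the CONTEXT line ... */

        error_context_stack = errcallback.previous;
    }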
fork/exec.
context sloppiness, some other things. Includes Neil's mopup patch
of 22-Apr.
maintain a separate out-of-line version of PPC tas() anymore.
Also fix S_UNLOCK for __powerpc64__ platforms.
harmless on signed-char machines but would lead to core dump in the
deadlock detection code if char is unsigned. Amazingly, this bug has
been here since 7.1 and yet wasn't reported till now. Thanks to Robert
Bruccoleri for providing the opportunity to track it down.
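
A standalone illustration of the pitfall (not the deadlock-checker code
itself): whether plain char is signed is implementation-defined, so a
small negative sentinel stored in a char compares differently across
platforms:

    #include <stdio.h>

    int
    main(void)
    {
        char c = -1;                /* e.g. a "not found" sentinel */

        if (c < 0)
            printf("char is signed here: sentinel detected\n");
        else
            printf("char is unsigned here: c == %d, sentinel missed\n", (int) c);
        return 0;
    }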
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
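
A sketch of the check described, assuming the backend's zero_damaged_pages
GUC and page-validation routine (page, blockNum and reln are taken to be
in scope; this is not the literal bufmgr.c code):

    if (!PageHeaderIsValid((PageHeader) page))
    {
        if (zero_damaged_pages)
        {
            ereport(WARNING,
                    (errcode(ERRCODE_DATA_CORRUPTED),
                     errmsg("invalid page header in block %u of relation \"%s\"; zeroing out page",
                            blockNum, RelationGetRelationName(reln))));
            MemSet(page, 0, BLCKSZ);
        }
        else
            ereport(ERROR,
                    (errcode(ERRCODE_DATA_CORRUPTED),
                     errmsg("invalid page header in block %u of relation \"%s\"",
                            blockNum, RelationGetRelationName(reln))));
    }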
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code with the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
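
On the first point above: the commit message does not name the new
tuplestore_begin_heap() flag; in later PostgreSQL sources the signature is
(randomAccess, interXact, maxKBytes), so a call presumably ends up looking
roughly like this (illustrative only):

    Tuplestorestate *store;

    store = tuplestore_begin_heap(true,     /* randomAccess: cursor may scroll */
                                  true,     /* interXact: keep temp files past commit */
                                  1024);    /* memory budget in KB before spilling */

    /* ... tuplestore_puttuple() the executor output, read it back later ... */

    tuplestore_end(store);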
at database shutdown, and then load it again at database startup. This
preserves our hard-won knowledge of free space across restarts (given
an orderly shutdown, that is).
Adjustable threshold is gone in favor of keeping track of total requested
page storage and doling out proportional fractions to each relation
(with a minimum amount per relation, and some quantization of the results
to avoid thrashing with small changes in page counts). Provide special-
case code for indexes so as not to waste space storing useless page
free space counts. Restructure internal data storage to be a flat array
instead of list-of-chunks; this may cost a little more work in data
copying when reorganizing, but allows binary search to be used during
lookup_fsm_page_entry().
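
A standalone sketch of the lookup this enables (FsmPageEntry and its
fields are made up for illustration; the real structures live in
freespace.c): with one flat array of per-page entries kept sorted by
block number, lookup_fsm_page_entry() can be a plain binary search:

    typedef struct FsmPageEntry
    {
        unsigned int page;          /* block number, kept in ascending order */
        unsigned int avail;         /* free space recorded for that block */
    } FsmPageEntry;

    static int
    lookup_fsm_page_entry(const FsmPageEntry *entries, int n, unsigned int page)
    {
        int low = 0;
        int high = n - 1;

        while (low <= high)
        {
            int mid = low + (high - low) / 2;

            if (entries[mid].page == page)
                return mid;
            if (entries[mid].page < page)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return -1;                  /* no entry for this block */
    }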
older than current Xmin; we don't have to wait till it's older than
GlobalXmin.
to fix, but it seems to basically work...
RelOid_pg_class, and transaction locks XactLockTableId. RelId is renamed
to objId.
- LockObject() and UnlockObject() functions created, and their use
sprinkled throughout the code to do descent locking for domains and
types. They accept lock modes AccessShare and AccessExclusive, as we
only really need a 'read' and 'write' lock at the moment. In most cases
the locks are held until the end of the transaction.
This fixes the cases Tom mentioned earlier with regard to locking of
domains. If the patch is good, I'll work on cleaning up issues with
other database objects that have this problem (most of them).
Rod Taylor
| |
consistency.