path: root/src/backend/storage
* Fix deadlock with LWLockAcquireWithVar and LWLockWaitForVar. (Heikki Linnakangas, 2014-10-14)
  LWLockRelease should release all backends waiting with LWLockWaitForVar, even when another backend has already been woken up to acquire the lock, i.e. when releaseOK is false. LWLockWaitForVar can return as soon as the protected value changes, even if the other backend will acquire the lock. Fix that by resetting releaseOK to true in LWLockWaitForVar, whenever adding itself to the wait queue.

  This should fix the bug reported by MauMau, where the system occasionally hangs when there is a lot of concurrent WAL activity and a checkpoint. Backpatch to 9.4, where this code was added.

* Extend shm_mq API with new functions shm_mq_sendv, shm_mq_set_handle. (Robert Haas, 2014-10-08)
  shm_mq_sendv sends a message to the queue assembled from multiple locations. This is expected to be used by forthcoming patches to allow frontend/backend protocol messages to be sent via shm_mq, but it might be useful for other purposes as well.

  shm_mq_set_handle associates a BackgroundWorkerHandle with an already-existing shm_mq_handle. This solves a timing problem when creating a shm_mq to communicate with a newly-launched background worker: if you attach to the queue first, and the background worker fails to start, you might block forever trying to do I/O on the queue; but if you start the background worker first and then die before attaching to the queue, the background worker might block forever trying to do I/O on the queue. This lets you attach before starting the worker (so that the worker is protected) and then associate the BackgroundWorkerHandle later (so that you are also protected).

  Patch by me, reviewed by Stephen Frost.

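  As a rough illustration of that attach-first pattern, here is a minimal sketch in backend C. The queue, segment, and BackgroundWorker definition are assumed to exist already, and the exact signatures should be checked against shm_mq.h and bgworker.h for the branch in use.

    #include "postgres.h"
    #include "postmaster/bgworker.h"
    #include "storage/dsm.h"
    #include "storage/shm_mq.h"

    /* Hypothetical helper: send one message assembled from two buffers. */
    static void
    send_two_part_message(shm_mq *mq, dsm_segment *seg, BackgroundWorker *worker,
                          const char *header, Size header_len,
                          const char *payload, Size payload_len)
    {
        shm_mq_handle *mqh;
        BackgroundWorkerHandle *wh;
        shm_mq_iovec iov[2];

        /* Attach before launching the worker, so the worker is protected. */
        mqh = shm_mq_attach(mq, seg, NULL);

        if (!RegisterDynamicBackgroundWorker(worker, &wh))
            ereport(ERROR, (errmsg("could not register background worker")));

        /* Associate the handle, so queue I/O errors out if the worker dies. */
        shm_mq_set_handle(mqh, wh);

        /* shm_mq_sendv assembles a single logical message from both chunks. */
        iov[0].data = header;
        iov[0].len = header_len;
        iov[1].data = payload;
        iov[1].len = payload_len;
        (void) shm_mq_sendv(mqh, iov, 2, false);
    }
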
* Increase the number of buffer mapping partitions to 128. (Robert Haas, 2014-10-02)
  Testing by Amit Kapila, Andres Freund, and myself, with and without other patches that also aim to improve scalability, seems to indicate that this change is a significant win over the current value and over smaller values such as 64. It's not clear how high we can push this value before it starts to have negative side-effects elsewhere, but going this far looks OK.

* Add a basic atomic ops API abstracting away platform/architecture details. (Andres Freund, 2014-09-25)
  Several upcoming performance/scalability improvements require atomic operations. This new API avoids the need to splatter compiler- and architecture-dependent code over all the locations employing atomic ops.

  For several of the potential usages it'd be problematic to maintain both an atomics-using implementation and one using spinlocks or similar. In all likelihood one of the implementations would not get tested regularly under concurrency. To avoid that scenario the new API provides an automatic fallback of atomic operations to spinlocks. All properties of atomic operations are maintained. This fallback - obviously - isn't as fast as just using atomic ops, but it's not bad either. For one of the future users the atomics-on-top-of-spinlocks implementation was actually slightly faster than the old purely spinlock-using implementation. That's important because it reduces the fear of regressing older platforms when improving the scalability for new ones.

  The API, loosely modeled after the C11 atomics support, currently provides 'atomic flags' and 32 bit unsigned integers. If the platform efficiently supports atomic 64 bit unsigned integers those are also provided.

  To implement atomics support for a platform/architecture/compiler, 32 bit compare-and-exchange needs to be implemented. If available and more efficient, native support for flags, 32 bit atomic addition, and corresponding 64 bit operations may also be provided. Additional useful atomic operations are implemented generically on top of these.

  The implementations for various versions of gcc, msvc and sun studio have been tested. Additional existing stub implementations for
  * Intel icc
  * HP-UX aCC
  * IBM xlc
  are included but have never been tested. These will likely require fixes based on buildfarm and user feedback.

  As atomic operations also require barriers for some operations, the existing barrier support has been moved into the atomics code.

  Author: Andres Freund with contributions from Oskari Saarenmaa
  Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
  Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com, 20131015123303.GH5300@awork2.anarazel.de, 20131028205522.GI20248@awork2.anarazel.de

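  A minimal sketch of what using the new API looks like, assuming the pg_atomic_* names from port/atomics.h; the SharedCounter struct and its placement in shared memory are invented for illustration.

    #include "postgres.h"
    #include "port/atomics.h"

    typedef struct SharedCounter
    {
        pg_atomic_flag   busy;   /* an 'atomic flag' */
        pg_atomic_uint32 count;  /* a 32 bit unsigned integer */
    } SharedCounter;

    static void
    counter_init(SharedCounter *sc)
    {
        pg_atomic_init_flag(&sc->busy);
        pg_atomic_init_u32(&sc->count, 0);
    }

    static uint32
    counter_bump(SharedCounter *sc)
    {
        /* Uses native atomics, or the spinlock fallback on old platforms. */
        return pg_atomic_fetch_add_u32(&sc->count, 1);
    }

    static bool
    counter_try_claim(SharedCounter *sc)
    {
        /* true if we set the flag, false if another backend already holds it */
        return pg_atomic_test_set_flag(&sc->busy);
    }
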
* Change locking regimen around buffer replacement. (Robert Haas, 2014-09-25)
  Previously, we used an lwlock that was held from the time we began seeking a candidate buffer until the time when we found and pinned one, which is disastrous for concurrency. Instead, use a spinlock which is held just long enough to pop the freelist or advance the clock sweep hand, and then released. If we need to advance the clock sweep further, we reacquire the spinlock once per buffer.

  This represents a significant increase in atomic operations around buffer eviction, but it still wins on many workloads. On others, it may result in no gain, or even cause a regression, unless the number of buffer mapping locks is also increased. However, that seems like material for a separate commit. We may also need to consider other methods of mitigating contention on this spinlock, such as splitting it into multiple locks or jumping the clock sweep hand more than one buffer at a time, but those, too, seem like separate improvements.

  Patch by me, inspired by a much larger patch from Amit Kapila. Reviewed by Andres Freund.

* Remove volatile qualifiers from lwlock.c. (Robert Haas, 2014-09-22)
  Now that spinlocks (hopefully!) act as compiler barriers, as of commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, this should be safe. This serves as a demonstration of the new coding style, and may be optimized better on some machines as well.

* Add missing volatile qualifier. (Robert Haas, 2014-09-11)
  Yet another silly mistake in 0709b7ee72e4bc71ad07b7120acd117265ab51d0, again found by buildfarm member castoroides.

* Change the spinlock primitives to function as compiler barriers. (Robert Haas, 2014-09-09)
  Previously, they functioned as barriers against CPU reordering but not compiler reordering, an odd API that required extensive use of volatile everywhere that spinlocks are used. That's error-prone and has negative implications for performance, so change it.

  In theory, this makes it safe to remove many of the uses of volatile that we currently have in our code base, but we may find that there are some bugs in this effort when we do. In the long run, though, this should make for much more maintainable code.

  Patch by me. Review by Andres Freund.

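  The practical effect on calling code, sketched with a made-up shared structure (this shows the idiom, not any real backend struct):

    #include "postgres.h"
    #include "storage/spin.h"

    typedef struct SharedThing
    {
        slock_t     mutex;
        int         value;
    } SharedThing;

    /* Old idiom: access shared fields through a volatile-qualified pointer. */
    static int
    get_value_old_style(SharedThing *thing)
    {
        volatile SharedThing *vthing = thing;
        int         result;

        SpinLockAcquire(&vthing->mutex);
        result = vthing->value;
        SpinLockRelease(&vthing->mutex);
        return result;
    }

    /* New idiom: the spinlock itself is a compiler barrier, no volatile needed. */
    static int
    get_value_new_style(SharedThing *thing)
    {
        int         result;

        SpinLockAcquire(&thing->mutex);
        result = thing->value;
        SpinLockRelease(&thing->mutex);
        return result;
    }
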
* Assorted message fixes and improvements (Peter Eisentraut, 2014-09-05)

* Declare lwlock.c's LWLockAcquireCommon() as a static inline. (Andres Freund, 2014-09-01)
  Commit 68a2e52bbaf98f136 introduced LWLockAcquireCommon(), containing the previous contents of LWLockAcquire() plus added functionality. The latter then calls it, just like LWLockAcquireWithVar(). Because the majority of callers don't need the added functionality, declare the common code as inline. The compiler can then optimize away the unused code. Doing so is also useful when looking at profiles, to differentiate the users.

  Backpatch to 9.4, the first branch to contain LWLockAcquireCommon().

* Protect definition of SpinlockSemaArray, just like its declaration. (Andres Freund, 2014-09-01)
  Found via clang's -Wmissing-variable-declarations.

* Make backend local tracking of buffer pins memory efficient. (Andres Freund, 2014-08-30)
  Since the dawn of time (aka Postgres95) multiple pins of the same buffer by one backend have been optimized not to modify the shared refcount more than once. This optimization has always used an NBuffers-sized array in each backend keeping track of a backend's pins.

  That array (PrivateRefCount) was one of the biggest per-backend memory allocations, depending on the shared_buffers setting. Besides the waste of memory, it has also proven to be a performance bottleneck when assertions are enabled, as we make sure that there are no remaining pins left at the end of transactions. Also, on servers with lots of memory and a correspondingly high shared_buffers setting the amount of random memory accesses can lead to poor cpu cache efficiency.

  Because of these reasons a backend's buffer pins are now kept track of in a small, statically sized array that overflows into a hash table when necessary. Benchmarks have shown neutral to positive performance results with considerably lower memory usage.

  Patch by me, review by Robert Haas.

  Discussion: 20140321182231.GA17111@alap3.anarazel.de

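  An illustrative, heavily simplified sketch of the array-overflowing-into-hash-table scheme; the names, the array size, and the lookup logic here are invented, and the real code lives in bufmgr.c.

    #include "postgres.h"
    #include "storage/buf.h"
    #include "utils/hsearch.h"

    #define LOCAL_PIN_ARRAY_SIZE 8

    typedef struct LocalPinEntry
    {
        Buffer      buffer;      /* InvalidBuffer means the slot is unused */
        int32       refcount;    /* this backend's pin count on the buffer */
    } LocalPinEntry;

    static LocalPinEntry PinArray[LOCAL_PIN_ARRAY_SIZE];
    static HTAB *PinOverflowHash = NULL;    /* created lazily on first overflow */

    static LocalPinEntry *
    find_pin_entry(Buffer buffer)
    {
        int         i;

        /* Fast path: most backends hold only a handful of pins at once. */
        for (i = 0; i < LOCAL_PIN_ARRAY_SIZE; i++)
        {
            if (PinArray[i].buffer == buffer)
                return &PinArray[i];
        }

        /* Slow path: consult the overflow hash table, if it exists. */
        if (PinOverflowHash != NULL)
            return hash_search(PinOverflowHash, &buffer, HASH_FIND, NULL);
        return NULL;
    }
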
* Fix obsolete statement in smgr/README. (Tom Lane, 2014-07-28)
  Since commit 2d00190495b22e0d0ba351b2cda9c95fb2e3d083, fork numbers are defined in relpath.h not relfilenode.h.

  Fabrízio de Royes Mello

* Prevent shm_mq_send from reading uninitialized memory. (Robert Haas, 2014-07-24)
  shm_mq_send_bytes didn't invariably initialize *bytes_written before returning, which would cause shm_mq_send to read from uninitialized memory and add the value it found there to mqh->mqh_partial_bytes. This could cause the next attempt to send a message via the queue to fail an assertion (if the queue was detached) or copy data from a garbage pointer value into the queue (if non-blocking mode was in use).

* Avoid access to already-released lock in LockRefindAndRelease. (Robert Haas, 2014-07-24)
  Spotted by Tom Lane.

* Check block number against the correct fork in get_raw_page(). (Tom Lane, 2014-07-22)
  get_raw_page tried to validate the supplied block number against RelationGetNumberOfBlocks(), which of course is only right when accessing the main fork. In most cases, the main fork is longer than the others, so that the check was too weak (allowing a lower-level error to be reported, but no real harm to be done). However, very small tables could have an FSM larger than their heap, in which case the mistake prevented access to some FSM pages.

  Per report from Torsten Foertsch.

  In passing, make the bad-block-number error into an ereport not elog (since it's certainly not an internal error); and fix sloppily maintained comment for RelationGetNumberOfBlocksInFork.

  This has been wrong since we invented relation forks, so back-patch to all supported branches.

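  The corrected check boils down to validating against the requested fork rather than the main fork; roughly like the fragment below (variable names assumed, see contrib/pageinspect for the real code):

    /* blkno and forknum come from the function's arguments. */
    if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("block number %u is out of range for relation \"%s\"",
                        blkno, RelationGetRelationName(rel))));
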
* Fix and enhance the assertion of no palloc's in a critical section. (Heikki Linnakangas, 2014-06-30)
  The assertion failed if WAL_DEBUG or LWLOCK_STATS was enabled; fix that by using separate memory contexts for the allocations made within those code blocks.

  This patch introduces a mechanism for marking any memory context as allowed in a critical section. Previously ErrorContext was exempt as a special case. Instead of a blanket exemption for the checkpointer process, only exempt the memory context used for the pending ops hash table.

* Don't allow disabling backend assertions via the debug_assertions GUC. (Andres Freund, 2014-06-20)
  The existence of the assert_enabled variable (backing the debug_assertions GUC) reduced the amount of knowledge some static code checkers (like coverity and various compilers) could infer from the existence of the assertion. That could have been solved by optionally removing the assert_enabled variable from the Assert() et al macros at compile time when some special macro is defined, but the resulting complication doesn't seem to be worth the gain from having debug_assertions. Recompiling is fast enough.

  The debug_assertions GUC is still available, but read-only, as it's useful when diagnosing problems. The commandline/client startup option -A, which previously also allowed enabling/disabling assertions, has been removed as it doesn't serve a purpose anymore.

  While at it, reduce code duplication in bufmgr.c and localbuf.c assertions checking for spurious buffer pins. That code had to be reindented anyway to cope with the assert_enabled removal.

* Add defenses against running with a wrong selection of LOBLKSIZE. (Tom Lane, 2014-06-05)
  It's critical that the backend's idea of LOBLKSIZE match the way data has actually been divided up in pg_largeobject. While we don't provide any direct way to adjust that value, doing so is a one-line source code change and various people have expressed interest recently in changing it. So, just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in pg_control and cross-check that the backend's compiled-in setting matches the on-disk data.

  Also tweak the code in inv_api.c so that fetches from pg_largeobject explicitly verify that the length of the data field is not more than LOBLKSIZE. Formerly we just had Asserts() for that, which is no protection at all in production builds. In some of the call sites an overlength data value would translate directly to a security-relevant stack clobber, so it seems worth one extra runtime comparison to be sure.

  In the back branches, we can't change the contents of pg_control; but we can still make the extra checks in inv_api.c, which will offer some amount of protection against running with the wrong value of LOBLKSIZE.

* Fix misc typos in comments. (Heikki Linnakangas, 2014-05-23)

* Remove unnecessary cleanup code. (Robert Haas, 2014-05-22)
  This is all inside a block guarded by op == DSM_OP_ATTACH, so it can never be the case that op == DSM_OP_CREATE.

  Reported by Coverity.

* Fix typos in comments. (Heikki Linnakangas, 2014-05-21)

* Fix logic bug in dsm_attach(). (Robert Haas, 2014-05-06)
  The previous coding would potentially cause attaching to segment A to fail if segment B was at the same time in the process of going away.

  Andres Freund, with a comment tweak by me

* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)
  This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.

* Fix possible cache invalidation failure in ReceiveSharedInvalidMessages. (Tom Lane, 2014-05-05)
  Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce the number of calls into sinvaladt.c (which require taking a shared lock) by keeping a local buffer of collected-but-not-yet-processed messages. However, if processing of the last message in a batch resulted in a recursive call to ReceiveSharedInvalidMessages, we could overwrite that message with a new one while the outer invalidation function was still working on it. This would be likely to lead to invalidation of the wrong cache entry, allowing subsequent processing to use stale cache data. The fix is just to make a local copy of each message while we're processing it.

  Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.

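  The shape of the fix, in an illustrative simplification of the sinval.c loop (names abbreviated; the point is the local copy taken before the callback, which may recurse and refill the shared buffer):

    #include "postgres.h"
    #include "storage/sinval.h"

    static void
    process_message_batch(SharedInvalidationMessage *messages, int n,
                          void (*invalFunction) (SharedInvalidationMessage *msg))
    {
        int         i;

        for (i = 0; i < n; i++)
        {
            SharedInvalidationMessage msg = messages[i];    /* local copy */

            /* Safe even if invalFunction() recurses and overwrites messages[]. */
            invalFunction(&msg);
        }
    }
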
* Consistently allow reading of messages from a detached shm_mq. (Robert Haas, 2014-04-30)
  This was intended to work always, but the previous code only allowed it if at least one message was successfully read by the receiver before the sender detached the queue.

  Report by Petr Jelinek. Patch by me.

* Rationalize common/relpath.[hc]. (Tom Lane, 2014-04-30)
  Commit a73018392636ce832b09b5c31f6ad1f18a4643ea created rather a mess by putting dependencies on backend-only include files into include/common. We really shouldn't do that. To clean it up:

  * Move TABLESPACE_VERSION_DIRECTORY back to its longtime home in catalog/catalog.h. We won't consider this symbol part of the FE/BE API.

  * Push enum ForkNumber from relfilenode.h into relpath.h. We'll consider relpath.h as the source of truth for fork numbers, since relpath.c was already partially serving that function, and anyway relfilenode.h was kind of a random place for that enum.

  * So, relfilenode.h now includes relpath.h rather than vice-versa. This direction of dependency is fine. (That allows most, but not quite all, of the existing explicit #includes of relpath.h to go away again.)

  * Push forkname_to_number from catalog.c to relpath.c, just to centralize fork number stuff a bit better.

  * Push GetDatabasePath from catalog.c to relpath.c; it was rather odd that the previous commit didn't keep this together with relpath().

  * To avoid needing relfilenode.h in common/, redefine the underlying function (now called GetRelationPath) as taking separate OID arguments, and make the APIs using RelFileNode or RelFileNodeBackend into macro wrappers. (The macros have a potential multiple-eval risk, but none of the existing call sites have an issue with that; one of them had such a risk already anyway.)

  * Fix failure to follow the directions when "init" fork type was added; specifically, the errhint in forkname_to_number wasn't updated, and neither was the SGML documentation for pg_relation_size().

  * Fix tablespace-path-too-long check in CreateTableSpace() to account for fork-name component of maximum-length pathnames. This requires putting FORKNAMECHARS into a header file, but it was rather useless (and actually unreferenced) where it was.

  The last couple of items are potentially back-patchable bug fixes, if anyone is sufficiently excited about them; but personally I'm not.

  Per a gripe from Christoph Berg about how include/common wasn't self-contained.

* Fix off-by-one bug in LWLockRegisterTranche(). (Tom Lane, 2014-04-25)
  Original coding failed to enlarge the array as required if the requested tranche_id was equal to LWLockTranchesAllocated.

  In passing, fix poor style of not casting the result of (re)palloc.

* Try to fix spurious DSM failures on Windows. (Robert Haas, 2014-04-16)
  Apparently, Windows can sometimes return an error code even when the operation actually worked just fine. Rearrange the order of checks according to what appear to be the best practices in this area.

  Amit Kapila

* Fix misc typos in comments. (Heikki Linnakangas, 2014-04-09)

* Get rid of the dynamic shared memory state file. (Robert Haas, 2014-04-08)
  Instead of storing the ID of the dynamic shared memory control segment in a file within the data directory, store it in the main control segment. This avoids a number of nasty corner cases, most seriously that doing an online backup and then using it on the same machine (e.g. to fire up a standby) would result in the standby clobbering all of the master's dynamic shared memory segments.

  Per complaints from Heikki Linnakangas, Fujii Masao, and Tom Lane.

* Assert that strong-lock count is >0 everywhere it's decremented. (Robert Haas, 2014-04-07)
  The one existing assertion of this type has tripped a few times in the buildfarm lately, but it's not clear whether the problem is really originating there or whether it's leftovers from a trip through one of the other two paths that lack a matching assertion. So add one.

  Since the same bug(s) most likely exist(s) in the back-branches also, back-patch to 9.2, where the fast-path lock mechanism was added.

* Avoid allocations in critical sections. (Heikki Linnakangas, 2014-04-04)
  If a palloc in a critical section fails, it becomes a PANIC.

* Mark FastPathStrongRelationLocks volatile. (Robert Haas, 2014-03-31)
  Otherwise, the compiler might decide to move modifications to data within this structure outside the enclosing SpinLockAcquire / SpinLockRelease pair, leading to shared memory corruption. This may or may not explain a recent lmgr-related buildfarm failure on prairiedog, but it needs to be fixed either way.

* Count buffers dirtied due to hints in pgBufferUsage.shared_blks_dirtied. (Robert Haas, 2014-03-31)
  Previously, such buffers weren't counted, with the possible result that EXPLAIN (BUFFERS) and pg_stat_statements would understate the true number of blocks dirtied by an SQL statement.

  Back-patch to 9.2, where this counter was introduced.

  Amit Kapila

* Fix build with LWLOCK_STATS or dtrace. (Heikki Linnakangas, 2014-03-21)
  Also fix the name of the dtrace probe for LWLockAcquireOrWait(). The function was renamed from LWLockWaitUntilFree to LWLockAcquireOrWait, but the dtrace probe was neglected.

  Pointed out by Andres Freund and the buildfarm.

* Remove MinGW readdir/errno bug workaround fixed on 2003-10-10 (Bruce Momjian, 2014-03-21)

* Properly check for readdir/closedir() failures (Bruce Momjian, 2014-03-21)
  Clear errno before calling readdir() and handle old MinGW errno bug while adding full test coverage for readdir/closedir failures.

  Backpatch through 8.4.

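  The portable idiom being tested here, sketched with plain POSIX calls rather than the backend's ReadDir()/FreeDir() wrappers: clear errno before readdir() so that a NULL return can be told apart from a real error, and check closedir() as well.

    #include <dirent.h>
    #include <errno.h>

    static int
    scan_directory(const char *path)
    {
        DIR        *dir;
        struct dirent *de;

        if ((dir = opendir(path)) == NULL)
            return -1;

        errno = 0;                      /* so a NULL return can be classified */
        while ((de = readdir(dir)) != NULL)
        {
            /* ... process de->d_name ... */
            errno = 0;                  /* reset before the next readdir() */
        }
        if (errno != 0)
        {
            closedir(dir);
            return -1;                  /* readdir() failed, not end-of-directory */
        }

        if (closedir(dir) != 0)
            return -1;                  /* closedir() failures matter too */
        return 0;
    }
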
* Replace the XLogInsert slots with regular LWLocks. (Heikki Linnakangas, 2014-03-21)
  The special feature the XLogInsert slots had over regular LWLocks is the insertingAt value that was updated atomically with releasing backends waiting on it. Add new functions to the LWLock API to do that, and replace the slots with LWLocks. This reduces the amount of duplicated code. (There's still some duplication, but at least it's all in lwlock.c now.)

  Reviewed by Andres Freund.

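  A sketch of how the new *Var functions fit together, assuming the 9.4-era signatures in lwlock.h; insert_lock and inserting_at stand in for the real WAL-insertion structures.

    #include "postgres.h"
    #include "storage/lwlock.h"

    /* Holder side: publish progress and wake LWLockWaitForVar() waiters. */
    static void
    advertise_progress(LWLock *insert_lock, uint64 *inserting_at, uint64 pos)
    {
        LWLockUpdateVar(insert_lock, inserting_at, pos);
    }

    /* Waiter side: returns once the lock is free or the value moves past known_pos. */
    static uint64
    wait_for_progress(LWLock *insert_lock, uint64 *inserting_at, uint64 known_pos)
    {
        uint64      newval;

        (void) LWLockWaitForVar(insert_lock, inserting_at, known_pos, &newval);
        return newval;
    }
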
* Setup error context callback for transaction lock waits (Alvaro Herrera, 2014-03-19)
  With this in place, a session blocking behind another one because of tuple locks will get a context line mentioning the relation name, tuple TID, and operation being done on tuple. For example:

    LOG: process 11367 still waiting for ShareLock on transaction 717 after 1000.108 ms
    DETAIL: Process holding the lock: 11366. Wait queue: 11367.
    CONTEXT: while updating tuple (0,2) in relation "foo"
    STATEMENT: UPDATE foo SET value = 3;

  Most usefully, the new line is displayed by log entries due to log_lock_waits, although of course it will be printed by any other log message as well.

  Author: Christian Kruse, some tweaks by Álvaro Herrera
  Reviewed-by: Amit Kapila, Andres Freund, Tom Lane, Robert Haas

* Rewrite comment for shm_mq_receive_bytes. (Robert Haas, 2014-03-18)
  The comment and the code diverged at some point before the initial commit of this feature, and I failed to notice.

  Noted by Tom Lane.

* Improve shm_mq portability around MAXIMUM_ALIGNOF and sizeof(Size). (Robert Haas, 2014-03-18)
  Revise the original decision to expose a uint64-based interface and use Size everywhere possible. Avoid assuming that MAXIMUM_ALIGNOF is 8, or making any assumption about the relationship between that value and sizeof(Size). If MAXIMUM_ALIGNOF is bigger, we'll now insert padding after the length word; if it's smaller, we are now prepared to read and write the length word in chunks.

  Per discussion with Tom Lane.

* Make it easy to detach completely from shared memory. (Robert Haas, 2014-03-18)
  The new function dsm_detach_all() can be used either by postmaster children that don't wish to take any risk of accidentally corrupting shared memory; or by forked children of regular backends with the same need. This patch also updates the postmaster children that already do PGSharedMemoryDetach() to do dsm_detach_all() as well.

  Per discussion with Tom Lane.

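  A minimal sketch of the detach sequence for such a child process (the surrounding fork/startup logic is omitted, and the wrapper function is invented for illustration):

    #include "postgres.h"
    #include "storage/dsm.h"
    #include "storage/pg_shmem.h"

    static void
    detach_from_all_shared_memory(void)
    {
        dsm_detach_all();           /* drop every dynamic shared memory mapping */
        PGSharedMemoryDetach();     /* then drop the main shared memory segment */
    }
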
* Fix whitespace (Peter Eisentraut, 2014-03-16)

* C comments: remove odd blank lines after #ifdef WIN32 lines (Bruce Momjian, 2014-03-13)

* Show PIDs of lock holders and waiters in log_lock_waits log message. (Fujii Masao, 2014-03-13)
  Christian Kruse, reviewed by Kumar Rajeev Rastogi.

* Allow dynamic shared memory segments to be kept until shutdown. (Robert Haas, 2014-03-10)
  Amit Kapila, reviewed by Kyotaro Horiguchi, with some further changes by me.

* Teach on_exit_reset() to discard pending cleanups for dsm. (Robert Haas, 2014-03-10)
  If a postmaster child invokes fork() and then calls on_exit_reset, that should be sufficient to let it exit() without breaking anything, but dynamic shared memory broke that by not updating on_exit_reset() to discard callbacks registered with dynamic shared memory segments.

  Per investigation of a complaint from Tom Lane.

* Fix dangling smgr_owner pointer when a fake relcache entry is freed. (Heikki Linnakangas, 2014-03-07)
  A fake relcache entry can "own" a SmgrRelation object, like a regular relcache entry. But when it was free'd, the owner field in SmgrRelation was not cleared, so it was left pointing to free'd memory. Amazingly this apparently hasn't caused crashes in practice, or we would've heard about it earlier. Andres found this with Valgrind.

  Report and fix by Andres Freund, with minor modifications by me. Backpatch to all supported versions.

* Fix some typos introduced by the logical decoding patch. (Robert Haas, 2014-03-05)
  Erik Rijkers