author	Robert Haas <rhaas@postgresql.org>	2014-09-25 10:43:24 -0400
committer	Robert Haas <rhaas@postgresql.org>	2014-09-25 10:43:24 -0400
commit	5d7962c6797c0baae9ffb3b5b9ac0aec7b598bc3 (patch)
tree	9abf4b7ad28b57c77305b5b1361d3468642bc299 /src/backend/storage/buffer/bufmgr.c
parent	1dcfb8da09c47d2a7502d1dfab06c8be4b6cf323 (diff)
Change locking regimen around buffer replacement.
Previously, we used an lwlock that was held from the time we began seeking a candidate buffer until the time when we found and pinned one, which is disastrous for concurrency. Instead, use a spinlock which is held just long enough to pop the freelist or advance the clock sweep hand, and then released. If we need to advance the clock sweep further, we reacquire the spinlock once per buffer.

This represents a significant increase in atomic operations around buffer eviction, but it still wins on many workloads. On others, it may result in no gain, or even cause a regression, unless the number of buffer mapping locks is also increased. However, that seems like material for a separate commit. We may also need to consider other methods of mitigating contention on this spinlock, such as splitting it into multiple locks or jumping the clock sweep hand more than one buffer at a time, but those, too, seem like separate improvements.

Patch by me, inspired by a much larger patch from Amit Kapila. Reviewed by Andres Freund.
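As a rough illustration of the protocol described above, the following self-contained C sketch (not PostgreSQL code) holds a spinlock only while the clock-sweep hand advances by one buffer and reacquires it for each further advance, rather than holding an lwlock across the whole search. The names here (strategy_lock, FakeBufferDesc, get_victim) are invented for the example; in the actual tree, victim selection lives in StrategyGetBuffer() in src/backend/storage/buffer/freelist.c, and the refcount/usage_count checks happen under each buffer's header spinlock.

/*
 * Sketch only: models the new locking pattern with a POSIX spinlock.
 * The lock covers nothing but the hand advance; inspecting, pinning,
 * and any I/O on the candidate buffer happen with the lock released.
 *
 * Build: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NBUFFERS 16

typedef struct
{
	int			refcount;		/* is the buffer pinned? */
	int			usage_count;	/* clock-sweep usage counter */
} FakeBufferDesc;

static FakeBufferDesc buffers[NBUFFERS];
static int	next_victim = 0;	/* clock hand, protected by strategy_lock */
static pthread_spinlock_t strategy_lock;

/*
 * Pick a victim buffer.  The spinlock is taken once per buffer examined
 * and released before the buffer itself is looked at, so other backends
 * can advance the hand concurrently.
 */
static FakeBufferDesc *
get_victim(void)
{
	for (;;)
	{
		FakeBufferDesc *buf;

		pthread_spin_lock(&strategy_lock);
		buf = &buffers[next_victim];
		next_victim = (next_victim + 1) % NBUFFERS;
		pthread_spin_unlock(&strategy_lock);

		if (buf->refcount == 0 && buf->usage_count == 0)
			return buf;			/* caller pins it under the buffer's own lock */
		if (buf->refcount == 0 && buf->usage_count > 0)
			buf->usage_count--;	/* age the buffer and keep sweeping */
		/* (a real implementation gives up after a full fruitless sweep) */
	}
}

int
main(void)
{
	pthread_spin_init(&strategy_lock, PTHREAD_PROCESS_PRIVATE);
	buffers[0].usage_count = 1;	/* pretend buffers 0-2 were recently used */
	buffers[1].usage_count = 1;
	buffers[2].usage_count = 1;
	printf("victim = buffer %ld\n", (long) (get_victim() - buffers));
	return 0;
}

The real StrategyGetBuffer() also pops buffers off the freelist under the same spinlock before falling back to the clock sweep; that path is omitted here for brevity.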
Diffstat (limited to 'src/backend/storage/buffer/bufmgr.c')
-rw-r--r--	src/backend/storage/buffer/bufmgr.c	| 12
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 32404327cfd..45d1d61d95d 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -889,15 +889,11 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	/* Loop here in case we have to try another victim buffer */
 	for (;;)
 	{
-		bool		lock_held;
-
 		/*
 		 * Select a victim buffer.  The buffer is returned with its header
-		 * spinlock still held!  Also (in most cases) the BufFreelistLock is
-		 * still held, since it would be bad to hold the spinlock while
-		 * possibly waking up other processes.
+		 * spinlock still held!
 		 */
-		buf = StrategyGetBuffer(strategy, &lock_held);
+		buf = StrategyGetBuffer(strategy);

 		Assert(buf->refcount == 0);

@@ -907,10 +903,6 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 		/* Pin the buffer and then release the buffer spinlock */
 		PinBuffer_Locked(buf);

-		/* Now it's safe to release the freelist lock */
-		if (lock_held)
-			LWLockRelease(BufFreelistLock);
-
 		/*
 		 * If the buffer was dirty, try to write it out.  There is a race
 		 * condition here, in that someone might dirty it after we released it