Diffstat (limited to 'src/backend/storage/buffer')
 src/backend/storage/buffer/README   | 12 ++++++------
 src/backend/storage/buffer/bufmgr.c |  4 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/src/backend/storage/buffer/README b/src/backend/storage/buffer/README
index dc12c8ca087..248883f0dae 100644
--- a/src/backend/storage/buffer/README
+++ b/src/backend/storage/buffer/README
@@ -89,12 +89,12 @@ then returns false, while LockBufferForCleanup() releases the exclusive lock
(but not the caller's pin) and waits until signaled by another backend,
whereupon it tries again. The signal will occur when UnpinBuffer decrements
the shared pin count to 1. As indicated above, this operation might have to
-wait a good while before it acquires lock, but that shouldn't matter much for
-concurrent VACUUM. The current implementation only supports a single waiter
-for pin-count-1 on any particular shared buffer. This is enough for VACUUM's
-use, since we don't allow multiple VACUUMs concurrently on a single relation
-anyway. Anyone wishing to obtain a cleanup lock outside of recovery or a
-VACUUM must use the conditional variant of the function.
+wait a good while before it acquires the lock, but that shouldn't matter much
+for concurrent VACUUM. The current implementation only supports a single
+waiter for pin-count-1 on any particular shared buffer. This is enough for
+VACUUM's use, since we don't allow multiple VACUUMs concurrently on a single
+relation anyway. Anyone wishing to obtain a cleanup lock outside of recovery
+or a VACUUM must use the conditional variant of the function.
Buffer Manager's Internal Locking
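
The paragraph above prescribes the conditional variant for any cleanup-lock attempt outside recovery or VACUUM. A minimal sketch of that pattern, assuming a hypothetical caller try_prune_page() built only on the standard bufmgr.h entry points (ReadBuffer, ConditionalLockBufferForCleanup, ReleaseBuffer, UnlockReleaseBuffer):

#include "postgres.h"

#include "storage/bufmgr.h"
#include "utils/rel.h"

/*
 * Hypothetical non-VACUUM caller that wants cleanup-strength access to a
 * page.  Per the README it must use the conditional variant and skip the
 * page if any other backend still holds a pin.
 */
static bool
try_prune_page(Relation rel, BlockNumber blkno)
{
    Buffer      buf = ReadBuffer(rel, blkno);

    /* We hold a pin; try to upgrade to a cleanup lock without waiting. */
    if (!ConditionalLockBufferForCleanup(buf))
    {
        /* Page is pinned elsewhere: give up rather than block. */
        ReleaseBuffer(buf);
        return false;
    }

    /* ... cleanup-strength work on the page would go here ... */

    UnlockReleaseBuffer(buf);
    return true;
}

Skipping on failure rather than waiting is exactly what the single-waiter restriction demands: only recovery and VACUUM may block in LockBufferForCleanup().
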
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 6dd7c6ecb67..42aa2f9df9b 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -921,7 +921,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
*
* Since no-one else can be looking at the page contents yet, there is no
* difference between an exclusive lock and a cleanup-strength lock. (Note
- * that we cannot use LockBuffer() of LockBufferForCleanup() here, because
+ * that we cannot use LockBuffer() or LockBufferForCleanup() here, because
* they assert that the buffer is already valid.)
*/
if ((mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK) &&
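
Seen from the caller's side, RBM_ZERO_AND_LOCK hands back a zero-filled page that is already pinned and exclusively locked, which is why ReadBuffer_common must take the content lock itself rather than going through LockBuffer(). A sketch of such a caller, assuming a hypothetical helper init_new_page() using only standard bufmgr.h/bufpage.h calls (WAL logging omitted for brevity):

#include "postgres.h"

#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "utils/rel.h"

/*
 * Hypothetical caller that will overwrite a block completely.  The buffer
 * comes back zero-filled, pinned, and exclusively locked; ReadBuffer_common
 * had to take that lock directly because the buffer is not yet valid.
 */
static Buffer
init_new_page(Relation rel, BlockNumber blkno)
{
    Buffer      buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
                                         RBM_ZERO_AND_LOCK, NULL);

    /* Already locked: initialize and dirty the page, no LockBuffer() call. */
    PageInit(BufferGetPage(buf), BufferGetPageSize(buf), 0);
    MarkBufferDirty(buf);

    return buf;         /* caller must unlock and release when done */
}
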
@@ -1882,7 +1882,7 @@ BufferSync(int flags)
* and clears the flag right after we check, but that doesn't matter
* since SyncOneBuffer will then do nothing. However, there is a
* further race condition: it's conceivable that between the time we
- * examine the bit here and the time SyncOneBuffer acquires lock,
+ * examine the bit here and the time SyncOneBuffer acquires the lock,
* someone else not only wrote the buffer but replaced it with another
* page and dirtied it. In that improbable case, SyncOneBuffer will
* write the buffer though we didn't need to. It doesn't seem worth
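
The pattern this comment defends is a cheap unlocked flag test followed by a locked re-check inside SyncOneBuffer, accepting at worst one unnecessary write. A self-contained model of that double-check using C11 atomics and a pthread mutex (buf_flags, buf_lock, write_one_buffer, and sync_if_needed are illustrative stand-ins, not the real bufmgr.c identifiers):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define BM_CHECKPOINT_NEEDED 0x01u    /* mirrors the real flag's role */

static atomic_uint buf_flags;         /* illustrative per-buffer flag word */
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for SyncOneBuffer(): re-checks the flag under the lock. */
static bool
write_one_buffer(void)
{
    bool        wrote = false;

    pthread_mutex_lock(&buf_lock);
    if (atomic_load(&buf_flags) & BM_CHECKPOINT_NEEDED)
    {
        /* ... write the page out ... */
        atomic_fetch_and(&buf_flags, ~BM_CHECKPOINT_NEEDED);
        wrote = true;
    }
    pthread_mutex_unlock(&buf_lock);
    return wrote;
}

/* Stand-in for the BufferSync() loop body: cheap unlocked pre-test. */
static void
sync_if_needed(void)
{
    /*
     * Racy pre-test: the flag may change between this check and the locked
     * re-check in write_one_buffer(), but the worst outcomes are a no-op
     * call or one unneeded write -- the benign races the comment accepts.
     */
    if (atomic_load(&buf_flags) & BM_CHECKPOINT_NEEDED)
        (void) write_one_buffer();
}
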