path: root/src/backend/access/transam/clog.c
author	Andres Freund <andres@anarazel.de>	2016-04-08 08:18:52 -0700
committer	Andres Freund <andres@anarazel.de>	2016-04-08 08:25:59 -0700
commit	5364b357fb115ed4dc7174085d8f59d9425638dd (patch)
tree	a1a6b88c1d12efad2801a68b8112f2d195ddfb89 /src/backend/access/transam/clog.c
parent	25fe8b5f1ac93c3ec01519854e4f554b2e57a926 (diff)
download	postgresql-5364b357fb115ed4dc7174085d8f59d9425638dd.tar.gz
	postgresql-5364b357fb115ed4dc7174085d8f59d9425638dd.zip
Increase maximum number of clog buffers.
Benchmarking has shown that the current number of clog buffers limits scalability. We've previously increased the number in 33aaa139, but that's not sufficient with a large number of clients.

We've benchmarked the cost of increasing the limit by testing worst-case scenarios; testing showed that 128 buffers don't cause a regression, even in contrived scenarios, whereas 256 does.

There are a number of more complex patches flying around to address various clog scalability problems, but this one is simple enough that we can get it into 9.6, and it is beneficial even after those patches have been applied.

It is a bit unsatisfactory to increase this in small steps every few releases, but a better solution seems to require a rewrite of slru.c, which is not something done quickly.

Author: Amit Kapila and Andres Freund
Discussion: CAA4eK1+-=18HOrdqtLXqOMwZDbC_15WTyHiFruz7BvVArZPaAw@mail.gmail.com
Diffstat (limited to 'src/backend/access/transam/clog.c')
-rw-r--r--	src/backend/access/transam/clog.c	29
1 file changed, 11 insertions(+), 18 deletions(-)
diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c
index 06aff181d8d..263447679b8 100644
--- a/src/backend/access/transam/clog.c
+++ b/src/backend/access/transam/clog.c
@@ -417,30 +417,23 @@ TransactionIdGetStatus(TransactionId xid, XLogRecPtr *lsn)
/*
* Number of shared CLOG buffers.
*
- * Testing during the PostgreSQL 9.2 development cycle revealed that on a
- * large multi-processor system, it was possible to have more CLOG page
- * requests in flight at one time than the number of CLOG buffers which existed
- * at that time, which was hardcoded to 8. Further testing revealed that
- * performance dropped off with more than 32 CLOG buffers, possibly because
- * the linear buffer search algorithm doesn't scale well.
+ * On larger multi-processor systems, it is possible to have many CLOG page
+ * requests in flight at one time which could lead to disk access for CLOG
+ * page if the required page is not found in memory. Testing revealed that we
+ * can get the best performance by having 128 CLOG buffers, more than that it
+ * doesn't improve performance.
*
- * Unconditionally increasing the number of CLOG buffers to 32 did not seem
- * like a good idea, because it would increase the minimum amount of shared
- * memory required to start, which could be a problem for people running very
- * small configurations. The following formula seems to represent a reasonable
+ * Unconditionally keeping the number of CLOG buffers to 128 did not seem like
+ * a good idea, because it would increase the minimum amount of shared memory
+ * required to start, which could be a problem for people running very small
+ * configurations. The following formula seems to represent a reasonable
* compromise: people with very low values for shared_buffers will get fewer
- * CLOG buffers as well, and everyone else will get 32.
- *
- * It is likely that some further work will be needed here in future releases;
- * for example, on a 64-core server, the maximum number of CLOG requests that
- * can be simultaneously in flight will be even larger. But that will
- * apparently require more than just changing the formula, so for now we take
- * the easy way out.
+ * CLOG buffers as well, and everyone else will get 128.
*/
Size
CLOGShmemBuffers(void)
{
- return Min(32, Max(4, NBuffers / 512));
+ return Min(128, Max(4, NBuffers / 512));
}
/*