author     Tom Lane <tgl@sss.pgh.pa.us>   2015-08-04 18:18:46 -0400
committer  Tom Lane <tgl@sss.pgh.pa.us>   2015-08-04 18:18:46 -0400
commit     8ea3e7a75c0d22c41c57f59c8b367059b97d0b66
tree       37658922d3ff3abf63984cd29f4a9a56a4e16484 /src/backend/utils/mmgr/aset.c
parent     85e5e222b1dd02f135a8c3bf387d0d6d88e669bd
Fix bogus "out of memory" reports in tuplestore.c.
The tuplesort/tuplestore memory management logic assumed that the chunk
allocation overhead for its memtuples array could not increase when
increasing the array size. This is and always was true for tuplesort,
but we (I, I think) blindly copied that logic into tuplestore.c without
noticing that the assumption failed to hold for the much smaller array
elements used by tuplestore. Given rather small work_mem, this could
result in an improper complaint about "unexpected out-of-memory situation",
as reported by Brent DeSpain in bug #13530.
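
To see why that assumption can break for small array elements, here is a minimal stand-alone sketch (not PostgreSQL code) of the allocation policy described in the diff below: requests up to the 8K chunk limit are rounded up to the next power of two, while larger requests get an exactly-sized dedicated block. The chunk limit constant, element counts, and helper names are illustrative assumptions, not values taken from the patch.

/*
 * Simplified model of the chunk-sizing policy, to illustrate how the
 * rounding overhead of a growing array of small elements can increase.
 * MODEL_CHUNK_LIMIT and the element counts are assumptions for the
 * example only.
 */
#include <stdio.h>
#include <stddef.h>

#define MODEL_CHUNK_LIMIT 8192      /* assumed 8K chunk/block boundary */

static size_t
modeled_alloc_size(size_t request)
{
    size_t  size = 8;

    if (request > MODEL_CHUNK_LIMIT)
        return request;             /* dedicated block: exact size, no rounding */
    while (size < request)
        size <<= 1;                 /* chunk: round up to next power of two */
    return size;
}

int
main(void)
{
    /* pointer-sized elements, as in tuplestore's array (8 bytes on typical 64-bit builds) */
    size_t  counts[] = {400, 560, 784, 1097, 1536};

    for (int i = 0; i < 5; i++)
    {
        size_t  request = counts[i] * sizeof(void *);
        size_t  actual = modeled_alloc_size(request);

        printf("%4zu elements: request %5zu  allocated %5zu  overhead %4zu\n",
               counts[i], request, actual, actual - request);
    }
    return 0;
}

With pointer-sized elements the overhead can jump by several kilobytes from one growth step to the next while the array is still below the 8K boundary, which is the kind of increase the tuplestore accounting did not allow for; tuplesort's much larger elements clear the boundary almost immediately, matching the statement above that tuplesort was never at risk.
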
The easiest way to fix this is just to increase tuplestore's initial
array size so that the assumption holds. Rather than relying on magic
constants, though, let's export a #define from aset.c that represents
the safe allocation threshold, and make tuplestore's calculation depend
on that.
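
As a rough sketch of the sizing rule this implies (not the actual tuplestore.c change, which is not shown on this page), the initial element count only needs to push the very first allocation past the exported threshold; the helper and parameter names below are hypothetical.

#include <stddef.h>

/*
 * Redefined locally for this sketch; the patch exports the constant from
 * memutils.h and asserts in aset.c that it matches ALLOC_CHUNK_LIMIT.
 */
#define ALLOCSET_SEPARATE_THRESHOLD 8192

/*
 * Hypothetical helper: pick an initial array size whose first allocation
 * already exceeds the threshold, so every later repalloc gets an
 * exactly-sized dedicated block and the accounting overhead cannot grow.
 */
static size_t
initial_array_elems(size_t elem_size, size_t floor_elems)
{
    size_t  threshold_elems = ALLOCSET_SEPARATE_THRESHOLD / elem_size + 1;

    return (floor_elems > threshold_elems) ? floor_elems : threshold_elems;
}

Dividing the threshold by the element size and adding one makes the request strictly larger than the threshold, which is the property the fix relies on.
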
Do the same in tuplesort.c to keep the logic looking parallel, even though
tuplesort.c isn't actually at risk at present. This will keep us from
breaking it if we ever muck with the allocation parameters in aset.c.
Back-patch to all supported versions. The error message doesn't occur
pre-9.3, not so much because the problem can't happen as because the
pre-9.3 tuplestore code neglected to check for it. (The chance of
trouble is a great deal larger as of 9.3, though, due to changes in the
array-size-increasing strategy.) However, allowing LACKMEM() to become
true unexpectedly could still result in less-than-desirable behavior,
so let's patch it all the way back.
Diffstat (limited to 'src/backend/utils/mmgr/aset.c')
 src/backend/utils/mmgr/aset.c | 11
 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c
index 0cfb934b003..febeb6eaf8e 100644
--- a/src/backend/utils/mmgr/aset.c
+++ b/src/backend/utils/mmgr/aset.c
@@ -112,9 +112,9 @@
  *
  * With the current parameters, request sizes up to 8K are treated as chunks,
  * larger requests go into dedicated blocks.  Change ALLOCSET_NUM_FREELISTS
- * to adjust the boundary point.  (But in contexts with small maxBlockSize,
- * we may set the allocChunkLimit to less than 8K, so as to avoid space
- * wastage.)
+ * to adjust the boundary point; and adjust ALLOCSET_SEPARATE_THRESHOLD in
+ * memutils.h to agree.  (Note: in contexts with small maxBlockSize, we may
+ * set the allocChunkLimit to less than 8K, so as to avoid space wastage.)
  *--------------------
  */
@@ -476,7 +476,12 @@ AllocSetContextCreate(MemoryContext parent,
 	 * We have to have allocChunkLimit a power of two, because the requested
 	 * and actually-allocated sizes of any chunk must be on the same side of
 	 * the limit, else we get confused about whether the chunk is "big".
+	 *
+	 * Also, allocChunkLimit must not exceed ALLOCSET_SEPARATE_THRESHOLD.
 	 */
+	StaticAssertStmt(ALLOC_CHUNK_LIMIT == ALLOCSET_SEPARATE_THRESHOLD,
+					 "ALLOC_CHUNK_LIMIT != ALLOCSET_SEPARATE_THRESHOLD");
+
 	set->allocChunkLimit = ALLOC_CHUNK_LIMIT;
 	while ((Size) (set->allocChunkLimit + ALLOC_CHUNKHDRSZ) >
 		   (Size) ((maxBlockSize - ALLOC_BLOCKHDRSZ) / ALLOC_CHUNK_FRACTION))
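
The StaticAssertStmt added in the second hunk is PostgreSQL's wrapper for a compile-time assertion. As a stand-alone illustration of the same idea in plain C11 (the constant values here are assumptions for the example; the comment in aset.c puts the boundary at 8K):

/*
 * Stand-alone C11 sketch of the compile-time check in the hunk above;
 * values assumed for illustration, not taken from the PostgreSQL headers.
 */
#define ALLOC_CHUNK_LIMIT            8192
#define ALLOCSET_SEPARATE_THRESHOLD  8192

/* The build fails if the two definitions ever drift apart. */
_Static_assert(ALLOC_CHUNK_LIMIT == ALLOCSET_SEPARATE_THRESHOLD,
               "ALLOC_CHUNK_LIMIT != ALLOCSET_SEPARATE_THRESHOLD");

Tying the two constants together this way means a future change to the freelist parameters in aset.c cannot silently invalidate the sizing calculations in tuplesort.c and tuplestore.c.
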