| Commit message | Author | Age |
|
|
|
|
|
|
| |
This provides a small but worthwhile speedup when sorting text, at least
in cases to which the sortsupport machinery applies.
Robert Haas and Peter Geoghegan
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
parseRelOptions() tended to leak memory in the caller's context. Most
of the time this doesn't really matter since the caller's context is
at most query-lifespan, and the function won't be invoked very many times.
However, when testing with CLOBBER_CACHE_RECURSIVELY, the same relcache
entry can get rebuilt a *lot* of times in one query, leading to significant
intraquery memory bloat if it has any reloptions. Noted while
investigating a related report from Tomas Vondra.
In passing, get rid of some Asserts that are redundant with the one
done by deconstruct_array().
As with other patches to avoid leaks in CLOBBER_CACHE testing, it doesn't
really seem worth back-patching this.
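For illustration, a sketch of the kind of cleanup involved (details are
assumptions, not necessarily the committed code):

    /*
     * Free the array produced by deconstruct_array() instead of leaking
     * it into the caller's (possibly query-lifespan) context.
     */
    Datum      *optiondatums;
    int         noptions;

    deconstruct_array(array, TEXTOID, -1, false, 'i',
                      &optiondatums, NULL, &noptions);
    /* ... build the parsed reloptions from optiondatums ... */
    pfree(optiondatums);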
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When replacing rd_indexlist, rd_indexattr, etc, we neglected to pfree any
old value of these fields. Under ordinary circumstances, the old value
would always be NULL, so this seemed reasonable enough. However, in cases
where we're rebuilding a system catalog's relcache entry and another cache
flush occurs on that same catalog meanwhile, it's possible for the field to
not be NULL when we return to the outer level, because we already refilled
it while recovering from the inner flush. This leads to a fairly small
session-lifespan leak in CacheMemoryContext. In real-world usage the leak
would be too small to notice; but in testing with CLOBBER_CACHE_RECURSIVELY
the leakage can add up to the point of causing OOM failures, as reported by
Tomas Vondra.
The issue has been there a long time, but it only seems worth fixing in
HEAD, like the previous fix in this area (commit 078b2ed291c758e7).
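As a hedged sketch of the fix's shape (field names from this message,
code illustrative):

    /* Remember any value refilled by an inner cache flush ... */
    List       *oldlist = relation->rd_indexlist;

    /* ... install the new value (copied into CacheMemoryContext) ... */
    relation->rd_indexlist = list_copy(indexoidlist);

    /* ... and free the old one instead of leaking it. */
    if (oldlist != NIL)
        list_free(oldlist);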
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When doing logical decoding using START_LOGICAL_REPLICATION in a
walsender process, the walsender sometimes sent keepalive messages too
frequently, asking for feedback every time.
WalSndWaitForWal() sends out keepalive messages while waiting for new
WAL to be generated locally when it sees that the remote side hasn't yet
flushed WAL up to the local position. That generally is good, but it
causes problems if the remote side only writes, but doesn't yet flush,
the changes. So check for both the remote write and flush positions.
Additionally, we had asked for feedback in those keepalive messages,
which isn't warranted when waiting for WAL, in contrast to keepalives
sent to prevent timeouts due to wal_sender_timeout.
Complaint and patch by Steve Singer.
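A sketch of the tightened wait condition (illustrative, not the literal
committed code):

    /*
     * Only send a keepalive when the client has neither written nor
     * flushed up to the position we're waiting at, and don't demand a
     * reply just because we're waiting for WAL.
     */
    if (MyWalSnd->write < sentPtr &&
        MyWalSnd->flush < sentPtr)
        WalSndKeepalive(false);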
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When both postgresql.conf and postgresql.auto.conf have their own entry of
the same parameter, PostgreSQL uses the entry in postgresql.auto.conf because
it appears last in the configuration scan. IOW, the other entries which appear
earlier are ignored. But, previously, ProcessConfigFile() detected invalid
settings even in those unused entries and emitted error messages
complaining about them at postmaster startup. Complaining about entries
that are going to be ignored is useless.
This problem happened because ProcessConfigFile() was called twice at
postmaster startup and the first call read only postgresql.conf. That is,
the first call could check an entry that would eventually be overridden
by the second call, which read both postgresql.conf and postgresql.auto.conf.
To work around the problem, this commit changes ProcessConfigFile so that
its first call processes only data_directory and the second one does all the
entries. It's OK to process data_directory in the first call because it's
ensured that data_directory doesn't exist in postgresql.auto.conf.
Back-patch to 9.4 where postgresql.auto.conf was added.
Patch by me. Review by Amit Kapila
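Conceptually, the startup sequence now looks like this (a simplified
sketch; how the first call is restricted to data_directory is elided):

    ProcessConfigFile(PGC_POSTMASTER);  /* pass 1: data_directory only,
                                         * so nothing else is validated
                                         * or complained about yet */
    SetDataDir(data_directory);

    ProcessConfigFile(PGC_POSTMASTER);  /* pass 2: all entries, now that
                                         * postgresql.auto.conf can be
                                         * read from the data directory */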
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This refactoring is in preparation for adding support for other SSL
implementations, with no user-visible effects. There are now two #defines,
USE_OPENSSL which is defined when building with OpenSSL, and USE_SSL which
is defined when building with any SSL implementation. Currently, OpenSSL is
the only implementation so the two #defines go together, but USE_SSL is
supposed to be used for implementation-independent code.
The libpq SSL code is changed to use a custom BIO, which does all the raw
I/O, like we've been doing in the backend for a long time. That makes it
possible to use MSG_NOSIGNAL to block SIGPIPE when using SSL, which avoids
a couple of syscalls for each send(). Probably doesn't make much performance
difference in practice - the SSL encryption is expensive enough to mask the
effect - but it was a natural result of this refactoring.
Based on a patch by Martijn van Oosterhout from 2006. Briefly reviewed by
Alvaro Herrera, Andreas Karlsson, Jeff Janes.
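A sketch of how the two symbols are meant to be used (illustrative):

    #ifdef USE_SSL
        /* implementation-independent SSL code */
        n = secure_read(port, buf, len);
    #endif

    #ifdef USE_OPENSSL
        /* OpenSSL-specific details only */
        n = SSL_read(port->ssl, buf, len);
    #endif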
|
|
|
|
|
|
|
|
| |
There's actually no need for any special case for unknown-type literals,
since we only need to push the value through its output function and
unknownout() works fine. The code that was here was completely bizarre
anyway, and would fail outright in cases that should work, not to mention
suffering from some copy-and-paste bugs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fix an obvious typo in json_build_object()'s complaint about invalid
number of arguments, and make the errhint a bit more sensible too.
Per discussion about how to word the improved hint, change the few places
in the documentation that refer to JSON object field names as "names" to
say "keys" instead, since that's what we've said in the vast majority of
places in the docs. Arguably "name" is more correct, since that's the
terminology used in RFC 7159; but we're stuck with "key" in view of the
naming of json_object_keys() so let's at least be self-consistent.
I adjusted a few code comments to match this as well, and failed to
resist the temptation to clean up some odd whitespace choices in the
same area, as well as a useless duplicate PG_ARGISNULL() check. There's
still quite a bit of code that uses the phrase "field name" in non-user-
visible ways, so I left those usages alone.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Such cases are disallowed by the SQL spec, and even if we wanted to allow
them, the semantics seem ambiguous: how should the FK columns be matched up
with the columns of a unique index? (The matching could be significant in
the presence of opclasses with different notions of equality, so this issue
isn't just academic.) However, our code did not previously reject such
cases, but instead would either fail to match to any unique index, or
generate a bizarre opclass-lookup error because of sloppy thinking in the
index-matching code.
David Rowley
|
|
|
|
|
|
|
|
| |
Previously, TOAST tables required only in the new cluster could cause
OID conflicts if they were auto-numbered and a later conflicting OID had
to be assigned.
Backpatch through 9.3.
|
|
|
|
|
|
|
|
|
| |
This could be useful for datatypes like text, where we might want
to optimize for some collations but not others. However, this patch
doesn't introduce any new sortsupport functions that work this way;
it merely revises the code so that future patches may do so.
Patch by me. Review by Peter Geoghegan.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
parameter.
When more than one setting entry for the same parameter exists in the
configuration file, PostgreSQL uses only the entry appearing last in the
configuration file scan. Since the other entries are not used,
ParseConfigFp() doesn't need to process them, but previously it did.
This caused the configuration file scan to detect invalid settings in
unused entries (e.g., the existence of multiple entries for a
PGC_POSTMASTER parameter) and log messages complaining about them.
This commit changes the configuration file scan so that it processes
only the last entry of each parameter.
Note that when multiple entries of the same parameter exist both in
postgresql.conf and postgresql.auto.conf, unused entries in
postgresql.conf are still processed, but only at postmaster startup.
The problem has existed for a long time, but users are more likely to
encounter it since 9.4, where the ALTER SYSTEM command was introduced.
So back-patch to 9.4.
Amit Kapila, slightly modified by me. Per report from Christoph Berg.
|
| |
|
|
|
|
|
|
|
| |
These messages are new in 9.4, which hasn't been released yet, so
back-patch to REL9_4_STABLE.
Daniele Varrazzo
|
|
|
|
|
|
|
|
|
|
|
| |
log_newpage is used by many indexams, in addition to heap, but for
historical reasons it's always been part of the heapam rmgr. Starting with
9.3, we have another WAL record type for logging an image of a page,
XLOG_FPI. Simplify things by moving log_newpage and log_newpage_buffer to
xlog.c, and switch to using the XLOG_FPI record type.
Bump the WAL version number because the code to replay the old HEAP_NEWPAGE
records is removed.
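For reference, a typical call site now looks roughly like this (a
sketch; the 9.4-era signature is assumed):

    recptr = log_newpage(&rel->rd_node, MAIN_FORKNUM, blkno, page, true);
    PageSetLSN(page, recptr);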
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When autovacuum is nominally off, we will still launch autovac workers
to vacuum tables that are at risk of XID wraparound. But after we'd done
that, an autovac worker would proceed to autovacuum every table in the
targeted database that met the usual thresholds for autovacuuming.
This is at best pretty unexpected; at worst it delays response to the
wraparound threat. Fix it so that if autovacuum is nominally off, we
*only* do forced vacuums and not any other work.
Per gripe from Andrey Zhidenkov. This has been like this all along,
so back-patch to all supported branches.
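A sketch of the tightened logic (names illustrative, not the committed
code):

    /*
     * With autovacuum nominally off, skip everything except tables
     * picked for anti-wraparound (forced) vacuuming.
     */
    if (!AutoVacuumingActive() && !tab->at_wraparound)
        continue;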
|
|
|
|
|
|
|
|
|
|
| |
InitProcess() relies on IsBackgroundWorker to decide whether the PGPROC
for a new backend should be taken from ProcGlobal's freeProcs or from
bgworkerFreeProcs. In EXEC_BACKEND builds, InitProcess() is called
sooner than in non-EXEC_BACKEND builds, and IsBackgroundWorker wasn't
getting initialized soon enough.
Report by Noah Misch. Diagnosis and fix by me.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 0ac5ad5134f2 removed an optimization in multixact.c that skipped
fetching members of MultiXactId that were older than our
OldestVisibleMXactId value. The reason this was removed is that it is
possible for multixacts that contain updates to be older than that
value. However, if the caller is certain that the multi does not
contain an update (because the infomask bits say so), it can pass this
info down to GetMultiXactIdMembers, enabling it to use the old
optimization.
Pointed out by Andres Freund in 20131121200517.GM7240@alap2.anarazel.de
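A hedged sketch of the caller side (the extra parameter's name and
position are assumptions):

    /*
     * The infomask tells us the multi holds only locks, so members
     * older than OldestVisibleMXactId can be skipped again.
     */
    nmembers = GetMultiXactIdMembers(multi, &members, false,
                                     HEAP_XMAX_IS_LOCKED_ONLY(infomask));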
|
|
|
|
|
|
|
|
|
|
| |
Testing for abortedness of a multixact member that's being frozen is
unnecessary: we only need to know whether the transaction is still in
progress or committed to determine whether it must be kept or not. This
let us simplify the code a bit and avoid a useless TransactionIdDidAbort
test.
Suggested by Andres Freund awhile back.
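A sketch of the simplified test (illustrative):

    if (TransactionIdIsCurrentTransactionId(xid) ||
        TransactionIdIsInProgress(xid) ||
        TransactionIdDidCommit(xid))
    {
        /* member must be kept */
    }
    else
    {
        /* aborted or crashed: removable, no TransactionIdDidAbort() call */
    }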
|
|
|
|
|
| |
Pointed out by Tom Lane. Backpatch to 9.4; the code was structured
differently in earlier branches and didn't have this mistake.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There were several oversights in recovery code where COMMIT/ABORT PREPARED
records were ignored:
* pg_last_xact_replay_timestamp() (wasn't updated for 2PC commits)
* recovery_min_apply_delay (2PC commits were applied immediately)
* recovery_target_xid (recovery would not stop if the XID used 2PC)
The first of those was reported by Sergiy Zuban in bug #11032, analyzed by
Tom Lane and Andres Freund. The bug was always there, but was masked before
commit d19bd29f07aef9e508ff047d128a4046cc8bc1e2, because COMMIT PREPARED
always created an extra regular transaction that was WAL-logged.
Backpatch to all supported versions (older versions didn't have all the
features and therefore didn't have all of the above bugs).
|
|
|
|
|
|
|
| |
Since commit 2d00190495b22e0d0ba351b2cda9c95fb2e3d083, fork numbers are
defined in relpath.h not relfilenode.h.
Fabrízio de Royes Mello
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This return code is possible wherever we pass bAlertable = TRUE; it
arises when Windows caused the current thread to run an "I/O completion
routine" or an "asynchronous procedure call". PostgreSQL does not
provoke either of those Windows facilities, hence this bug remaining
largely unnoticed, but other local code might do so. Due to a shortage
of complaints, no back-patch for now.
Per report from Shiv Shivaraju Gowda, this bug can cause
PGSemaphoreLock() to PANIC. The bug can also cause select() to report
timeout expiration too early, which might confuse pgstat_init() and
CheckRADIUSAuth().
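A sketch of the defensive pattern (illustrative; a complete fix would
also adjust the remaining timeout before retrying):

    do
    {
        rc = WaitForMultipleObjectsEx(nevents, events, FALSE,
                                      timeout, TRUE);  /* bAlertable */
    } while (rc == WAIT_IO_COMPLETION);  /* an APC ran; just wait again */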
|
|
|
|
|
|
|
|
|
| |
shm_mq_send_bytes didn't invariably initialize *bytes_written before
returning, which would cause shm_mq_send to read from uninitialized
memory and add the value it found there to mqh->mqh_partial_bytes.
This could cause the next attempt to send a message via the queue to
fail an assertion (if the queue was detached) or copy data from a
garbage pointer value into the queue (if non-blocking mode was in use).
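The shape of the fix, as a sketch (the signature shown is an
assumption):

    static shm_mq_result
    shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
                      bool nowait, Size *bytes_written)
    {
        *bytes_written = 0;     /* set on every path, not just success */
        /* ... */
    }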
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Nothing in the checkpointer calls InitXLOGAccess(), so WALInsertLocks
never got initialized there. Without EXEC_BACKEND, it works anyway
because the correct value is inherited from the postmaster, but
with EXEC_BACKEND we've got a problem. The problem appears to have
been introduced by commit 68a2e52bbaf98f136a96b3a0d734ca52ca440a95.
To fix, move the relevant initialization steps from InitXLOGAccess()
to XLOGShmemInit(), making this more parallel to what we do
elsewhere.
Amit Kapila
|
|
|
|
|
|
|
|
|
|
|
|
| |
Ephemeral slots - slots that shouldn't survive database restarts -
weren't properly cleaned up after an immediate/crash restart. They were
ignored in the sense that they weren't restored into memory and thus
didn't cause unwanted resource retention; but they prevented a new
slot with the same name from being created.
Now ephemeral slots are fully removed during startup.
Backpatch to 9.4 where replication slots were added.
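A sketch of the new startup behavior (illustrative; the variable names
are assumptions):

    /*
     * While restoring slots from pg_replslot, remove anything that
     * isn't marked persistent, freeing the name for reuse.
     */
    if (cp.slotdata.persistency != RS_PERSISTENT)
    {
        snprintf(path, sizeof(path), "pg_replslot/%s", name);
        if (!rmtree(path, true))
            elog(WARNING, "could not remove directory \"%s\"", path);
        continue;
    }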
|
|
|
|
| |
Spotted by Tom Lane.
|
|
|
|
|
|
|
|
|
|
|
| |
The executor has thrown errors for negative OFFSET values since 8.4 (see
commit bfce56eea45b1369b7bb2150a150d1ac109f5073), but in a moment of brain
fade I taught the planner that OFFSET with a constant negative value was a
no-op (commit 1a1832eb085e5bca198735e5d0e766a3cb61b8fc). Reinstate the
former behavior by only discarding OFFSET with a value of exactly 0. In
passing, adjust a planner comment that referenced the ancient behavior.
Back-patch to 9.3 where the mistake was introduced.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
get_raw_page tried to validate the supplied block number against
RelationGetNumberOfBlocks(), which of course is only right when
accessing the main fork. In most cases, the main fork is longer
than the others, so that the check was too weak (allowing a
lower-level error to be reported, but no real harm to be done).
However, very small tables could have an FSM larger than their heap,
in which case the mistake prevented access to some FSM pages.
Per report from Torsten Foertsch.
In passing, make the bad-block-number error into an ereport not elog
(since it's certainly not an internal error); and fix sloppily
maintained comment for RelationGetNumberOfBlocksInFork.
This has been wrong since we invented relation forks, so back-patch
to all supported branches.
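A sketch of the corrected check (illustrative):

    if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("block number %u is out of range for relation \"%s\"",
                        blkno, RelationGetRelationName(rel))));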
|
|
|
|
|
| |
This makes the behaviour consistent with the json parser, other
json-generating functions, and the JSON standards.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In commit 631dc390f49909a5c8ebd6002cfb2bcee5415a9d, we started to handle
simple numeric timezone offsets via the zic library instead of the old
CTimeZone/HasCTZSet kluge. However, we overlooked the fact that the zic
code will reject UTC offsets exceeding a week (which seems a bit arbitrary,
but not because it's too tight ...). This led to possibly setting
session_timezone to NULL, which results in crashes in most timezone-related
operations as of 9.4, and crashes in a small number of places even before
that. So check for NULL return from pg_tzset_offset() and report an
appropriate error message. Per bug #11014 from Duncan Gillis.
Back-patch to all supported branches, like the previous patch.
(Unfortunately, as of today that no longer includes 8.4.)
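The added check, sketched (the error wording is illustrative):

    pg_tz      *tz = pg_tzset_offset(gmtoffset);

    if (tz == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("UTC timezone offset is out of range")));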
|
|
|
|
| |
Per recommendation from Andres.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In commit a61daa14d56867e90dc011bbba52ef771cea6770, we fixed pg_upgrade so
that it would install sane relminmxid and datminmxid values, but that does
not cure the problem for installations that were already pg_upgraded to
9.3; they'll initially have "1" in those fields. This is not a big problem
so long as 1 is "in the past" compared to the current nextMultiXact
counter. But if an installation were more than halfway to the MXID wrap
point at the time of upgrade, 1 would appear to be "in the future" and
that would effectively disable tracking of oldest MXIDs in those
tables/databases, until such time as the counter wrapped around.
While in itself this isn't worse than the situation pre-9.3, where we did
not manage MXID wraparound risk at all, the consequences of premature
truncation of pg_multixact are worse now; so we ought to make some effort
to cope with this. We discussed advising users to fix the tracking values
manually, but that seems both very tedious and very error-prone.
Instead, this patch adopts two amelioration rules. First, a relminmxid
value that is "in the future" is allowed to be overwritten with a
full-table VACUUM's actual freeze cutoff, ignoring the normal rule that
relminmxid should never go backwards. (This essentially assumes that we
have enough defenses in place that wraparound can never occur anymore,
and thus that a value "in the future" must be corrupt.) Second, if we see
any "in the future" values then we refrain from truncating pg_clog and
pg_multixact. This prevents loss of clog data until we have cleaned up
all the broken tracking data. In the worst case that could result in
considerable clog bloat, but in practice we expect that relfrozenxid-driven
freezing will happen soon enough to fix the problem before clog bloat
becomes intolerable. (Users could do manual VACUUM FREEZEs if not.)
Note that this mechanism cannot save us if there are already-wrapped or
already-truncated-away MXIDs in the table; it's only capable of dealing
with corrupt tracking values. But that's the situation we have with the
pg_upgrade bug.
For consistency, apply the same rules to relfrozenxid/datfrozenxid. There
are not known mechanisms for these to get messed up, but if they were, the
same tactics seem appropriate for fixing them.
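A sketch of the first amelioration rule (simplified; names
illustrative):

    /*
     * Normally relminmxid must never go backwards, but a value that
     * appears to be in the future is assumed corrupt and overwritten
     * with the vacuum's actual freeze cutoff.
     */
    if (MultiXactIdPrecedes(pgcform->relminmxid, newminmulti) ||
        MultiXactIdPrecedes(ReadNextMultiXactId(), pgcform->relminmxid))
        pgcform->relminmxid = newminmulti;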
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a view has a function-returning-composite in FROM, and there are
some dropped columns in the underlying composite type, ruleutils.c
printed junk in the column alias list for the reconstructed FROM entry.
Before 9.3, this was prevented by doing get_rte_attribute_is_dropped
tests while printing the column alias list; but that solution is not
currently available to us for reasons I'll explain below. Instead,
check for empty-string entries in the alias list, which can only exist
if that column position had been dropped at the time the view was made.
(The parser fills in empty strings to preserve the invariant that the
aliases correspond to physical column positions.)
While this is sufficient to handle the case of columns dropped before
the view was made, we have still got issues with columns dropped after
the view was made. In particular, the view could contain Vars that
explicitly reference such columns! The dependency machinery really
ought to refuse the column drop attempt in such cases, as it would do
when trying to drop a table column that's explicitly referenced in
views. However, we currently neglect to store dependencies on columns
of composite types, and fixing that is likely to be too big to be
back-patchable (not to mention that existing views in existing databases
would not have the needed pg_depend entries anyway). So I'll leave that
for a separate patch.
Pre-9.3, ruleutils would print such Vars normally (with their original
column names) even though it suppressed their entries in the RTE's
column alias list. This is certainly bogus, since the printed view
definition would fail to reload, but at least it didn't crash. However,
as of 9.3 the printed column alias list is tightly tied to the names
printed for Vars; so we can't treat columns as dropped for one purpose
and not dropped for the other. This is why we can't just put back the
get_rte_attribute_is_dropped test: it results in an assertion failure
if the view in fact contains any Vars referencing the dropped column.
Once we've got dependencies preventing such cases, we'll probably want
to do it that way instead of relying on the empty-string test used here.
This fix turned up a very ancient bug in outfuncs/readfuncs, namely
that T_String nodes containing empty strings were not dumped/reloaded
correctly: the node was printed as "<>" which is read as a string
value of <>. Since (per SQL) we disallow empty-string identifiers,
such nodes don't occur normally, which is why we'd not noticed.
(Such nodes aren't used for literal constants, just identifiers.)
Per report from Marc Schablewski. Back-patch to 9.3 which is where
the rule printing behavior changed. The dangling-variable case is
broken all the way back, but that's not what his complaint is about.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If pg_regcomp failed after having invoked markst/cleanst, it would leak any
"struct subre" nodes it had created. (We've already detected all regex
syntax errors at that point, so the only likely causes of later failure
would be query cancel or out-of-memory.) To fix, make sure freesrnode
knows the difference between the pre-cleanst and post-cleanst cleanup
procedures. Add some documentation of this less-than-obvious point.
Also, newlacon did the wrong thing with an out-of-memory failure from
realloc(), so that the previously allocated array would be leaked.
Both of these are pretty low-probability scenarios, but a bug is a bug,
so patch all the way back.
Per bug #10976 from Arthur O'Dwyer.
|
|
|
|
|
|
|
|
|
|
|
| |
pg_ctl will log to the Windows event log when it is running as a service,
which is the primary way of running PostgreSQL on Windows. This option
makes it possible to specify which event source to use for this, in order
to separate different instances. The server logging itself is still controlled
by the regular logging parameters, including a separate setting for the event
source. The parameter to pg_ctl only controls the logging from pg_ctl itself.
MauMau, review in many iterations by Amit Kapila and me.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The consistent function contained several bugs:
* The "if (which2) { ... }" block was broken. It compared the argument's
lower bound against centroid's upper bound, while it was supposed to compare
the argument's upper bound against the centroid's lower bound (the comment
was correct, code was wrong). Also, it cleared bits in the "which1"
variable, while it was supposed to clear bits in "which2".
* If the argument's upper bound was equal to the centroid's lower bound, we
descended to both halves (= all quadrants). That's unnecessary, searching
the right quadrants is sufficient. This didn't lead to incorrect query
results, but was clearly wrong, and slowed down queries unnecessarily.
* In the case that the argument's lower bound is adjacent to the centroid's
upper bound, we likewise don't need to visit all quadrants, per the same
reasoning as the previous point.
* The code where we compare the previous centroid with the current centroid
should match the code where we compare the current centroid with the
argument. The point of that code is to redo the calculation done in the
previous level, to see if we were supposed to traverse left or right (or up
or down), and if we actually did. If we moved in a different direction,
then we know there are no matches for that bound.
Refactor the code and add comments to make it more readable and easier to
reason about.
Backpatch to 9.3 where SP-GiST support for range types was introduced.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We can remove a left join to a relation if the relation's output is
provably distinct for the columns involved in the join clause (considering
only equijoin clauses) and the relation supplies no variables needed above
the join. Previously, the join removal logic could only prove distinctness
by reference to unique indexes of a table. This patch extends the logic
to consider subquery relations, wherein distinctness might be proven by
reference to GROUP BY, DISTINCT, etc.
We actually already had some code to check that a subquery's output was
provably distinct, but it was hidden inside pathnode.c; which was a pretty
bad place for it really, since that file is mostly boilerplate Path
construction and comparison. Move that code to analyzejoins.c, which is
arguably a more appropriate location, and is certainly the site of the
new usage for it.
David Rowley, reviewed by Simon Riggs
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Trying to reassign objects owned by a user that had text search
dictionaries or configurations used to fail with:
ERROR: unexpected classid 3600
or
ERROR: unexpected classid 3602
Fix by adding cases for those object types in a switch in pg_shdepend.c.
Both REASSIGN OWNED and text search objects go back all the way to 8.1,
so backpatch to all supported branches. In 9.3 the alter-owner code was
made generic, so the required change in recent branches is pretty
simple; however, for 9.2 and older ones we need some additional
reshuffling to enable specifying objects by OID rather than name.
Text search templates and parsers are not owned objects, so there's no
change required for them.
Per bug #9749 reported by Michal Novotný
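A sketch of the 9.3-style fix (generic owner-change path; the helper
name is an assumption):

    case TSDictionaryRelationId:        /* classid 3600 */
    case TSConfigRelationId:            /* classid 3602 */
        /* handled by the generic owner-change path */
        AlterObjectOwner_internal(catalog, sdepForm->objid, newrole);
        break;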
|
|
|
|
|
|
| |
Both the psql banner and the connection logging already included the
SSL status, cipher and bit length; this adds information about whether
compression is on or off.
|
| |
|
|
|
|
|
| |
Also update one place where the wal_level "logical" was not added to an
error message.
|
|
|
|
|
|
|
| |
Per discussion after a gripe from me in
http://www.postgresql.org/message-id/20140611194633.GH18688@eldon.alvh.no-ip.org
Jaime Casanova
|
|
|
|
|
|
|
|
| |
Prominent binaries already had this metadata. A handful of minor
binaries, such as pg_regress.exe, still lack it; efforts to eliminate
such exceptions are welcome.
Michael Paquier, reviewed by MauMau.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ANALYZE.
EXPLAIN ANALYZE shows the numbers of exact/lossy blocks that a bitmap
heap scan processes. But, previously, when those numbers were both zero,
it displayed only the prefix "Heap Blocks:" in TEXT output format. This
is strange and could confuse users, so this commit suppresses that
unnecessary output.
Backpatch to 9.4 where EXPLAIN ANALYZE was changed to display such
information.
Etsuro Fujita
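A sketch of the TEXT-format change (illustrative):

    if (planstate->exact_pages > 0 || planstate->lossy_pages > 0)
    {
        appendStringInfoSpaces(es->str, es->indent * 2);
        appendStringInfoString(es->str, "Heap Blocks:");
        if (planstate->exact_pages > 0)
            appendStringInfo(es->str, " exact=%ld", planstate->exact_pages);
        if (planstate->lossy_pages > 0)
            appendStringInfo(es->str, " lossy=%ld", planstate->lossy_pages);
        appendStringInfoChar(es->str, '\n');
    }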
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not sufficient,
because heap_multi_insert() first toasts all the passed-in rows and then
emits several *_multi_insert records, one for each page it fills with
tuples.
Add an XLOG_HEAP_LAST_MULTI_INSERT flag, set in
xl_heap_multi_insert->flags, denoting that this multi_insert record is
the last one emitted by a heap_multi_insert() call. Then use that flag
in decode.c to set clear_toast_afterwards only in the right situation.
Expand the number of rows inserted via COPY in the corresponding
regression test to make sure that more than one heap page is filled
with tuples by one heap_multi_insert() call.
Backpatch to 9.4 like the previous commit.
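A sketch of the producer side (placement illustrative):

    /*
     * Mark the final record emitted for this heap_multi_insert() call,
     * so logical decoding knows when the whole batch is complete.
     */
    if (ndone + nthispage == ntuples)
        xlrec->flags |= XLOG_HEAP_LAST_MULTI_INSERT;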
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query. But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run. This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.
This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem. It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
|