If a potential equivalence clause references a variable from the nullable
side of an outer join, the planner needs to take care that derived clauses
are not pushed to below the outer join; else they may use the wrong value
for the variable. (The problem arises only with non-strict clauses, since
if an upper clause can be proven strict then the outer join will get
simplified to a plain join.) The planner attempted to prevent this type
of error by checking that potential equivalence clauses aren't
outerjoin-delayed as a whole, but actually we have to check each side
separately, since the two sides of the clause will get moved around
separately if it's treated as an equivalence. Bugs of this type can be
demonstrated as far back as 7.4, even though releases before 8.3 had only
a very ad-hoc notion of equivalence clauses.
In addition, we neglected to account for the possibility that such clauses
might have nonempty nullable_relids even when not outerjoin-delayed; so the
equivalence-class machinery lacked logic to compute correct nullable_relids
values for clauses it constructs. This oversight was harmless before 9.2
because we were only using RestrictInfo.nullable_relids for OR clauses;
but as of 9.2 it could result in pushing constructed equivalence clauses
to incorrect places. (This accounts for bug #7604 from Bill MacArthur.)
Fix the first problem by adding a new test check_equivalence_delay() in
distribute_qual_to_rels, and fix the second one by adding code in
equivclass.c and called functions to set correct nullable_relids for
generated clauses. Although I believe the second part of this is not
currently necessary before 9.2, I chose to back-patch it anyway, partly to
keep the logic similar across branches and partly because it seems possible
we might find other reasons why we need valid values of nullable_relids in
the older branches.
Add regression tests illustrating these problems. In 9.0 and up, also
add test cases checking that we can push constants through outer joins,
since we've broken that optimization before and I nearly broke it again
with an overly simplistic patch for this problem.
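
As a minimal sketch of the hazard (hypothetical tables; the non-strict
COALESCE keeps the outer join from being simplified to a plain join):

    CREATE TABLE a (id int, y int);
    CREATE TABLE b (id int, x int);
    -- The equality references b.x from the nullable side of the join.
    -- If a clause derived from it were pushed down below the LEFT JOIN,
    -- it would see b.x before null-extension and could filter wrongly.
    SELECT *
      FROM a LEFT JOIN b ON a.id = b.id
     WHERE COALESCE(b.x, 0) = a.y;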
If an SMgrRelation is not "owned" by a relcache entry, don't allow it to
live past transaction end. This design allows the same SMgrRelation to be
used for blind writes of multiple blocks during a transaction, but ensures
that we don't hold onto such an SMgrRelation indefinitely. Because an
SMgrRelation typically corresponds to open file descriptors at the fd.c
level, leaving it open when there's no corresponding relcache entry can
mean that we prevent the kernel from reclaiming deleted disk space.
(While CacheInvalidateSmgr messages usually fix that, there are cases
where they're not issued, such as DROP DATABASE. We might want to add
some more sinval messaging for that, but I'd be inclined to keep this
type of logic anyway, since allowing VFDs to accumulate indefinitely
for blind-written relations doesn't seem like a good idea.)
This code replaces a previous attempt towards the same goal that proved
to be unreliable. Back-patch to 9.1 where the previous patch was added.
This reverts commit fba105b1099f4f5fa7283bb17cba6fed2baa8d0c.
That approach had problems with the smgr-level state not tracking what
we really want to happen, and with the VFD-level state not tracking the
smgr-level state very well either. In consequence, it was still possible
to hold kernel file descriptors open for long-gone tables (as in recent
report from Tore Halset), and yet there were also cases of FDs being closed
undesirably soon. A replacement implementation will follow.
If the postmaster changed postmaster.pid while pg_ctl was reading it, pg_ctl
could overrun the buffer it allocated for the file. Fix by reading the
whole file to memory with one read() call.
initdb contains an identical copy of the readfile() function, but the files
that initdb reads are static, not modified concurrently. Nevertheless, add
a simple bounds-check there, if only to silence static analysis tools.
Per report from Dave Vitek. Backpatch to all supported branches.
In the previous coding, new backend processes would attempt to create their
self-pipe during the OwnLatch call in InitProcess. However, pipe creation
could fail if the kernel is short of resources; and the system does not
recover gracefully from a FATAL error right there, since we have armed the
dead-man switch for this process and not yet set up the on_shmem_exit
callback that would disarm it. The postmaster then forces an unnecessary
database-wide crash and restart, as reported by Sean Chittenden.
There are various ways we could rearrange the code to fix this, but the
simplest and sanest seems to be to split out creation of the self-pipe into
a new function InitializeLatchSupport, which must be called from a place
where failure is allowed. For most processes that gets called in
InitProcess or InitAuxiliaryProcess, but processes that don't call either
but still use latches need their own calls.
Back-patch to 9.1, which has only a part of the latch logic that 9.2 and
HEAD have, but nonetheless includes this bug.
When hashing a subplan like "WHERE (a, b) NOT IN (SELECT x, y FROM ...)",
findPartialMatch() attempted to match rows using the hashtable's internal
equality operators, which of course are for x and y's datatypes. What we
need to use are the potentially cross-type operators for a=x, b=y, etc.
Failure to do that leads to wrong answers or even crashes. The scope for
problems is limited to cases where we have different types with compatible
hash functions (else we'd not be using a hashed subplan), but for example
int4 vs int8 can cause the problem.
Per bug #7597 from Bo Jensen. This has been wrong since the hashed-subplan
code was written, so patch all the way back.
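
An illustrative query shape (hypothetical tables), where int4 and int8 have
compatible hash functions but the per-column comparisons must use the
cross-type int4 = int8 operators:

    CREATE TABLE t4 (a int4, b int4);
    CREATE TABLE t8 (x int8, y int8);
    -- The hashtable is built over (x, y); when nulls force a partial
    -- match, the comparisons a = x and b = y need the cross-type
    -- operators, not the hashtable's internal int8 = int8 ones.
    SELECT * FROM t4
     WHERE (a, b) NOT IN (SELECT x, y FROM t8);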
Building a shlib on AIX requires use of the mkldexport.sh script, but we
failed to install that, preventing its use from non-source-tree contexts.
Also, Makefile.aix had the wrong idea about where to find the installed
copy of the postgres.imp symbol file used by AIX.
Per report from John Pierce. Patch all the way back, since this has been
broken since the beginning of PGXS.
I found that these functions tend to return -1 while leaving an empty error
message string in the PGconn, if they suffer some kind of I/O error on the
file. The reason is that lo_close, which thinks it's executed a perfectly
fine SQL command, clears the errorMessage. The minimum-change workaround
is to reorder operations here so that we don't fill the errorMessage until
after lo_close.
lo_export returns -1, not zero, on failure.
Tomonaru Katsumata
get_data() used to continue only on finding an array boundary, instead of
continuing whenever the next character was not a boundary, so it was unable
to read any element after the first.
Apparently this was considered in the original code (see commit
cec3b0a9) but I failed to notice that such entries would always be
skipped by the database check at the start of the loop.
Per bugs #7578 by Nikolay, #6116 by tushar.qa@gmail.com.
This affects date_in() and a couple of other functions that use DecodeDate().
Hitoshi Harada
The tar output module did some very ugly and ultimately incorrect hacking
on COPY commands to try to get them to work in the context of restoring a
deconstructed tar archive. In particular, it would fail altogether for
table names containing any upper-case characters, since it smashed the
command string to lower-case before modifying it (and, just to add insult
to injury, did that in a way that would fail in multibyte encodings).
I don't see any particular value in being flexible about the case of the
command keywords, since the string will just have been created by
dumpTableData, so let's get rid of the whole case-folding thing.
Also, it doesn't seem to meet the POLA for the script to restore data only
in COPY mode, so add \i commands to make it have comparable behavior in
--inserts mode.
Noted while looking at the tar-output code in connection with Brian
Weaver's patch.
Back-patch portions of commit 05b555d12bc2ad0d581f48a12b45174db41dc10d.
There doesn't seem to be any reason not to fix pg_basebackup fully, but
we can't change pg_dump's "magic" string without breaking older versions
of pg_restore. Instead, just patch pg_restore to accept either version
of the magic string, in hopes of avoiding compatibility problems when
9.3 comes out. I also fixed pg_dump to write the correct 2-block EOF
marker, since that won't create a compatibility problem with pg_restore
and it could help with some versions of tar.
Brian Weaver and Tom Lane
DST law changes in Fiji.
Recovery code documents clearly that a shutdown checkpoint is executed at
end of recovery: a shutdown checkpoint WAL record is written, but the buffer
manager had been altered to treat end of recovery as a normal checkpoint.
This bug exacerbates the bufmgr relpersistence bug.
Bug spotted by Andres Freund, patch by me.
Back-patch commits 9afc6481117d2dd936e752da0424a2b6b05f6459 and
b8fbbcf37f22c5e8361da939ad0fc4be18a34ca9. The first of these is really
a minor code cleanup to save a few cycles, but it turns out to provide
a workaround for the misoptimization problem described in bug #7516.
The second commit adds a regression test case.
Back-patch the fix to all active branches. The test case only works
as far back as 9.0, because it relies on plpgsql which isn't installed
by default before that. (I didn't have success modifying it into an
all-plperl form that still provoked a crash, though this may just reflect
my lack of Perl-fu.)
This can result in buffers failing to be properly flushed at
checkpoint time, leading to data loss.
Report, diagnosis, and patch by Jeff Davis.
I made multiple errors in commit 97532f7c29468010b87e40a04f8daa3eb097f654,
stemming mostly from failure to think about the available frequency data
as being element frequencies not value frequencies (so that occurrences of
different elements are not mutually exclusive). This led to sillinesses
such as estimating that "word" would match more rows than "word:*".
The choice to clamp to a minimum estimate of DEFAULT_TS_MATCH_SEL also
seems pretty ill-considered in hindsight, as it would frequently result in
an estimate much larger than the available data suggests. We do need some
sort of clamp, since a pattern not matching any of the MCELEMs probably
still needs a selectivity estimate of more than zero. I chose instead to
clamp to at least what a non-MCELEM word would be estimated as, preserving
the property that "word:*" doesn't get an estimate less than plain "word",
whether or not the word appears in MCELEM.
Per investigation of a gripe from Bill Martin, though I suspect that his
example case actually isn't even reaching the erroneous code.
Back-patch to 9.1 where this code was introduced.
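
The changed estimates can be observed with EXPLAIN on a hypothetical table
docs with a tsvector column tsv; after the fix the prefix form should never
get a smaller row estimate than the plain word:

    EXPLAIN SELECT * FROM docs WHERE tsv @@ to_tsquery('word');
    EXPLAIN SELECT * FROM docs WHERE tsv @@ to_tsquery('word:*');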
validate_plperl_function() supposed that it could free an old
plperl_proc_desc struct immediately upon detecting that it was stale.
However, if a plperl function is called recursively, this could result
in deleting the struct out from under an outer invocation, leading to
misbehavior or crashes. Add a simple reference-count mechanism to
ensure that such structs are freed only when the last reference goes
away.
Per investigation of bug #7516 from Marko Tiikkaja. I am not certain
that this error explains his report, because he says he didn't have
any recursive calls --- but it's hard to see how else it could have
crashed right there. In any case, this definitely fixes some problems
in the area.
Back-patch to all active branches.
Investigation shows that some intermittent build failures in ecpg are the
result of a gmake bug that was reported quite some time ago:
http://savannah.gnu.org/bugs/?30653
Preventing parallel builds of the ecpg subdirectories seems to dodge the
bug. Per yesterday's pgsql-hackers discussion, there are some other things
in the subdirectory makefiles that seem rather unsafe for parallel builds
too, but there's little point in fixing them as long as we have to work
around a make bug.
Back-patch to 9.1; parallel builds weren't very well supported before
that anyway.
The planner previously assumed that parameter Vars having the same absolute
query level, varno, and varattno could safely be assigned the same runtime
PARAM_EXEC slot, even though they might be different Vars appearing in
different subqueries. This was (probably) safe before the introduction of
CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can
be executed during execution of some other subquery, causing the lifespan
of Params at the same syntactic nesting level as the CTE to overlap with
use of the same slots inside the CTE. In 9.1 we created additional hazards
by using the same parameter-assignment technology for nestloop inner scan
parameters, but it was broken before that, as illustrated by the added
regression test.
To fix, restructure the planner's management of PlannerParamItems so that
items having different semantic lifespans are kept rigorously separated.
This will probably result in complex queries using more runtime PARAM_EXEC
slots than before, but the slots are cheap enough that this hardly matters.
Also, stop generating PlannerParamItems containing Params for subquery
outputs: all we really need to do is reserve the PARAM_EXEC slot number,
and that now only takes incrementing a counter. The planning code is
simpler and probably faster than before, as well as being more correct.
Per report from Vik Reykja.
Back-patch of commit 46c508fbcf98ac334f1e831d21021d731c882fbb into all
branches that support WITH.
This affects initdb, clusterdb, reindexdb, and vacuumdb in master
and 9.2; in earlier branches, only initdb is affected.
Give the correct name of the GUC parameter being complained of.
Also, emit a more suitable SQLSTATE (INVALID_PARAMETER_VALUE,
not the default INTERNAL_ERROR).
Gurjeet Singh, errcode adjustment by me
Perl, for some unaccountable reason, believes it's a good idea to reset
SIGFPE handling to SIG_IGN. Which wouldn't be a good idea even if it
worked; but on some platforms (Linux at least) it doesn't work at all,
instead resulting in forced process termination if the signal occurs.
Given the lack of other complaints, it seems safe to assume that Perl
never actually provokes SIGFPE and so there is no value in the setting
anyway. Hence, reset it to our normal handler after initializing Perl.
Report, analysis and patch by Andres Freund.
We previously supposed that any given platform would supply both or neither
of these functions, so that one configure test would be sufficient. It now
appears that at least on AIX this is not the case ... which is likely an
AIX bug, but nonetheless we need to cope with it. So use separate tests.
Per bug #6758; thanks to Andrew Hastie for doing the followup testing
needed to confirm what was happening.
Backpatch to 9.1, where we began using these functions.
This back-ports commits c8ba697a4bdb934f0c51424c654e8db6133ea255 and
e5db11c5582b469c04a11f217a0f32c827da5dd7, which fix one definite and one
speculative bug in gistchoose, and make the code a lot more intelligible as
well. In 9.2 only, this also affects the largely-copied-and-pasted logic
in gistRelocateBuildBuffersOnSplit.
The impact of the bugs was that the functions might make poor decisions
as to which index tree branch to push a new entry down into, resulting in
GiST index bloat and poor performance. The fixes rectify these decisions
for future insertions, but a REINDEX would be needed to clean up any
existing index bloat.
Alexander Korotkov, Robert Haas, Tom Lane
Per note from Peter Eisentraut.
This is a backport of commit b85427f2276d02756b558c0024949305ea65aca5.
Per discussion of bug #4958. Some of these fixes probably need to be
back-patched further, but I'm just doing this much for now.
The GUC check hooks for transaction_read_only and transaction_isolation
tried to check RecoveryInProgress(), so as to disallow setting read/write
mode or serializable isolation level (respectively) in hot standby
sessions. However, GUC check hooks can be called in many situations where
we're not connected to shared memory at all, resulting in a crash in
RecoveryInProgress(). Among other cases, this results in EXEC_BACKEND
builds crashing during child process start if default_transaction_isolation
is serializable, as reported by Heikki Linnakangas. Protect those calls
by silently allowing any setting when not inside a transaction, which is
okay anyway since these GUCs are always reset at start of transaction.
Also, add a check to GetSerializableTransactionSnapshot() to complain
if we are in hot standby. We need that check despite the one in
check_XactIsoLevel() because default_transaction_isolation could be
serializable. We don't want to complain any sooner than this in such
cases, since that would prevent running transactions at all in such a
state; but a transaction can be run, if SET TRANSACTION ISOLATION is done
before setting a snapshot. Per report some months ago from Robert Haas.
Back-patch to 9.1, since these problems were introduced by the SSI patch.
Kevin Grittner and Tom Lane, with ideas from Heikki Linnakangas
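
For example, on a hot standby where default_transaction_isolation is
serializable, a transaction can still be run along these lines (hypothetical
table name):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- before any snapshot is set
    SELECT count(*) FROM some_table;
    COMMIT;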
If we revoke a grant option from some role X, but X still holds the option
via another grant, we should not recursively revoke the privilege from
role(s) Y that X had granted it to. This was supposedly fixed as one
aspect of commit 4b2dafcc0b1a579ef5daaa2728223006d1ff98e9, but I must not
have tested it, because in fact that code never worked: it forgot to shift
the grant-option bits back over when masking the bits being revoked.
Per bug #6728 from Daniel German. Back-patch to all active branches,
since this has been wrong since 8.0.
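
A sketch of the scenario in question (hypothetical roles u1, u2, x, y and
table t, with u1 the owner):

    -- as u1:
    GRANT SELECT ON t TO u2 WITH GRANT OPTION;
    GRANT SELECT ON t TO x WITH GRANT OPTION;
    -- as u2:
    GRANT SELECT ON t TO x WITH GRANT OPTION;
    -- as x:
    GRANT SELECT ON t TO y;
    -- as u1: this must not take SELECT away from y, since x still
    -- holds the grant option via u2's grant:
    REVOKE GRANT OPTION FOR SELECT ON t FROM x CASCADE;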
The previous coding essentially assumed that nodes would be rescanned in
the same order they were initialized in; or at least that the "leader" of
a group of CTEscans would be rescanned before any others were required to
execute. Unfortunately, that isn't even a little bit true. It's possible
to devise queries in which the leader isn't rescanned until other CTEscans
on the same CTE have run to completion, or even in which the leader never
gets a rescan call at all.
The fix makes the leader specially responsible only for initial creation
and final destruction of the tuplestore; rescan resets are now a
symmetrically shared responsibility. This means that we might reset the
tuplestore multiple times when restarting a plan subtree containing
multiple CTEscans; but resetting an already-empty tuplestore is cheap
enough that that doesn't seem like a problem.
Per report from Adam Mackler; the new regression test cases are based on
his example query.
Back-patch to 8.4 where CTE scans were introduced.
This situation creates a dependency loop that confuses pg_dump and probably
other things. Moreover, since the mental model is that the extension
"contains" schemas it owns, but "is contained in" its extschema (even
though neither is strictly true), having both true at once is confusing for
people too. So prevent the situation from being set up.
Reported and patched by Thom Brown. Back-patch to 9.1 where extensions
were added.
xml_parse() would attempt to fetch external files or URLs as needed to
resolve DTD and entity references in an XML value, thus allowing
unprivileged database users to attempt to fetch data with the privileges
of the database server. While the external data wouldn't get returned
directly to the user, portions of it could be exposed in error messages
if the data didn't parse as valid XML; and in any case the mere ability
to check existence of a file might be useful to an attacker.
The ideal solution to this would still allow fetching of references that
are listed in the host system's XML catalogs, so that documents can be
validated according to installed DTDs. However, doing that with the
available libxml2 APIs appears complex and error-prone, so we're not going
to risk it in a security patch that necessarily hasn't gotten wide review.
So this patch merely shuts off all access, causing any external fetch to
silently expand to an empty string. A future patch may improve this.
In HEAD and 9.2, also suppress warnings about undefined entities, which
would otherwise occur as a result of not loading referenced DTDs. Previous
branches don't show such warnings anyway, due to different error handling
arrangements.
Credit to Noah Misch for first reporting the problem, and for much work
towards a solution, though this simplistic approach was not his preference.
Also thanks to Daniel Veillard for consultation.
Security: CVE-2012-3489
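
The class of reference that is now shut off can be illustrated with an
external entity (hypothetical file path); after this patch the entity
silently expands to an empty string instead of fetching the file:

    SELECT XMLPARSE(DOCUMENT
      '<!DOCTYPE foo [<!ENTITY ext SYSTEM "file:///etc/passwd">]>
       <foo>&ext;</foo>');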
DST law changes in Morocco; Tokelau has relocated to the other side of
the International Date Line; and apparently Olson had Tokelau's GMT
offset wrong by an hour even before that.
There are also a large number of non-significant changes in this update.
Upstream took the opportunity to remove trailing whitespace, and the
SCCS-style version numbers on the individual files are gone too.
This command generated new pg_depend entries linking the index to the
constraint and the constraint to the table, which match the entries made
when a unique or primary key constraint is built de novo. However, it did
not bother to get rid of the entries linking the index directly to the
table. We had considered the issue when the ADD CONSTRAINT USING INDEX
patch was written, and concluded that we didn't need to get rid of the
extra entries. But this is wrong: ALTER COLUMN TYPE wasn't expecting such
redundant dependencies to exist, as reported by Hubert Depesz Lubaczewski.
On reflection it seems rather likely to break other things as well, since
there are many bits of code that crawl pg_depend for one purpose or
another, and most of them are pretty naive about what relationships they're
expecting to find. Fortunately it's not that hard to get rid of the extra
dependency entries, so let's do that.
Back-patch to 9.1, where ALTER TABLE ADD CONSTRAINT USING INDEX was added.
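
The sequence that exposed the problem looks roughly like this (hypothetical
names); the final ALTER misbehaved before the fix because of the leftover
index-to-table dependency entries:

    CREATE TABLE t (id int NOT NULL);
    CREATE UNIQUE INDEX t_id_idx ON t (id);
    ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_id_idx;
    ALTER TABLE t ALTER COLUMN id TYPE bigint;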
The limit should be the maximum number of connections excluding
autovacuum workers, not including them.
Add similar check for max_wal_senders, which should never be higher than
max_connections.
The directions explaining about running the prepared-transactions test
were not updated in commit ae55d9fbe3871a5e6309d9b91629f1b0ff2b8cba.
Dave Kerr, backpatched by Simon Riggs
We used to convert the unicode object directly to a string in the server
encoding by calling Python's PyUnicode_AsEncodedString function. In other
words, we used Python's routines to do the encoding. However, that has a
few problems. First of all, it required keeping a mapping table of Python
encoding names and PostgreSQL encodings. But the real killer was that Python
doesn't support EUC_TW and MULE_INTERNAL encodings at all.
Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's
encoding conversion functions to convert from UTF-8 to server encoding. We
were already doing the same in the other direction in PLyUnicode_FromString,
so this is more consistent, too.
Note: This makes SQL_ASCII behave more leniently. We used to map
SQL_ASCII to Python's 'ascii', which in Python means strict 7-bit ASCII
only, so you got an error if the Python string contained anything but pure
ASCII. You no longer get an error; you get the UTF-8 representation of the
string instead.
Backpatch to 9.0, where these conversions were introduced.
Jan Urbański
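
For instance (hypothetical function, Python 2 plpythonu), in a SQL_ASCII
database this used to fail with an encoding error and now returns the UTF-8
representation of the string:

    CREATE FUNCTION py_accent() RETURNS text AS $$
    return u'caf\xe9'
    $$ LANGUAGE plpythonu;
    SELECT py_accent();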
DecodeInterval() failed to honor the "range" parameter (the special SQL
syntax for indicating which fields appear in the literal string) if the
time was signed. This seems inappropriate, so make it work like the
not-signed case. The inconsistency was introduced in my commit
f867339c0148381eb1d01f93ab5c79f9d10211de, which as noted in its log message
was only really focused on making SQL-compliant literals work per spec.
Including a sign here is not per spec, but if we're going to allow it
then it's reasonable to expect it to work like the not-signed case.
Also, remove bogus setting of tmask, which caused subsequent processing to
think that what had been given was a timezone and not an hh:mm(:ss) field,
thus confusing checks for redundant fields. This seems to be an aboriginal
mistake in Lockhart's commit 2cf1642461536d0d8f3a1cf124ead0eac04eb760.
Add regression test cases to illustrate the changed behaviors.
Back-patch as far as 8.4, where support for spec-compliant interval
literals was added.
Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
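
For illustration: with the MINUTE TO SECOND qualifier, '02:03' is read as
minutes and seconds, and after this fix a signed value is read the same way
instead of being taken as hours and minutes:

    SELECT INTERVAL '02:03' MINUTE TO SECOND;   -- 2 minutes 3 seconds
    SELECT INTERVAL '-02:03' MINUTE TO SECOND;  -- now also honors the qualifier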
Parse analysis neglected to cover the case of a WITH clause attached to an
intermediate-level set operation; it only handled WITH at the top level
or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.
In HEAD, I rearranged the order of SelectStmt's fields to put withClause
with the other fields that can appear on non-leaf SelectStmts. In back
branches, leave it alone to avoid a possible ABI break for third-party
code.
Back-patch to 8.4 where WITH support was added.
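
A sketch of the newly-covered case, with WITH attached to a parenthesized
set operation rather than to the top level or a leaf SELECT:

    SELECT 1 AS x
    UNION ALL
    (WITH q AS (SELECT 2 AS x)
     SELECT x FROM q
     UNION ALL
     SELECT 3 AS x);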
In the original coding of the log rotation stuff, we did not bother to make
the truncation logic work for the very first rotation after postmaster
start (or after a syslogger crash and restart). It just always appended
in that case. It did not seem terribly important at the time, but we've
recently had two separate complaints from people who expected it to work
unsurprisingly. (Both users tend to restart the postmaster about as often
as a log rotation is configured to happen, which is maybe not typical use,
but still...) Since the initial log file is opened in the postmaster,
fixing this requires passing down some more state to the syslogger child
process.
It's always been like this, so back-patch to all supported branches.
This function suppressed any stderr output from the called program, which
is unnecessary in the normal case and unhelpful in error cases. It also
gave a rather opaque message along the lines of "fgets failure: Success"
in case the called program failed to return anything on stdout. Since
we've seen multiple reports of people not understanding what's wrong when
pg_ctl reports this, improve the message.
Back-patch to all active branches.
In the original coding of the autovacuum cancel feature, commit
acac68b2bcae818bc8803b8cb8cbb17eee8d5e2b, an autovacuum process was
considered a target for cancellation if it was found to hard-block any
process examined in the deadlock search. This patch tightens the test so
that the autovacuum must directly hard-block the current process. This
should make the behavior more predictable in general, and in particular
it ensures that an autovacuum will not be canceled with less than
deadlock_timeout grace period. In the old coding, it was possible for an
autovacuum to be canceled almost instantly, given unfortunate timing of two
or more other processes' lock attempts.
This also justifies the logging methodology in the recent commit
d7318d43d891bd63e82dcfc27948113ed7b1db80; without this restriction, that
patch isn't providing enough information to see the connection of the
canceling process to the autovacuum. Like that one, patch all the way
back.