This should have been done in 6bc8ef0b7f1f1df3998745a66e1790e27424aa0c
and/or 50e547096c4858a68abf09894667a542cc418315, but better late than
never. If we don't change this then we risk 9.3 pg_controldata or
pg_resetxlog being inappropriately used against a 9.4 pg_control file,
or vice versa.
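
The point of the bump is the usual cross-version guard; an illustrative
fragment of the kind of check it arms (not the literal tool code):

    if (ControlFile->pg_control_version != PG_CONTROL_VERSION)
    {
        fprintf(stderr,
                "pg_control version number %u does not match expected %u\n",
                ControlFile->pg_control_version, PG_CONTROL_VERSION);
        exit(1);
    }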
HeapTupleSatisfiesVacuum() didn't properly discern between
DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
inserted in the current transaction and deleted in an aborted
subtransaction of the current backend. At the very least that caused
problems for CLUSTER and CREATE INDEX in transactions that had
aborting subtransactions producing rows, leading to warnings like:

    WARNING: concurrent delete in progress within table "..."

possibly in an endless, uninterruptible loop.

Instead of treating *InProgress xmins the same as *IsCurrent ones,
treat them as distinct, like the other visibility routines do. As
implemented, this separation can cause a behaviour change for rows
that have been inserted and deleted in another, still running,
transaction: HTSV will now return INSERT_IN_PROGRESS instead of
DELETE_IN_PROGRESS for those. That's both more in line with the other
visibility routines and arguably more correct, the latter because an
INSERT_IN_PROGRESS result makes callers look at/wait for xmin instead
of xmax.

The only current caller where that's possibly worse than the old
behaviour is heap_prune_chain(), which now won't mark the page as
prunable if a row has concurrently been inserted and deleted. That's
harmless enough.

As a cautionary measure, also insert an interrupt check before the
gotos in IndexBuildHeapScan() that lead to the uninterruptible loop.
There are other possible causes of repeated loops, like a row that
several sessions try to update and all fail to, and the cost of the
check in the retry case is low.

As this bug goes back all the way to the introduction of
subtransactions in 573a71a5da, backpatch to all supported releases.

Reported-By: Sandro Santilli
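
For illustration, a minimal sketch of the two pieces of the fix; the
function names and HTSV_Result value are the real ones, but this is
not the literal patch:

    /* 1. Distinguish "our transaction" from "other, in-progress one". */
    if (TransactionIdIsCurrentTransactionId(xmin))
    {
        /* inserted by us: inspect xmax to classify a possible delete */
    }
    else if (TransactionIdIsInProgress(xmin))
    {
        /* inserted by someone else: report the insert, not the delete */
        return HEAPTUPLE_INSERT_IN_PROGRESS;
    }

    /* 2. Keep the IndexBuildHeapScan() retry path interruptible. */
    CHECK_FOR_INTERRUPTS();
    goto recheck;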
187492b6c2e8cafc5b39063ca3b67846e8155d24 changed pgstat.c so that
the stats files were saved into the $PGDATA/pg_stat directory when the
server was shut down. But it accidentally forgot to change the location
of the pg_stat_statements permanent stats file. This commit fixes
pg_stat_statements so that its stats file is also saved into
$PGDATA/pg_stat at shutdown.

Since this fix changes the file layout, we don't back-patch it to 9.3,
where this oversight was introduced.
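
The whole fix is in where the permanent file's path points; roughly
(the define and directory macro follow pg_stat_statements and pgstat.h
usage, though the exact spelling here is from memory):

    /* Save the permanent stats file under $PGDATA/pg_stat. */
    #define PGSS_DUMP_FILE  PGSTAT_STAT_PERMANENT_DIRECTORY "/pg_stat_statements.stat"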
Per gripe from Peter Eisentraut and Tom Lane.

The output is slightly different, but still ISO 8601 compliant: to_char
doesn't output the minutes when the time zone offset is an integer
number of hours, while EncodeDateTime outputs ":00".

The code is slightly adapted from code in xml.c.
Previously, any backslash in text being escaped for JSON was doubled
so that the result was still valid JSON. However, this led to some
perverse results in the case of Unicode sequences. These are now
detected, and the initial backslash is no longer escaped. All other
backslashes are still escaped. No validity check is performed; all
that is looked for is \uXXXX where X is a hexadecimal digit.

This is a change from the 9.2 and 9.3 behaviour, as noted in the
release notes.

Per complaint from Teodor Sigaev.
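
The detection rule is small enough to show whole; a self-contained
sketch (the helper name is hypothetical, not the one in the patch):

    #include <ctype.h>
    #include <stdbool.h>

    /* True if s points at \uXXXX: backslash, 'u', four hex digits. */
    static bool
    is_unicode_escape(const char *s)
    {
        if (s[0] != '\\' || s[1] != 'u')
            return false;
        for (int i = 2; i < 6; i++)
        {
            if (!isxdigit((unsigned char) s[i]))
                return false;
        }
        return true;
    }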
Many JSON processors require timestamp strings in ISO 8601 format in
order to convert the strings. When converting a timestamp, with or
without timezone, to a JSON datum we therefore now use such a format
rather than the type's default text output, in functions such as
to_json().

This is a change in behaviour from 9.2 and 9.3, as noted in the release
notes.
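
For reference, the target shape is the ISO 8601 timestamp form,
e.g. 2014-05-28T12:34:56+02:00. A plain-C illustration of that layout
(the backend itself formats via EncodeDateTime, not strftime):

    #include <stdio.h>
    #include <time.h>

    int
    main(void)
    {
        char        buf[64];
        time_t      now = time(NULL);

        /* UTC, so the offset is written as a literal +00:00 */
        strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S+00:00",
                 gmtime(&now));
        puts(buf);
        return 0;
    }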
This test previously used a data value containing U+0080, and would
therefore fail if the database encoding didn't have an equivalent to
that; which only about half of our supported server encodings do.
We could fall back to using some plain-ASCII character, but that seems
like it's losing most of the point of the test. Instead switch to using
U+00A0 (no-break space), which translates into all our supported encodings
except the four in the EUC_xx family.
Per buildfarm testing. Back-patch to 9.1, which is as far back as this
test is expected to succeed everywhere. (9.0 has the test, but without
back-patching some 9.1 code changes we could not expect to get consistent
results across platforms anyway.)
Because RecoveryConflictInterrupt() didn't set the process latch,
anything using the latter to wait for events didn't get notified about
recovery conflicts. Most latch users are never the target of recovery
conflicts, which explains the lack of reports about this until now.

Since 9.3 two possibly affected users exist though: the SQL-callable
pg_sleep() now uses latches to wait, and background workers are
expected to use latches in their main loop. Both would currently wait
until the end of WaitLatch's timeout.

Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also
be possible to fix the issue by having each latch user set
set_latch_on_sigusr1. That seems failure prone though, as most of
these callsites won't often receive recovery conflicts and thus will
likely only be tested against normal query cancels et al. It'd also be
unnecessarily verbose.

Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be
sufficient, because that's where pg_sleep() was converted to waiting
on the latch and background workers got introduced; but there could be
user-level code making use of the latch pre-9.3.
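
The fix itself is a single call at the end of the conflict handler;
sketched here with the 9.4-era spelling of the process latch (fragment,
not the full handler):

    static void
    RecoveryConflictInterrupt(ProcSignalReason reason)
    {
        /* ... classify and record the conflict as before ... */

        /*
         * Wake anything sleeping in WaitLatch(), so the conflict is
         * noticed now rather than when the timeout expires.
         */
        SetLatch(&MyProc->procLatch);
    }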
Use the unaligned, no-rowcount output mode in the regression test that
shows all built-in leakproof functions. Currently a new leakproof
function will often change the alignment of all the existing output,
making it hard to see the actual difference and creating unnecessary
patch conflicts.

Noticed while looking over a patch introducing new leakproof functions.
Instead of iterating over jsonb structures, use the built-in functions
findJsonbValueFromContainerLen() and getIthJsonbValueFromContainer() to
extract values directly. These functions use algorithms that are
O(log n) and O(1) respectively, whereas iterating is O(n), so we
should see considerable speedup here.

Teodor Sigaev.
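
Sketched usage of the two direct-lookup paths (fragment; jb is assumed
to be the Jsonb datum being probed):

    JsonbValue *v;

    /* object field: binary search over the container's sorted keys */
    v = findJsonbValueFromContainerLen(&jb->root, JB_FOBJECT,
                                       key, keylen);

    /* array element: constant-time access by index */
    v = getIthJsonbValueFromContainer(&jb->root, i);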
As of Xcode 5.0, Apple isn't including the Python framework as part of the
SDK-level files, which means that linking to it might fail depending on
whether Xcode thinks you've selected a specific SDK version. According to
their Tech Note 2328, they've basically deprecated the framework method of
linking to libpython and are telling people to link to the shared library
normally. (I'm pretty sure this is in direct contradiction to the advice
they were giving a few years ago, but whatever.) Testing says that this
approach works fine at least as far back as OS X 10.4.11, so let's just
rip out the framework special case entirely. We do still need a special
case to decide that OS X provides a shared library at all, unfortunately
(I wonder why the distutils check doesn't work ...). But this is still
less of a special case than before, so it's fine.

Back-patch to all supported branches, since we'll doubtless be hearing
about this more as more people update to recent Xcode.
Michael Paquier
This reverts commit 45b7abe59e9485657ac9380f35d2d917dd0da25b.
It turns out that the %name-prefix syntax without "=" does not work
at all in pre-2.4 Bison. We are not prepared to make such a large
jump in minimum required Bison version just to suppress a warning
message in a version hardly any developers are using yet.
When 3.0 gets more popular, we'll figure out a way to deal with this.
In the meantime, BISONFLAGS=-Wno-deprecated is recommended for
anyone using 3.0 who doesn't want to see the warning.
Sometimes CREATE_REPLICATION_SLOT ... LOGICAL ... needs to wait for
further WAL using WalSndWaitForWal(). That wait used to always respect
wal_sender_timeout and kill the session once it had waited long enough,
because no feedback/ping messages can be sent while the slot is still
being created.

Introduce the notion that last_reply_timestamp = 0 means the walsender
currently doesn't need timeout processing, which avoids that problem.
Use that notion for CREATE_REPLICATION_SLOT ... LOGICAL.

Bug report and initial patch by Steve Singer, revised by me.
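
The convention is then honored wherever the timeout is enforced;
schematically (fragment, simplified from the walsender's timeout
check):

    static void
    WalSndCheckTimeOut(TimestampTz now)
    {
        /* last_reply_timestamp <= 0 means timeout processing is off */
        if (last_reply_timestamp <= 0)
            return;

        /* ... otherwise apply wal_sender_timeout as usual ... */
    }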
The only caller of compareJsonbScalarValue that needed locale-sensitive
comparison of strings was also the only caller that didn't just check for
equality. Separate the two cases for clarity: compareJsonbScalarValue now
does locale-sensitive comparison, and a new function,
equalsJsonbScalarValue, just checks for equality.
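
The resulting division of labor, sketched for the string case only
(fragment; the real functions switch over all scalar types):

    /* equality: byte-wise, no locale needed */
    static bool
    equalsJsonbScalarValue(JsonbValue *a, JsonbValue *b)
    {
        return a->val.string.len == b->val.string.len &&
               memcmp(a->val.string.val, b->val.string.val,
                      a->val.string.len) == 0;
    }

    /* ordering: locale-sensitive comparison */
    static int
    compareJsonbScalarValue(JsonbValue *a, JsonbValue *b)
    {
        return varstr_cmp(a->val.string.val, a->val.string.len,
                          b->val.string.val, b->val.string.len,
                          DEFAULT_COLLATION_OID);
    }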
Fix an over-zealous assertion, which didn't take into account that sometimes
a scalar element can be compared against an array/object element.
Avoid comparing possibly-uninitialized local variables when end-of-array or
end-of-object is reached. Also fix and enhance comments a bit.
Peter Geoghegan, per reports by Pavel Stehule and me.
%name-prefix doesn't use an "=" sign according to the Bison docs, but it
silently accepted one anyway, until Bison 3.0. This was originally a
typo of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we
seem to have slavishly copied the error into all the other grammar files.
Per report from Vik Fearing; analysis by Peter Eisentraut.
Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
Move the code that sends the initial status information as well as the
calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this code
failed, we would "leak" a counter of the number of concurrent backups, thereby
making the system always believe it was in backup mode. This could happen
if the sending failed (which it probably never did given that the small
amount of data to send would never cause a flush) or if the psprintf calls
ran out of memory. Both are very low risk, but all operations after
do_pg_start_backup should be protected.
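
The guard in question is the standard cleanup pattern; schematically
(base_backup_cleanup is the real callback, the body is elided):

    /* Everything after do_pg_start_backup() now runs under the guard. */
    PG_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);
    {
        /* send initial status, build paths, stream the backup ... */
    }
    PG_END_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);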
In general it's not a good idea for built-in types in the 'U' category
to be marked preferred; they could draw behavior away from user-defined
types with similarly-named operators. pg_lsn is probably at low risk
of that right now given the lack of casts between it and other types,
but that doesn't make this marking OK.
Ordinarily we'd bump catversion when changing any predefined catalog
contents like this, but since we're past beta1, the costs of a forced
initdb seem to outweigh the benefits of guaranteed behavioral consistency.
There's no known behavioral impact today anyway --- this is more in
the nature of making sure there are no problems in the future.
Per an off-list complaint from Thomas Fanghaenel.
The recent addition of regression tests to uuid-ossp exposed the fact
that the MSVC build system wasn't being consistent about whether it was
building/testing that contrib module, ie, it would try to test the module
even when it hadn't built it. The same hazard was latent for sslinfo.
For the moment I just copied the more up-to-date logic from point A to
point B, but this is screaming for refactoring.
Per buildfarm results.
Commit 5035701e07e8bd395aa878465a102afd7b74e8c3 improved xlog.c's method
for creating a database system identifier, but I neglected to fix the
copy of that code appearing in pg_resetxlog.c. Spotted by Andres Freund.
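
For reference, the generation scheme both copies now share mixes the
current time with the PID; the shape of it (fragment; matches my
reading of xlog.c after that commit):

    /* 32 bits of seconds, 20 bits of microseconds, 12 bits of PID */
    uint64      sysidentifier;
    struct timeval tv;

    gettimeofday(&tv, NULL);
    sysidentifier = ((uint64) tv.tv_sec) << 32;
    sysidentifier |= ((uint64) tv.tv_usec) << 12;
    sysidentifier |= getpid() & 0xFFF;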
Allow the contrib/uuid-ossp extension to be built atop any one of these
three popular UUID libraries. (The extension's name is now arguably a
misnomer, but we'll keep it the same so as not to cause unnecessary
compatibility issues for users.)

We would not normally consider a change like this post-beta1, but the issue
has been forced by our upgrade to autoconf 2.69, whose more rigorous header
checks are causing OSSP's header files to be rejected on some platforms.
It's been foreseen for some time that we'd have to move away from depending
on OSSP UUID due to lack of upstream maintenance, so this is a down payment
on that problem.

While at it, add some simple regression tests, in hopes of catching any
major incompatibilities between the three implementations.

Matteo Beccati, with some further hacking by me
The bug was caused by omitting 'I:' from the short-option list passed
to getopt_long(). To make similar bugs less likely in the future,
reorder the options in --help and in the long- and short-option lists
so that they are all in the same order (alphabetical within groups).

Report and fix by Michael Paquier, some additional reordering by me.
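
The failure mode, schematically (a trimmed-down, hypothetical option
table; the point is that each short option taking an argument must
also appear in the optstring with a trailing colon):

    static const struct option long_options[] = {
        {"startpos", required_argument, NULL, 'I'},
        {"outfile",  required_argument, NULL, 'o'},
        {NULL, 0, NULL, 0}
    };

    /* "o:" alone made getopt_long() reject -I; it must be "I:o:" */
    c = getopt_long(argc, argv, "I:o:", long_options, &option_index);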
The new page deletion code didn't cope with the case where the target
page's right sibling was marked half-dead. It failed a sanity check
which verified that the downlinks in the parent page match the lower
level, because a half-dead page has no downlink. To cope, check for
that condition, and just give up on the deletion if it happens. Vacuum
will finish the deletion of the half-dead page when it gets there, and
on the next vacuum after that the empty page can be deleted.

Reported by Jeff Janes.
HeapTupleHeaderGetCmax() asserts that it is only used if the tuple has
been updated by the current transaction. That check is correct and
sensible but requires allocating memory if xmax is a multixact. When
wal_level is set to logical, cmax needs to be included in a WAL record
generated inside a critical section, which can trigger the assertion
added in 4a170ee9e.

Reported-By: Steve Singer
Mark padding bytes in SharedInvalidationMessage structs as defined.
Otherwise the sinvaladt.c ring buffer, which is accessed by multiple
processes, will cause spurious valgrind warnings about undefined
memory being used. That's because valgrind remembers the undefined
bytes from the last local process's store, not realizing that another
process has written since, filling the previously uninitialized bytes.
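
One way to get defined padding, sketched at a message-construction
site (illustrative; whether the actual patch zeroes the struct or uses
a Valgrind client request, the effect on the ring buffer is the same):

    SharedInvalidationMessage msg;

    /* zero everything first so the padding bytes are defined ... */
    memset(&msg, 0, sizeof(msg));
    /* ... then fill in the real fields */
    msg.cc.id = cacheId;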
This is all inside a block guarded by op == DSM_OP_ATTACH, so it can
never be the case that op == DSM_OP_CREATE.
Reported by Coverity.
Erik Rijkers
Commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which introduced the
EXPLAIN (TIMING) option, for some reason coded explain.c to look at
planstate->instrument->need_timer rather than es->timing to decide
whether to print timing info. However, the former flag might get set
as a result of contrib/auto_explain wanting timing information. We
certainly don't want activation of auto_explain to change user-visible
statement behavior, so fix that.

Also fix an independent bug introduced in the same patch: in the code
path for a never-executed node with a machine-friendly output format,
if timing was selected, it would fail to print the Actual Rows and
Actual Loops items.

Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the
faulty code was introduced.
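
The gist of the first fix (fragment; the format string mirrors
EXPLAIN's usual "actual time" line):

    /*
     * Key the output off the user's TIMING option, not need_timer,
     * which auto_explain may have set behind the user's back.
     */
    if (es->timing)
        appendStringInfo(es->str,
                         " (actual time=%.3f..%.3f rows=%.0f loops=%.0f)",
                         startup_ms, total_ms, rows, nloops);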
Peter Geoghegan
Lowercase help statements. Use an existing message to reduce the number
of strings to be translated.
Euler Taveira
I got the backup block numbers off-by-one in the commit that changed the
way incomplete-splits are handled. I blame the comments, which said
"backup block 1" and "backup block 2", even though the backup blocks
are numbered starting from 0, in the macros and functions used in replay.
Fix the comments and the code.
Per Jeff Janes' bug report about corruption caused by torn page writes.
The incorrect code is new in git master, but backpatch the comment change
down to 9.0, where the numbering in the redo-side macros was changed.
That's what I get for not fully retesting the final version of the patch.
The replace_allowed cross-check needs an additional special case for
bootstrapping.
RelationCacheInsert() ignored the possibility that hash_search(HASH_ENTER)
might find a hashtable entry already present for the same OID. However,
that can in fact occur during recursive relcache load scenarios. When it
did happen, we overwrote the pointer to the pre-existing Relation, causing
a session-lifespan leakage of that entire structure. As far as is known,
the pre-existing Relation would always have reference count zero by the
time we arrive back at the outer insertion, so add code that deletes the
pre-existing Relation if so. If by some chance its refcount is positive,
elog a WARNING and allow the pre-existing Relation to be leaked as before.

Also, AttrDefaultFetch() was sloppy about leaking the cstring form of the
pg_attrdef.adbin value it's copying into the relcache structure. This is
only a query-lifespan leakage, and normally not very significant, but it
adds up during CLOBBER_CACHE testing.

These bugs are of very ancient vintage, but I'll refrain from back-patching
since there's no evidence that these leaks amount to anything in ordinary
usage.
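
The insertion-side fix, sketched (fragment; RelIdCacheEnt and the
helper calls are the real relcache names, the body is abbreviated):

    RelIdCacheEnt *relentry;
    bool        found;

    relentry = (RelIdCacheEnt *)
        hash_search(RelationIdCache, (void *) &(relation->rd_id),
                    HASH_ENTER, &found);
    if (found)
    {
        /* a recursive load beat us here; reclaim the old entry if idle */
        Relation    oldrel = relentry->reldesc;

        if (RelationHasReferenceCountZero(oldrel))
            RelationDestroyRelation(oldrel, false);
        else
            elog(WARNING, "leaking still-referenced relcache entry");
    }
    relentry->reldesc = relation;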
The fallback implementation involves acquiring and releasing a spinlock
variable that is otherwise unreferenced --- not even to the extent of
initializing it. This accidentally fails to fail on platforms where
spinlocks should be initialized to zeroes, but elsewhere it results in
a "stuck spinlock" failure during startup.
I griped about this last July, and put in a hack that worked for gcc
on HPPA, but didn't get around to fixing the general case. Per the
discussion back then, the best thing to do seems to be to initialize
dummy_spinlock in main.c.
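
The eventual fix is tiny; roughly (fragment; dummy_spinlock is the
variable referenced by the fallback barrier code):

    /* in main(), before anything can execute a memory barrier */
    extern slock_t dummy_spinlock;

    SpinLockInit(&dummy_spinlock);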
Per testing with a compiler that whines about this.
The xl_heap_header_len structures in an XLOG_HEAP_UPDATE record aren't
necessarily aligned adequately. The regular replay function for these
records is aware of that, but decode.c didn't get the memo. I'm not
sure why the buildfarm failed to catch this; the test_decoding test
certainly blows up real good on my old HPPA box.
Also, I'm pretty sure that the address arithmetic was wrong for the
case of XLOG_HEAP_CONTAINS_OLD and not XLOG_HEAP_CONTAINS_NEW_TUPLE,
though this apparently can't happen when logical decoding is active.
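
The decode.c side now does what the replay function always did;
schematically (fragment; data points at the record's payload):

    xl_heap_header_len xlhdr;

    /* copy out of the WAL record rather than dereferencing in place,
     * since the struct may not be suitably aligned there */
    memcpy(&xlhdr, data, sizeof(xl_heap_header_len));
    /* ... now use xlhdr.t_len and xlhdr.header safely ... */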
transam/README explained how B-tree incomplete splits were tracked and
fixed after recovery, as an example of handling complex actions that need
multiple WAL records, but that's not how it works anymore. Explain the new
paradigm.
Several years ago we changed chr(int) so that if the database encoding is
UTF8, it would interpret its argument as a Unicode code point and expand it
into the appropriate multibyte sequence. However, we weren't sufficiently
careful about checking validity of the input. According to RFC3629, UTF8
disallows code points above U+10FFFF (note that the predecessor standard
RFC2279 was more liberal). Also, both versions of the UTF8 spec agree
that Unicode surrogate-pair codes should never appear in UTF8. Because
our encoding validity checks follow RFC3629, our failure to enforce these
restrictions in chr() means it could be used to produce text strings that
will be rejected when the database is dumped and reloaded. To ensure
consistency with the input functions, let's actually apply
pg_utf8_islegal() to the proposed output of chr().

Per discussion, this seems like too much of a behavioral change to
back-patch, but it's not too late to squeeze it into 9.4.
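
Schematically, the added check (fragment; pg_utf8_islegal and
pg_utf_mblen are the real validity helpers, the error text is
illustrative):

    unsigned char buf[6];

    unicode_to_utf8(cvalue, buf);      /* expand the code point */
    if (!pg_utf8_islegal(buf, pg_utf_mblen(buf)))
        ereport(ERROR,
                (errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
                 errmsg("requested character not valid for encoding")));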
The decoding of prepared transaction commits accidentally used the XID
of the transaction performing the COMMIT PREPARED, not the XID of the
prepared transaction. Before bb38fb0d43c8d that led to those
transactions not being decoded; afterwards, to an assertion failure.
Let's complain about e.g. an invalid path or a permission problem
sooner rather than later. Before this patch, we would only try to open
the output file after receiving the first decoded message from the
server.
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but
missed that prepared transaction commits reuse that struct. Fix that.

Because those fields were left uninitialized, replaying a commit
prepared WAL record in a hot standby node would fail to remove the
relcache init file. That can lead to "could not open file" errors on
the standby. The relcache init file only needs to be removed when a
system table/index is rewritten in a transaction using two-phase
commit, so that should be rare in practice. In HEAD, the incorrect
dbId/tsId values are also used for filtering in the logical
replication code, causing the transaction to always be filtered out.

Analysis and fix by Andres Freund. Backpatch to 9.0, where hot standby
was introduced.
In yesterday's commit 2dc4f011fd61501cce507be78c39a2677690d44b, I tried
to force buffering of stdout/stderr in initdb to be what it is by
default when the program is run interactively on Unix (since that's how
most manual testing is done). This tripped over the fact that Windows
doesn't support _IOLBF mode. We dealt with that a long time ago in
syslogger.c by falling back to unbuffered mode on Windows. Export that
solution in port.h and use it in initdb.

Back-patch to 8.4, like the previous commit.
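
The exported solution amounts to a one-line portability wrapper plus
its use; sketched (the macro name is my rendering of the port.h
approach, not verified against the tree):

    /* Windows has no _IOLBF; fall back to unbuffered there. */
    #ifdef WIN32
    #define PG_IOLBF    _IONBF
    #else
    #define PG_IOLBF    _IOLBF
    #endif

    setvbuf(stdout, NULL, PG_IOLBF, 0);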
Don't close stdout on SIGHUP. Also, when a SIGHUP is received, close the
file immediately, rather than only after receiving some more data from
the server. Rename a variable, to avoid mentally dealing with double
negatives (not unsynced means synced).
The proc array can contain duplicate XIDs, when a transaction is just being
prepared for two-phase commit. To cope, remove any duplicates in
txid_current_snapshot(). Also ignore duplicates in the input functions, so
that if e.g. you have an old pg_dump file that already contains duplicates,
it will be accepted.

Report and fix by Jan Wieck. Backpatch to all supported versions.
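
The dedup step is the classic sort-then-squeeze; a self-contained
sketch (hypothetical helper, not the function the patch adds):

    #include <stdint.h>
    #include <stdlib.h>

    typedef uint32_t TransactionId;

    static int
    xid_cmp(const void *a, const void *b)
    {
        TransactionId xa = *(const TransactionId *) a;
        TransactionId xb = *(const TransactionId *) b;

        return (xa < xb) ? -1 : (xa > xb) ? 1 : 0;
    }

    /* Sort xip[] and drop adjacent duplicates; returns new count. */
    static int
    sort_and_unique_xids(TransactionId *xip, int n)
    {
        int         i, j = 0;

        qsort(xip, n, sizeof(TransactionId), xid_cmp);
        for (i = 0; i < n; i++)
        {
            if (j == 0 || xip[i] != xip[j - 1])
                xip[j++] = xip[i];
        }
        return j;
    }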