Commit message log

HotStandbyActiveInReplay, introduced in 061b079f, only allowed WAL
replay to happen in the startup process, missing the single-user case.
This buglet is fairly harmless, as it only causes problems when single-user
mode in an assertion-enabled build is used to replay a btree vacuum
record.
Backpatch to 9.2. 061b079f was backpatched further, but the assertion
was not.
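
For reference, a minimal sketch of the shape of the fix (function and
variable names as in the 9.2-era xlog.c; treat the details as an
assumption):

    bool
    HotStandbyActiveInReplay(void)
    {
        /* allow both the startup process and single-user mode */
        Assert(AmStartupProcess() || !IsUnderPostmaster);
        return LocalHotStandbyActive;
    }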
Supporting deletion of JSON pairs within jsonb objects using an
array-style integer subscript allowed for surprising outcomes. This was
mostly due to the implementation-defined ordering of pairs within
objects for jsonb.
It also seems desirable to make jsonb integer subscript deletion
consistent with the 9.4-era general-purpose integer subscripting
operator for jsonb (although that operator returns NULL when an object
is encountered, while we prefer here to throw an error).
Peter Geoghegan, following discussion on -hackers.
When we invalidate the relcache entry for a system catalog or index, we
must also delete the relcache "init file" if the init file contains a copy
of that rel's entry. The old way of doing this relied on a specially
maintained list of the OIDs of relations present in the init file: we made
the list either when reading the file in, or when writing the file out.
The problem is that when writing the file out, we included only rels
present in our local relcache, which might have already suffered some
deletions due to relcache inval events. In such cases we correctly decided
not to overwrite the real init file with incomplete data --- but we still
used the incomplete initFileRelationIds list for the rest of the current
session. This could result in wrong decisions about whether the session's
own actions require deletion of the init file, potentially allowing an init
file created by some other concurrent session to be left around even though
it's been made stale.
Since we don't support changing the schema of a system catalog at runtime,
the only likely scenario in which this would cause a problem in the field
involves a "vacuum full" on a catalog concurrently with other activity, and
even then it's far from easy to provoke. Remarkably, this has been broken
since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had
never seen a reproducible test case until recently. If it did happen in
the field, the symptoms would probably involve unexpected "cache lookup
failed" errors to begin with, then "could not open file" failures after the
next checkpoint, as all accesses to the affected catalog stopped working.
Recovery would require manually removing the stale "pg_internal.init" file.
To fix, get rid of the initFileRelationIds list, and instead consult
syscache.c's list of relations used in catalog caches to decide whether a
relation is included in the init file. This should be a tad more efficient
anyway, since we're replacing linear search of a list with ~100 entries
with a binary search. It's a bit ugly that the init file contents are now
so directly tied to the catalog caches, but in practice that won't make
much difference.
Back-patch to all supported branches.
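
A hypothetical sketch of the new membership test (the identifier names
below are illustrative assumptions; the real sorted OID list lives in
syscache.c):

    static int
    oid_compare(const void *a, const void *b)
    {
        Oid oa = *(const Oid *) a;
        Oid ob = *(const Oid *) b;

        return (oa > ob) - (oa < ob);
    }

    /* is this rel's entry written into pg_internal.init? */
    static bool
    RelationIdIsInInitFile(Oid relationId)
    {
        return bsearch(&relationId, SysCacheSupportingRelOids,
                       NumSysCacheSupportingRelOids, sizeof(Oid),
                       oid_compare) != NULL;
    }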
We should set MyProc->databaseId after acquiring the per-database lock,
not beforehand. The old way risked deadlock against processes trying to
copy or delete the target database, since they would first acquire the lock
and then wait for processes with matching databaseId to exit; that left a
window wherein an incoming process could set its databaseId and then block
on the lock, while the other process had the lock and waited in vain for
the incoming process to exit.
CountOtherDBBackends() would time out and fail after 5 seconds, so this
just resulted in an unexpected failure, not a permanent lockup, but it's
still annoying when it happens. A real-world example of a use-case is that
short-duration connections to a template database should not cause CREATE
DATABASE to fail.
Doing it in the other order should be fine since the contract has always
been that processes searching the ProcArray for a database ID must hold the
relevant per-database lock while searching. Thus, this actually removes
the former race condition that required an assumption that storing to
MyProc->databaseId is atomic.
It's been like this for a long time, so back-patch to all active branches.
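
In outline, the corrected ordering looks like this (a sketch, assuming
the usual backend-startup names):

    /* acquire the per-database lock first ... */
    LockSharedObject(DatabaseRelationId, dboid, 0, RowExclusiveLock);

    /* ... and only then advertise the database ID, so that a process
     * scanning the ProcArray under the same lock cannot miss us */
    MyProc->databaseId = dboid;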
Recent commits, mainly b69bf30b9bfacafc733a9ba77c9587cf54d06c0c and
53bb309d2d5a9432d2602c93ed18e58bd2924e15, introduced mechanisms to
protect against wraparound of the MultiXact member space: the number
of multixacts that can exist at one time is limited to 2^32, but the
total number of members in those multixacts is also limited to 2^32,
and older code did not take care to enforce the second limit,
potentially allowing old data to be overwritten while it was still
needed.
Unfortunately, these new mechanisms failed to account for the fact
that the code paths in which they run might be executed during
recovery or while the cluster was in an inconsistent state. Also,
they failed to account for the fact that users who used pg_upgrade
to upgrade a PostgreSQL version between 9.3.0 and 9.3.4 might have
oldestMultiXid = 1 in the control file despite the true value
being larger.
To fix these problems, first, avoid unnecessarily examining the
members of MultiXacts when the cluster is not known to be consistent.
TruncateMultiXact has done this for a long time, and this patch does
not change that. But the new calls used to prevent member wraparound
are not needed until we reach normal running, so avoid calling them
earlier. (SetMultiXactIdLimit is actually called before InRecovery
is set, so we can't rely on that; we invent our own multixact-specific
flag instead.)
Second, make failure to look up the members of a MultiXact a non-fatal
error. Instead, if we're unable to determine the member offset at
which wraparound would occur, postpone arming the member wraparound
defenses until we are able to do so. If we're unable to determine the
member offset that should force autovacuum, force it continuously
until we are able to do so. If we're unable to determine the member
offset at which we should truncate the members SLRU, log a message and
skip truncation.
An important consequence of these changes is that anyone who does have
a bogus oldestMultiXid = 1 value in pg_control will experience
immediate emergency autovacuuming when upgrading to a release that
contains this fix. The release notes should highlight this fact. If
a user has no pg_multixact/offsets/0000 file, but has oldestMultiXid = 1
in the control file, they may wish to vacuum any tables with
relminmxid = 1 prior to upgrading in order to avoid an immediate
emergency autovacuum after the upgrade. This must be done with a
PostgreSQL version 9.3.5 or newer and with vacuum_multixact_freeze_min_age
and vacuum_multixact_freeze_table_age set to 0.
This patch also adds an additional log message at each database server
startup, indicating either that protections against member wraparound
have been engaged, or that they have not. In the latter case, once
autovacuum has advanced oldestMultiXid to a sane value, the message
indicating that the guards have been engaged will appear at the next
checkpoint. A few additional messages have also been added at the DEBUG1
level so that the correct operation of this code can be properly audited.
Along the way, this patch fixes another, related bug in TruncateMultiXact
that has existed since PostgreSQL 9.3.0: when no MultiXacts exist at
all, the truncation code looks up NextMultiXactId, which doesn't exist
yet. This can lead to TruncateMultiXact removing every file in
pg_multixact/offsets instead of keeping one around, as it should.
This in turn will cause the database server to refuse to start
afterwards.
Patch by me. Review by Álvaro Herrera, Andres Freund, Noah Misch, and
Thomas Munro.
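
The recovery-time guard reduces to something like this sketch (the flag
and function names are assumptions based on the description above):

    /* too early: don't touch MultiXact members before consistency */
    if (!MultiXactState->finishedStartup)
        return;

    /* if the oldest member offset can't be determined, don't arm the
     * defenses yet; log and retry at a later checkpoint */
    if (!find_multixact_start(oldestMultiXactId, &oldestOffset))
    {
        ereport(LOG,
                (errmsg("MultiXact member wraparound protections are disabled because oldest checkpointed MultiXact %u does not exist on disk",
                        oldestMultiXactId)));
        return;
    }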
add_path_precheck was doing exact comparisons of path costs, but it really
needs to do them fuzzily to be sure it won't reject paths that could
survive add_path's comparisons. (This can only matter if the initial cost
estimate is very close to the final one, but that turns out to often be
true.)
Also, it should ignore startup cost for this purpose if and only if
compare_path_costs_fuzzily would do so. The previous coding always ignored
startup cost for parameterized paths, which is wrong as of commit
3f59be836c555fa6; it could result in improper early rejection of paths that
we care about for SEMI/ANTI joins. It also always considered startup cost
for unparameterized paths, which is just as wrong though the only effect is
to waste planner cycles on paths that can't survive. Instead, it should
consider startup cost only when directed to by the consider_startup/
consider_param_startup relation flags.
Likewise, compare_path_costs_fuzzily should have symmetrical behavior
for parameterized and unparameterized paths. In this case, the best
answer seems to be that after establishing that total costs are fuzzily
equal, we should compare startup costs whether or not the consider_xxx
flags are on. That is what it's always done for unparameterized paths,
so let's make the behavior for parameterized paths match.
These issues were noted while developing the SEMI/ANTI join costing fix
of commit 3f59be836c555fa6, but we chose not to back-patch these fixes,
because they can cause changes in the planner's choices among
nearly-same-cost plans. (There is in fact one minor change in plan choice
within the core regression tests.) Destabilizing plan choices in back
branches without very clear improvements is frowned on, so we'll just fix
this in HEAD.
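
The fuzzy comparison amounts to this sketch (the 1.01 fuzz factor
matches pathnode.c; the variable names are illustrative):

    #define STD_FUZZ_FACTOR 1.01

    /* reject the new path only if it is *clearly* worse */
    if (new_total_cost > old_total_cost * STD_FUZZ_FACTOR)
        rejected = true;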
When the inner side of a nestloop SEMI or ANTI join is an indexscan that
uses all the join clauses as indexquals, it can be presumed that both
matched and unmatched outer rows will be processed very quickly: for
matched rows, we'll stop after fetching one row from the indexscan, while
for unmatched rows we'll have an indexscan that finds no matching index
entries, which should also be quick. The planner already knew about this,
but it was nonetheless charging for at least one full run of the inner
indexscan, as a consequence of concerns about the behavior of materialized
inner scans --- but those concerns don't apply in the fast case. If the
inner side has low cardinality (many matching rows) this could make an
indexscan plan look far more expensive than it actually is. To fix,
rearrange the work in initial_cost_nestloop/final_cost_nestloop so that we
don't add the inner scan cost until we've inspected the indexquals, and
then we can add either the full-run cost or just the first tuple's cost as
appropriate.
Experimentation with this fix uncovered another problem: add_path and
friends were coded to disregard cheap startup cost when considering
parameterized paths. That's usually okay (and desirable, because it thins
the path herd faster); but in this fast case for SEMI/ANTI joins, it could
result in throwing away the desired plain indexscan path in favor of a
bitmap scan path before we ever get to the join costing logic. In the
many-matching-rows cases of interest here, a bitmap scan will do a lot more
work than required, so this is a problem. To fix, add a per-relation flag
consider_param_startup that works like the existing consider_startup flag,
but applies to parameterized paths, and set it for relations that are the
inside of a SEMI or ANTI join.
To make this patch reasonably safe to back-patch, care has been taken to
avoid changing the planner's behavior except in the very narrow case of
SEMI/ANTI joins with inner indexscans. There are places in
compare_path_costs_fuzzily and add_path_precheck that are not terribly
consistent with the new approach, but changing them will affect planner
decisions at the margins in other cases, so we'll leave that for a
HEAD-only fix.
Back-patch to 9.3; before that, the consider_startup flag didn't exist,
meaning that the second aspect of the patch would be too invasive.
Per a complaint from Peter Holzer and analysis by Tomas Vondra.
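
Schematically, the costing change looks like this (a simplified sketch,
not the exact final_cost_nestloop code; the local cost variables are
assumptions):

    if ((jointype == JOIN_SEMI || jointype == JOIN_ANTI) &&
        has_indexed_join_quals(path))
    {
        /* matched and unmatched outer rows alike stop after roughly
         * one tuple's worth of the inner indexscan */
        run_cost += outer_path_rows * (inner_run_cost / inner_path_rows);
    }
    else
    {
        /* otherwise charge for full inner rescans, as before */
        run_cost += outer_path_rows * inner_rescan_run_cost;
    }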
Per gripe from Tom Lane.
The function is given a fourth parameter, which defaults to true. When
this parameter is true, if the last element of the path is missing
in the original json, jsonb_set creates it in the result and assigns it
the new value. If it is false then the function does nothing unless all
elements of the path are present, including the last.
Based on some original code from Dmitry Dolgov, heavily modified by me.
Catalog version bumped.
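
At the C level the new parameter is picked up roughly like this (a
sketch; the argument handling is an assumption based on the description
above):

    Datum
    jsonb_set(PG_FUNCTION_ARGS)
    {
        Jsonb      *in = PG_GETARG_JSONB(0);
        ArrayType  *path = PG_GETARG_ARRAYTYPE_P(1);
        Jsonb      *newval = PG_GETARG_JSONB(2);
        bool        create = PG_GETARG_BOOL(3);  /* new, defaults to true */

        /* ... walk 'path'; if its last element is missing and 'create'
         * is true, add it with 'newval', else return 'in' unchanged ... */
    }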
The argument that this is a sufficiently-expected case to be silently
ignored seems pretty thin. Andres had brought it up back when we were
still considering that most fsync failures should be hard errors, and it
probably would be legit not to fail hard for ETXTBSY --- but the same is
true for EROFS and other cases, which is why we gave up on hard failures.
ETXTBSY is surely not a normal case, so logging the failure seems fine
from here.
This undoes a poorly-thought-out choice in commit 970a18687f9b3058, namely
to export guc.c's internal variable data_directory. The authoritative
variable so far as C code is concerned is DataDir; there is no reason for
anything except specific bits of GUC code to look at the GUC variable.
After yesterday's commits fixing the fsync-on-restart patch, the only
remaining misuse of data_directory was in AlterSystemSetConfigFile(),
which would be much better off just using a relative path anyhow: it's
less code and it doesn't break if the DBA moves the data directory of a
running system, which is a case we've taken some pains over in the past.
This is mostly cosmetic, so no need for a back-patch (and I'd be hesitant
to remove a global variable in stable branches anyway).
Commit 2ce439f3379aed857517c8ce207485655000fc8e introduced a rather serious
regression, namely that if its scan of the data directory came across any
un-fsync-able files, it would fail and thereby prevent database startup.
Worse yet, symlinks to such files also caused the problem, which meant that
crash restart was guaranteed to fail on certain common installations such
as older Debian.
After discussion, we agreed that (1) failure to start is worse than any
consequence of not fsync'ing is likely to be, therefore treat all errors
in this code as nonfatal; (2) we should not chase symlinks other than
those that are expected to exist, namely pg_xlog/ and tablespace links
under pg_tblspc/. The latter restriction avoids possibly fsync'ing a
much larger part of the filesystem than intended, if the user has left
random symlinks hanging about in the data directory.
This commit takes care of that and also does some code beautification,
mainly moving the relevant code into fd.c, which seems a much better place
for it than xlog.c, and making sure that the conditional compilation for
the pre_sync_fname pass has something to do with whether pg_flush_data
works.
I also relocated the call site in xlog.c down a few lines; it seems a
bit silly to be doing this before ValidateXLOGDirectoryStructure().
The similar logic in initdb.c ought to be made to match this, but that
change is noncritical and will be dealt with separately.
Back-patch to all active branches, like the prior commit.
Abhijit Menon-Sen and Tom Lane
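
The nonfatal error handling follows the usual fd.c pattern, roughly (a
sketch under 9.5-era assumptions):

    int fd = OpenTransientFile(fname, O_RDONLY | PG_BINARY, 0);

    if (fd < 0)
    {
        ereport(LOG,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", fname)));
        return;
    }
    if (pg_fsync(fd) != 0)
        ereport(LOG,                /* LOG, not ERROR: keep starting up */
                (errcode_for_file_access(),
                 errmsg("could not fsync file \"%s\": %m", fname)));
    CloseTransientFile(fd);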
Ensure that we null-terminate the result string (one place in pg_rewind).
Be paranoid about out-of-range results from readlink() (should not happen,
but there is no good reason for some call sites to be careful about it and
others not). Consistently use the whole buffer, not sometimes one byte
less. Ensure we emit an appropriate errcode() in all cases. Spell the
error messages the same way.
The only serious bug here is the missing null-termination in pg_rewind,
which is new code, so no need for a back-patch.
Abhijit Menon-Sen and Tom Lane
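
The hardened call sites follow this pattern (a sketch):

    char        linkpath[MAXPGPATH];
    int         rllen;

    rllen = readlink(path, linkpath, sizeof(linkpath));
    if (rllen < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not read symbolic link \"%s\": %m", path)));
    else if (rllen >= (int) sizeof(linkpath))
        ereport(ERROR,
                (errmsg("symbolic link \"%s\" target is too long", path)));
    linkpath[rllen] = '\0';     /* readlink() does not null-terminate */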
Seems to have been an oversight in the original leakproofness patch.
Per report and patch from Jeevan Chalke.
In passing, prettify some awkward leakproof-related code in AlterFunction.
This reverts commit 54547bd87f49326d67051254c363e6597d16ffda.
This appears to have been a thinko on my part. I will try to come up
with a better solution.
This reverts commit fba12c8c6c4159e1923958a4006b26f3cf873254.
This relied on a commit that is also being reverted.
This function no longer needs to walk non-scalar structures passed to
it, following commit 54547bd87f49326d67051254c363e6597d16ffda.
Commit 9b74f32cdbff8b9be47fc69164eae552050509ff did this for objects of
type jbvBinary, but in trying further to simplify some of the new jsonb
code I discovered that objects of type jbvObject or jbvArray passed as
WJB_ELEM or WJB_VALUE also caused problems. These too are now added
component by component.
Backpatch to 9.4.
brin_form_tuple calculated an exact tuple size, then palloc'd and
filled just that much. Later, brin_doinsert or brin_doupdate would
MAXALIGN the tuple size and tell PageAddItem that that was the size
of the tuple to insert. If the original tuple size wasn't a multiple
of MAXALIGN, the net result would be that PageAddItem would memcpy
a few more bytes than the palloc request had been for.
AFAICS, this is totally harmless in the real world: the error is a
read overrun not a write overrun, and palloc would certainly have
rounded the request up to a MAXALIGN multiple internally, so there's
no chance of the memcpy fetching off the end of memory. Valgrind,
however, is picky to the byte level not the MAXALIGN level.
Fix it by pushing the MAXALIGN step back to brin_form_tuple. (The other
possible source of tuples in this code, brin_form_placeholder_tuple,
was already producing a MAXALIGN'd result.)
In passing, be a bit more paranoid about internal allocations in
brin_form_tuple.
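
The fix boils down to this sketch:

    /* brin_form_tuple: request the MAXALIGN'd length up front, so the
     * palloc'd size always matches what PageAddItem will later copy */
    len = MAXALIGN(len);
    rettuple = palloc0(len);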
Multixact truncation is now handled differently, and this file hadn't
gotten the memo.
Per note from Amit Langote. I didn't use his patch, though.
Also update the description of infomask bits, which weren't completely up
to date either. This commit also propagates b01a4f6838 back to 9.3 and
9.4, which apparently I failed to do back then.
Some of this is made possible by commit
9b74f32cdbff8b9be47fc69164eae552050509ff which lets pushJsonbValue
handle binary Jsonb values, meaning that clients no longer have to, and
some is just doing things in simpler and more straightforward ways.
Must reset the "reached end" flag and reorder queue at rescan.
Per report from Regina Obe, bug #13349
Fix some places where pgindent did silly stuff, often because project
style wasn't followed to begin with. (I've not touched the atomics
headers, though.)
The name objectType is widely used as a field name, and it's pure luck that
this conflict has not caused pgindent to go crazy before. It messed up
pg_audit.c pretty badly, though. Since pg_shdepend.c doesn't export this
typedef and only uses it in three places, changing that seems saner than
changing the field usages.
Back-patch because we're contemplating using the union of all branch
typedefs for future pgindent runs, so this won't fix anything if it
stays the same in back branches.
Remove a bunch of "extern Datum foo(PG_FUNCTION_ARGS);" declarations that
are no longer needed now that PG_FUNCTION_INFO_V1(foo) provides that.
Some of these were evidently missed in commit e7128e8dbb305059, but others
were cargo-culted into code added since then. Possibly that can be blamed
in part on the fact that we'd not fixed relevant documentation examples,
which I've now done.
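
For illustration (foo is a hypothetical function):

    /* old style: a redundant prototype next to the macro */
    extern Datum foo(PG_FUNCTION_ARGS);
    PG_FUNCTION_INFO_V1(foo);

    /* new style: the macro alone supplies the prototype */
    PG_FUNCTION_INFO_V1(foo);

    Datum
    foo(PG_FUNCTION_ARGS)
    {
        PG_RETURN_INT32(PG_GETARG_INT32(0) + 1);
    }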
Maybe we should actually support this, but for the moment let's just
throw an error if the opclass tries it.
Typo in commit 7cbee7c0a. No practical effect since the buffer should
never actually be overrun, but various compilers and static analyzers will
whine about it.
Petr Jelinek
Fix confusion in documentation, substantial memory leakage if float8 or
float4 are pass-by-reference, and assorted comments that were obsoleted
by commit 98edd617f3b62a02cb2df9b418fcc4ece45c7ec0.
|
Expand testing of rule deparsing a good bit, it's evidently needed.
Author: Peter Geoghegan, Andres Freund
Discussion: CAM3SWZQmXxZhQC32QVEOTYfNXJBJ_Q2SDENL7BV14Cq-zL0FLg@mail.gmail.com
Previously, an INSERT with ON CONFLICT DO UPDATE specified used a new
command tag -- UPSERT. It was introduced out of concern that INSERT as
a command tag would be a misrepresentation for ON CONFLICT DO UPDATE, as
some affected rows may actually have been updated.
Alvaro Herrera noticed that the implementation of that new command tag
was incomplete; in subsequent discussion we concluded that having it
doesn't provide benefits that are in line with the compatibility breaks
it requires.
Catversion bump due to the removal of PlannedStmt->isUpsert.
Author: Peter Geoghegan
Discussion: 20150520215816.GI5885@postgresql.org
Silly oversight in commit 1dc5ebc9077ab742079ce5dac9a6664248d42916:
when array2 is an expanded array, it might have array2->xpn.dnulls equal
to NULL, indicating the array is known null-free. The code wasn't
expecting that, because it formerly always used deconstruct_array() which
always delivers a nulls array.
Per bug #13334 from Regina Obe.
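
Callers must therefore treat a NULL dnulls pointer as "no nulls
anywhere", along these lines (a sketch):

    /* a known null-free expanded array may have dnulls == NULL */
    if (array2->xpn.dnulls == NULL || !array2->xpn.dnulls[i])
    {
        /* element i is definitely not null */
    }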
pushJsonbValue was accepting jbvBinary objects passed as WJB_ELEM or
WJB_VALUE data. While this succeeded, when those objects were later
encountered in attempting to convert the result to Jsonb, errors
occurred. With this change we guarantee that a JsonbValue constructed
from calls to pushJsonbValue does not contain any jbvBinary objects.
This cures a problem observed with jsonb_delete.
This means callers of pushJsonbValue no longer need to perform this
unpacking themselves. A subsequent patch will perform some cleanup in
that area.
The error was not triggered by any 9.4 code, but this is a publicly
visible routine, and so the error could be exercised by third-party
code; therefore, backpatch to 9.4.
Bug report from Peter Geoghegan, fix by me.
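
The unpacking inside pushJsonbValue looks roughly like this (a sketch
using the jsonb iterator API; pushJsonbValueScalar is the internal
helper this change introduces):

    if ((seq == WJB_ELEM || seq == WJB_VALUE) && jbval->type == jbvBinary)
    {
        JsonbIterator      *it;
        JsonbIteratorToken  tok;
        JsonbValue          v;

        it = JsonbIteratorInit(jbval->val.binary.data);
        while ((tok = JsonbIteratorNext(&it, &v, false)) != WJB_DONE)
            res = pushJsonbValueScalar(pstate, tok,
                                       (tok == WJB_KEY ||
                                        tok == WJB_VALUE ||
                                        tok == WJB_ELEM) ? &v : NULL);
        return res;
    }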
With commit de768844, a copy of the partial segment was archived with the
.partial suffix, but the original file was still left in pg_xlog, so it
didn't actually solve the problems with archiving the partial segment that
it was supposed to solve. With this patch, the partial segment is renamed
rather than copied, so we only archive it with the .partial suffix.
Also be more robust in detecting if the last segment is already being
archived. Previously I used XLogArchiveIsBusy() for that, but that's not
quite right. With archive_mode='always', there might be a .ready file for
it, and we don't want to rename it to .partial in that case.
The old segment is needed until we're fully committed to the new timeline,
i.e. until we've written the end-of-recovery WAL record and updated the
min recovery point and timeline in the control file. So move the renaming
later in the startup sequence, after all that's been done.
Paul Ramsey reported that commit 35fcb1b3d038a501f3f4c87c05630095abaaadab
induced a core dump on commuted ORDER BY expressions, because it was
assuming that the indexorderby expression could be found verbatim in the
relevant equivalence class, but it wasn't there. We really don't need
anything that complicated anyway; for the data types likely to be used for
index ORDER BY operators in the foreseeable future, the exprType() of the
ORDER BY expression will serve fine. (The case where we'd have to work
harder is where the ORDER BY expression's result is only binary-compatible
with the declared input type of the ordering operator; long before worrying
about that, one would need to get rid of GiST's hard-wired assumption that
said datatype is float8.)
Aside from fixing that crash and adding a regression test for the case,
I did some desultory code review:
nodeIndexscan.c was likewise overthinking how hard it ought to work to
identify the datatype of the ORDER BY expressions.
Add comments explaining how come nodeIndexscan.c can get away with
simplifying assumptions about NULLS LAST ordering and no backward scan.
Revert no-longer-needed changes of find_ec_member_for_tle(); while the
new definition was no worse than the old, it wasn't better either, and
it might cause back-patching pain.
Revert entirely bogus additions to genam.h.
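
That is, instead of searching the equivalence class, something like
(a sketch):

    /* the ORDER BY expression's own result type serves fine */
    Oid orderbyType = exprType((Node *) orderbyexpr);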
We want this struct to be exactly a series of 3 int16 words, no more
and no less. Historically, at least, some ARM compilers preferred to
pad it to 8 bytes unless coerced. Our old way of doing that was just
to use __attribute__((packed)), but as pointed out by Piotr Stefaniak,
that does too much: it also licenses the compiler to give the struct
only byte-alignment. We don't want that because it adds access overhead,
possibly quite significant overhead. According to the GCC manual, what
we want requires also specifying __attribute__((aligned(2))). It's not
entirely clear if all the relevant compilers accept this pragma as well,
but we can hope the buildfarm will tell us if not. We can also add a
static assertion that should fire if the compiler padded the struct.
Since the combination of these pragmas should define exactly what we
want on any compiler that accepts them, let's try using them wherever
we think they exist, not only for __arm__. (This is likely to expose
that the conditional definitions in c.h are inadequate, but finding
that out would be a good thing.)
The immediate motivation for this is that the current definition of
ExecRowMark allows its curCtid field to be misaligned. It is not clear
whether there are any other uses of ItemPointerData with a similar hazard.
We could change the definition of ExecRowMark if this doesn't work, but
it would be far better to have a future-proof fix.
Piotr Stefaniak, some further hacking by me
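
The intended definition is therefore roughly this (a sketch; the exact
conditional tests in c.h may differ):

    typedef struct ItemPointerData
    {
        BlockIdData     ip_blkid;
        OffsetNumber    ip_posid;
    }
    #if defined(__GNUC__)           /* assumption: GCC-style attributes */
    __attribute__((packed, aligned(2)))
    #endif
    ItemPointerData;

    /* catch compilers that padded the struct anyway (C11) */
    _Static_assert(sizeof(ItemPointerData) == 6,
                   "ItemPointerData must be exactly 6 bytes");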
Previously, even if recovery_target_action was set to pause and the
recovery target was reached, the recovery could never be paused,
because the setting of pause was *always* unexpectedly overridden by
that of shutdown. This override is valid and intentional when
hot_standby is not enabled, since there is then no way to resume the
paused recovery and the pause setting is completely useless; but it is
not valid when hot_standby is enabled.
This patch changes the code so that the setting of pause is overridden
with that of shutdown only when hot_standby is not enabled.
Bug reported by Andres Freund
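
The fix reduces to a one-line condition change, roughly (a sketch close
to the actual xlog.c logic):

    /* override pause with shutdown only when it couldn't be resumed */
    if (recoveryTargetAction == RECOVERY_TARGET_ACTION_PAUSE &&
        !EnableHotStandby)
        recoveryTargetAction = RECOVERY_TARGET_ACTION_SHUTDOWN;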
In the spirit of the season.
Patch by CharSyam, plus a few more I spotted with grep.
Use "a" and "an" correctly, mostly in comments. Two error messages were
also fixed (they were just elogs, so no translation work required). Two
function comments in pg_proc.h were also fixed. Etsuro Fujita reported one
of these, but I found a lot more with grep.
Also fix a few other typos spotted while grepping for the a/an typos.
For example, "consists out of ..." -> "consists of ...". Plus a "though"/
"through" mixup reported by Euler Taveira.
Many of these typos were in old code, which would be nice to backpatch to
make future backpatching easier. But much of the code was new, and I didn't
feel like crafting separate patches for each branch. So no backpatching.
Neither the deparsing of the new alias for INSERT's target table nor
that of the inference clause was supported. Also fix up a typo in an error
message.
Add regression tests to test those code paths.
Author: Peter Geoghegan
Defer lookup of the opfamily and input type of a user-specified opclass
until the optimizer selects among available unique indexes; and store
the opclass in the parse analyzed tree instead. The primary reason for
doing this is that for rule deparsing it's easier to use the opclass
than the previous representation.
While at it, also rename a variable in the inference code to better
fit its purpose.
This is separate from the actual fixes for deparsing to make review
easier.
The point of the assertion is to ensure that the arrays allocated in stack
are large enough, but the check was one item short.
This won't matter in practice because MaxIndexTuplesPerPage is an
overestimate, so you can't have that many items on a page in reality.
But let's be tidy.
Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like
the patch that added the assertion.
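
That is, the bound check should admit a completely full page (a sketch;
the variable name is hypothetical):

    /* the stack arrays hold MaxIndexTuplesPerPage entries */
    Assert(nitems <= MaxIndexTuplesPerPage);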
This reverts commit b82a7be603f1811a0a707b53c62de6d5d9431740. There
is a better (less invasive) way to fix it, which I will commit next.
The plain C string language name needs to be wrapped in makeString() so
that the parse tree is copyable. This is detectable by
-DCOPY_PARSE_PLAN_TREES. Add a test case for the COMMENT case.
Also make the quoting in the error messages more consistent.
discovered by Tom Lane
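
Sketch of the change (hypothetical field and variable names):

    /* wrap the plain C string in a String node so the parse tree
     * survives copyObject() under -DCOPY_PARSE_PLAN_TREES */
    stmt->objargs = list_make1(makeString(lang_name));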
These were "text", but that's a bad idea because it has collation-dependent
ordering. No index in template0 should have collation-dependent ordering,
especially not indexes on shared catalogs. There was general agreement
that provider names don't need to be longer than other identifiers, so we
can fix this at a small waste of table space by changing from text to name.
There's no way to fix the problem in the back branches, but we can hope
that security labels don't yet have widespread-enough usage to make it
urgent to fix.
There needs to be a regression sanity test to prevent us from making this
same mistake again; but before putting that in, we'll need to get rid of
similar brain fade in the recently-added pg_replication_origin catalog.
Note: for lack of a suitable testing environment, I've not really exercised
this change. I trust the buildfarm will show up any mistakes.
The previous coding was a leftover from attempting to hang all the on
conflict logic onto ModifyTable's child nodes. It appears not to have
actually caused problems except for EXPLAIN.
Add tests exercising the broken code path and some others.
Author: Peter Geoghegan and Andres Freund
Commit 83e176ec18d2a91dbea1d0d1bd94c38dc47cd77c removed the longstanding
support functions for block sampling without any consideration of the
impact this would have on third-party FDWs. The new API is not notably
more functional for FDWs than the old, so forcing them to change doesn't
seem like a good thing. We can provide the old API as a wrapper (more
or less) around the new one for a minimal amount of extra code.