It does currently, and I don't see us changing that any time soon, but we
don't make that assumption anywhere else.
Per Tom Lane's suggestion. Backpatch to 9.2, like the previous patch that
added this assumption.
XLogReaderFree failed to free the per-block data buffers when they
happened not to be used by the most recently read WAL record.
Michael Paquier. Backpatch to 9.5, where the per-block buffers were added.
In GIN, an all-zeros page would be leaked forever, and never reused. Just
add them to the FSM in vacuum, and they will be reinitialized when grabbed
from the FSM. On master and 9.5, attempting to access the page's opaque
struct also caused an assertion failure, although that was otherwise
harmless.
Reported by Jeff Janes. Backpatch to all supported versions.
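
A minimal sketch of the vacuum-side idea, using the real PageIsNew() and
RecordFreeIndexPage() APIs; the wrapper function and its name are
illustrative, not the actual ginvacuum.c code:

    #include "postgres.h"
    #include "storage/bufpage.h"
    #include "storage/indexfsm.h"
    #include "utils/rel.h"

    /* An all-zeros page was never initialized; hand it to the FSM
     * instead of touching its (nonexistent) opaque struct. It will
     * be reinitialized by whoever grabs it from the FSM. */
    static void
    recycle_if_uninitialized(Relation index, BlockNumber blkno, Page page)
    {
        if (PageIsNew(page))
            RecordFreeIndexPage(index, blkno);
    }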
SP-GiST initialized an all-zeros page at vacuum, but that was not
WAL-logged, which is not safe. You might get a torn page write when it gets
flushed to disk, and end up with a half-initialized index page. To fix,
leave it in the all-zeros state, and add it to the FSM. It will be
initialized when reused. Also don't set the page-deleted flag when recycling
an empty page. That was also not WAL-logged, and a torn write of that would
cause the page to have an invalid checksum.
Backpatch to 9.2, where SP-GiST indexes were added.
That was otherwise harmless, but tripped the new assertion in
PageGetSpecialPointer().
Reported by Amit Langote. Backpatch to 9.5, where the assertion was added.
There is no full discussion of speculative insertions in the executor
README. There is a high-level explanation in execIndexing.c, but it doesn't
seem necessary to refer to it from here.
Peter Geoghegan
I missed a restriction that commit f4abd0241de20d5d6a79b84992b9e88603d44134
should have enforced: we can't pull up an empty-FROM subquery if it's under
an outer join, because then we'd need to wrap its output columns in
PlaceHolderVars. As the code currently stands, the PHVs end up with empty
relid sets, which doesn't work (and is correctly caught by an Assert).
It's possible that this could be fixed by assigning the PHVs the relid
sets of the parent FromExpr/JoinExpr, but getting that to work is more
complication than I care to add right now; indeed it's likely that
we'll never bother, since pulling up empty-FROM subqueries is a rather
marginal optimization anyway.
Per report from Andreas Seltenreich. Back-patch to 9.5 where the faulty
code was added.
The planner generally expects that the estimated rowcount of any relation
is at least one row, *unless* it has been proven empty by constraint
exclusion or similar mechanisms, which is marked by installing a dummy path
as the rel's cheapest path (cf. IS_DUMMY_REL). When I split up
allpaths.c's processing of base rels into separate set_base_rel_sizes and
set_base_rel_pathlists steps, the intention was that dummy rels would get
marked as such during the "set size" step; this is what justifies an Assert
in indxpath.c's get_loop_count that other relations should either be dummy
or have positive rowcount. Unfortunately I didn't get that quite right
for append relations: if all the child rels have been proven empty then
set_append_rel_size would come up with a rowcount of zero, which is
correct, but it didn't then do set_dummy_rel_pathlist. (We would have
ended up with the right state after set_append_rel_pathlist, but that's
too late, if we generate indexpaths for some other rel first.)
In addition to fixing the actual bug, I installed an Assert enforcing this
convention in set_rel_size; that then allows simplification of a couple
of now-redundant tests for zero rowcount in set_append_rel_size.
Also, to cover the possibility that third-party FDWs have been careless
about not returning a zero rowcount estimate, apply clamp_row_est to
whatever an FDW comes up with as the rows estimate.
Per report from Andreas Seltenreich. Back-patch to 9.2. Earlier branches
did not have the separation between set_base_rel_sizes and
set_base_rel_pathlists steps, so there was no intermediate state where an
appendrel would have had inconsistent rowcount and pathlist. It's possible
that adding the Assert to set_rel_size would be a good idea in older
branches too; but since they're not under development any more, it's likely
not worth the trouble.
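
A sketch of that convention at the end of the "set size" step; the wrapper
function and variable names are illustrative, though set_dummy_rel_pathlist()
and clamp_row_est() are the real helpers (the former is local to allpaths.c):

    /* After estimating an appendrel's size: either mark it dummy now,
     * or make sure the estimate is at least one row, so that code such
     * as get_loop_count can rely on dummy-or-positive-rowcount. */
    static void
    finish_append_rel_size(RelOptInfo *rel, double parent_rows)
    {
        if (parent_rows == 0)
            set_dummy_rel_pathlist(rel);    /* all children proven empty */
        else
            rel->rows = clamp_row_est(parent_rows);
    }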
ON CONFLICT unique index inference had a thinko that could affect cases
where the user-supplied inference clause required that an attribute
match a particular (user specified) collation and/or opclass.
infer_collation_opclass_match() has to check for opclass and/or
collation matches and that the attribute is in the list of attributes or
expressions known to be in the definition of the index under
consideration. The bug was that these two conditions weren't necessarily
evaluated for the same index attribute.
Author: Peter Geoghegan
Discussion: CAM3SWZR4uug=WvmGk7UgsqHn2MkEzy9YU-+8jKGO4JPhesyeWg@mail.gmail.com
Backpatch: 9.5, where ON CONFLICT was introduced
Previously, nested grouping set specifications accidentally weren't
flattened, but instead contained the nested specification as an element
in the outer list.
Fix this by concatenating the nested set specification into the outer
one, as the comments actually document. Also add tests to prevent
this from breaking again.
Author: Andrew Gierth, with tests from Jeevan Chalke
Reported-By: Jeevan Chalke
Discussion: CAM2+6=V5YvuxB+EyN4iH=GbD-XTA435TCNvnDFSD--YvXs+pww@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
Previously we disallowed pushing down quals to WHERE in the presence of
grouping sets. That's overly restrictive.
We now instead copy quals to WHERE if applicable, leaving the
one in HAVING in place. That's because, at that stage of the planning
process, it's nontrivial to determine if it's safe to remove the one in
HAVING.
Author: Andrew Gierth
Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
Backpatch: 9.5, where grouping sets were introduced. This isn't exactly
a bugfix, but it seems better to keep the branches in sync at this point.
Previously GROUPING() was not recognized as an aggregate expression,
erroneously allowing the planner to move it from HAVING to WHERE.
Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=WG9omG5rFOMAYBweJxmpTaapvVp5pCeMrE6BfpCwr4Og@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
The previous coding frequently failed to fail, because for one thing it's
unusual to have rollup clauses with only one column, and for another the
wrong mapping sometimes didn't cause obvious problems.
Author: Jeevan Chalke
Reviewed-By: Andrew Gierth
Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
Backpatch: 9.5, where grouping sets were introduced
Some of the older OS X critters in the buildfarm are failing regression,
with symptoms showing that a request for 100% sampling in BERNOULLI or
SYSTEM methods actually gets only around 50% of the table. gdb revealed
that the computation of the "cutoff" number was producing 0x7FFFFFFF
rather than the expected 0x100000000. Inspecting the assembly code,
it looks like gcc is trying to use lrint() instead of rint() and then
fumbling the conversion from long double to uint64. This seems like a
clear compiler bug, but assigning the intermediate result into a plain
double variable works around it, so let's just do that. (Another idea
would be to give up one bit of hash width so that we don't need to use
a uint64 cutoff, but let's see if this is enough.)
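
The shape of the workaround, as a standalone sketch (the function name and
the exact cutoff expression are illustrative, not the tsm code itself):

    #include <math.h>
    #include <stdint.h>

    static uint64_t
    sample_cutoff(double fraction)
    {
        /* Routing the rounded value through a named double keeps the
         * compiler from fusing rint() and the uint64 conversion into
         * the buggy lrint() sequence; 100% sampling then yields
         * 0x100000000 as expected, not 0x7FFFFFFF. */
        double      cut = rint(4294967296.0 * fraction);

        return (uint64_t) cut;
    }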
The original implementation of TABLESAMPLE modeled the tablesample method
API on index access methods, which wasn't a good choice because, without
specialized DDL commands, there's no way to build an extension that can
implement a TSM. (Raw inserts into system catalogs are not an acceptable
thing to do, because we can't undo them during DROP EXTENSION, nor will
pg_upgrade behave sanely.) Instead adopt an API more like procedural
language handlers or foreign data wrappers, wherein the only SQL-level
support object needed is a single handler function identified by having
a special return type. This lets us get rid of the supporting catalog
altogether, so that no custom DDL support is needed for the feature.
Adjust the API so that it can support non-constant tablesample arguments
(the original coding assumed we could evaluate the argument expressions at
ExecInitSampleScan time, which is undesirable even if it weren't outright
unsafe), and discourage sampling methods from looking at invisible tuples.
Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
within and across queries, as required by the SQL standard, and deal more
honestly with methods that can't support that requirement.
Make a full code-review pass over the tablesample additions, and fix
assorted bugs, omissions, infelicities, and cosmetic issues (such as
failure to put the added code stanzas in a consistent ordering).
Improve EXPLAIN's output of tablesample plans, too.
Back-patch to 9.5 so that we don't have to support the original API
in production.
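
The handler-style API in sketch form: one SQL-visible function, identified
by its special tsm_handler return type, fills in a TsmRoutine of callbacks.
The handler name here is hypothetical and the callback assignments are
elided; TsmRoutine and its fields are the real access/tsmapi.h interface:

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/tsmapi.h"
    #include "catalog/pg_type.h"
    #include "nodes/pg_list.h"

    PG_MODULE_MAGIC;
    PG_FUNCTION_INFO_V1(my_tsm_handler);

    Datum
    my_tsm_handler(PG_FUNCTION_ARGS)
    {
        TsmRoutine *tsm = makeNode(TsmRoutine);

        tsm->parameterTypes = list_make1_oid(FLOAT4OID);
        tsm->repeatable_across_queries = true;
        tsm->repeatable_across_scans = true;
        /* ... also assign SampleScanGetSampleSize, InitSampleScan,
         * BeginSampleScan, NextSampleBlock, NextSampleTuple and
         * EndSampleScan before returning ... */

        PG_RETURN_POINTER(tsm);
    }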
UPDATE ... WHERE CURRENT OF would not work in conjunction with
RLS. Arrange to allow the CURRENT OF expression to be pushed down.
Issue noted by Peter Geoghegan. Patch by Dean Rasheed. Back-patch
to 9.5, where RLS was introduced.
The wrong is_null flag was being passed to datum_to_json. Also, null
object key values are not permitted, and this was not being checked
for. Add regression tests covering these cases, and also add those tests
to the json suite, even though the json code was already doing the right
thing.
Fixes bug #13514, initially diagnosed by Tom Lane.
Handling of assigned-to expressions with indirection (e.g. set f1[1] =
3) was broken for ON CONFLICT DO UPDATE. The problem was that
ParseState was consulted to determine whether an INSERT-appropriate or
UPDATE-appropriate behavior should be used when transforming expressions
with indirections. When the wrong path was taken, the old row was
substituted with NULL, leading to wrong results.
To fix, remove p_is_update and only use p_is_insert to decide how to
transform the assignment expression, and set p_is_insert while parsing
the ON CONFLICT statement. This isn't particularly pretty, but it's not
any worse than before.
Author: Peter Geoghegan, slightly edited by me
Discussion: CAM3SWZS8RPvA=KFxADZWw3wAHnnbxMxDzkEC6fNaFc7zSm411w@mail.gmail.com
Backpatch: 9.5, where the feature was introduced
If there were no subtransactions (or multixacts) active, we would calculate
the oldest XID as equal to the next XID. That's correct, but if the next
XID happens to be on the next pg_subtrans (pg_multixact) page, the page
does not exist yet, and SimpleLruTruncate will produce an "apparent
wraparound" warning. The warning is harmless in this case, but looks very
alarming to users.
Backpatch to all supported versions. Patch and analysis by Thomas Munro.
The TABLESAMPLE and row security patches each overlooked this function,
though their errors of omission were opposite: RLS failed to zero out the
securityQuals field, leading to wasteful copying of useless expression
trees in finished plans, while TABLESAMPLE neglected to add a comment
saying that it intentionally *isn't* deleting the tablesample subtree.
There probably should be a similar comment about ctename, too.
Back-patch as appropriate.
Remove HeapScanDescData.rs_initblock, which wasn't being used for anything
in the final version of the patch.
Fix IndexBuildHeapScan so that it supports syncscan again; the patch
broke synchronous scanning for index builds by forcing rs_startblock
to zero even when the caller did not care about that and had asked
for syncscan.
Add some commentary and usage defenses to heap_setscanlimits().
Fix heapam so that asking for rs_numblocks == 0 does what you would
reasonably expect. As coded it amounted to requesting a whole-table
scan, because those "--x <= 0" tests on an unsigned variable would
behave surprisingly.
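
The unsigned-arithmetic trap in isolation, as a standalone illustration
rather than the heapam code:

    #include <stdio.h>

    int
    main(void)
    {
        unsigned int numblocks = 0; /* caller asked to scan zero blocks */

        /* The predecrement wraps to UINT_MAX, which is not <= 0, so a
         * loop guarded this way scans the whole table instead of
         * stopping immediately. */
        if (--numblocks <= 0)
            printf("scan stops immediately\n");
        else
            printf("wrapped to %u: whole-table scan\n", numblocks);
        return 0;
    }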
This was forgotten in cac76582053e (and its fixup ad89a5d115). Since it
seems way too easy to miss this, this commit also introduces a mechanism
to enforce that the array is consistent with the enum.
Problem reported independently by Robert Haas and Jaimin Pan.
Patches proposed by Jaimin Pan, Jim Nasby, Michael Paquier and myself,
though I didn't use any of these and instead went with a cleaner
approach suggested by Tom Lane.
Backpatch to 9.5.
Discussion:
https://www.postgresql.org/message-id/CA+Tgmoa6SgDaxW_n_7SEhwBAc=mniYga+obUj5fmw4rU9_mLvA@mail.gmail.com
https://www.postgresql.org/message-id/29788.1437411581@sss.pgh.pa.us
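
One compile-time way to enforce that kind of enum/array consistency, shown
with C11 static_assert and illustrative names (the tree's actual mechanism
differs in detail):

    #include <assert.h>

    typedef enum ObjectClass
    {
        OCLASS_CLASS,
        OCLASS_PROC,
        OCLASS_TYPE,
        LAST_OCLASS = OCLASS_TYPE   /* keep equal to the last member */
    } ObjectClass;

    static const char *const object_class_names[] =
    {
        "class",
        "proc",
        "type",
    };

    /* Compilation fails if the enum grows but the array does not. */
    static_assert(sizeof(object_class_names) / sizeof(object_class_names[0])
                  == LAST_OCLASS + 1,
                  "object_class_names[] is out of sync with ObjectClass");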
There was already a sanity-check in the other direction: if a page was
marked with WILL_INIT, it had to be initialized by the redo routine. It's
not strictly necessary for correctness that a page is marked with WILL_INIT
if it's going to be initialized at redo, but it's a missed optimization if
nothing else.
Fix a few instances of this issue in SP-GiST, where a block in the WAL
record was not marked with WILL_INIT, but was in fact always initialized
at redo.
We were creating a full-page image of the page unnecessarily in those
cases.
Backpatch to 9.5, where the new WILL_INIT flag was added.
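
How the flag is supplied when assembling a WAL record, sketched around the
real xloginsert.h calls; the wrapper function and its parameters are
illustrative:

    #include "postgres.h"
    #include "access/rmgr.h"
    #include "access/xloginsert.h"

    static XLogRecPtr
    log_reinitialized_page(Buffer buffer, char *rdata, int len, uint8 info)
    {
        XLogBeginInsert();
        XLogRegisterData(rdata, len);
        /* REGBUF_WILL_INIT: redo reinitializes this page from scratch,
         * so no full-page image needs to be taken for it. */
        XLogRegisterBuffer(0, buffer, REGBUF_WILL_INIT);
        return XLogInsert(RM_SPGIST_ID, info);
    }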
Since those role specifiers are checked in the grammar, there's no need
for the old checks to remain in place after 31eae6028ec. Remove them.
Backpatch to 9.5.
Noted and patch by Jeevan Chalke
This removes some info about support procedures being used, which was
obsoleted by commit db5f98ab4f, and adds some more documentation on how
to create new opclasses using the Minmax infrastructure.
(Hopefully we can get something similar for Inclusion as well.)
In passing, fix some obsolete mentions of "mmtuples" in source code
comments.
Backpatch to 9.5, where BRIN was introduced.
Defect noticed by Coverity.
In the previous coding, timeout would be noticed and reported only when
poll() or socket() returned zero (or the equivalent behavior on Windows).
Ordinarily that should work well enough, but it seems conceivable that we
could get into a state where poll() always returns a nonzero value --- for
example, if it is noticing a condition on one of the file descriptors that
we do not think is reason to exit the loop. If that happened, we'd be in a
busy-wait loop that would fail to terminate even when the timeout expires.
We can make this more robust at essentially no cost, by deciding to exit
of our own accord if we compute a zero or negative time-remaining-to-wait.
Previously the code noted this but just clamped the time-remaining to zero,
expecting that we'd detect timeout on the next loop iteration.
Back-patch to 9.2. While 9.1 had a version of WaitLatchOrSocket, it was
primitive compared to later versions, and did not guarantee reliable
detection of timeouts anyway. (Essentially, this is a refinement of
commit 3e7fdcffd6f77187, which was back-patched only as far as 9.2.)
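
The robust pattern in standalone form (plain POSIX poll(); names are
illustrative, not the latch code): recompute the time remaining on every
iteration and leave the loop of our own accord once it reaches zero.

    #include <poll.h>
    #include <stdio.h>
    #include <time.h>

    static long
    elapsed_ms(const struct timespec *start)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - start->tv_sec) * 1000 +
               (now.tv_nsec - start->tv_nsec) / 1000000;
    }

    int
    main(void)
    {
        const long  timeout_ms = 100;
        struct pollfd pfd = { .fd = 0, .events = POLLIN };
        struct timespec start;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;)
        {
            long    cur_timeout = timeout_ms - elapsed_ms(&start);

            /* Exit ourselves on zero/negative remaining time; merely
             * clamping to zero would let a descriptor that polls as
             * ready (but isn't interesting) busy-wait us forever. */
            if (cur_timeout <= 0)
            {
                puts("timeout");
                break;
            }
            if (poll(&pfd, 1, (int) cur_timeout) > 0)
            {
                /* handle events; may loop again without progress */
            }
        }
        return 0;
    }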
Previously, there was an inconsistency across json/jsonb operators that
operate on datums containing JSON arrays -- only some operators
supported negative array count-from-the-end subscripting. Specifically,
only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
integer" operator). This inconsistency seemed likely to be
counter-intuitive to users. To fix, allow all places where the user can
supply an integer subscript to accept a negative subscript value,
including path-orientated operators and functions, as well as other
extraction operators. This will need to be called out as an
incompatibility in the 9.5 release notes, since it's possible that users
are relying on certain established extraction operators changed here
yielding NULL in the event of a negative subscript.
For the json type, this requires adding a way of cheaply getting the
total JSON array element count ahead of time when parsing arrays with a
negative subscript involved, necessitating an ad-hoc lex and parse.
This is followed by a "conversion" from a negative subscript to its
equivalent positive-wise value using the count. From there on, it's as
if a positive-wise value was originally provided.
Note that there is still a minor inconsistency here across jsonb
deletion operators. Unlike the aforementioned new "-" deletion operator
that accepts an integer on its right hand side, the new "#-" path
orientated deletion variant does not throw an error when what appears to
be an array subscript (input that could be recognized as an integer
literal) is being used on an object, which is wrong-headed.
for not being stricter is that it could be the case that an object pair
happens to have a key value that looks like an integer; in general,
these two possibilities are impossible to differentiate with rhs path
text[] argument elements. However, we still don't allow the "#-"
path-orientated deletion operator to perform array-style subscripting.
Rather, we just return the original left operand value in the event of a
negative subscript (which seems analogous to how the established
"jsonb/json #> text[]" path-orientated operator may yield NULL in the
event of an invalid subscript).
In passing, make SetArrayPath() stricter about not accepting cases where
there are trailing non-numeric garbage bytes rather than a clean NUL
byte. This means, for example, that strings like "10e10" are now not
accepted as an array subscript of 10 by some new-to-9.5 path-orientated
jsonb operators (e.g. the new #- operator). Finally, remove dead code
for jsonb subscript deletion; arguably, this should have been done in
commit b81c7b409.
Peter Geoghegan and Andrew Dunstan
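
The subscript conversion at the core of this, as a standalone sketch rather
than the jsonfuncs.c code:

    /* Map a count-from-the-end subscript onto a plain array index once
     * the element count is known; -1 means out of range, for which the
     * operators keep their existing NULL/no-op behavior. */
    static int
    normalize_json_subscript(int idx, int nelements)
    {
        if (idx < 0)
            idx += nelements;       /* -1 becomes nelements - 1, etc. */
        if (idx < 0 || idx >= nelements)
            return -1;
        return idx;
    }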
This tells you what fraction of NOTIFY's queue is currently filled.
Brendan Jurd, reviewed by Merlin Moncure and Gurjeet Singh. A few
further tweaks by me.
It's standard for quicksort implementations, after having partitioned the
input into two subgroups, to recurse to process the smaller partition and
then handle the larger partition by iterating. This method guarantees
that no more than log2(N) levels of recursion can be needed. However,
Bentley and McIlroy argued that checking to see which partition is smaller
isn't worth the cycles, and so their code doesn't do that but just always
recurses on the left partition. In most cases that's fine; but with
worst-case input we might need O(N) levels of recursion, and that means
that qsort could be driven to stack overflow. Such an overflow seems to
be the only explanation for today's report from Yiqing Jin of a SIGSEGV
in med3_tuple while creating an index of a couple billion entries with a
very large maintenance_work_mem setting. Therefore, let's spend the few
additional cycles and lines of code needed to choose the smaller partition
for recursion.
Also, fix up the qsort code so that it properly uses size_t not int for
some intermediate values representing numbers of items. This would only
be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg)
or tuples (in qsort_tuple), which I believe would never happen with any
caller in the current core code --- but perhaps it could happen with
call sites in third-party modules? In any case, this is trouble waiting
to happen, and the corrected code is probably if anything shorter and
faster than before, since it removes sign-extension steps that had to
happen when converting between int and size_t.
In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's
not necessary to preserve the value of "r" across them, and prettify
the output of gen_qsort_tuple.pl a little.
Back-patch to all supported branches. The odds of hitting this issue
are probably higher in 9.4 and up than before, due to the new ability
to allocate sort workspaces exceeding 1GB, but there's no good reason
to believe that it's impossible to crash older branches this way.
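
A standalone sketch of the recursion rule (a simple Lomuto partition stands
in for the real Bentley & McIlroy partitioning): recurse on the smaller
side and iterate on the larger, so depth stays within log2(N), and keep
item counts in size_t throughout.

    #include <stddef.h>

    /* Lomuto partition: returns the pivot's final position. */
    static size_t
    partition(int *a, size_t n)
    {
        int     pivot = a[n - 1];
        size_t  i = 0;

        for (size_t j = 0; j + 1 < n; j++)
        {
            if (a[j] < pivot)
            {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int     tmp = a[i]; a[i] = a[n - 1]; a[n - 1] = tmp;
        return i;
    }

    static void
    quicksort(int *a, size_t n)
    {
        while (n > 1)
        {
            size_t  mid = partition(a, n);
            size_t  left_n = mid;
            size_t  right_n = n - mid - 1;

            if (left_n < right_n)
            {
                quicksort(a, left_n);   /* recurse on the smaller side */
                a += mid + 1;           /* iterate over the larger one */
                n = right_n;
            }
            else
            {
                quicksort(a + mid + 1, right_n);
                n = left_n;
            }
        }
    }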
David Rowley
David Christensen
This allows PostgreSQL modules and their dependencies to have undefined
symbols, resolved at runtime. Perl module shared objects rely on that
in Perl 5.8.0 and later. This fixes the crash when PL/PerlU loads such
modules, as the hstore_plperl test suite does. Module authors can link
using -Wl,-G to permit undefined symbols; by default, linking will fail
as it has. Back-patch to 9.0 (all supported versions).
Also, the lock on pg_operator should not be released until end of
transaction.
Other options cannot be changed, as it's not totally clear whether cached
plans would need to be invalidated if one of the other options changed.
Selectivity estimator functions only change plan costs, not correctness
of plans, so those should be safe.
Original patch by Uriy Zhuravlev, heavily edited by me.
When a column's datatype is changed, ATExecAlterColumnType() rebuilds all
the affected indexes and constraints, and the comments from the old
indexes/constraints were not carried over.
To fix, create a synthetic COMMENT ON command in the work queue, to re-add
any comments on constraints. For indexes, there's a comment field in
IndexStmt that is used.
This fixes bug #13126, reported by Kirill Simonov. Original patch by
Michael Paquier, reviewed by Petr Jelinek and me. This bug is present in
all versions, but only backpatch to 9.5. Given how minor the issue is, it
doesn't seem worth the work and risk to backpatch further than that.
The code in ATPostAlterTypeParse was very deeply indented, mostly because
there were two nested switch-case statements, which add a lot of
indentation. Use if-else blocks instead, to make the code less indented
and more readable.
This is in preparation for the next patch, which makes some actual changes
to the function. These cosmetic parts have been separated to make it easier
to see the real changes in the other patch.
Backpatch to 9.5 where ON CONFLICT was introduced.
Author: Peter Geoghegan
Fixes compiler warning induced by 808ea8fc7bb259ddd810353719cac66e85a608c8.
As noted by Noah Misch, CreatePolicy() and AlterPolicy() omit to call
assign_expr_collations() on the node trees. Fix the omission and add
his test case to the rowsecurity regression test.
Ordinarily, a failure (unexpected exit status) of the startup subprocess
should be considered fatal, so the postmaster should just close up shop
and quit. However, if we sent the startup process a SIGQUIT or SIGKILL
signal, the failure is hardly "unexpected", and we should attempt restart;
this is necessary for recovery from ordinary backend crashes in hot-standby
scenarios. I attempted to implement the latter rule with a two-line patch
in commit 442231d7f71764b8c628044e7ce2225f9aa43b67, but it now emerges that
that patch was a few bricks shy of a load: it failed to distinguish the
case of a signaled startup process from the case where the new startup
process crashes before reaching database consistency. That resulted in
infinitely respawning a new startup process only to have it crash again.
To handle this properly, we really must track whether we have sent the
*current* startup process a kill signal. Rather than add yet another
ad-hoc boolean to the postmaster's state, I chose to unify this with the
existing RecoveryError flag into an enum tracking the startup process's
state. That seems more consistent with the postmaster's general state
machine design.
Back-patch to 9.0, like the previous patch.
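
The shape of that unification, sketched with member names taken from the
description above (illustrative, not the exact postmaster.c code):

    #include <stdbool.h>

    /* One enum instead of scattered booleans: what do we know about
     * the current startup process? */
    typedef enum
    {
        STARTUP_NOT_RUNNING,
        STARTUP_RUNNING,
        STARTUP_SIGNALED,   /* we sent it a SIGQUIT or SIGKILL */
        STARTUP_CRASHED     /* it failed before reaching consistency */
    } StartupStatusEnum;

    static StartupStatusEnum StartupStatus = STARTUP_NOT_RUNNING;

    /* Restart is appropriate only when the failure was expected,
     * i.e. caused by a signal we ourselves sent. */
    static bool
    startup_failure_is_expected(void)
    {
        return StartupStatus == STARTUP_SIGNALED;
    }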
When wal_compression is enabled, there is a risk of leaking data, similar
to the BREACH and CRIME attacks on SSL, where the compression ratio of
a full page image gives a hint of the page's existing data.
This vulnerability is quite cumbersome to exploit in practice, but doable.
So this patch makes wal_compression PGC_SUSET, in order to prevent
non-superusers from enabling it and exploiting the vulnerability even when
the DBA takes the risk seriously and has disabled it in postgresql.conf.
Back-patch to 9.5 where wal_compression was introduced.
This formalizes a decision implicit in commit
4ea51cdfe85ceef8afabceb03c446574daa0ac23 and adds clean detection of
affected systems. Vendor updates are available for each such known bug.
Back-patch to 9.5, where the aforementioned commit first appeared.
When spilling transaction data to disk, a simple typo caused the output
file to be closed and reopened for every serialized change. That happens
not to have a huge impact on Linux, which is probably why it wasn't
noticed so far, but on Windows it appears to trigger actual disk
writes after every change. Not fun.
The bug fortunately does not have any impact besides speed. A change
could end up being in the wrong segment (last instead of next), but
since we read all files to the end, that's just ugly, not really
problematic. It's not a problem to upgrade, since transaction spill
files do not persist across restarts.
Bug: #13484
Reported-By: Olivier Gosseaume
Discussion: 20150703090217.1190.63940@wrigleys.postgresql.org
Backpatch to 9.4, where logical decoding was added.
Also updated regression expected output to match. Noted and patch by Daniele Varrazzo.
We're interested in the buffer size of the socket that's connected to the
client, not the one that's listening for new connections. It happened to
work, as the default buffer size is the same on both, but it was clearly
wrong.
Spotted by Tom Lane
It's unnecessary to set it if the default is higher in the first place.
Furthermore, setting SO_SNDBUF disables the so-called "dynamic send
buffering" feature, which hurts performance further. This can be seen
especially when the network between the client and the server has high
latency.
Chen Huajun
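
A sketch of the guarded form, in POSIX spelling (the real change lives in
the Windows-specific path, and the function name is illustrative):

    #include <sys/socket.h>

    static void
    maybe_grow_sndbuf(int sock, int wanted)
    {
        int         oldsize;
        socklen_t   optlen = sizeof(oldsize);

        if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                       (char *) &oldsize, &optlen) < 0)
            return;                 /* leave the default alone */

        /* Only set SO_SNDBUF when the default really is smaller;
         * setting it needlessly disables dynamic send buffering. */
        if (oldsize < wanted)
            setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                       (char *) &wanted, sizeof(wanted));
    }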
"TextDatumGetCString(PG_GETARG_TEXT_P(x))" is formally wrong: a text*
is not a Datum. Although this coding will accidentally fail to fail on
all known platforms, it risks leaking memory if a detoast step is needed,
unlike "TextDatumGetCString(PG_GETARG_DATUM(x))" which is what's used
elsewhere. Make pg_get_object_address() fall in line with other uses.
Noted while reviewing two-arg current_setting() patch.
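
The contrast, in a hypothetical fmgr-callable function (not
pg_get_object_address itself):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"

    PG_FUNCTION_INFO_V1(demo_text_arg);

    Datum
    demo_text_arg(PG_FUNCTION_ARGS)
    {
        /* Wrong: a text * is not a Datum. It fails to fail on known
         * platforms, but leaks the detoasted copy when one is made:
         *
         *     char *s = TextDatumGetCString(PG_GETARG_TEXT_P(0));
         */

        /* Right: hand over the raw Datum, as done elsewhere. */
        char   *s = TextDatumGetCString(PG_GETARG_DATUM(0));

        PG_RETURN_INT32((int32) strlen(s));
    }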
This allows convenient checking for existence of a GUC from SQL, which is
particularly useful when dealing with custom variables.
David Christensen, reviewed by Jeevan Chalke