| Commit message | Author | Age |
| |
We'd already recognized that we can't pass function pointers across process
boundaries for functions in loadable modules, since a shared library could
get loaded at different addresses in different processes. But actually the
practice doesn't work for functions in the core backend either, if we're
using EXEC_BACKEND. This is the cause of recent failures on buildfarm
member culicidae. Switch to passing a string function name in all cases.
Something like this needs to be back-patched into 9.6, but let's see
if the buildfarm likes it first.
Petr Jelinek, with a bunch of basically-cosmetic adjustments by me
Discussion: https://postgr.es/m/548f9c1d-eafa-e3fa-9da8-f0cc2f654e60@2ndquadrant.com
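For illustration, a minimal sketch of the name-based approach (the helper
find_internal_worker is hypothetical, not the committed code): the worker
resolves its entry point from strings, which works no matter where the
executable or a shared library happens to be mapped in the new process.

    #include "postgres.h"
    #include "fmgr.h"

    typedef void (*bgworker_main_type) (Datum main_arg);

    static bgworker_main_type
    lookup_worker_main(const char *library, const char *funcname)
    {
        /* Core-backend workers: consult a static name -> pointer table. */
        if (strcmp(library, "postgres") == 0)
            return find_internal_worker(funcname);      /* hypothetical */

        /* Loadable modules: let the dynamic loader resolve the symbol. */
        return (bgworker_main_type)
            load_external_function(library, funcname, true, NULL);
    }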
|
| |
AFAICT, the only actual benefit of closing a bootstrap transaction
is to reclaim transient memory. We can do that a lot more cheaply
by just doing a MemoryContextReset on a suitable context. This
gets the runtime of the "bootstrap" phase of initdb down to the
point where, at least by eyeball, it's quite negligible compared
to the rest of the phases. Per discussion with Andres Freund.
Discussion: https://postgr.es/m/9244.1492106743@sss.pgh.pa.us
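A minimal sketch of the idiom (context and function names are hypothetical,
not the committed code):

    #include "postgres.h"
    #include "utils/memutils.h"

    static MemoryContext boot_ctx = NULL;

    static void
    do_boot_command(void)
    {
        MemoryContext oldctx;

        if (boot_ctx == NULL)
            boot_ctx = AllocSetContextCreate(CurrentMemoryContext,
                                             "bootstrap work",
                                             ALLOCSET_DEFAULT_SIZES);
        oldctx = MemoryContextSwitchTo(boot_ctx);
        /* ... process one bootstrap command, allocating freely ... */
        MemoryContextSwitchTo(oldctx);
        /* Reclaim all transient memory at once -- much cheaper than
         * a full transaction commit/start cycle. */
        MemoryContextReset(boot_ctx);
    }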
|
| |
Standardize on testing a hash index page's type by doing
(opaque->hasho_flag & LH_PAGE_TYPE) == LH_xxx_PAGE
Various places were taking shortcuts like
opaque->hasho_flag & LH_BUCKET_PAGE
which, while not actually wrong, is still bad practice because
it encourages use of
opaque->hasho_flag & LH_UNUSED_PAGE
which *is* wrong (LH_UNUSED_PAGE == 0, so the above is constant false).
hash_xlog.c's hash_mask() contained such an incorrect test.
This also ensures that we mask out the additional flag bits that
hasho_flag has accreted since 9.6. pgstattuple's pgstat_hash_page(),
for one, was failing to do that and was thus actively broken.
Also fix assorted comments that hadn't been updated to reflect the
extended usage of hasho_flag, and fix some macros that were testing
just "(hasho_flag & bit)" to use the less dangerous, project-approved
form "((hasho_flag & bit) != 0)".
Coverity found the bug in hash_mask(); I noted the one in
pgstat_hash_page() through code reading.
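Spelled out, given a HashPageOpaque opaque (schematic):

    /* Approved idiom: mask out the type field, then compare. */
    if ((opaque->hasho_flag & LH_PAGE_TYPE) == LH_BUCKET_PAGE)
    {
        /* it's a bucket page */
    }

    /* The shortcut works for nonzero bits, but invites this bug: */
    if (opaque->hasho_flag & LH_UNUSED_PAGE)
    {
        /* LH_UNUSED_PAGE == 0, so this branch is unreachable */
    }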
|
| |
Author: Stas Kelvich <s.kelvich@postgrespro.ru>
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Fujii Masao <masao.fujii@gmail.com>
|
| |
This also changes the default behaviour of subscription workers to
synchronous_commit = off.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
|
| |
The result of TextDatumGetCString is already palloc'ed.
|
| |
Author: Alexander Law <exclusion@gmail.com>
|
| |
regexport.c thought it could just ignore LACON arcs, but the correct
behavior is to treat them as satisfiable while consuming zero input
(rather reminiscently of commit 9f1e642d5). Otherwise, the emitted
simplified-NFA representation may contain no paths leading from initial
to final state, which unsurprisingly confuses pg_trgm, as seen in
bug #14623 from Jeff Janes.
Since regexport's output representation has no concept of an arc that
consumes zero input, recurse internally to find the next normal arc(s)
after any LACON transitions. We'd be forced into changing that
representation if a LACON could be the last arc reaching the final
state, but fortunately the regex library never builds NFAs with such
a configuration, so there always is a next normal arc.
Back-patch to 9.3 where this logic was introduced.
Discussion: https://postgr.es/m/20170413180503.25948.94871@wrigleys.postgresql.org
|
| |
This contains some protocol changes to SASL authentication (which is new
in v10):
* For future-proofing, in the AuthenticationSASL message that begins SASL
authentication, provide a list of SASL mechanisms that the server
supports, for the client to choose from. Currently, it's always just
SCRAM-SHA-256.
* Add a separate authentication message type for the final server->client
SASL message, which the client doesn't need to respond to. This makes
it unambiguous whether the client is supposed to send a response or not.
The SASL mechanism should know that anyway, but better to be explicit.
Also, in the server, support clients that don't send an Initial Client
response in the first SASLInitialResponse message. The server is supposed
to first send an empty request in that case, to which the client will
respond with the data that usually comes in the Initial Client Response.
libpq uses the Initial Client Response field and doesn't need this, and I
would assume any other sensible implementation uses the Initial Client
Response, too, but let's follow the SASL spec.
Improve the protocol documentation on SASL authentication. Add a
section describing the SASL message flow, and some details on our
SCRAM-SHA-256 implementation.
Document the different kinds of PasswordMessages that the frontend sends
in different phases of SASL authentication, as well as GSS/SSPI
authentication as separate message formats. Even though they're all 'p'
messages, and the exact format depends on the context, describing them as
separate message formats makes the documentation clearer.
Reviewed by Michael Paquier and Álvaro Hernández Tortosa.
Discussion: https://www.postgresql.org/message-id/CAB7nPqS-aFg0iM3AQOJwKDv_0WkAedRjs1W2X8EixSz+sKBXCQ@mail.gmail.com
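As a rough sketch (not the committed code), the server-side construction
of the AuthenticationSASL message might look like this, using the
existing pqformat primitives; the mechanism list is a sequence of
null-terminated names ended by an empty string:

    #include "postgres.h"
    #include "libpq/libpq.h"
    #include "libpq/pqformat.h"

    static void
    send_authentication_sasl(void)
    {
        StringInfoData buf;

        pq_beginmessage(&buf, 'R');             /* authentication request */
        pq_sendint(&buf, AUTH_REQ_SASL, 4);     /* "begin SASL" subcode */
        pq_sendstring(&buf, "SCRAM-SHA-256");   /* one per supported mechanism */
        pq_sendbytes(&buf, "", 1);              /* empty string: end of list */
        pq_endmessage(&buf);
    }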
|
| |
Formerly, the bootstrap backend looked up the OIDs corresponding to
names in regproc catalog entries using brute-force searches of pg_proc.
It was somewhat remarkable that that worked at all, since it was used
while populating other pretty-fundamental catalogs like pg_operator.
And it was also quite slow, and getting slower as pg_proc gets bigger.
This patch moves the lookup work into genbki.pl, so that the values in
postgres.bki for regproc columns are always numeric OIDs, an option
that regprocin() already supported. Perl isn't the world's speediest
language, so this about doubles the time needed to run genbki.pl (from
0.3 to 0.6 sec on my machine). But we only do that at most once per
build. The time needed to run initdb drops significantly --- on my
machine, initdb --no-sync goes from 1.8 to 1.3 seconds. So this is
a small net win even for just one initdb per build, and it becomes
quite a nice win for test sequences requiring many initdb runs.
Strip out the now-dead code for brute-force catalog searching in
regprocin. We'd also cargo-culted similar logic into regoperin
and some (not all) of the other reg*in functions. That is all
dead code too since we currently have no need to load such values
during bootstrap. I removed it all, reasoning that if we ever
need such functionality it'd be much better to do it in a similar
way to this patch.
There might be some simplifications possible in the backend now that
regprocin doesn't require doing catalog reads so early in bootstrap.
I've not looked into that, though.
Andreas Karlsson, with some small adjustments by me
Discussion: https://postgr.es/m/30896.1492006367@sss.pgh.pa.us
|
| |
Free each SASL message after sending it. It's not a lot of wasted memory,
and it's short-lived, but the authentication code in general tries to
pfree() stuff, so let's follow the example.
Adding the pfree() revealed a little bug in build_server_first_message():
it attempts to keep a copy of the sent message, but it was missing a
pstrdup(), so the pointer started to dangle once the pfree() was added
to CheckSCRAMAuth().
Reword comments and debug messages slightly, while we're at it.
Reviewed by Michael Paquier.
Discussion: https://www.postgresql.org/message-id/6490b975-5ee1-6280-ac1d-af975b19fb9a@iki.fi
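Schematically, the ordering that matters (helper names hypothetical):

    char   *msg = build_message();              /* palloc'd */

    /* Keep a private copy *before* freeing the original. */
    state->server_first_message = pstrdup(msg);

    send_to_client(msg);
    pfree(msg);         /* safe now: the pstrdup'd copy survives */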
|
| |
It was created as an equivalent of pg_stats, but since the code underlying
pg_statistic_ext is more convenient than that for pg_statistic,
pg_stats_ext is no longer useful.
Author: David Rowley
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/CAKJS1f9zAkPUf9nQrqpFBAsrOHvb5eYa2FVNsmCJy1wegcO_TQ@mail.gmail.com
|
| |
Amit Langote, additional line by me
|
| |
Commit 5e6d8d2bb allowed parallel workers to execute parallel-safe
subplans, but it transmitted the query's entire list of subplans to
the worker(s). Since execMain.c blindly does ExecInitNode and later
ExecEndNode on every list element, this resulted in parallel-unsafe plan
nodes nonetheless getting started up and shut down in parallel workers.
That seems mostly harmless as far as core plan node types go (but
maybe not so much for Gather?). But it resulted in postgres_fdw
opening and then closing extra remote connections, and it's likely
that other non-parallel-safe FDWs or custom scan providers would have
worse reactions.
To fix, just make ExecSerializePlan replace parallel-unsafe subplans
with NULLs in the cut-down plan tree that it transmits to workers.
This relies on ExecInitNode and ExecEndNode to do nothing on NULL
input, but they do anyway. If anything else is touching the dropped
subplans in a parallel worker, that would be a bug to be fixed.
(This thus provides a strong guarantee that we won't try to do
something with a parallel-unsafe subplan in a worker.)
This is, I think, the last fix directly occasioned by Andreas Seltenreich's
bug report of a few days ago.
Tom Lane and Amit Kapila
Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
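A sketch of the substitution (schematic, not the committed code): while
building the cut-down plan tree for workers, walk the subplan list and
replace any parallel-unsafe entry with NULL.

    List       *safe_subplans = NIL;
    ListCell   *lc;

    foreach(lc, subplans)
    {
        Plan *subplan = (Plan *) lfirst(lc);

        if (subplan != NULL && !subplan->parallel_safe)
            subplan = NULL;     /* ExecInitNode/ExecEndNode ignore NULLs */
        safe_subplans = lappend(safe_subplans, subplan);
    }
    pstmt->subplans = safe_subplans;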
|
| |
We'd managed to avoid doing this so far, but it seems pretty obvious
that it would be forced on us some day, and this is much the cleanest
way of approaching the open problem that parallel-unsafe subplans are
being transmitted to parallel workers. Anyway there's no space cost
due to alignment considerations, and the time cost is pretty minimal
since we're just copying the flag from the corresponding Path node.
(At least in most cases ... some of the klugier spots in createplan.c
have to work a bit harder.)
In principle we could perhaps get rid of SubPlan.parallel_safe,
but I thought it better to keep that in case there are reasons to
consider a SubPlan unsafe even when its child plan is parallel-safe.
This patch doesn't actually do anything with the new flags, but
I thought I'd commit it separately anyway.
Note: although this touches outfuncs/readfuncs, there's no need for
a catversion bump because Plan trees aren't stored on disk.
Discussion: https://postgr.es/m/87tw5x4vcu.fsf@credativ.de
|
| |
validateCheckConstraint() shouldn't try to access the storage for
a partitioned table, because it no longer has any. Creating a
_RETURN table on a partitioned table shouldn't be allowed, both
because there's no value in it and because trying to do so would
involve a validation scan against its nonexistent storage.
Amit Langote, reviewed by Tom Lane. Regression test outputs
updated to pass by me.
Discussion: http://postgr.es/m/e5c3cbd3-1551-d6f8-c9e2-51777d632fd2@lab.ntt.co.jp
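A plausible shape for the storage-access fix (schematic, not necessarily
the committed code):

    /* Partitioned tables have no storage of their own; their constraints
     * are validated against the leaf partitions instead. */
    if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
        return;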
|
| |
Commit f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63 should have updated
this code, but did not.
Amit Langote
Discussion: http://postgr.es/m/52d9c443-ec78-5c8a-7a77-0f34aad12b82@lab.ntt.co.jp
|
| |
To prevent future bugs along the lines of the one corrected by commit
8ff518699f19dd0a5076f5090bac8400b8233f7f, or find any that remain
in the current code, add an Assert() that the difference between
parallel_register_count and parallel_terminate_count is in a sane
range.
Kuntal Ghosh, with considerable tidying-up by me, per a suggestion
from Neha Khatri. Reviewed by Tomas Vondra.
Discussion: http://postgr.es/m/CAFO0U+-E8yzchwVnvn5BeRDPgX2z9vZUxQ8dxx9c0XFGBC7N1Q@mail.gmail.com
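In rough form, the invariant being asserted (the limit's name here is an
assumption, not necessarily the committed spelling):

    uint32  active;

    active = BackgroundWorkerData->parallel_register_count -
             BackgroundWorkerData->parallel_terminate_count;

    /* There can never be more live parallel workers than the cap, and
     * unsigned subtraction also catches terminate > register. */
    Assert(active <= MAX_PARALLEL_WORKER_LIMIT);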
|
| |
Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 failed to contemplate
the possibility that a parallel worker registered before a crash would
be unregistered only after the crash; if that happened, we'd end up
with parallel_terminate_count > parallel_register_count and the
system would refuse to launch any more parallel workers.
The easiest way to fix that seems to be to forget BGW_NEVER_RESTART
workers in ResetBackgroundWorkerCrashTimes() rather than leaving them
around to be cleaned up after the conclusion of the restart, so that
they go away before rather than after shared memory is reset.
To make sure that this fix is water-tight, don't allow parallel
workers to be anything other than BGW_NEVER_RESTART, so that after
recovering from a crash, 0 is guaranteed to be the correct starting
value for parallel_register_count. The core code wouldn't do this
anyway, but somebody might try to do it in extension code.
Report by Tomas Vondra. Patch by me, reviewed by Kuntal Ghosh.
Discussion: http://postgr.es/m/CAGz5QC+AVEVS+3rBKRq83AxkJLMZ1peMt4nnrQwczxOrmo3CNw@mail.gmail.com
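A sketch of the crash-reset logic described above (schematic, not the
committed code):

    slist_mutable_iter iter;

    slist_foreach_modify(iter, &BackgroundWorkerList)
    {
        RegisteredBgWorker *rw;

        rw = slist_container(RegisteredBgWorker, rw_lnode, iter.cur);
        if (rw->rw_worker.bgw_restart_time == BGW_NEVER_RESTART)
            ForgetBackgroundWorker(&iter);  /* gone before shmem reset */
        else
            rw->rw_crashed_at = 0;          /* eligible for restart */
    }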
|
| |
Commit 98e6e89040a0534ca26914c66cae9dd49ef62ad9 made inadequate
provision for the case of a single-page shared tidbitmap. It
allocated space for a shared PagetableEntry, but failed to
initialize it.
Report by Thomas Munro. Patch by Dilip Kumar, with some comment
changes by me.
Discussion: http://postgr.es/m/CAEepm=19Cmnfbi-j2Bw-a6yGPeHE1OVhKvvKz9bRBTJGKfGHMA@mail.gmail.com
|
| |
This commit also does
- add REPLICATION_SUBSCRIBERS into config_group
- mark max_logical_replication_workers and max_sync_workers_per_subscription
as REPLICATION_SUBSCRIBERS parameters
- move those parameters into "Subscribers" section in postgresql.conf.sample
Author: Masahiko Sawada, Petr Jelinek and me
Reported-by: Masahiko Sawada
Discussion: http://postgr.es/m/CAD21AoAonSCoa=v=87ZO3vhfUZA1k_E2XRNHTt=xioWGUa+0ug@mail.gmail.com
|
| |
This used to mean "Visual C++ except in those parts where Borland C++
was supported, where it meant one of those". Now that we don't support
Borland C++ anymore, simplify by using _MSC_VER, which is the normal way
to detect Visual C++.
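The standard test is simply:

    #ifdef _MSC_VER
    /* compiling with Visual C++ */
    #endif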
|
| |
heap_drop_with_catalog and RangeVarCallbackForDropRelation should
lock the parent before locking the target relation.
Amit Langote
Discussion: http://postgr.es/m/29588799-a8ce-b0a2-3dae-f39ff6d35922@lab.ntt.co.jp
|
| |
The previously used ShareRowExclusiveLock, while technically probably
more correct, led to deadlocks during seemingly unrelated operations and
thus a poor experience. Use RowExclusiveLock, like for most similar
catalog operations. In some rare cases, the user might see an error
from DDL commands.
Discussion: https://www.postgresql.org/message-id/flat/13592.1490851519%40sss.pgh.pa.us
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
|
| |
The backend local copy of dsa_area_control->freed_segment_counter was
not properly initialized / maintained. This could, if unlucky, lead
to staying attached to a segment for too long.
Found via valgrind bleat on buildfarm animal skink.
Author: Thomas Munro
Discussion: https://postgr.es/m/20170407164935.obsf2jipjfos5zei@alap3.anarazel.de
|
| |
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Patch by me, after an idea of Andrew Gierth's.
Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
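The macro itself is a one-liner in pg_list.h, shown here with a typical
loop (the list and node type are illustrative):

    #define lfirst_node(type, lc)  castNode(type, lfirst(lc))

    ListCell   *lc;

    foreach(lc, restrictlist)
    {
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        /* ... rinfo is type-checked in assert-enabled builds ... */
    }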
|
| |
Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Fujii Masao <masao.fujii@gmail.com>
|
| |
We decided in f1b4c771ea74f42447dccaed42ffcdcccf3aa694 to pass the
original slot to ExecConstraints(), but that breaks when there are
BEFORE ROW triggers involved. So we need to reverse-map the tuples
back to the original descriptor instead, as Amit originally proposed.
Amit Langote, reviewed by Ashutosh Bapat. One overlooked comment
fixed by me.
Discussion: http://postgr.es/m/b3a17254-6849-e542-2353-bde4e880b6a4@lab.ntt.co.jp
|
| |
Commit 4deb41381 modified the query isolationtester uses to see whether a
session is blocked, so that it also checks for waits occurring in GetSafeSnapshot.
However, it did that in a way that enormously increased the query's
runtime under CLOBBER_CACHE_ALWAYS, causing the buildfarm members
that use that to run about four times slower than before, and in some
cases fail entirely. To fix, push the entire logic into a dedicated
backend function. This should actually reduce the CLOBBER_CACHE_ALWAYS
runtime from what it was previously, though I've not checked that.
In passing, expose a SQL function to check for safe-snapshot blockage,
comparable to pg_blocking_pids. This is more or less free given the
infrastructure built to solve the other problem, so we might as well.
Thomas Munro
Discussion: https://postgr.es/m/20170407165749.pstcakbc637opkax@alap3.anarazel.de
|
| |
Commit ac2b09508 was not terribly carefully reviewed. Band-aid it to
not fail on non-RestrictInfo input, per report from Andreas Seltenreich.
Also make it do something more reasonable with variable-free clauses,
and improve nearby comments.
Discussion: https://postgr.es/m/87inmf5rdx.fsf@credativ.de
|
| |
Defaults match the fixed behavior of prior releases, but now DBAs
have better options to tune serializable workloads.
It might be nice to be able to set this per relation, but that part
will need to wait for another release.
Author: Dagfinn Ilmari Mannsåker
|
| |
If there can certainly be no more than one matching inner row for a given
outer row, then the executor can move on to the next outer row as soon as
it's found one match; there's no need to continue scanning the inner
relation for this outer row. This saves useless scanning in nestloop
and hash joins. In merge joins, it offers the opportunity to skip
mark/restore processing, because we know we have not advanced past the
first possible match for the next outer row.
Of course, the devil is in the details: the proof of uniqueness must
depend only on joinquals (not otherquals), and if we want to skip
mergejoin mark/restore then it must depend only on merge clauses.
To avoid adding more planning overhead than absolutely necessary,
the present patch errs in the conservative direction: there are cases
where inner_unique or skip_mark_restore processing could be used, but
it will not do so because it's not sure that the uniqueness proof
depended only on "safe" clauses. This could be improved later.
David Rowley, reviewed and rather heavily editorialized on by me
Discussion: https://postgr.es/m/CAApHDvqF6Sw-TK98bW48TdtFJ+3a7D2mFyZ7++=D-RyPsL76gw@mail.gmail.com
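Roughly what this looks like on the executor side of a nestloop
(schematic; the state-field names here are illustrative):

    /* The joinqual has just passed for this inner tuple; emit the join
     * row, then, if at most one inner match is possible, move straight
     * on to the next outer row instead of rescanning the inner side. */
    if (node->js.single_match)
        node->nl_NeedNewOuter = true;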
|
| |
Using %ld, as we were doing, raises compiler warnings on 32-bit platforms.
Reported by Andres Freund.
Discussion: https://postgr.es/m/20170407214022.fidezl2e6rk3tuiz@alap3.anarazel.de
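The portable spelling uses the INT64_FORMAT macro supplied by the build
(illustrative values):

    #include "postgres.h"

    char    buf[32];
    int64   val = 42;

    /* Expands to "%ld", "%lld", or "%I64d" as the platform requires. */
    snprintf(buf, sizeof(buf), INT64_FORMAT, val);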
|
| |
Instead of allocating memory in brin_deform_tuple and brin_copy_tuple
over and over during a scan, allow reuse of previously allocated memory.
This is said to make for a measurable performance improvement.
Author: Jinyu Zhang, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/495deb78.4186.1500dacaa63.Coremail.beijing_pg@163.com
|
| |
When adding atomics back in b64d92f1a, I added 64bit support as
optional; there wasn't yet a direct user in sight. That turned out to
be a bit short-sighted; it'd already have been useful a number of times.
Add a fallback implementation of 64bit atomics, just like the one we
have for 32bit atomics.
Additionally optimize reads/writes to 64bit on a number of platforms
where aligned writes of that size are atomic. This can now be tested
with PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY.
Author: Andres Freund
Reviewed-By: Amit Kapila
Discussion: https://postgr.es/m/20160330230914.GH13305@awork2.anarazel.de
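A usage sketch; with the fallback in place this works on every platform,
degrading to emulation where native 64bit atomics are missing:

    #include "postgres.h"
    #include "port/atomics.h"

    static pg_atomic_uint64 counter;

    void
    counter_init(void)
    {
        pg_atomic_init_u64(&counter, 0);
    }

    uint64
    counter_bump(void)
    {
        /* fetch-add returns the old value; report the new one */
        return pg_atomic_fetch_add_u64(&counter, 1) + 1;
    }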
|
| |
on MSVC 2010
Author: Michael Paquier <michael.paquier@gmail.com>
|
| |
The WAL-writing piece was forgetting to set the pages-per-range value.
Also, fix the declared type of struct member heapBlk, which I mistakenly
set as OffsetNumber rather than BlockNumber.
Problem was introduced by commit c655899ba9ae (April 1st). Any system
that tries to replay the new WAL record written before this fix is
likely to die on replay and require pg_resetwal.
Reported by Tom Lane.
Discussion: https://postgr.es/m/20191.1491524824@sss.pgh.pa.us
|
| |
Discussion: http://postgr.es/m/CA+Tgmobuz2C-YiQ87h8h0gECCV=F+SE=HBNaAU75rR5FEwtEhQ@mail.gmail.com
|
| |
Discussion: http://postgr.es/m/CA+TgmobgWHcXDcChX2+BqJDk2dkPVF85ZrJFhUyHHQmw8diTpA@mail.gmail.com
|
| |
Noted by Amit Langote.
Discussion: https://postgr.es/m/aad31672-4983-d95d-d24e-6b42fee9b985@lab.ntt.co.jp
|
| |
Harmless, but clearly wrong.
Kyotaro Horiguchi
|
| |
As reported by Sean Johnston in bug #14614, since 9.6 the planner can fail
due to trying to look up the referent of a Var with varno 0. This happens
because we generate such Vars in generate_append_tlist, for lack of any
better way to describe the output of a SetOp node. In typical situations
nothing really cares about that, but given nested set-operation queries
we will call estimate_num_groups on the output of the subquery, and that
wants to know what a Var actually refers to. That logic used to look at
subquery->targetList, but in commit 3fc6e2d7f I'd switched it to look at
subroot->processed_tlist, ie the actual output of the subquery plan not the
parser's idea of the result. It seemed like a good idea at the time :-(.
As a band-aid fix, change it back.
Really we ought to have an honest way of naming the outputs of SetOp steps,
which suggests that it'd be a good idea for the parser to emit an RTE
corresponding to each one. But that's a task for another day, and it
certainly wouldn't yield a back-patchable fix.
Report: https://postgr.es/m/20170407115808.25934.51866@wrigleys.postgresql.org
|
| |
An important step of SASLprep normalization is to convert the string to
Unicode normalization form NFKC. Unicode normalization requires a fairly
large table of character decompositions, which is generated from data
published by the Unicode consortium. The script to generate the table is
put in src/common/unicode, along with test code for the normalization.
A pre-generated version of the tables is included in src/include/common,
so you don't need the code in src/common/unicode to build PostgreSQL, only
if you wish to modify the normalization tables.
The SASLprep implementation depends on the UTF-8 functions from
src/backend/utils/mb/wchar.c. So to use it, you must also compile and link
that. That doesn't change anything for the current users of these
functions, the backend and libpq, as they both already link with wchar.o.
It would be good to move those functions into a separate file in
src/common, but I'll leave that for another day.
No documentation changes included, because there are no details on the
SCRAM mechanism in the docs anyway. An overview on that in the protocol
specification would probably be good, even though SCRAM is documented in
detail in RFC5802. I'll write that as a separate patch. An important thing
to mention there is that we apply SASLprep even on invalid UTF-8 strings,
to support other encodings.
Patch by Michael Paquier and me.
Discussion: https://www.postgresql.org/message-id/CAB7nPqSByyEmAVLtEf1KxTRh=PWNKiWKEKQR=e1yGehz=wbymQ@mail.gmail.com
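A hedged usage sketch (the exact interface spelled here is an assumption
based on the description above, not a documented API):

    #include "postgres.h"
    #include "common/saslprep.h"

    static char *
    prepare_password(const char *password)
    {
        char   *prepared = NULL;

        /* On invalid UTF-8, fall back to the raw string so that other
         * encodings keep working, per the note above. */
        if (pg_saslprep(password, &prepared) != SASLPREP_SUCCESS)
            prepared = pstrdup(password);
        return prepared;
    }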
|
| |
Masahiko Sawada
|
| |
per buildfarm
|
| |
With this change, array fields are populated from json(b) arrays, and
composite fields are populated from json(b) objects.
Along the way, some significant code refactoring is done to remove
redundancy in the way the populate_record[_set] and to_record[_set]
functions operate, and some significant efficiency gains are made by
caching tuple descriptors.
Nikita Glukhov, edited some by me.
Reviewed by Aleksander Alekseev and Tom Lane.
|
| |
tupconvert.c's functions formerly considered that an explicit tuple
conversion was necessary if the input and output tupdescs contained
different type OIDs. The point of that was to make sure that a composite
datum resulting from the conversion would contain the destination rowtype
OID in its composite-datum header. However, commit 3838074f8 entirely
misunderstood what that check was for, thinking that it had something to do
with presence or absence of an OID column within the tuple. Removal of the
check broke the no-op conversion path in ExecEvalConvertRowtype, as
reported by Ashutosh Bapat.
It turns out that of the dozen or so call sites for tupconvert.c functions,
ExecEvalConvertRowtype is the only one that cares about the composite-datum
header fields in the output tuple. In all the rest, we'd much rather avoid
an unnecessary conversion whenever the tuples are physically compatible.
Moreover, the comments in tupconvert.c only promise physical compatibility
not a metadata match. So, let's accept the removal of the guarantee about
the output tuple's rowtype marking, recognizing that this is an API change
that could conceivably break third-party callers of tupconvert.c. (So,
let's remember to mention it in the v10 release notes.)
However, commit 3838074f8 did have a bit of a point here, in that two
tuples mustn't be considered physically compatible if one has HEAP_HASOID
set and the other doesn't. (Some of the callers of tupconvert.c might not
really care about that, but we can't assume it in general.) The previous
check accidentally covered that issue, because no RECORD types ever have
OIDs, while if two tupdescs have the same named composite type OID then,
a fortiori, they have the same tdhasoid setting. If we're removing the
type OID match check then we'd better include tdhasoid match as part of
the physical compatibility check.
Without that hack in tupconvert.c, we need ExecEvalConvertRowtype to take
responsibility for inserting the correct rowtype OID label whenever
tupconvert.c decides it need not do anything. This is easily done with
heap_copy_tuple_as_datum, which will be considerably faster than a tuple
disassembly and reassembly anyway; so from a performance standpoint this
change is a win all around compared to what happened in earlier branches.
It just means a couple more lines of code in ExecEvalConvertRowtype.
Ashutosh Bapat and Tom Lane
Discussion: https://postgr.es/m/CAFjFpRfvHABV6+oVvGcshF8rHn+1LfRUhj7Jz1CDZ4gPUwehBg@mail.gmail.com
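The tightened compatibility test, schematically (not the committed code):

    static bool
    physically_compatible(TupleDesc indesc, TupleDesc outdesc)
    {
        int     i;

        if (indesc->natts != outdesc->natts)
            return false;
        if (indesc->tdhasoid != outdesc->tdhasoid)  /* the new requirement */
            return false;
        for (i = 0; i < indesc->natts; i++)
        {
            Form_pg_attribute inatt = indesc->attrs[i];
            Form_pg_attribute outatt = outdesc->attrs[i];

            if (inatt->atttypid != outatt->atttypid ||
                inatt->attisdropped != outatt->attisdropped)
                return false;
        }
        return true;
    }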
|
| |
Discussion: https://postgr.es/m/CAKJS1f9yurJQW9pdnzL+rmOtsp2vOytkpXKGnMFJEO-qz5O5eA@mail.gmail.com
|
| |
Problems pointed out by Andres Freund and Thomas Munro.
|