Commit message

This removes a test case added by commit b69ec7cc9, which was intended
to exercise a corner case involving the rule used at that time that
materialized views were unpopulated iff they had physical size zero.
We got rid of that rule very shortly thereafter, in commit 1d6c72a55, but
kept the test case. However, because the case now asks what VACUUM
will do to a zero-sized physical file, it would be pretty surprising
if the answer were ever anything but "nothing" ... and if things were
indeed that broken, surely we'd find it out from other tests. Since
the test involves a table that's fairly large by regression-test
standards (100K rows), it's quite slow to run. Dropping it should
save some buildfarm cycles, so let's do that.
Discussion: https://postgr.es/m/32386.1493831320@sss.pgh.pa.us
tzparse() would attempt to load the "posixrules" timezone database file on
each call. That might seem like it would only be an issue when selecting a
POSIX-style zone name rather than a zone defined in the timezone database,
but it turns out that each zone definition file contains a POSIX-style zone
string and tzload() will call tzparse() to parse that. Thus, when scanning
the whole timezone file tree as we do in the pg_timezone_names view,
"posixrules" was read repetitively for each zone definition file. Fix
that by caching the file on first use within any given process. (We cache
other zone definitions for the life of the process, so there seems little
reason not to cache this one as well.) This probably won't help much in
processes that never run pg_timezone_names, but even one additional SET
of the timezone GUC would come out ahead.
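
Schematically, the fix is the usual load-once-per-process pattern. A
minimal standalone sketch of that pattern (hypothetical names, not the
actual timezone code):

    /* Minimal sketch of per-process load-once caching; names are
     * hypothetical, not the actual tz code. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { char rules[64]; } zone_state;  /* stand-in for parsed data */

    /* Stub standing in for the expensive file read + parse. */
    static bool
    load_zone_file(const char *name, zone_state *sp)
    {
        printf("loading %s from disk\n", name); /* happens once per process */
        snprintf(sp->rules, sizeof(sp->rules), "rules-for-%s", name);
        return true;
    }

    /* Return cached "posixrules" data, reading the file on first use only. */
    static const zone_state *
    get_posixrules(void)
    {
        static zone_state cache;
        static bool loaded = false;
        static bool load_ok = false;

        if (!loaded)
        {
            load_ok = load_zone_file("posixrules", &cache);
            loaded = true;          /* remember failures too */
        }
        return load_ok ? &cache : NULL;
    }

    int
    main(void)
    {
        for (int i = 0; i < 3; i++) /* only the first call touches the disk */
            (void) get_posixrules();
        return 0;
    }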
An even worse problem for pg_timezone_names is that pg_open_tzfile()
has an inefficient way of identifying the canonical case of a zone name:
it basically re-descends the directory tree to the zone file. That's not
awful for an individual "SET timezone" operation, but it's pretty horrid
when we're inspecting every zone in the database. And it's pointless too
because we already know the canonical spelling, having just read it from
the filesystem. Fix by teaching pg_open_tzfile() to avoid the directory
search if it's not asked for the canonical name, and backfilling the
proper result in pg_tzenumerate_next().
In combination these changes seem to make the pg_timezone_names view
about 3x faster to read, for me. Since a scan of pg_timezone_names
has up to now been one of the slowest queries in the regression tests,
this should help buildfarm cycle times a little bit.
Back-patch to all supported branches, not so much because it's likely
that users will care much about the view's performance as because
tracking changes in the upstream IANA timezone code is really painful
if we don't keep all the branches in sync.
Discussion: https://postgr.es/m/27962.1493671706@sss.pgh.pa.us
Due to a missing CommandCounterIncrement() call, parsing of a non-utility
command in an extension script would not see the effects of the immediately
preceding DDL command, unless that command's execution ends with
CommandCounterIncrement() internally ... which some do but many don't.
Report by Philippe Beaudoin, diagnosis by Julien Rouhaud.
Rather remarkably, this bug has evaded detection since extensions were
invented, so back-patch to all supported branches.
Discussion: https://postgr.es/m/2cf7941e-4e41-7714-3de8-37b1a8f74dff@free.fr
Report and fix from Vaishnavi Prabakaran
Backpatch to 9.4 like original.
DST law changes in Chile, Haiti, and Mongolia. Historical corrections for
Ecuador, Kazakhstan, Liberia, and Spain.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in South America, the Pacific and Indian oceans, and some Asian and Middle
Eastern zones. I kept these abbreviations in the tznames/ data files,
however, so that we will still accept them for input. (We may want to
start trimming those files someday, but I think we should wait for the
upstream dust to settle before deciding what to do.)
In passing, add MESZ (Mitteleuropaeische Sommerzeit) to the tznames lists;
since we accept MEZ (Mitteleuropaeische Zeit) it seems rather strange not
to take the other one. And fix some incorrect, or at least obsolete,
comments that certain abbreviations are not traceable to the IANA data.
Previously, provision was made only for running the bin checks in a single
step. Now these tests can be run individually, as well as tests in other
locations (e.g. src/test/recovery).
Also provide for suppressing unnecessary temp installs by setting the
NO_TEMP_INSTALL environment variable just as the Makefiles do.
Backpatch to 9.4.
zic no longer mishandles some transitions in January 2038 when it
attempts to work around Qt bug 53071. This fixes a bug affecting
Pacific/Tongatapu that was introduced in zic 2016e. localtime.c
now contains a workaround, useful when loading a file generated by
a buggy zic.
There are assorted cosmetic changes as well, notably relocation
of a bunch of #defines.
Currently, trying to validate a NO INHERIT constraint on the parent will
search for the constraint in child tables (where it is not supposed to
exist), wrongly causing a "constraint does not exist" error.
Amit Langote, per a report from Hans Buschmann.
Discussion: http://postgr.es/m/20170421184012.24362.19@wrigleys.postgresql.org
Logical decoding stores historical snapshots on disk, so that logical
decoding can restart without having to reconstruct a snapshot from
scratch (for which the resources are not guaranteed to be present
anymore). These serialized snapshots were also used when creating a
new slot via the walsender interface, which can export a "full"
snapshot (i.e. one that can read all tables, not just catalog ones).
The problem is that the serialized snapshots are only useful for
catalogs and not for normal user tables. Thus the use of such a
serialized snapshot could result in an inconsistent snapshot being
exported, which could lead to queries returning wrong data. This
would only happen if logical slots are created while another logical
slot already exists.
Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
The logical decoding machinery already preserved all the required
catalog tuples, which is sufficient in the course of normal logical
decoding, but did not guarantee that non-catalog tuples were preserved
during computation of the initial snapshot when creating a slot over
the replication protocol.
This could cause a corrupted initial snapshot being exported. The
time window for issues is usually not terribly large, but on a busy
server it's perfectly possible to hit it. Ongoing decoding is not
affected by this bug.
To avoid increased overhead for the SQL API, only retain additional
tuples when a logical slot is being created over the replication
protocol. To do so this commit changes the signature of
CreateInitDecodingContext(), but it seems unlikely that it's being
used in an extension, so that's probably ok.
In a drive-by fix, fix handling of
ReplicationSlotsComputeRequiredXmin's already_locked argument, which
should only apply to ProcArrayLock, not ReplicationSlotControlLock.
Reported-By: Erik Rijkers
Analyzed-By: Petr Jelinek
Author: Petr Jelinek, heavily editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/9a897b86-46e1-9915-ee4c-da02e4ff6a95@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
This corner case didn't behave nicely at all: the postmaster would
(partially) update its state as though the process had started
successfully, and be quite confused thereafter. Fix it to act
like the worker had crashed, instead.
In passing, refactor so that do_start_bgworker contains all the
state-change logic for bgworker launch, rather than just some of it.
Back-patch as far as 9.4. 9.3 contains similar logic, but it's just
enough different that I don't feel comfortable applying the patch
without more study; and the use of bgworkers in 9.3 was so small
that it doesn't seem worth the extra work.
transam/parallel.c is still entirely unprepared for the possibility
of bgworker startup failure, but that seems like material for a
separate patch.
Discussion: https://postgr.es/m/4905.1492813727@sss.pgh.pa.us
Previously the code would generate incorrect results, assertion
failures, or crashes if given unsortable (but hashable) columns in
grouping sets. Handle by throwing an error instead.
Report and patch by Pavan Deolasee (though I changed the error
wording slightly); regression test by me.
(This affects 9.5 only since the planner was refactored in 9.6.)
This seems to be largely cosmetic, avoiding valgrind bleats and the
like. The uninitialized padding influences the CRC of the on-disk
entry, but because it's also used when verifying the CRC, that doesn't
cause spurious failures. Backpatch nonetheless.
It's a bit unfortunate that contrib/test_decoding/sql/replorigin.sql
doesn't exercise the checkpoint path, but checkpoints are fairly
expensive on weaker machines, and we'd have to stop/start for that to
be meaningful.
Author: Andres Freund
Discussion: https://postgr.es/m/20170422183123.w2jgiuxtts7qrqaq@alap3.anarazel.de
Backpatch: 9.5, where replication origins were introduced
ProcessTwoPhaseBuffer (formerly StandbyRecoverPreparedTransactions)
mixed up the parent and child XIDs when calling SubTransSetParent to
record the transactions' relationship in pg_subtrans.
Remarkably, analysis by Simon Riggs suggests that this doesn't lead to
visible problems (at least, not in non-Assert builds). That might
explain why we'd not noticed it before. Nonetheless, it's surely wrong.
This code was born broken, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20110.1492905318@sss.pgh.pa.us
The POSIX standard does not say that the success return value for
fcntl(F_SETFD) and fcntl(F_SETFL) is zero; it says only that it's not -1.
We had several calls that were making the stronger assumption. Adjust
them to test specifically for -1 for strict spec compliance.
The standard further leaves open the possibility that the O_NONBLOCK
flag bit is not the only active one in F_SETFL's argument. Formally,
therefore, one ought to get the current flags with F_GETFL and store
them back with only the O_NONBLOCK bit changed when trying to change
the nonblock state. In port/noblock.c, we were doing the full pushup
in pg_set_block but not in pg_set_noblock, which is just weird. Make
both of them do it properly, since they have little business making
any assumptions about the socket they're handed. The other places
where we're issuing F_SETFL are working with FDs we just got from
pipe(2), so it's reasonable to assume the FDs' properties are all
default, so I didn't bother adding F_GETFL steps there.
Also, while pg_set_block deserves some points for trying to do things
right, somebody had decided that it'd be even better to cast fcntl's
third argument to "long". Which is completely loony, because POSIX
clearly says the third argument for an F_SETFL call is "int".
Given the lack of field complaints, these missteps apparently are not
of significance on any common platforms. But they're still wrong,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/30882.1492800880@sss.pgh.pa.us
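
The spec-compliant pattern described above (fetch the current flags with
F_GETFL, change only the O_NONBLOCK bit, pass a plain int, and test for
-1 rather than nonzero) reduces to a sketch like the following; this is
an illustration, not the actual port/noblock.c:

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Set or clear O_NONBLOCK while preserving all other status flags. */
    static bool
    set_nonblocking(int fd, bool nonblock)
    {
        int flags = fcntl(fd, F_GETFL);     /* read current flags first */

        if (flags == -1)                    /* POSIX only promises -1 on error */
            return false;
        if (nonblock)
            flags |= O_NONBLOCK;
        else
            flags &= ~O_NONBLOCK;
        /* third argument is "int" per POSIX; success is merely "not -1" */
        return fcntl(fd, F_SETFL, flags) != -1;
    }

    int
    main(void)
    {
        if (!set_nonblocking(STDIN_FILENO, true))
            perror("fcntl");
        return 0;
    }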
We were applying the use_physical_tlist optimization to all relation
scan plans, even those implemented by custom scan providers. However,
that's a bad idea for a couple of reasons. The custom provider might
be unable to provide columns that it hadn't expected to be asked for
(for example, the custom scan might depend on an index-only scan).
Even more to the point, there's no good reason to suppose that this
"optimization" is a win for a custom scan; whatever the custom provider
is doing is likely not based on simply returning physical heap tuples.
(As a counterexample, if the custom scan is an interface to a column store,
demanding all columns would be a huge loss.) If it is a win, the custom
provider could make that decision for itself and insert a suitable
pathtarget into the path, anyway.
Per discussion with Dmitry Ivanov. Back-patch to 9.5 where custom scan
support was introduced. The argument that the custom provider can adjust
the behavior by changing the pathtarget only applies to 9.6+, but on
balance it seems more likely that use_physical_tlist will hurt custom
scans than help them.
Discussion: https://postgr.es/m/e29ddd30-8ef9-4da5-a50b-2bb7b8c7198d@postgrespro.ru
Introduced by 087e696f066d31cb2f1269a1296a13dfe0bf7a11, happens with gcc
4.7.2.
In standard non-Windows builds, there's no particular reason to care what
address the kernel chooses to map the shared memory segment at. However,
when building with EXEC_BACKEND, there's a risk that the chosen address
won't be available in all child processes. Linux with ASLR enabled (which
it is by default) seems particularly at risk because it puts shmem segments
into the same area where it maps shared libraries. We can work around
that by specifying a mapping address that's outside the range where
shared libraries could get mapped. On x86_64 Linux, 0x7e0000000000
seems to work well.
This is only meant for testing/debugging purposes, so it doesn't seem
necessary to go as far as providing a GUC (or any user-visible
documentation, though we might change that later). Instead, it's just
controlled by setting an environment variable PG_SHMEM_ADDR to the
desired attach address.
Back-patch to all supported branches, since the point here is to
remove intermittent buildfarm failures on EXEC_BACKEND animals.
Owners of affected animals will need to add a suitable setting of
PG_SHMEM_ADDR to their build_env configuration.
Discussion: https://postgr.es/m/7036.1492231361@sss.pgh.pa.us
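
A simplified standalone sketch of the mechanism (SysV shared memory on
Linux, with error handling pared down):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int
    main(void)
    {
        int         shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        const char *addr_str = getenv("PG_SHMEM_ADDR");
        void       *requested = NULL;   /* NULL: let the kernel pick */
        void       *mapped;

        if (shmid < 0)
        {
            perror("shmget");
            return 1;
        }
        if (addr_str != NULL)           /* e.g. PG_SHMEM_ADDR=0x7e0000000000 */
            requested = (void *) strtoul(addr_str, NULL, 0);

        mapped = shmat(shmid, requested, 0);
        if (mapped == (void *) -1)
            perror("shmat");
        else
            printf("attached at %p\n", mapped);

        shmctl(shmid, IPC_RMID, NULL);  /* clean up the demo segment */
        return 0;
    }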
regexport.c thought it could just ignore LACON arcs, but the correct
behavior is to treat them as satisfiable while consuming zero input
(rather reminiscently of commit 9f1e642d5). Otherwise, the emitted
simplified-NFA representation may contain no paths leading from initial
to final state, which unsurprisingly confuses pg_trgm, as seen in
bug #14623 from Jeff Janes.
Since regexport's output representation has no concept of an arc that
consumes zero input, recurse internally to find the next normal arc(s)
after any LACON transitions. We'd be forced into changing that
representation if a LACON could be the last arc reaching the final
state, but fortunately the regex library never builds NFAs with such
a configuration, so there always is a next normal arc.
Back-patch to 9.3 where this logic was introduced.
Discussion: https://postgr.es/m/20170413180503.25948.94871@wrigleys.postgresql.org
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Patch by me, after an idea of Andrew Gierth's.
Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
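
Schematically (a fragment, not compilable on its own), the macro is just
a composition of the two existing ones, and a typical list-walking call
site shrinks accordingly:

    /* in pg_list.h */
    #define lfirst_node(type, lc)  castNode(type, lfirst(lc))

    /* typical call site */
    foreach(lc, clauses)
    {
        /* was: RestrictInfo *rinfo = castNode(RestrictInfo, lfirst(lc)); */
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        /* ... use rinfo ... */
    }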
HandleFunctionRequest() is no longer responsible for reading the protocol
message from the client, since commit 2b3a8b20c2. Fix the outdated
comments.
HandleFunctionRequest() now always returns 0, because the code that used
to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer
needs to check the return value.
Reported by Andres Freund. Backpatch to all supported versions, even though
this doesn't have any user-visible effect, to make backporting future
patches in this area easier.
Discussion: https://www.postgresql.org/message-id/20170405010525.rt5azbya5fkbhvrx@alap3.anarazel.de
When using integer timestamps, the interval-comparison functions tried
to compute the overall magnitude of an interval as an int64 number of
microseconds. As reported by Frazer McLean, this overflows for intervals
exceeding about 296000 years, which is bad since we nominally allow
intervals many times larger than that. That results in wrong comparison
results, and possibly in corrupted btree indexes for columns containing
such large interval values.
To fix, compute the magnitude as int128 instead. Although some compilers
have native support for int128 calculations, many don't, so create our
own support functions that can do 128-bit addition and multiplication
if the compiler support isn't there. These support functions are designed
with an eye to allowing the int128 code paths in numeric.c to be rewritten
for use on all platforms, although this patch doesn't do that, or even
provide all the int128 primitives that will be needed for it.
Back-patch as far as 9.4. Earlier releases did not guard against overflow
of interval values at all (commit 146604ec4 fixed that), so it seems not
very exciting to worry about overly-large intervals for them.
Before 9.6, we did not assume that unreferenced "static inline" functions
would not draw compiler warnings, so omit functions not directly referenced
by timestamp.c, the only present consumer of int128.h. (We could have
omitted these functions in HEAD too, but since they were written and
debugged on the way to the present patch, and they look likely to be needed
by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent
such warnings in a --disable-integer-datetimes build, though.
Before 9.5, configure will never define HAVE_INT128, so the part of
int128.h that exploits a native int128 implementation is dead code in the
9.4 branch. I didn't bother to remove it, thinking that keeping the file
looking similar in different branches is more useful.
In HEAD only, add a simple test harness for int128.h in src/tools/.
In back branches, this does not change the float-timestamps code path.
That's not subject to the same kind of overflow risk, since it computes
the interval magnitude as float8. (No doubt, when this code was originally
written, overflow was disregarded for exactly that reason.) There is a
precision hazard instead :-(, but we'll avert our eyes from that question,
since no complaints have been reported and that code's deprecated anyway.
Kyotaro Horiguchi and Tom Lane
Discussion: https://postgr.es/m/1490104629.422698.918452336.26FA96B7@webmail.messagingengine.com
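
The fallback technique is ordinary two-word arithmetic: detect carry out
of the low word and propagate it into the high word. A self-contained
sketch of just the addition primitive, using the same hi/lo layout but
not the actual int128.h:

    #include <stdint.h>
    #include <stdio.h>

    /* 128-bit value as two 64-bit words, for compilers without int128. */
    typedef struct
    {
        int64_t     hi;     /* most significant 64 bits, carries the sign */
        uint64_t    lo;     /* least significant 64 bits */
    } INT128;

    /* x += y: add low words, then propagate the carry into the high words. */
    static void
    int128_add(INT128 *x, INT128 y)
    {
        uint64_t    oldlo = x->lo;

        x->lo += y.lo;
        x->hi += y.hi + (x->lo < oldlo);    /* unsigned wraparound => carry */
    }

    int
    main(void)
    {
        INT128      a = {0, UINT64_MAX};    /* 2^64 - 1 */
        INT128      b = {0, 1};

        int128_add(&a, b);                  /* expect hi=1, lo=0, i.e. 2^64 */
        printf("hi=%lld lo=%llu\n", (long long) a.hi,
               (unsigned long long) a.lo);
        return 0;
    }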
This backpatches 51e26c9 and 7220c7b, including both documentation
updates clarifying the checkpoints at the beginning of base backups and
the messages in verbose and progress mode of pg_basebackup.
Author: Michael Banck
Discussion: https://postgr.es/m/21444.1488142764%40sss.pgh.pa.us
On EXEC_BACKEND builds, this can fail if ASLR is in use.
Backpatch to 9.5. On master, remove the bgw_main field entirely,
since there is no situation in which it is safe for an
EXEC_BACKEND build. On 9.6 and 9.5, leave the field intact to avoid
breaking things for third-party code that doesn't care about working
under EXEC_BACKEND. Prior to 9.5, there are no in-core bgworker
entrypoints.
Petr Jelinek, reviewed by me.
Discussion: http://postgr.es/m/09d8ad33-4287-a09b-a77f-77f8761adb5e@2ndquadrant.com
We were assigning values near 255 through "char *" pointers. On machines
where char is signed, that's not entirely kosher, and it's reasonable
for compilers to warn about it.
A better solution would be to change the pointer type to "unsigned char *",
but that would be vastly more invasive. For the moment, let's just apply
this simple backpatchable solution.
Aleksander Alekseev
Discussion: https://postgr.es/m/20170220141239.GD12278@e733.localdomain
Discussion: https://postgr.es/m/2839.1490714708@sss.pgh.pa.us
The compiler is entitled to store a char[] local variable with no
particular alignment requirement. Our RADIUS code cavalierly took such
a local variable and cast its address to a struct type that does have
alignment requirements. On an alignment-picky machine this would lead
to bus errors. To fix, declare the local variable honestly, and then
cast its address to char * for use in the I/O calls.
Given the lack of field complaints, there must be very few if any
people affected; but nonetheless this is a clear portability issue,
so back-patch to all supported branches.
Noted while looking at a Coverity complaint in the same code.
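
In outline, the hazard and the fix look like this; the struct is a
hypothetical stand-in for the real RADIUS packet type:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct
    {
        uint8_t     code;
        uint8_t     id;
        uint16_t    length;         /* 2-byte alignment requirement */
        uint8_t     vector[16];
    } radius_packet;

    int
    main(void)
    {
        int     fd = open("/dev/null", O_WRONLY);

        /* BROKEN: a char array carries no alignment guarantee, so this
         * cast can trap on alignment-picky hardware when dereferenced. */
        char            raw[sizeof(radius_packet)];
        radius_packet  *p_bad = (radius_packet *) raw;

        /* FIXED: declare the variable with its real type (the compiler
         * aligns it) and cast to char * only at the I/O boundary. */
        radius_packet   packet;

        memset(&packet, 0, sizeof(packet));
        packet.code = 1;
        (void) write(fd, (char *) &packet, sizeof(packet));

        (void) p_bad;               /* not dereferenced in this sketch */
        close(fd);
        return 0;
    }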
This reverts commit 38bdba54a64bacec78e3266f0848b0b4a824132a, "Fix and
simplify check for whether we're running as Windows service". It turns out
that older versions of MinGW - like that on buildfarm member narwhal - do
not support the CheckTokenMembership() function. This replaces the
refactoring with a much smaller fix, to add a check for SE_GROUP_ENABLED to
pgwin32_is_service().
Only apply to back-branches, and keep the refactoring in HEAD. It's
unlikely that anyone is still really using such an old version of MinGW -
aside from narwhal - but let's not change the minimum requirements in
minor releases.
Discussion: https://www.postgresql.org/message-id/16609.1489773427@sss.pgh.pa.us
Patch: https://www.postgresql.org/message-id/CAB7nPqSvfu%3DKpJ%3DNX%2BYAHmgAmQdzA7N5h31BjzXeMgczhGCC%2BQ%40mail.gmail.com
If the process token contains SECURITY_SERVICE_RID, but it has been
disabled by the SE_GROUP_USE_FOR_DENY_ONLY attribute, win32_is_service()
would incorrectly report that we're running as a service. That situation
arises, e.g. if postmaster is launched with a restricted security token,
with the "Log in as Service" privilege explicitly removed.
Replace the broken code with CheckProcessTokenMembership(), which does
this correctly. Also replace similar code in win32_is_admin(), even
though it got this right, for simplicity and consistency.
Per bug #13755, reported by Breen Hagan. Back-patch to all supported
versions. Patch by Takayuki Tsunakawa, reviewed by Michael Paquier.
Discussion: https://www.postgresql.org/message-id/20151104062315.2745.67143%40wrigleys.postgresql.org
VACUUM could wrongly set reltuples to zero on a non-empty relation in the
presence of page pins, which leads to serious estimation errors in the
planner. This particularly affects small heavily-accessed tables,
especially where locking (e.g. from FK constraints) forces frequent
vacuums for mxid cleanup.
Fix by keeping separate track of pages whose live tuples were actually
counted vs. pages that were only scanned for freezing purposes. Thus,
reltuples can only be set to 0 if all pages of the relation were
actually counted.
Backpatch to all supported versions.
Per bug #14057 from Nicolas Baccelli, analyzed by me.
Discussion: https://postgr.es/m/20160331103739.8956.94469@wrigleys.postgresql.org
The original coding was trying to use a TypeName as a string Value,
which doesn't work; an oversight in my commit a61fd533. Repair.
Also, make sure we cover the broken case in the relevant test script.
Backpatch to 9.5.
Discussion: https://postgr.es/m/20170315151829.bhxsvrp75xdxhm3n@alvherre.pgsql
From: Josh Soref <jsoref@gmail.com>
This could result in corruption of the init fork of an unlogged index
if the ambuildempty routine for that index used shared buffers to
create the init fork, which was true for brin, gin, gist, and hash
indexes.
Patch by me, based on an earlier patch by Michael Paquier, who also
reviewed this one. This also incorporates an idea from Artur
Zakirov.
Discussion: http://postgr.es/m/CACYUyc8yccE4xfxhqxfh_Mh38j7dRFuxfaK1p6dSNAEUakxUyQ@mail.gmail.com
Rather than waiting around for statement_timeout to expire, we can just
try to take the table's lock in nowait mode. This saves some fraction
under 4 seconds when running this test with prepared xacts available,
and it guards against timeout-expired-anyway failures on very slow
machines when prepared xacts are not available, as seen in a recent
failure on axolotl for instance.
This approach could fail if autovacuum were to take an exclusive lock
on the test table concurrently, but there's no reason for it to do so.
Since the main point here is to improve stability in the buildfarm,
back-patch to all supported branches.
The problem was that "begin transaction" was issued automatically
before executing COMMIT/ROLLBACK PREPARED when not in autocommit mode.
Fix by Masahiko Sawada.
Commits 89e0bac86 et al replaced newlines with spaces in object names
printed in SQL comments, but we neglected to consider that the same
names are also printed by "pg_restore -l", and a newline would render
the output unparseable by "pg_restore -L". Apply the same replacement
in "-l" output. Since "pg_restore -L" doesn't actually examine any
object names, only the dump ID field that starts each line, this is
enough to fix things for its purposes.
The previous fix was treated as a security issue, and we might have
done that here as well, except that the issue was reported publicly
to start with. Anyway it's hard to see how this could be exploited
for SQL injection; "pg_restore -L" doesn't do much with the file
except parse it for leading integers.
Per bug #14587 from Milos Urbanek. Back-patch to all supported versions.
Discussion: https://postgr.es/m/20170310155318.1425.30483@wrigleys.postgresql.org
The IANA timezone crew continues to chip away at their project of removing
timezone abbreviations that have no real-world currency from their
database. The tzdata2017a update removes all such abbreviations for
South American zones, as well as much of the Pacific. This breaks some
test cases in timestamptz.sql that were expecting America/Santiago and
America/Caracas to have non-numeric abbreviations.
The test cases involving America/Santiago seem to have selected that
zone more or less at random, so just replace it with America/New_York,
which is of similar longitude. The cases involving America/Caracas are
harder since they were chosen to test a time-varying zone abbreviation
around a point where it changed meaning in the backwards direction.
Fortunately, Europe/Moscow has a similar case in 2014, and the MSK/MSD
abbreviations are well enough attested that IANA seems unlikely to
decide to remove them from the database in future.
With these changes, this regression test should pass when using any IANA
zone database from 2015 or later. One could wish that there were a few
years more daylight on how out-of-date your zone database can be ... but
really the --with-system-tzdata option is only meant for use on platforms
where the zone database is kept up-to-date pretty faithfully, so I do not
think this is a big objection.
Discussion: https://postgr.es/m/6749.1489087470@sss.pgh.pa.us
Large chunks (those too large for any palloc freelist) are managed as
separate blocks. Formerly, realloc'ing or pfree'ing such a chunk required
O(N) time in a context with N blocks, since we had to traipse down the
singly-linked block list to locate the block's predecessor before we could
fix the list links. This can result in O(N^2) runtime in situations where
large numbers of such chunks are manipulated within one context. Cases
like that were not foreseen in the original design of aset.c, and indeed
didn't arise until fairly recently. But such problems can now occur in
reorderbuffer.c and in hash joining, both of which make repeated large
requests without scaling up their request size as they do so, and which
will free their requests in not-necessarily-LIFO order.
To fix, change the block list from singly-linked to doubly-linked.
This adds another 4 or 8 bytes to ALLOC_BLOCKHDRSZ, but that doesn't
seem like unacceptable overhead, since aset.c's blocks are normally
8K or more, and never less than 1K in current practice.
In passing, get rid of some redundant AllocChunkGetPointer() calls in
AllocSetRealloc (the compiler might be smart enough to optimize these
away anyway, but no need to assume that) and improve AllocSetCheck's
checking of block header fields.
Back-patch to 9.4 where reorderbuffer.c appeared. We could take this
further back, but currently there's no evidence that it would be useful.
Discussion: https://postgr.es/m/CAMkU=1x1hvue1XYrZoWk_omG0Ja5nBvTdvgrOeVkkeqs71CV8g@mail.gmail.com
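
The heart of the change is a prev link in each block header, making the
unlink O(1). A generic sketch with hypothetical field names, not aset.c
itself:

    #include <stddef.h>

    typedef struct AllocBlock
    {
        struct AllocBlock *prev;    /* new: predecessor, or NULL at head */
        struct AllocBlock *next;
    } AllocBlock;

    typedef struct
    {
        AllocBlock *blocks;         /* head of the context's block list */
    } AllocSet;

    /* Unlink in O(1); the singly-linked version had to walk from the
     * head to find 'block's predecessor before fixing the links. */
    static void
    unlink_block(AllocSet *set, AllocBlock *block)
    {
        if (block->prev)
            block->prev->next = block->next;
        else
            set->blocks = block->next;
        if (block->next)
            block->next->prev = block->prev;
    }

    int
    main(void)
    {
        AllocBlock  a = {NULL, NULL}, b = {NULL, NULL}, c = {NULL, NULL};
        AllocSet    set = {&a};

        a.next = &b; b.prev = &a;
        b.next = &c; c.prev = &b;
        unlink_block(&set, &b);     /* a and c are now adjacent */
        return 0;
    }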
fatal_error() already prints out a trailing newline.
We attached no schema label to comments for procedural languages, casts,
transforms, operator classes, operator families, or text search objects.
The first three categories of objects don't really have schemas, but
pg_dump treats them as if they do, and it seems like the TocEntry fields
for their comments had better match the TocEntry fields for the parent
objects. (As an example of a possible hazard, the type names in a CAST
will be formatted with the assumption of a particular search_path, so
failing to ensure that this same path is active for the COMMENT ON command
could lead to an error or to attaching the comment to the wrong cast.)
In the last six cases, this was a flat-out error --- possibly mine to
begin with, but it was a long time ago.
The security label for a procedural language was likewise not correctly
labeled as to schema, and both the comment and security label for a
procedural language were not correctly labeled as to owner.
In simple cases the restore would accidentally work correctly anyway, since
these comments and security labels would normally get emitted right after
the owning object, and so the search path and active user would be correct
anyhow. But it could fail in corner cases; for example a schema-selective
restore would omit comments it should include.
Giuseppe Broccolo noted the oversight, and proposed the correct fix, for
text search dictionary objects; I found the rest by cross-checking other
dumpComment() calls. These oversights are ancient, so back-patch all
the way.
Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com
When performing a pg_upgrade, we copy the files behind pg_largeobject
and pg_largeobject_metadata, allowing us to avoid having to dump out and
reload the actual data for large objects and their ACLs.
Unfortunately, that isn't all of the information which can be associated
with large objects. Currently, we also support COMMENTs and SECURITY
LABELs with large objects and these were being silently dropped during a
pg_upgrade as pg_dump would skip everything having to do with a large
object and pg_upgrade only copied the tables mentioned to the new
cluster.
As the file copies happen after the catalog dump and reload, we can't
simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode
output but we also have to include the actual large object definition as
well. With the definition, comments, and security labels in the pg_dump
output and the file copies performed by pg_upgrade, all of the data and
metadata associated with large objects is able to be successfully pulled
forward across a pg_upgrade.
In 9.6 and master, we can simply adjust the dump bitmask to indicate
which components we don't want. In 9.5 and earlier, we have to put
explicit checks in dumpBlob() and dumpBlobs() to not include the ACL
or the data when in binary-upgrade mode.
Adjustments made to the privileges regression test to allow another test
(large_object.sql) to be added which explicitly leaves a large object
with a comment in place to provide coverage of that case with
pg_upgrade.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
With RLS active, "COPY tab TO ..." failed under -DRELCACHE_FORCE_RELEASE,
and would sometimes fail without that, because it used the relation name
directly from the relcache as part of the parsetree it's building. That
becomes a potentially-dangling pointer as soon as the relcache entry is
closed, a bit further down. Typical symptom if the relcache entry chanced
to get cleared would be "relation does not exist" error with a garbage
relation name, or possibly a core dump; but if you were really truly
unlucky, the COPY might copy from the wrong table.
Per report from Andrew Dunstan that regression tests fail with
-DRELCACHE_FORCE_RELEASE. The core tests now pass for me (but I have
not tried "make check-world" yet).
Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
This reliably fails with -DRELCACHE_FORCE_RELEASE, as reported by
Andrew Dunstan, and could sometimes fail in normal operation, resulting
in a wrong persistence value being used for the transient table.
It's not immediately clear to me what effects that might have beyond
the risk of a crash while accessing OldHeap->rd_rel->relpersistence,
but it's probably not good.
Bug introduced by commit f41872d0c, and made substantially worse by
commit 85b506bbf, which added a second such access significantly
later than the heap_close. I doubt the first reference could fail
in a production scenario, but the second one definitely could.
Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
Both datatypes map to the same underlying one which is why it still
worked, but we should use the correct type.
Author: Kyotaro HORIGUCHI
Walsender uses local buffers for each outgoing and incoming message.
Previously, when creating a replication slot, walsender forgot to initialize
one of them, which could cause a segmentation fault. To fix this issue,
this commit changes walsender so that it always initializes them before
executing the requested replication command.
Back-patch to 9.4, where replication slots were introduced.
Problem report and initial patch by Stas Kelvich, modified by me.
Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
Several places in fd.c had badly-thought-through handling of error returns
from lseek() and close(). The fact that those would seldom fail on valid
FDs is probably the reason we've not noticed this up to now; but if they
did fail, we'd get quite confused.
LruDelete and LruInsert actually just Assert'd that lseek never fails,
which is pretty awful on its face.
In LruDelete, we indeed can't throw an error, because that's likely to get
called during error abort and so throwing an error would probably just lead
to an infinite loop. But by the same token, throwing an error from the
close() right after that was ill-advised, not to mention that it would've
left the LRU state corrupted since we'd already unlinked the VFD from the
list. I also noticed that really, most of the time, we should know the
current seek position and it shouldn't be necessary to do an lseek here at
all. As patched, if we don't have a seek position and an lseek attempt
doesn't give us one, we'll close the file but then subsequent re-open
attempts will fail (except in the somewhat-unlikely case that a
FileSeek(SEEK_SET) call comes between and allows us to re-establish a known
target seek position). This isn't great but it won't result in any state
corruption.
Meanwhile, having an Assert instead of an honest test in LruInsert is
really dangerous: if that lseek failed, a subsequent read or write would
read or write from the start of the file, not where the caller expected,
leading to data corruption.
In both LruDelete and FileClose, if close() fails, just LOG that and mark
the VFD closed anyway. Possibly leaking an FD is preferable to getting
into an infinite loop or corrupting the VFD list. Besides, as far as I can
tell from the POSIX spec, it's unspecified whether or not the file has been
closed, so treating it as still open could be the wrong thing anyhow.
I also fixed a number of other places that were being sloppy about
behaving correctly when the seekPos is unknown.
Also, I changed FileSeek to return -1 with EINVAL for the cases where it
detects a bad offset, rather than throwing a hard elog(ERROR). It seemed
pretty inconsistent that some bad-offset cases would get a failure return
while others got elog(ERROR). It was missing an offset validity check for
the SEEK_CUR case on a closed file, too.
Back-patch to all supported branches, since all this code is fundamentally
identical in all of them.
Discussion: https://postgr.es/m/2982.1487617365@sss.pgh.pa.us
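
The close()-failure policy adopted here (log it and mark the VFD closed
anyway, rather than throwing) reduces to the pattern below; a sketch,
with fprintf standing in for elog:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct
    {
        int     fd;                 /* kernel FD, or -1 when closed */
    } Vfd;

    /* On close() failure, log and treat the VFD as closed anyway:
     * POSIX leaves the FD's state unspecified, and retrying (or
     * throwing during error abort) risks an infinite loop. */
    static void
    vfd_close(Vfd *vfd)
    {
        if (close(vfd->fd) != 0)
            fprintf(stderr, "LOG: could not close file: %s\n",
                    strerror(errno));
        vfd->fd = -1;
    }

    int
    main(void)
    {
        Vfd     v = {open("/dev/null", O_RDONLY)};

        vfd_close(&v);
        vfd_close(&v);              /* EBADF: logged, still marked closed */
        return 0;
    }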
It failed to remove a .o file during "make clean", and it lacked
a .gitignore file entirely.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting. It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into. Probe the following Tuesday instead.
Discussion: https://postgr.es/m/797.1487517822@sss.pgh.pa.us
The recovery.conf file that's generated is specifically for
replication, and not needed (or wanted) for regular backup restore, so
indicate that in the message.
Also add to the existing rather half-baked description of PROFILE,
which does exactly the same thing, but I think people use it differently.
Discussion: https://postgr.es/m/16461.1487361849@sss.pgh.pa.us