The commentary in this file was in extremely sad shape. The author(s)
had clearly never heard of the project convention that a function header
comment should provide an API spec of some sort for that function. Much
of it was flat out wrong, too --- maybe it was accurate when written, but
if so it had not been updated to track subsequent code revisions. Rewrite
and rearrange to try to bring it up to speed, and annotate some of the
places where more work is needed. (I've refrained from actually fixing
anything of substance ... yet.)
Also, rename a couple of functions for more clarity as to what they do,
do some very minor code rearrangement, remove some pointless Asserts,
fix an incorrect Assert in readMessageFromPipe, and add a missing socket
close in one error exit from pgpipe(). The last would be a bug if we
tried to continue after pgpipe() failure, but since we don't, it's just
cosmetic at present.
Although this is only cosmetic, back-patch to 9.3 where parallel.c was
added. It's sufficiently invasive that it'll pose a hazard for future
back-patching if we don't.
Discussion: <25239.1464386067@sss.pgh.pa.us>
Since we start the worker threads with _beginthreadex(), we should use
_endthreadex() to terminate them. We got this right in the normal-exit
code path, but not so much during an error exit from a worker.
In addition, be sure to apply CloseHandle to the thread handle after
each thread exits.
It's not clear that these oversights cause any user-visible problems,
since the pg_dump run is about to terminate anyway. Still, it's clearly
better to follow Microsoft's API specifications than ignore them.
Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including
being a bit less random about where to cast between uintptr_t and HANDLE,
and being sure to clear the worker identity field for each dead worker
(not that false matches should be possible later, but let's be careful).
Original observation and patch by Armin Schöffmann, cosmetic improvements
by Michael Paquier and me. (Armin's patch also included closing sockets
in ShutdownWorkersHard(), but that's been dealt with already in commit
df8d2d8c4.) Back-patch to 9.3 where parallel pg_dump was introduced.
Discussion: <zarafa.570306bd.3418.074bf1420d8f2ba2@root.aegaeon.de>
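As a self-contained sketch of the pattern described (the work routine and wrapper below are stand-ins, not the real pg_dump code):

    #include <windows.h>
    #include <process.h>

    static int
    do_work(void *arg)                  /* stand-in for the real worker logic */
    {
        return arg != NULL;
    }

    static unsigned __stdcall
    worker_main(void *arg)
    {
        if (!do_work(arg))
            _endthreadex(1);            /* error exit: end this thread only */
        _endthreadex(0);                /* normal exit */
        return 0;                       /* not reached */
    }

    static void
    run_one_worker(void *arg)
    {
        uintptr_t   handle;

        handle = _beginthreadex(NULL, 0, worker_main, arg, 0, NULL);
        WaitForSingleObject((HANDLE) handle, INFINITE);
        CloseHandle((HANDLE) handle);   /* release the handle once the thread is gone */
    }

Because the thread is started with _beginthreadex, its handle is not closed automatically, so the explicit CloseHandle after the thread exits is required.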
The IF EXISTS option was documented, and implemented in the grammar, but
it didn't actually work for lack of support in does_not_exist_skipping().
Per bug #14160.
Report and patch by Kouhei Sutou
Report: <20160527070433.19424.81712@wrigleys.postgresql.org>
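The missing piece is a case in does_not_exist_skipping()'s switch over object types, roughly like this sketch (the affected object type is not shown in this excerpt, so OBJECT_SOMETHING and the message text are placeholders):

    switch (objtype)
    {
        /* ... existing object types ... */

        case OBJECT_SOMETHING:          /* placeholder for the affected type */
            msg = gettext_noop("object \"%s\" does not exist, skipping");
            name = NameListToString(objname);
            break;
    }

Without a matching case, the skip path has no message to emit, so the command fails rather than being skipped.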
If both timeout indicators are set when we arrive at ProcessInterrupts,
we've historically just reported "lock timeout". However, some buildfarm
members have been observed to fail isolationtester's timeouts test by
reporting "lock timeout" when the statement timeout was expected to fire
first. The cause seems to be that the process is allowed to sleep longer
than expected (probably due to heavy machine load) so that the lock
timeout happens before we reach the point of reporting the error, and
then this arbitrary tiebreak rule does the wrong thing. We can improve
matters by comparing the scheduled timeout times to decide which error
to report.
I had originally proposed greatly reducing the 1-second window between
the two timeouts in the test cases. On reflection that is a bad idea,
at least for the case where the lock timeout is expected to fire first,
because that would assume that it takes negligible time to get from
statement start to the beginning of the lock wait. Thus, this patch
doesn't completely remove the risk of test failures on slow machines.
Empirically, however, the case this handles is the one we are seeing
in the buildfarm. The explanation may be that the other case requires
the scheduler to take the CPU away from a busy process, whereas the
case fixed here only requires the scheduler to not give the CPU back
right away to a process that has been woken from a multi-second sleep
(and, perhaps, has been swapped out meanwhile).
Back-patch to 9.3 where the isolationtester timeouts test was added.
Discussion: <8693.1464314819@sss.pgh.pa.us>
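The tiebreak amounts to a fragment like this near the start of the timeout-reporting logic; it assumes a helper along the lines of get_timeout_finish_time() returning each timeout's scheduled firing time, and the flag names are illustrative:

    /* If both timeouts have triggered, report the one scheduled to fire first. */
    if (lock_timeout_occurred && stmt_timeout_occurred)
    {
        if (get_timeout_finish_time(STATEMENT_TIMEOUT) <
            get_timeout_finish_time(LOCK_TIMEOUT))
            lock_timeout_occurred = false;      /* report statement timeout */
        else
            stmt_timeout_occurred = false;      /* report lock timeout */
    }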
Getting a synchronized snapshot is not supported on a hot standby node, yet
one is requested by default when using -j with multiple sessions. Trying to
do so still failed, but with a server error that would also end up in the
server log. Instead, properly detect this case and give a better error message.
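A rough sketch of the client-side check (the helper name and message text are illustrative, not the committed wording):

    /* After connecting, before any snapshot export: a hot standby cannot
     * provide synchronized snapshots, so fail with a clear message. */
    if (numWorkers > 1 && is_server_in_recovery(conn))   /* e.g. SELECT pg_catalog.pg_is_in_recovery() */
        exit_horribly(NULL,
                      "synchronized snapshots are not supported on standby servers\n");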
As part of upper planner pathification (commit 3fc6e2d7f5b652b4) I redid
createplan.c's approach to the physical-tlist optimization, in which scan
nodes are allowed to return exactly the underlying table's columns so as
to save doing a projection step at runtime. The logic was intentionally
more aggressive than before about applying the optimization, which is
generally a good thing, but Andres Freund found a case in which it got
too aggressive. Namely, if any column is referenced more than once in
the parent plan node's sorting or grouping column list, we can't optimize
because then that column would need to have more than one ressortgroupref
label, and we only have space for one.
Add logic to detect this situation in use_physical_tlist(), and also add
some error checking in apply_pathtarget_labeling_to_tlist(), which this
example proves was being overly cavalier about whether what it was doing
made any sense.
The added test case exposes the problem only because we do not eliminate
duplicate grouping keys. That might be something to fix someday, but it
doesn't seem like appropriate post-beta work.
Report: <20160526021235.w4nq7k3gnheg7vit@alap3.anarazel.de>
For some reason the code to emit a warning and switch to uncompressed
output was placed down in the guts of pg_backup_archiver.c. This is
definitely too late in the case of parallel operation (and I rather
wonder if it wasn't too late for other purposes as well). Put it in
pg_dump.c's option-processing logic, which seems a much saner place.
Also, the default behavior with custom or directory output format was
to emit the warning telling you the output would be uncompressed. This
seems unhelpful, so silence that case.
Back-patch to 9.3 where parallel dump was introduced.
Kyotaro Horiguchi, adjusted a bit by me
Report: <20160526.185551.242041780.horiguchi.kyotaro@lab.ntt.co.jp>
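The option-processing shape being described is roughly the following (a sketch; the warning text and exact placement in pg_dump.c are illustrative):

    if (compressLevel == -1)
    {
        /* no explicit -Z: default to compression only where it is available */
    #ifdef HAVE_LIBZ
        if (archiveFormat == archCustom || archiveFormat == archDirectory)
            compressLevel = Z_DEFAULT_COMPRESSION;
        else
    #endif
            compressLevel = 0;          /* silently uncompressed, no warning */
    }
    #ifndef HAVE_LIBZ
    if (compressLevel != 0)
    {
        /* the user explicitly asked for compression we cannot provide */
        fprintf(stderr, "WARNING: archive will be uncompressed; this build has no compression support\n");
        compressLevel = 0;
    }
    #endif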
The Windows coding of ShutdownWorkersHard() thought that setting termEvent
was sufficient to make workers exit after an error. But that only helps
if a worker is busy and passes through checkAborting(). An idle worker
will just sit, resulting in pg_dump failing to exit until the user gives up
and hits control-C. We should close the write end of the command pipe
so that idle workers will see socket EOF and exit, as the Unix coding was
already doing.
Back-patch to 9.3 where parallel pg_dump was introduced.
Kyotaro Horiguchi
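In effect, the Windows path of ShutdownWorkersHard() needs to do what the Unix path already did; a sketch, with illustrative field names:

    /* Closing the master's write end of each command pipe makes idle workers
     * see EOF on their read end and exit. */
    for (i = 0; i < pstate->numWorkers; i++)
    {
    #ifdef WIN32
        closesocket(pstate->parallelSlot[i].pipeWrite);
    #else
        close(pstate->parallelSlot[i].pipeWrite);
    #endif
    }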
Dating back to commit f10b63923, our grammar has allowed "USING" to
optionally appear before an opclass name in CREATE INDEX (and, lately,
some related places such as ON CONFLICT specifications). Nikolay Shaplov
noticed that this syntax existed but wasn't documented, and proposed
documenting it. But what seems like a better idea is to remove the
production, thereby making the code match the docs not vice versa.
This isn't our usual modus operandi for such cases, but there are a
couple of good reasons to proceed this way:
* So far as I can find, this syntax has never been documented anywhere.
It isn't relied on by any of our own code or test cases, and there seems
little reason to suppose that it's been used in the wild either.
* Documenting it would mean that there would be two separate uses of
USING in the CREATE INDEX syntax, the other being "USING access_method".
That can lead to nothing but confusion.
So, let's just remove it. On the off chance that somebody somewhere
is using it, this isn't something to back-patch, but we can fix it
in HEAD.
Discussion: <1593237.l7oKHRpxSe@nataraj-amd64>
Ever since we split the statistics collector's reports into per-database
files (commit 187492b6c2e8cafc), backends have been seeing stale statistics
for shared catalogs. This is because the inquiry message only prompts the
collector to write the per-database file for the requesting backend's own
database. Stats for shared catalogs are in a separate file for "DB 0",
which didn't get updated.
In normal operation this was partially masked by the fact that the
autovacuum launcher would send an inquiry message at least once per
autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would
never be more than a minute out of date. However the problem becomes very
obvious with autovacuum disabled, as reported by Peter Eisentraut.
To fix, redefine the semantics of inquiry messages so that both the
specified DB and DB 0 will be dumped. (This might seem a bit inefficient,
but we have no good way to know whether a backend's transaction will look
at shared-catalog stats, so we have to read both groups of stats whenever
we request stats. Sending two inquiry messages would definitely not be
better.)
Back-patch to 9.3 where the bug was introduced.
Report: <56AD41AC.1030509@gmx.net>
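Conceptually, the new inquiry semantics look like this (names are purely illustrative; the collector's real bookkeeping uses a pending-request list):

    /* On receipt of an inquiry for some database, schedule writes of both
     * that database's stats file and the shared-catalog ("DB 0") file. */
    static void
    handle_inquiry(Oid databaseid)
    {
        schedule_statsfile_write(databaseid);   /* requesting backend's DB */
        schedule_statsfile_write(InvalidOid);   /* shared catalogs, "DB 0" */
    }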
In the original design for parallel dump, worker processes reported errors
by sending them up to the master process, which would print the messages.
This is unworkably fragile for a couple of reasons: it risks deadlock if a
worker sends an error at an unexpected time, and if the master has already
died for some reason, the user will never get to see the error at all.
Revert that idea and go back to just always printing messages to stderr.
This approach means that if all the workers fail for similar reasons (eg,
bad password or server shutdown), the user will see N copies of that
message, not only one as before. While that's slightly annoying, it's
certainly better than not seeing any message; not to mention that we
shouldn't assume that only the first failure is interesting.
An additional problem in the same area was that the master failed to
disable SIGPIPE (at least until much too late), which meant that sending a
command to an already-dead worker would cause the master to crash silently.
That was bad enough in itself but was made worse by the total reliance on
the master to print errors: even if the worker had reported an error, you
would probably not see it, depending on timing. Instead disable SIGPIPE
right after we've forked the workers, before attempting to send them
anything.
Additionally, the master relies on seeing socket EOF to realize that a
worker has exited prematurely --- but on Windows, there would be no EOF
since the socket is attached to the process that includes both the master
and worker threads, so it remains open. Make archive_close_connection()
close the worker end of the sockets so that this acts more like the Unix
case. It's not perfect, because if a worker thread exits without going
through exit_nicely() the closures won't happen; but that's not really
supposed to happen.
This has been wrong all along, so back-patch to 9.3 where parallel dump
was introduced.
Report: <2458.1450894615@sss.pgh.pa.us>
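The SIGPIPE part of the fix boils down to a one-time signal disposition change in the master, made right after the workers are launched; a sketch:

    #include <signal.h>

    /* Master process, immediately after forking all workers: ignore SIGPIPE so
     * that writing a command to an already-dead worker fails with EPIPE rather
     * than killing pg_dump silently. */
    #ifndef WIN32
        signal(SIGPIPE, SIG_IGN);
    #endif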
When pulling the list of roles to drop, exclude roles whose names
begin with "pg_" (as we do when we are dumping the roles out to
recreate them).
Also add regression tests to cover pg_dumpall -c and this specific
issue.
Noticed by Rushabh Lathia. Patch by me.
Per buildfarm, this is needed to allow extensions to use XLogIsNeeded()
in Windows builds.
All of the other tables used in the query in dumpTable(), which is
collecting column-level ACLs, are qualified, so we should also qualify
pg_init_privs, the related sub-select against pg_class, and the other
queries added by the pg_dump catalog-ACLs work.
Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.
Issues identified by Noah and Alvaro, patch by me.
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results. Fetch each one just once to ensure sane behavior
in the minimum calculation. Noted while reviewing Alexander Korotkov's
patch in the same area.
Discussion: <8564.1464116473@sss.pgh.pa.us>
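The fix pattern is simply to copy each field into a local before using it (variable names illustrative):

    /* dbform points at a pg_database tuple that vac_update_datfrozenxid() may
     * be updating in place, so read each field exactly once. */
    TransactionId datfrozenxid = dbform->datfrozenxid;
    MultiXactId   datminmxid   = dbform->datminmxid;

    if (TransactionIdPrecedes(datfrozenxid, frozenXID))
        frozenXID = datfrozenxid;
    if (MultiXactIdPrecedes(datminmxid, minMulti))
        minMulti = datminmxid;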
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future". That was probably fine when written, but in a lazy
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()
causes an XID to be assigned when otherwise one would not be. Most of the
time that's not a big problem ... but if we are hard up against the
wraparound limit, consuming XIDs during antiwraparound vacuums is a very
bad thing.
Instead, use ReadNewTransactionId(), which not only avoids this problem
but is in itself a better comparison point to test whether wraparound
has already occurred.
Report and patch by Alexander Korotkov. Back-patch to all versions.
Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>
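In code terms the change is essentially this (a sketch; the surrounding sanity check is paraphrased):

    /* Use the next XID to be assigned as the comparison point; unlike
     * GetCurrentTransactionId(), this does not consume an XID. */
    TransactionId lastSaneFrozenXid = ReadNewTransactionId();

    /* A datfrozenxid "later" than this has wrapped around into the future. */
    if (TransactionIdPrecedes(lastSaneFrozenXid, dbform->datfrozenxid))
        bogus = true;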
Commit 1aba62ec moved the range check of that option from guc.c into
bufmgr.c, but introduced a bug by changing a >= 0.0 to > 0.0, which made
the value 0 no longer accepted. Put it back.
Reported by Jeff Janes, diagnosed by Tom Lane
Michael Paquier
Commit 65c5fcd353a859da9e61bfb2b92a99f12937de3b broke this by removing a
header include directive that is conditionally required. Add that back
to nbtree.c, with annotation to keep pgrminclude from re-breaking it.
Peter Geoghegan
Report: <CAM3SWZTNjHFYW_UG8bu0BnogqQ2HfsTgkzXLueuUhfTcYbu5HA@mail.gmail.com>
Needed for cases in which INSERT ... ON CONFLICT appears inside a
recursive CTE item. Per bug #14153 from Thomas Alton.
Patch by Peter Geoghegan, slightly adjusted by me
Report: <20160521232802.22598.13537@wrigleys.postgresql.org>
If RAW_EXPRESSION_COVERAGE_TEST is defined, do a no-op tree walk over
every basic DML statement submitted to parse analysis. If we'd had this
in place earlier, bug #14153 would have been caught by buildfarm testing.
The difficulty is that raw_expression_tree_walker() is only used in
limited cases involving CTEs (particularly recursive ones), so it's
very easy for an oversight in it to not be noticed during testing of a
seemingly-unrelated feature.
The type of error we can expect to catch with this is complete omission
of a node type from raw_expression_tree_walker(), and perhaps also
recursion into a field that doesn't contain a node tree, though that
would be an unlikely mistake. It won't catch failure to add new fields
that need to be recursed into, unfortunately.
I'll go enable this on one or two of my own buildfarm animals once
bug #14153 is dealt with.
Discussion: <27861.1464040417@sss.pgh.pa.us>
do_text_output_multiline() would fail (typically with a null pointer
dereference crash) if its input string did not end with a newline. Such
cases do not arise in our current sources; but it certainly could happen
in future, or in extension code's usage of the function, so we should fix
it. To fix, replace "eol += len" with "eol = text + len".
While at it, make two cosmetic improvements: mark the input string const,
and rename the argument from "text" to "txt" to dodge pgindent strangeness
(since "text" is a typedef name).
Even though this problem is only latent at present, it seems like a good
idea to back-patch the fix, since it's a very simple/safe patch and it's
not out of the realm of possibility that we might in future back-patch
something that expects sane behavior from do_text_output_multiline().
Per report from Hao Lee.
Report: <CAGoxFiFPAGyPAJLcFxTB5cGhTW2yOVBDYeqDugYwV4dEd1L_Ag@mail.gmail.com>
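The described fix, shown in a simplified stand-alone form of the loop (the real function emits each line as a query-result row rather than printing it):

    #include <stdio.h>
    #include <string.h>

    static void
    emit_multiline(const char *txt)
    {
        while (*txt)
        {
            const char *eol = strchr(txt, '\n');
            int         len;

            if (eol)
            {
                len = (int) (eol - txt);
                eol++;                      /* step over the newline */
            }
            else
            {
                /* last line lacks a trailing newline */
                len = (int) strlen(txt);
                eol = txt + len;            /* the fix: point at the NUL, not past it */
            }
            printf("%.*s\n", len, txt);     /* stand-in for emitting one row */
            txt = eol;
        }
    }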
This was overlooked in commit 473b93287, which introduced DROP ACCESS
METHOD. Although that command is restricted to superusers, we don't want
even superusers dropping the built-in methods; "DROP ACCESS METHOD btree"
in particular is unrecoverable from. Pin these objects in the same way
that other initdb-created objects are pinned.
I chose to bump catversion for this fix. That's not absolutely necessary
perhaps, but it will ensure that no 9.6 production systems are missing
the pin entries.
That reduces the number of allocations.
Per gripe from Michael Paquier and Tom Lane suggestion.
The page image should be MAXALIGN'ed, because existing code may align pointers
within the page directly rather than aligning the offset from the beginning of
the page.
Found while experimenting with indexes as extensions, by Alexander Korotkov and me
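The implication is roughly this allocation pattern (a sketch):

    /* Allocate the buffer with enough slop that the page itself can start on
     * a MAXALIGN boundary; code may then align pointers within the page
     * directly rather than aligning offsets from its start. */
    char   *mem  = palloc(BLCKSZ + MAXIMUM_ALIGNOF);
    char   *page = (char *) MAXALIGN(mem);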
Reference to getThreadLocalPQExpBuffer here seems inappropriate, since
we aren't necessarily using that instantiation of getLocalPQExpBuffer.
This makes the feature names match the SQL standard.
From: Alexander Law <exclusion@gmail.com>
We don't tag the translations repository any more, because the commits
into postgresql contain the git hashes, and that's authoritative.
Some comments mentioned XLogReplayBuffer, but there's no such function:
that was an interim name for a function that got renamed to
XLogReadBufferForRedo, before commit 2c03216d831160 was pushed.
While it could be argued that rejecting system column mentions in the
ON CONFLICT list is an unsupported feature, falling over altogether
just because the table has a unique index on OID is indubitably a bug.
As far as I can tell, fixing infer_arbiter_indexes() is sufficient to
make ON CONFLICT (oid) actually work, though making a regression test
for that case is problematic because of the impossibility of setting
the OID counter to a known value.
Minor cosmetic cleanups along with the bug fix.
subquery_planner() failed to apply expression preprocessing to the
arbiterElems and arbiterWhere fields of an OnConflictExpr. No doubt the
theory was that this wasn't necessary because we don't actually try to
execute those expressions; but that's wrong, because it results in failure
to match to index expressions or index predicates that are changed at all
by preprocessing. Per bug #14132 from Reynold Smith.
Also add pullup_replace_vars processing for onConflictWhere. Perhaps
it's impossible to have a subquery reference there, but I'm not exactly
convinced; and even if true today it's a failure waiting to happen.
Also add some comments to other places where one or another field of
OnConflictExpr is intentionally ignored, with explanation as to why it's
okay to do so.
Also, catalog/dependency.c failed to record any dependency on the named
constraint in ON CONFLICT ON CONSTRAINT, allowing such a constraint to
be dropped while rules exist that depend on it, and allowing pg_dump to
dump such a rule before the constraint it refers to. The normal execution
path managed to error out reasonably for a dangling constraint reference,
but ruleutils.c dumped core; so in addition to fixing the omission, add
a protective check in ruleutils.c, since we can't retroactively add a
dependency in existing databases.
Back-patch to 9.5 where this code was introduced.
Report: <20160510190350.2608.48667@wrigleys.postgresql.org>
The table-skipping logic in autovacuum would fail to consider that
multiple workers could be processing the same shared catalog in
different databases. This normally wouldn't be a problem: firstly
because autovacuum workers not for wraparound would simply ignore tables
in which they cannot acquire lock, and secondly because most of the time
these tables are small enough that even if multiple for-wraparound
workers are stuck in the same catalog, they would be over pretty
quickly. But in cases where the catalogs are severely bloated it could
become a problem.
Backpatch all the way back, because the problem has been there since the
beginning.
Reported by Ondřej Světlík
Discussion: https://www.postgresql.org/message-id/572B63B1.3030603%40flexibee.eu
https://www.postgresql.org/message-id/572A1072.5080308%40flexibee.eu
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 17bf3e8564abf600274789fcc90e72532d5e7c05
Use disallowed instead of reserved, cannot instead of can not, and
double quotes instead of single quotes.
Also add a test to cover the bug which started this discussion.
Per discussion with Tom.
As with CREATE ROLE, disallow users from specifying initial
superuser names which begin with 'pg_' in initdb.
Per discussion with Tom.
Euler Taveira
It emerges that some Perl versions before 5.8.9 have a bug with regexps
that use the /m flag and contain "$". This is the reason why jacana
is still failing on HEAD, and I was able to duplicate the failure on
prairiedog's host. There's no real need for "$" in these patterns,
since they are already matching through the statement-terminating
semicolons (or matching an explicit \n in some cases). So just
remove it.
Note: the reason jacana hasn't actually reported any failures in the
last little while is that the way the pg_dump TAP tests are set up, any
failure of this sort results in echoing the entire pg_dump dump output
to stderr. Since there were about a hundred such failures, that resulted
in a 30MB log file which choked the buildfarm upload script. There is
room for improvement here :-(.
Per off-list discussion with Andrew and Stephen.
The tmp_check directory needs to be removed by "make clean",
and also ignored by .gitignore.
This patch essentially reverts commit 4c6780fd17aa43ed, in favor of a much
simpler solution for the case where the new cluster would choose to create
a TOAST table but the old cluster doesn't have one: just don't create a
TOAST table.
The existing code failed in at least two different ways if the situation
arose: (1) ALTER TABLE RESET didn't grab an exclusive lock, so that the
lock sanity check in create_toast_table failed; (2) pg_upgrade did not
provide a pg_type OID for the new toast table, so that the crosscheck in
TypeCreate failed. While both these problems were introduced by later
patches, they show that the hack being used to cause TOAST table creation
is overwhelmingly fragile (and untested). I also note that before the
TypeCreate crosscheck was added, the code would have resulted in assigning
an indeterminate pg_type OID to the toast table, possibly causing a later
OID conflict in that catalog; so that it didn't really work even when
committed.
If we simply don't create a TOAST table, there will only be a problem if
the code tries to store a tuple that's wider than a page, and field
compression isn't sufficient to get it under a page. Given that the TOAST
creation threshold is intended to be about a quarter of a page, it's very
hard to believe that cross-version differences in the do-we-need-a-toast-
table heuristic could result in an observable problem. So let's just
follow the old version's conclusion about whether a TOAST table is needed.
(If we ever do change needs_toast_table() so much that this conclusion
doesn't apply, we can devise a solution at that time, and hopefully do
it in a less klugy way than 4c6780fd17aa43ed did.)
Back-patch to 9.3, like the previous patch.
Discussion: <8110.1462291671@sss.pgh.pa.us>
Buildfarm member jacana appears to have an issue with running this
test. It's not entirely clear to me why, but rather than try to
fight with it, just disable it for now.
None of the other tests try to write out from psql directly as
this test does, so it seems likely that the rest of the tests will
be fine (as they have been on numerous other systems).
Limit maintenance of time to xid mapping to once per minute. At
least in the tested case this brings performance within 5% of when
the feature is off, compared to several times slower without this
patch.
While there, fix comments and whitespace.
Ants Aasma, with cosmetic adjustments suggested by Andres Freund
Reviewed by Kevin Grittner and Andres Freund
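One way to express the throttling (a sketch, not necessarily the committed code): round timestamps to a whole-minute boundary, so repeated calls within the same minute map to the same entry and need no new maintenance work.

    /* Round a timestamp up to the next whole minute; the time-to-xid map is
     * then maintained at one-minute granularity. */
    static TimestampTz
    AlignTimestampToMinuteBoundary(TimestampTz ts)
    {
        TimestampTz rounded = ts + (USECS_PER_MINUTE - 1);

        return rounded - (rounded % USECS_PER_MINUTE);
    }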
The test_pg_dump extension doesn't have a C component, so we need
to exclude it from the MSVC build system trying to figure out how
to build it.
Also add a "MODULES" line to the Makefile, as test_extensions has.
Might not be necessary, but seems good to keep things consistent.
Lastly, remove the 'installcheck' line from test_pg_dump, as that
was causing redefinition errors, at least on my box. This also
makes test_pg_dump consistent with how commit_ts is set up.
We need to use a new branch due to the 9.5 addition of bypassrls
when adding in the clause to exclude pg_* roles from being dumped
by pg_dumpall.
Pointed out by Noah, patch by me.
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension. Hopefully
this will fix the Windows buildfarm members which were complaining.
In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
Discussion is still underway as to whether to revert the entire patch
that added this function, but that discussion may not conclude before
beta1. So, in the meantime, let's do at least this much.
David Rowley
This new limit affects both the max_parallel_degree GUC and the
parallel_degree reloption. There may some day be a use case for using
more than 1024 CPUs for a single query, but that's surely not the case
right now. Not only do very few people have that many CPUs, but
the code hasn't been tested at that kind of scale and is very unlikely
to perform well, or even work at all, without a lot more work. The
issue addressed by commit 06bd458cb812623c3f1fdd55216c4c08b06a8447 is
probably just one problem of many.
The idea of a more reasonable limit here was suggested by Tom Lane;
the value of 1024 was suggested by Amit Kapila.
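In guc.c terms the cap is just a smaller maximum in the GUC definition; a sketch (the boot value and hook slots shown here are illustrative):

    {
        {"max_parallel_degree", PGC_USERSET, RESOURCES_ASYNCHRONOUS,
            gettext_noop("Sets the maximum degree of parallelism per executor node."),
            NULL
        },
        &max_parallel_degree,
        2, 0, 1024,             /* boot value, min, new maximum of 1024 */
        NULL, NULL, NULL
    },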
Ordinarily, pg_upgrade shouldn't have any difficulty in matching up all
the relations it sees in the old and new databases. If it does, however,
it just goes belly-up with a pretty unhelpful error message. That seemed
fine as long as we expected the case never to occur in the wild, but
Alvaro reported that it had been seen in a database whose pg_largeobject
table had somehow acquired a TOAST table. That doesn't quite seem like
a case that pg_upgrade actually needs to handle, but it would be good if
the report were more diagnosable. Hence, extend the logic to print out
as much information as we can about the mismatch(es) before we quit.
In passing, improve the readability of get_rel_infos()'s data collection
query, which had suffered seriously from lets-not-bother-to-update-comments
syndrome, and generally was unnecessarily disrespectful to readers.
It could be argued that this is a bug fix, but given that we have so few
reports, I don't feel a need to back-patch; at least not before this has
baked awhile in HEAD.